Some embodiments include an integrated assembly having digit lines which extend along a first direction, and which are spaced from one another by intervening regions. Each of the intervening regions has a first width along a cross-section. Pillars extend upwardly from the digit lines; and the pillars include transistor channel regions extending vertically between upper and lower source/drain regions. Storage elements are coupled with the upper source/drain regions. Wordlines extend along a second direction which crosses the first direction. The wordlines include gate regions adjacent the channel regions. Shield lines are within the intervening regions and extend along the first direction. The shield lines may be coupled with at least one reference voltage node. Some embodiments include methods of forming integrated assemblies.
1. An integrated assembly, comprising: digit lines extending along a first direction, the digit lines being spaced from one another by intervening regions; each of the digit lines having a first width along a cross-section orthogonal to the first direction; each of the intervening regions also having the first width along the cross-section; each of the digit lines having a top surface at a first height; vertically-extending pillars over the digit lines; each of the vertically-extending pillars including a transistor channel region and an upper source/drain region; a lower source/drain region being beneath the channel region and coupled with the digit line; the transistor channel region extending vertically between the lower source/drain region and the upper source/drain region; each of the vertically-extending pillars having the first width along the cross-section; the intervening regions extending upwardly to between the vertically-extending pillars and having the first width along a distance from top surfaces of the upper source/drain regions to bottom surfaces of the digit lines; storage elements coupled with the upper source/drain regions; wordlines extending along a second direction which crosses the first direction; the wordlines including gate regions adjacent the channel regions; and shield lines within the intervening regions and extending along the first direction; each of the shield lines having a top surface at a second height which is greater than or equal to the first height.

2. The integrated assembly of claim 1 wherein the storage elements are capacitors.

3. The integrated assembly of claim 1 wherein the vertically-extending pillars comprise one or more semiconductor materials.

4. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein one of the columns is an edge column; the edge column having one of the intervening regions extending along one side thereof, and having an edge region extending along a second side which is in opposing relation to said one side; the shield lines within the intervening regions being first shield lines and being configured as vertically-extending plates; one of the shield lines being within the edge region and being a second shield line; the second shield line being configured differently than the first shield lines and including an elbow region joining a vertically-extending segment to a horizontally-extending segment.

5. The integrated assembly of claim 1 wherein each of the shield lines has a second width along the cross-section; and wherein the second width is less than or equal to about one-half of the first width.

6. The integrated assembly of claim 5 wherein the second width is less than or equal to about one-third of the first width.

7. The integrated assembly of claim 1 wherein each of the lower source/drain regions has a top surface at a third height, and wherein the second height is greater than or equal to the third height.

8. The integrated assembly of claim 7 wherein each of the wordlines has a bottom surface at a fourth height, and wherein the second height is less than the fourth height.

9. The integrated assembly of claim 1 wherein the digit lines comprise a first conductive material, the shield lines comprise a second conductive material, and the wordlines comprise a third conductive material; and wherein at least one of the first, second and third conductive materials is different from at least one other of the first, second and third conductive materials.

10. The integrated assembly of claim 1 wherein the digit lines comprise a first conductive material, the shield lines comprise a second conductive material, and the wordlines comprise a third conductive material; wherein the first, second and third conductive materials are a same composition as one another; and wherein said same composition comprises metal.

11. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; and further comprising a metal-containing reference structure beneath the memory array; each of the shield lines having a bottom surface directly against an upper surface of the metal-containing reference structure.

12. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the shield lines has an end along a peripheral edge of the memory array; and further comprising: a reference structure offset from the memory array; and interconnects extending from the ends of the shield lines to the reference structure.

13. The integrated assembly of claim 12 wherein the reference structure is a metal-containing plate.

14. The integrated assembly of claim 12 wherein the reference structure is vertically offset from the memory array.

15. The integrated assembly of claim 12 wherein the reference structure is laterally offset from the memory array.

16. The integrated assembly of claim 12 wherein at least a portion of the reference structure is both laterally offset and vertically offset from the memory array.

17. The integrated assembly of claim 12 wherein the memory array is within a memory tier of a vertically-stacked arrangement of tiers.

18. The integrated assembly of claim 17 wherein the vertically-stacked arrangement of tiers includes a lower tier beneath the memory tier; the lower tier including control circuitry coupled with circuitry of the memory tier.

19. The integrated assembly of claim 18 wherein the reference structure is along the lower tier.

20. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the shield lines has a first end and has a second end in opposing relation to the first end; and further comprising: a first reference structure laterally offset from a first side of the memory array; a second reference structure laterally offset from a second side of the memory array; first interconnects extending from the first ends of the shield lines to the first reference structure; and second interconnects extending from the second ends of the shield lines to the second reference structure.

21. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the shield lines has a first end and has a second end in opposing relation to the first end; and further comprising: a first reference structure laterally offset from a first side of the memory array; a second reference structure laterally offset from a second side of the memory array; first interconnects extending from the first ends of a first set of the shield lines to the first reference structure; and second interconnects extending from the second ends of a second set of the shield lines to the second reference structure; the second set comprising different shield lines than the first set.

22. The integrated assembly of claim 1 wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; and further comprising: a reference structure peripherally surrounding the memory array; and interconnects extending from the shield lines to the reference structure.

23. The integrated assembly of claim 22 wherein the reference structure is vertically offset from the memory array.

24. A method of forming an integrated assembly, comprising: forming a supporting structure which includes an insulative material over a reference structure; the reference structure comprising metal and being configured as a horizontally-extending expanse; forming a stack over the supporting structure; the stack including semiconductor material over digit-line material; patterning the stack into rails extending along a first direction; the rails being spaced from one another by first trenches; the patterning extending through the insulative material to leave an upper surface of the reference structure exposed along bottoms of the first trenches; each of the rails having a top surface and having sidewall surfaces extending downwardly from the top surface; the patterning of the stack into the rails forming the digit-line material into digit lines extending along the first direction; forming insulative shells to cover the top surfaces and sidewall surfaces of the rails; the insulative shells narrowing the first trenches; the upper surface of the reference structure being exposed along bottoms of the narrowed first trenches; forming conductive shield lines within the narrowed first trenches and directly against the exposed upper surface of the reference structure at the bottoms of the narrowed first trenches; forming second trenches extending along a second direction; the second direction crossing the first direction; the second trenches patterning upper regions of the rails into pillars without patterning lower regions of the rails; the lower regions of the rails comprising the digit lines; forming wordlines within the second trenches; doping bottom segments of the semiconductor material to form lower source/drain regions; the lower source/drain regions being coupled with the digit lines; doping top segments of the semiconductor material to form upper source/drain regions; channel regions being vertically between the lower source/drain regions and the upper source/drain regions; the wordlines being adjacent the channel regions; and forming storage elements coupled with the upper source/drain regions.

25. The method of claim 24 wherein the bottom segments of the semiconductor material are doped before forming the wordlines; and wherein the top segments of the semiconductor material are doped after forming the wordlines.

26. The method of claim 24 further comprising: forming conductive shield material within the narrowed first trenches; the conductive shield material substantially filling the narrowed first trenches; and reducing a height of the conductive shield material so that the conductive shield material vertically overlaps only the digit lines and lower segments of the semiconductor material of the rails; the conductive shield material of the reduced height being the conductive shield lines.

27. The method of claim 26 wherein the lower segments of the semiconductor material which are vertically overlapped by the shield material include an entirety of the lower source/drain regions.

28. The method of claim 26 wherein the height of the conductive shield material is reduced before forming the wordlines.

29. The method of claim 26 wherein the height of the conductive shield material is reduced after forming the wordlines.

30. The method of claim 24 wherein the narrowed trenches have a uniform width from tops of the semiconductor material to bottoms of the digit-line material.

31. The method of claim 24 further comprising forming an electrical connection from the reference structure to circuitry configured to maintain the reference structure at a reference voltage.

32. A method of forming an integrated assembly, comprising: forming a stack which includes semiconductor material over digit-line material; patterning the stack into rails extending along a first direction; the rails being spaced from one another by first trenches; the rails having top surfaces and having sidewall surfaces extending downwardly from the top surfaces; the patterning of the stack into the rails forming the digit-line material into digit lines extending along the first direction; forming insulative material to cover the top surfaces and sidewall surfaces of the rails; the insulative material narrowing the first trenches; forming conductive shield lines within the narrowed first trenches; forming second trenches extending along a second direction; the second direction crossing the first direction; the second trenches patterning upper regions of the rails into pillars without patterning lower regions of the rails; the lower regions of the rails comprising the digit lines; forming wordlines within the second trenches; doping bottom segments of the semiconductor material to form lower source/drain regions; the lower source/drain regions being coupled with the digit lines; doping top segments of the semiconductor material to form upper source/drain regions; channel regions being vertically between the lower source/drain regions and the upper source/drain regions; the wordlines being adjacent the channel regions; forming storage elements coupled with the upper source/drain regions; wherein the storage elements are comprised by memory cells of a memory array; wherein the digit lines extend along columns of the memory array and the wordlines extend along rows of the memory array; wherein each of the conductive shield lines has a first end along a first peripheral edge of the memory array, and has a second end along a second peripheral edge of the memory array which is in opposing relation to the first peripheral edge; and electrically connecting at least one of the first and second ends of each of the conductive shield lines with a reference-voltage source.

33. The method of claim 32 wherein the conductive shield lines comprise conductively-doped silicon.

34. The method of claim 32 wherein the bottom segments of the semiconductor material are doped before forming the wordlines; and wherein the top segments of the semiconductor material are doped after forming the wordlines.

35. The method of claim 32 further comprising: forming conductive shield material within the narrowed first trenches; the conductive shield material substantially filling the narrowed first trenches; and reducing a height of the conductive shield material so that the conductive shield material vertically overlaps only the digit lines and lower segments of the semiconductor material of the rails; the conductive shield material of the reduced height being the conductive shield lines.

36. The method of claim 35 wherein the lower segments of the semiconductor material which are vertically overlapped by the shield material include an entirety of the lower source/drain regions.

37. The method of claim 35 wherein the height of the conductive shield material is reduced before forming the wordlines.

38. The method of claim 35 wherein the height of the conductive shield material is reduced after forming the wordlines.

39. The method of claim 32 wherein the narrowed trenches have a uniform width from tops of the semiconductor material to bottoms of the narrowed trenches.

40. The method of claim 32 wherein said electrically connecting of the at least one of the first and second ends of each of the conductive shield lines with the reference-voltage source comprises electrically connecting said at least one of the first and second ends of each of the conductive shield lines with a metal-containing reference structure.

41. The method of claim 40 wherein the reference structure is a plate.

42. The method of claim 40 wherein the reference structure is vertically offset from the memory array.

43. The method of claim 40 wherein the reference structure is adjacent one of the first and second peripheral edges of the memory array, and is laterally offset from said one of the first and second peripheral edges of the memory array.

44. The method of claim 40 wherein the reference structure peripherally surrounds the memory array.

45. The method of claim 44 wherein the reference structure is vertically offset from the memory array.

46. The method of claim 32 wherein the reference-voltage source is a first reference-voltage source adjacent the first peripheral edge of the memory array, and comprising: forming electrical connections from at least some of the first ends of the conductive shield lines to the first reference-voltage source; and forming electrical connections from at least some of the second ends of the conductive shield lines to a second reference-voltage source adjacent the second peripheral edge of the memory array.

47. The method of claim 32 wherein the reference-voltage source is a first reference-voltage source, and comprising: utilizing first interconnects to form electrical connections from the first ends of a first set of the conductive shield lines to the first reference-voltage source; and utilizing second interconnects to form electrical connections from the second ends of a second set of the conductive shield lines to a second reference-voltage source; the second set comprising different conductive shield lines than the first set.
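Claims 21 and 47 recite splitting the shield lines into two sets, with the first ends of one set strapped to a reference structure along one edge of the array and the second ends of the other set strapped to a reference structure along the opposite edge. A minimal illustrative sketch of one such partition, assuming an alternating even/odd split (the alternation and the names `reference_A`/`reference_B` are hypothetical, not recited in the claims):

```python
def assign_shield_line_ends(num_shield_lines):
    """Map shield-line index -> (which end is strapped, which reference structure).

    Even-indexed lines connect their first end to reference structure A along
    one array edge; odd-indexed lines connect their second end to reference
    structure B along the opposite edge. The even/odd alternation is only an
    illustrative assumption about how the two sets might be chosen.
    """
    assignment = {}
    for i in range(num_shield_lines):
        if i % 2 == 0:
            assignment[i] = ("first_end", "reference_A")
        else:
            assignment[i] = ("second_end", "reference_B")
    return assignment


if __name__ == "__main__":
    for idx, (end, ref) in assign_shield_line_ends(6).items():
        print(f"shield line {idx}: {end} -> {ref}")
```

An alternating split halves the strap density required along each peripheral edge, which is one plausible motivation for using two sets rather than strapping every line at both ends.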
Integrated Assemblies Having Shield Lines Between Digit Lines, and Methods of Forming Integrated Assemblies

Cross-Reference to Related Applications

This application claims priority to and the benefit of U.S. Provisional Patent Application No. 62/814,664, filed March 6, 2019, the disclosure of which is incorporated herein by reference in its entirety.

Technical Field

Integrated assemblies having shield lines between digit lines, and methods of forming integrated assemblies.

Background

Memory is one type of integrated circuitry and is used in computer systems for storing data. An example memory is DRAM (dynamic random-access memory). DRAM cells may each comprise a transistor in combination with a capacitor. The DRAM cells may be arranged in an array, with wordlines extending along rows of the array and digit lines extending along columns of the array. The wordlines may be coupled with the transistors of the memory cells. Each memory cell may be uniquely addressed through the combination of one of the wordlines and one of the digit lines.

A problem which may be encountered in conventional memory architectures is that capacitive coupling (i.e., parasitic capacitance) may occur between adjacent digit lines, leading to disturbance along an inactive digit line when a neighbor of the inactive digit line is active. The capacitive coupling becomes increasingly problematic as memory architectures are scaled to increasing levels of integration. It would be desirable to alleviate or prevent such capacitive coupling.

It is also desirable to develop new methods of fabricating highly-integrated memory (e.g., DRAM), and to develop new architectures fabricated with such methods.

Brief Description of the Drawings

FIGS. 1-1C are diagrammatic views of a region of an example construction at an example initial process stage of an example method for forming an example integrated assembly. FIGS. 1A, 1B and 1C are diagrammatic cross-sectional views along the lines A-A, B-B and C-C of FIG. 1, respectively.

FIGS. 2-2C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 1-1C. FIG. 2A is a diagrammatic cross-sectional view along the line A-A of FIG. 2. FIGS. 2B and 2C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 2 and 2A, respectively.

FIGS. 3-3C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 2-2C. FIG. 3A is a diagrammatic cross-sectional view along the line A-A of FIG. 3. FIGS. 3B and 3C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 3 and 3A, respectively.

FIGS. 4-4C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 3-3C. FIG. 4A is a diagrammatic cross-sectional view along the line A-A of FIG. 4. FIGS. 4B and 4C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 4 and 4A, respectively.

FIGS. 5-5C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 4-4C. FIG. 5A is a diagrammatic cross-sectional view along the line A-A of FIG. 5. FIGS. 5B and 5C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 5 and 5A, respectively.

FIGS. 6-6C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 5-5C. FIG. 6A is a diagrammatic cross-sectional view along the line A-A of FIG. 6. FIGS. 6B and 6C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 6 and 6A, respectively.

FIGS. 7-7C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 6-6C. FIG. 7A is a diagrammatic cross-sectional view along the line A-A of FIG. 7. FIGS. 7B and 7C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 7 and 7A, respectively.

FIGS. 8-8C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 7-7C. FIG. 8A is a diagrammatic cross-sectional view along the line A-A of FIG. 8. FIGS. 8B and 8C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 8 and 8A, respectively.

FIGS. 9-9C are diagrammatic views of a region of the example construction of FIGS. 1-1C at an example process stage following that of FIGS. 8-8C. FIG. 9A is a diagrammatic cross-sectional view along the line A-A of FIG. 9. FIGS. 9B and 9C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 9 and 9A, respectively.

FIG. 10 is a diagrammatic view of a region of the example construction of FIG. 9A at an example process stage following that of FIG. 9A. FIG. 10 is along the same cross-section as FIG. 9A.

FIG. 11 is a schematic diagram of a region of an example memory array.

FIGS. 12-12B are diagrammatic top views of regions of example integrated assemblies.

FIGS. 12C and 12D are diagrammatic cross-sectional side views along the line C-C of FIG. 12B, and illustrate a pair of example integrated assemblies.

FIG. 12E is a diagrammatic cross-sectional side view illustrating another example integrated assembly.

FIG. 13 is a diagrammatic view of a region of the example construction of FIG. 6A at an example process stage following that of FIG. 6A, and is an alternative to the construction shown in FIG. 7A. FIG. 13 is along the same cross-section as FIGS. 6A and 7A.

FIGS. 14-14C are diagrammatic views of a region of an example construction at an example initial process stage of an example method for forming an example integrated assembly. FIGS. 14A, 14B and 14C are diagrammatic cross-sectional views along the lines A-A, B-B and C-C of FIG. 14, respectively.

FIGS. 15-15C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 14-14C. FIG. 15A is a diagrammatic cross-sectional view along the line A-A of FIG. 15. FIGS. 15B and 15C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 15 and 15A, respectively.

FIGS. 16-16C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 15-15C. FIG. 16A is a diagrammatic cross-sectional view along the line A-A of FIG. 16. FIGS. 16B and 16C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 16 and 16A, respectively.

FIGS. 17-17C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 16-16C. FIG. 17A is a diagrammatic cross-sectional view along the line A-A of FIG. 17. FIGS. 17B and 17C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 17 and 17A, respectively.

FIGS. 18-18C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 17-17C. FIG. 18A is a diagrammatic cross-sectional view along the line A-A of FIG. 18. FIGS. 18B and 18C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 18 and 18A, respectively.

FIGS. 19-19C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 18-18C. FIG. 19A is a diagrammatic cross-sectional view along the line A-A of FIG. 19. FIGS. 19B and 19C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 19 and 19A, respectively.

FIGS. 20-20C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 19-19C. FIG. 20A is a diagrammatic cross-sectional view along the line A-A of FIG. 20. FIGS. 20B and 20C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 20 and 20A, respectively.

FIGS. 21-21C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 20-20C. FIG. 21A is a diagrammatic cross-sectional view along the line A-A of FIG. 21. FIGS. 21B and 21C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 21 and 21A, respectively.

FIGS. 22-22C are diagrammatic views of a region of the example construction of FIGS. 14-14C at an example process stage following that of FIGS. 21-21C. FIG. 22A is a diagrammatic cross-sectional view along the line A-A of FIG. 22. FIGS. 22B and 22C are diagrammatic cross-sectional views along the lines B-B and C-C of FIGS. 22 and 22A, respectively.

FIG. 23 is a diagrammatic view of a region of the example construction of FIG. 22B at an example process stage following that of FIG. 22B. FIG. 23 is along the same cross-section as FIG. 22B.

FIG. 24 is a diagrammatic cross-sectional side view of a region of an example assembly comprising stacked tiers.

Detailed Description

Some embodiments include memory architectures (e.g., DRAM architectures) having shield lines between digit lines. The shield lines may be coupled with a reference voltage (e.g., ground, Vcc/2, etc.) so that they are not electrically floating. The shield lines may alleviate capacitive coupling between adjacent digit lines.
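As a first-order illustration of why a grounded shield helps (this model and all dimensions in it are illustrative assumptions, not values from the disclosure), adjacent digit lines can be approximated as parallel plates of height h and coupled length L separated by a gap d, giving a line-to-line coupling capacitance C = ε₀·εᵣ·h·L/d. A grounded shield centered in the gap intercepts the field between neighbors, converting most of the direct line-to-line coupling into line-to-ground capacitance, which does not transfer signal between the neighbors:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m


def parallel_plate_capacitance(eps_r, height_m, length_m, gap_m):
    """Parallel-plate approximation of the capacitance between two facing lines."""
    return EPS0 * eps_r * (height_m * length_m) / gap_m


if __name__ == "__main__":
    # All values below are assumed example dimensions, not from the disclosure.
    eps_r = 3.9          # relative permittivity of silicon dioxide
    h, L = 100e-9, 1e-6  # assumed line height and coupled length
    d = 40e-9            # assumed digit-line-to-digit-line gap

    c_direct = parallel_plate_capacitance(eps_r, h, L, d)
    # With a grounded shield midway, each line couples to the shield across
    # roughly half the gap; that capacitance is to ground, not to the neighbor.
    c_to_shield = parallel_plate_capacitance(eps_r, h, L, d / 2)
    print(f"line-to-line coupling without shield: {c_direct:.3e} F")
    print(f"line-to-ground (shield) capacitance:  {c_to_shield:.3e} F")
```

The line-to-shield capacitance is larger than the original line-to-line capacitance, but because the shield is tied to a fixed reference voltage it loads the digit line rather than coupling it to its neighbor, which is the trade the architecture makes.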
Some embodiments include methods of fabricating memory architectures. Example embodiments are described with reference to FIGS. 1-24.

Referring to FIGS. 1-1C, an integrated assembly (construction) 10 includes a base 12. The base 12 comprises a semiconductor material 18; with the semiconductor material 18, for example, comprising, consisting essentially of, or consisting of monocrystalline silicon. The base 12 may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some applications, the base 12 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.

A supporting structure 14 is over the base 12. The supporting structure comprises an insulative material 16 over the semiconductor material 18. A gap is provided between the supporting structure 14 and the base 12 to indicate that there may be intervening materials, components, etc., between the supporting structure 14 and the base 12. In some embodiments, the gap may be omitted.

The insulative material 16 may comprise any suitable composition; and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

A stack 20 is formed over the supporting structure 14.
The stack 20 includes a semiconductor material 22 over a digit-line material 24.

The digit-line material 24 may comprise any suitable electrically conductive composition; such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the digit-line material may be a metal-containing material comprising one or more of tungsten, titanium, titanium nitride, tungsten nitride, etc.

The digit-line material 24 has a bottom surface 23 directly against the insulative material 16, and has a top surface 25 in opposing relation to the bottom surface 23.

The semiconductor material 22 may comprise any suitable semiconductor composition; and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor materials (e.g., gallium phosphide), semiconductor oxides, etc.; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (with groups III and V being old nomenclature, now referred to as groups 13 and 15).
In some embodiments, the semiconductor material 22 may comprise, consist essentially of, or consist of silicon (e.g., monocrystalline silicon, polycrystalline silicon, etc.).

A bottom segment 26 of the semiconductor material 22 is conductively doped, and is ultimately incorporated into source/drain regions of transistors (with example transistors being described below). The bottom segment 26 may be n-type doped or p-type doped, depending on whether the transistors are to be n-channel devices or p-channel devices. In the illustrated embodiment, the bottom segment 26 is directly against the top surface 25 of the digit-line material 24, and accordingly is electrically coupled with the digit-line material 24. An approximate upper boundary of the bottom segment 26 is illustrated with a dashed line 27.

The semiconductor material 22 has a bottom surface 19 directly against the top surface 25 of the digit-line material 24, and has a top surface 21 in opposing relation to the bottom surface 19.

A protective capping material 28 is formed over the stack 20, and directly against the top surface 21 of the semiconductor material 22. The capping material 28 may comprise any suitable composition; and in some embodiments may comprise, consist essentially of, or consist of silicon nitride.

Referring to FIGS. 2-2C, the stack 20 is patterned into rails 30 which extend laterally along a first direction (i.e., the y-axis direction, with the y-axis direction being shown in FIGS. 2, 2B and 2C). The rails are spaced from one another by trenches 32. The trenches 32 may be referred to as first trenches to distinguish them from other trenches formed at subsequent process stages.

The rails 30 extend vertically along the z-axis direction, with the z-axis being shown in FIGS. 2A-2C.
Each of the rails has a top surface corresponding to the top surface 21 of the semiconductor material 22, and has a bottom surface corresponding to the bottom surface 23 of the digit line material 24. Each of the rails has sidewall surfaces 33 extending from the top surface 21 to the bottom surface 23. The individual rails are capped by the capping material 28.

The patterned digit line material 24 within the rails 30 is configured as digit lines 34; with such digit lines extending laterally along the first direction (i.e., the y-axis direction).

The rails 30 may be formed utilizing any suitable processing. For instance, in some embodiments a patterned mask (e.g., a photolithographically-patterned photoresist mask) may be provided to define locations of the rails 30 and the trenches 32; one or more etches may be utilized to transfer a pattern from the patterned mask into the materials beneath the mask and thereby form the rails 30 and the trenches 32; and then the mask may be removed to leave the construction of FIGS. 2 to 2C.

Each of the digit lines 34 has a width W along the cross-section of FIG. 2A. Such width may be referred to as a first width. The cross-section of FIG. 2A is orthogonal to the first direction of the y-axis, and extends along the x-axis. The orthogonal relationship of the x and y axes is shown in FIG. 2.

Each of the digit lines 34 has a height H from the top of the insulating material 16 to the upper surface 25. In some embodiments, such height may be referred to as a first height.

The trenches 32 may be considered to comprise intervening regions 36 between the digit lines 34. In the illustrated embodiment, such intervening regions also have the first width W along the cross-section of FIG. 2A. In the illustrated embodiment, each of the trenches has a uniform width W from the bottom surface 23 of the digit lines 34 to the top surface 21 of the rails 30, and even to the top surface of the capping material 28.
In other embodiments, the width of the intervening regions 36 may differ from the width of the digit lines, while the trenches still have a uniform width from the bottom surfaces of the digit lines to the top surfaces of the rails.

FIGS. 2 and 2A show an edge region 38 along one side of the patterned rails 30. In some embodiments, the rails 30 are patterned into components of a memory array, and accordingly are within a memory array region 40. In such embodiments, the edge region 38 may be utilized to illustrate processing along a peripheral edge of the memory array region 40.

Referring to FIGS. 3 to 3C, an insulating material 42 is formed to cover the top surfaces 21 and the sidewall surfaces 33 of the rails 30. The insulating material 42 narrows the trenches 32.

The insulating material 42 may comprise any suitable composition(s); and in some embodiments may comprise silicon dioxide (e.g., silicon dioxide deposited utilizing tetraethyl orthosilicate (TEOS)), porous silicon oxide, carbon-doped silicon dioxide, etc. The insulating material 42 may be formed utilizing any suitable processing, such as, for example, atomic layer deposition, chemical vapor deposition, etc.

The narrowed trenches 32 have a uniform width W1 from the top surface 21 of the semiconductor material 22 to the bottom surfaces 31 of the trenches 32. In some embodiments, the width W1 may be referred to as a second width to distinguish it from the first width W of the digit lines 34 and the intervening regions 36. In some embodiments, the second width W1 may be less than or equal to about one-half of the first width W, less than or equal to about one-third of the first width W, etc.

Referring to FIGS. 4 to 4C, a conductive shield material 44 is formed within the narrowed trenches 32. The conductive shield material 44 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.),
and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon (e.g., polysilicon), conductively-doped germanium, etc.).

In some embodiments, the conductive shield material 44 may be referred to as a second conductive material to distinguish it from the first conductive material 24 utilized as the digit line material. In some embodiments, the shield material 44 may comprise a same composition as the digit line material 24, or may comprise a different composition relative to the digit line material 24. In some embodiments, the shield material 44 may comprise one or more metals and/or metal-containing materials; and may, for example, comprise one or more of titanium nitride, tantalum nitride, tungsten, tantalum, ruthenium, etc.

In the illustrated embodiment, the conductive shield material 44 fills the narrowed trenches 32. In some embodiments, the shield material 44 may be considered to substantially fill the narrowed trenches 32; with the term "substantially fills" meaning that the shield material 44 fills the trenches to at least the level of the top surfaces 21 of the semiconductor material 22 within the rails 30.

Referring to FIGS. 5 to 5C, optional slit patterning is utilized to punch through the shield material 44 along the edge region 38 and thereby form a recessed region 46. The shield material 44 adjacent the recessed region 46 may be considered to comprise a horizontally-extending edge region 48.

Referring to FIGS. 6 to 6C, additional insulating material 42 is formed over the shield material 44 and within the recessed region 46. The additional insulating material 42 may comprise any suitable composition(s); and in some embodiments may comprise silicon dioxide. The silicon dioxide may be formed utilizing a spin-on-dielectric (SOD) process. In the illustrated embodiment, a planarized upper surface 51 extends across the materials 44 and 42.
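The trench-narrowing relationship described above (a conformal liner of the insulating material 42 on both sidewalls reduces a trench of first width W to a second width W1) can be expressed as W1 = W − 2·t, where t is the liner thickness. The following sketch is illustrative only; the numeric dimensions are hypothetical placeholders, not values from this disclosure.

```python
def narrowed_width(first_width_w, liner_thickness_t):
    """Width remaining in a trench of width W after a conformal
    liner of thickness t is deposited along both sidewalls."""
    w1 = first_width_w - 2 * liner_thickness_t
    if w1 <= 0:
        raise ValueError("liner pinches off the trench")
    return w1

# Hypothetical dimensions (arbitrary units, for illustration only).
W = 40.0   # first width of the digit lines / intervening regions
t = 12.0   # assumed conformal liner thickness
W1 = narrowed_width(W, t)

# The disclosure contemplates W1 <= about W/2 (or <= about W/3);
# a liner of thickness t >= W/4 on each sidewall satisfies the W/2 case.
assert W1 == 16.0
assert W1 <= W / 2
```

A liner of t = W/3 on each sidewall would instead leave W1 = W/3, corresponding to the one-third relationship mentioned above.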
The planarized upper surface may be formed utilizing any suitable processing; such as, for example, chemical-mechanical polishing (CMP).

Referring to FIGS. 7 to 7C, second trenches 52 are formed to extend along a second direction (i.e., the x-axis direction). The second direction of the second trenches 52 crosses the first direction (i.e., the y-axis direction), and accordingly crosses the direction of the first trenches 32 (shown in FIGS. 2 to 2C). In the illustrated embodiment, the second direction of the second trenches 52 is substantially orthogonal to the first direction of the first trenches 32.

The second trenches 52 pattern upper regions 54 of the rails 30, and do not pattern lower regions 56 of the rails (as shown in FIG. 7B); and the digit lines 34 remain within the unpatterned lower regions 56 of the rails. The second trenches 52 also extend into the conductive shield material 44 (as shown in FIG. 7C).

The patterned upper regions 54 comprise vertically-extending pillars 58 of the semiconductor material 22, with such pillars being over the digit lines 34.

The pillars 58 have the sidewall surfaces 33 patterned during formation of the first trenches 32 (such sidewall surfaces 33 are described above with reference to FIGS. 2 to 2C). The sidewall surfaces 33 are diagrammatically indicated with dashed lines in the top view of FIG. 7.

Referring to FIGS. 8 to 8C, word lines 60 are formed within the second trenches 52. The word lines comprise conductive word line material 62. The conductive word line material 62 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.).
In some embodiments, the conductive word line material 62 may be considered to be a third conductive material, so that it may be distinguished from the second conductive material 44 of the shield lines and the first conductive material 24 of the digit lines. The first, second and third conductive materials may all be a same composition as one another; and in some embodiments may all comprise a same metal-containing composition (e.g., a composition comprising one or more of tungsten, titanium, tantalum, ruthenium, tungsten nitride, tantalum nitride, titanium nitride, etc.). Alternatively, at least one of the first, second and third conductive materials may be a different composition relative to at least one other of the first, second and third conductive materials.

In the illustrated embodiment, insulating material 64 is disposed within the second trenches 52, and the word lines 60 are embedded within such insulating material. The insulating material 64 may comprise any suitable composition(s); and in some embodiments may comprise one or both of silicon dioxide and silicon nitride.

Regions of the insulating material 64 between the word lines 60 and the semiconductor material 22 correspond to gate dielectric material (or gate insulating material) 63. The gate dielectric material may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

The word lines 60 are diagrammatically illustrated in the top view of FIG. 8 to assist the reader in understanding the orientation of the word lines relative to other structures of the assembly 10.

In the illustrated embodiment, the word lines 60 are shown corresponding to word lines WL1, WL2 and WL3. Such word lines are examples of word lines which may extend along rows of a memory array. Also, the digit lines 34 are indicated to correspond to digit lines DL1, DL2, DL3 and DL4.
Such digit lines are examples of digit lines which may extend along columns of the memory array.

Referring to FIGS. 9 to 9C, the shield material 44 is recessed (i.e., reduced in height) to form conductive shield lines 66; with the conductive shield lines extending along the first direction of the y-axis. In the illustrated embodiment, the conductive shield lines vertically overlap upper sections (regions) 68 of the digit lines (e.g., DL1), and vertically overlap lower sections (regions) 70 of the semiconductor material 22. In some embodiments, the lower sections 70 may correspond to sections along the unpatterned portions 56 of the rails 30 (shown in FIG. 7B). In some embodiments, the lower regions 70 may comprise an entirety of the doped bottom sections 26 of the semiconductor material 22. In some embodiments, the digit lines (e.g., DL4) may be considered to extend to a first height H above an upper surface of the insulating material 16, and the shield lines 66 may be considered to comprise top surfaces 67 at a second height H1 above the upper surface of the insulating material 16. The second height H1 may be greater than or equal to the first height H. The doped regions 26 may be considered to extend to a third height H2, and the second height H1 may also be greater than or equal to the third height H2. Further, each of the word lines (e.g., WL3) may be considered to have a bottom surface at a fourth height H3 (shown in FIG. 9C), and the second height H1 (FIG. 9A) may be less than the fourth height H3.

Notably, the shield line 66 within the edge region 38 has a different configuration than the shield lines 66 within the intervening regions 36. Specifically, the shield lines 66 within the intervening regions 36 are configured as vertically-extending plates, while the shield line 66 within the edge region 38 is configured as an angle plate.
Specifically, the shield line 66 within the edge region 38 has a vertically-extending region 72, a horizontally-extending region 74, and an elbow region 73 joining the vertically-extending and horizontally-extending regions. In some embodiments, the digit line DL1 may be considered to be an edge digit line along an edge of the memory array, and to define an edge column 76. The edge column 76 has one of the intervening regions 36 along one side, and has the edge region 38 along an opposing other side. The shield line 66 having the angle-plate configuration extends along the edge column 76.

The shield lines 66 within the intervening regions 36 have horizontal widths corresponding to the width W1 described above with reference to FIG. 3A.

Insulating material 42 is formed over the recessed shield lines 66.

The construction 10 is subjected to planarization (e.g., CMP) to form a planarized upper surface 65 extending across the insulating materials 42 and 64, and across the semiconductor material 22.

Top sections 78 of the semiconductor material pillars 58 are doped. The top sections 78 may be doped with the same type of dopant as is utilized within the bottom sections 26. Approximate lower boundaries of the doped sections 78 are illustrated with dashed lines 79.

The doped top sections 78 form upper source/drain regions 80 of transistors 86, and the doped bottom sections 26 form lower source/drain regions 82 of the transistors. Transistor channel regions 84 are within the semiconductor pillars 58, and extend vertically between the lower source/drain regions 82 and the upper source/drain regions 80. The channel regions may be intrinsically doped, or may be lightly doped to achieve a desired threshold voltage. The word lines (e.g., WL3) are adjacent the channel regions 84, and are spaced from the channel regions by the gate dielectric material 63.
The word lines comprise gates of the transistors 86, and may be utilized to gatedly couple the source/drain regions 80 and 82 of individual transistors with one another through the channel regions 84. FIG. 9B shows gates 88 along the word lines 60, with such gates corresponding to regions of the word lines adjacent the channel regions 84. In some embodiments, the gates 88 may be considered to correspond to gate regions of the word lines 60.

In the embodiment of FIGS. 1 to 9, the bottom sections 26 of the semiconductor material 22 are doped before forming the word lines 60 (specifically, they are shown doped at the processing stage of FIG. 1), and the top sections 78 of the semiconductor material 22 are doped after forming the word lines 60 (specifically, they are doped at the processing stage of FIG. 9). In other embodiments, the top and bottom sections 26 and 78 may be doped at other processing stages. For instance, both of the top and bottom sections 26 and 78 may be doped at the processing stage of FIG. 1.

The shield lines 66 may be utilized to reduce, and even prevent, undesired parasitic capacitance between neighboring digit lines (e.g., parasitic capacitance between the digit lines DL1 and DL2). The shield lines 66 are shown coupled with a reference structure 90 (i.e., a reference voltage source, a reference voltage node, etc.), which in turn is coupled with circuitry 92 configured to provide a reference voltage to the reference structure; and in some embodiments configured to maintain the reference structure 90 at the reference voltage. The reference voltage is thereby provided to the shield lines 66. The reference voltage may be any suitable reference voltage; and in some embodiments may be ground, Vcc/2, etc.
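A first-order way to see why a shield held at a reference voltage helps: without the shield, two neighboring digit lines facing each other across the intervening region form a parallel-plate-like coupling capacitance C ≈ ε0·εr·A/d; with a grounded shield interposed, each line instead couples to the shield (and hence to the reference node), and the direct line-to-line term is largely intercepted. The sketch below is a simplified illustration with hypothetical dimensions, not a model of the actual structure of this disclosure.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r):
    """Parallel-plate estimate: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical facing area and spacing for two neighboring digit lines.
area = 1e-6 * 30e-9   # 1 um of line length, 30 nm of facing sidewall height
gap = 40e-9           # line-to-line spacing (the first width W)
eps_r = 3.9           # silicon dioxide assumed between the lines

c_direct = plate_capacitance(area, gap, eps_r)

# With a grounded shield centered within the gap, each line sees roughly
# half the gap to the shield; that capacitance terminates at the
# reference node rather than coupling the two signal lines together.
c_to_shield = plate_capacitance(area, gap / 2, eps_r)

assert c_direct > 0
assert c_to_shield > c_direct  # larger, but to ground, not line-to-line
```

This also illustrates why an electrically floating shield would be less effective: a floating plate would merely place the two line-to-shield capacitances in series, leaving residual line-to-line coupling.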
It may be advantageous to maintain the shield lines at the reference voltage, rather than allowing the shield lines to electrically float, in that this enables the shield lines to better alleviate undesired parasitic capacitance between neighboring digit lines. The reference structure 90 may be a conductive plate (e.g., a metal-containing plate), or any other suitable conductive structure. In some embodiments, the reference structure 90 may be omitted, and the shield lines 66 may simply be coupled with circuitry configured to induce a desired reference voltage along the shield lines.

The intervening regions 36 comprise the first width W from the bottom surfaces 23 of the digit lines 34 to the top surfaces 81 of the upper source/drain regions 80.

Referring to FIG. 10, storage elements 94 are formed to be conductively coupled with the upper source/drain regions 80. The storage elements may be any suitable devices having at least two detectable states; and in some embodiments may be, for example, capacitors, resistive-memory devices, conductive-bridging devices, phase-change-memory (PCM) devices, programmable metallization cells (PMCs), etc. In the illustrated embodiment, the storage elements 94 are capacitors. Each of the capacitors has a node coupled with a reference voltage 96. Such reference voltage may be any suitable reference voltage, and may be the same as the reference voltage utilized at the shield lines 66, or may be different relative to such reference voltage. In some embodiments, the reference voltage 96 may be ground or Vcc/2.

The storage elements 94 and transistors 86 may be incorporated into memory cells 100 of a memory array 98. In some embodiments, the transistors 86 may be referred to as access transistors of the memory cells. FIG. 11 diagrammatically illustrates a portion of the memory array 98, and shows such memory array comprising the digit lines DL1, DL2 and DL3, and the word lines WL1, WL2 and WL3.
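The array organization of FIG. 11, in which each memory cell 100 lies at the crossing of one word line and one digit line, can be modeled as a simple lookup keyed on (word line, digit line) pairs. The sketch is schematic only, and does not reflect actual sense or drive circuitry.

```python
# One cell per word-line/digit-line crossing, keyed by the pair.
word_lines = ["WL1", "WL2", "WL3"]
digit_lines = ["DL1", "DL2", "DL3"]

cells = {(wl, dl): 0 for wl in word_lines for dl in digit_lines}

# Every crossing addresses exactly one cell.
assert len(cells) == len(word_lines) * len(digit_lines)

def write_cell(wl, dl, value):
    """Store a value at the cell selected by the (wl, dl) pair."""
    cells[(wl, dl)] = value

def read_cell(wl, dl):
    """Return the value of the cell selected by the (wl, dl) pair."""
    return cells[(wl, dl)]

write_cell("WL2", "DL3", 1)
assert read_cell("WL2", "DL3") == 1
assert read_cell("WL2", "DL2") == 0  # neighboring cell unaffected
```

In an actual array, asserting a word line gates all access transistors along its row, and the digit lines then select which column is sensed or driven; the dictionary key plays the role of that row/column selection.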
Each of the memory cells 100 within the memory array is uniquely addressed through a combination of one of the word lines and one of the digit lines. The memory array may comprise any suitable number of the memory cells 100; and in some embodiments may comprise hundreds, millions, tens of millions, etc., of the memory cells.

The reference structure 90 of FIG. 10 may be disposed in any suitable location relative to the memory array 98. FIGS. 12 to 12E show example arrangements of the memory array 98 and the reference structure 90. Each of FIGS. 12 to 12E shows the memory array 98 (labeled "Memory Array") illustrated as a square or other suitable polygon. FIGS. 12 to 12B illustrate the conductive shield lines 66 with dashed lines crossing the memory array.

The memory array 98 of FIGS. 12 to 12B may be considered to have a peripheral boundary 102, and to have peripheral edges 101, 103, 105 and 107 along the peripheral boundary. In some embodiments, the edges 101 and 103 may be referred to as first and second peripheral edges of the memory array, and may be considered to be in opposing relation to one another. Each of the shield lines 66 has a first end 109 along the first peripheral edge 101, and a second end 111 along the second peripheral edge 103. The first and second ends 109 and 111 may be considered to be in opposing relation to one another.

FIG. 12 shows an embodiment in which the first ends 109 of the shield lines 66 are electrically coupled with the reference structure 90 (labeled REF in FIG. 12) through interconnects 104.

FIG. 12A shows an embodiment in which a first reference structure 90a (REF 1) is disposed adjacent the first peripheral edge 101 of the memory array 98, and a second reference structure 90b (REF 2) is disposed adjacent the second peripheral edge 103 of the memory array.
In the illustrated embodiment, the first reference structure 90a is laterally offset from the first peripheral edge 101, and the second reference structure 90b is laterally offset from the second peripheral edge 103. Both of the reference structures 90a and 90b are coupled with common circuitry 92 configured to provide a desired reference voltage on the structures 90a and 90b (i.e., the reference voltage nodes 90a and 90b, the reference voltage sources 90a and 90b, etc.). The shield lines 66 are subdivided amongst a first set 66a and a second set 66b. The first set has first ends 109 coupled with the first reference structure 90a through first interconnects 104a, and the second set has second ends 111 coupled with the second reference structure 90b through second interconnects 104b.

The utilization of the two reference structures 90a and 90b in the embodiment of FIG. 12A enables the connections between the reference structures and the shield lines 66 to be spread out better than may be achieved with the single reference structure of FIG. 12. This may simplify formation of the connections between the shield lines and the reference structures, and may enable desired spacing between neighboring interconnects to alleviate parasitic capacitance between such neighboring interconnects.

FIG. 12B shows an embodiment in which a reference structure 90 (REF) peripherally surrounds the memory array 98. This enables the connections to the shield lines to be even more uniformly distributed around the memory array, which may further alleviate parasitic capacitance between neighboring interconnects 104.

The reference structure may be arranged to be along a same plane as the memory array, or may be vertically offset relative to the memory array. For instance, FIGS. 12C and 12D show cross-sections along the line C-C of FIG. 12B, and illustrate example embodiments in which the reference structure 90 is along a same horizontal plane as the memory array 98 (FIG. 12C), or is vertically offset relative to the memory array 98 (FIG.
12D).

FIG. 12E shows yet another embodiment in which the reference structure 90 is vertically offset relative to the memory array 98; but in the embodiment of FIG. 12E the reference structure is not laterally offset relative to the memory array, and instead is directly under the memory array.

The embodiment of FIGS. 1 to 10 reduces the height of the conductive shield material 44 after forming the word lines 60. Specifically, the word lines 60 are formed at the processing stage of FIG. 8, and the shield material is reduced in height at the processing stage of FIG. 9 to thereby form the conductive shield lines 66. In other embodiments, the height of the conductive shield material may be reduced prior to forming the word lines. For instance, FIG. 13 shows the construction 10 at a processing stage alternative to that of FIG. 7A, and shows the shield material 44 reduced in height to form the conductive shield lines 66. The construction 10 of FIG. 13 may be subsequently processed with methodology analogous to that of FIGS. 8 to 10 to form the memory array 98 described with reference to FIG. 10.

The processing of FIGS. 1 to 10 utilizes interconnects extending from the ends of the shield lines 66 to couple the shield lines with one or more reference structures. In other embodiments, a reference structure may be provided beneath the shield lines, and directly against bottom surfaces of the shield lines. FIGS. 14 to 23 illustrate example embodiments in which shield lines are formed to have bottom surfaces directly against a reference structure.

Referring to FIGS. 14 to 14C, an integrated assembly (construction) 10a includes a supporting structure 14a over the base 12. The supporting structure includes the insulating material 16 and a semiconductor material 18, and further includes a reference structure 90 between the materials 16 and 18. The reference structure 90 comprises a conductive material 120.
The conductive material 120 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the reference structure 90 comprises metal-containing material; for example, one or more of titanium, tantalum, titanium nitride, tantalum nitride, ruthenium, tungsten, etc. In the illustrated embodiment, the reference structure may be considered to be configured as a horizontally-extending expanse.

The stack 20 is formed over the supporting structure 14a. The stack 20 includes the semiconductor material 22 over the digit line material 24. The bottom section 26 of the semiconductor material 22 is conductively doped. The capping material 28 is over the stack 20.

The reference structure 90 is shown coupled with the circuitry 92 configured to maintain the reference structure at a desired voltage (e.g., ground, Vcc/2, etc.). Although such coupling of the reference structure 90 with the circuitry 92 is shown at the processing stage of FIGS. 14 to 14C, in other embodiments the coupling may be provided at a later processing stage.

Referring to FIGS. 15 to 15C, the stack 20 is patterned into the rails 30 extending laterally along the first direction (the y-axis direction). The rails are spaced from one another by the first trenches 32. The rails 30 extend vertically along the z-axis direction.
Each of the rails has a top surface corresponding to the top surface 21 of the semiconductor material 22, and has the sidewall surfaces 33.

The patterning of the rails 30 penetrates the insulating material 16 to expose an upper surface 121 of the reference structure 90 along the bottoms of the trenches 32.

The patterned digit line material 24 within the rails 30 is configured as the digit lines 34; which are labeled as digit lines DL1 to DL4.

The rails 30 may be formed utilizing any suitable processing, including, for example, processing analogous to that described above with reference to FIGS. 2 to 2C.

The digit lines 34 have the first width W along the cross-section of FIG. 15A, and extend to the first height H.

The trenches 32 comprise the intervening regions 36 between the digit lines 34, and such intervening regions also have the first width W. In the illustrated embodiment, each of the trenches has a uniform width W from the top surface 121 of the reference structure 90 to the top surface of the capping material 28.

The edge region 38 along one side of the patterned rails 30 is shown. The edge region of the embodiment of FIGS. 15 to 15C is analogous to the edge region described above relative to the embodiment of FIGS. 2 to 2C.

Referring to FIGS. 16 to 16C, the insulating material 42 is formed over the rails 30 and patterned into insulating shells 122. The insulating shells cover the top surfaces 21 of the rails and the sidewall surfaces 33 of the rails. The insulating shells 122 narrow the trenches 32, with the upper surface 121 of the reference structure 90 being exposed along the bottoms of the narrowed trenches.

The narrowed trenches 32 have the uniform second width W1 from the upper surface 121 of the reference structure 90 to the top surfaces 21 of the semiconductor material 22.
In some embodiments, the second width W1 may be less than or equal to about one-half of the first width W, less than or equal to about one-third of the first width W, etc.

Referring to FIGS. 17 to 17C, the conductive shield material 44 is formed within the narrowed trenches 32, and directly against the exposed upper surface 121 of the reference structure 90 at the bottoms of the narrowed trenches.

In the illustrated embodiment, the conductive shield material fills the narrowed trenches 32. In some embodiments, the shield material 44 may be considered to substantially fill the narrowed trenches 32; with the term "substantially fills" meaning that the shield material 44 fills the trenches to at least the level of the top surfaces 21 of the semiconductor material 22 within the rails 30.

Referring to FIGS. 18 to 18C, the shield material 44 is recessed (i.e., reduced in height) to form the conductive shield lines 66; with the conductive shield lines extending along the first direction of the y-axis. In the illustrated embodiment, the conductive shield lines vertically overlap entireties of the heights of the digit lines (e.g., DL1), and vertically overlap the lower sections 70 of the semiconductor material 22. In some embodiments, the digit lines (e.g., DL4) may be considered to extend to the first height H above the reference structure 90, and the shield lines 66 may be considered to comprise the top surfaces 67 at the second height H1 above the reference structure. The second height H1 may be greater than or equal to the first height H. The doped regions 26 may be considered to extend to the third height H2, and the second height H1 may also be greater than or equal to the third height H2.

The shield lines 66 within the intervening regions 36 have horizontal widths corresponding to the width W1 described above with reference to FIG. 16A.

Referring to FIGS. 19 to 19C, additional insulating material 50 is formed over the conductive shield lines 66.
The additional insulating material 50 may comprise any suitable composition(s); and in some embodiments may comprise silicon dioxide. The silicon dioxide may be formed utilizing a spin-on-dielectric (SOD) process. The additional insulating material 50 may comprise a same composition as the insulating material 42, or may be a different composition relative to the insulating material 42.

Referring to FIGS. 20 to 20C, the second trenches 52 are formed to extend along the second direction (i.e., the x-axis direction). The second trenches 52 pattern the upper regions 54 of the rails 30, and do not pattern the lower regions 56 of the rails (as shown in FIG. 20B); and the digit lines (e.g., DL2) remain within the unpatterned lower regions 56 of the rails.

The patterned upper regions 54 comprise the vertically-extending pillars 58 of the semiconductor material 22, with such pillars being over the digit lines 34.

Referring to FIGS. 21 to 21C, the word lines 60 are formed within the second trenches 52. The word lines comprise the conductive word line material 62.

The insulating material 64 is also disposed within the second trenches 52, and the word lines 60 are embedded within such insulating material. The insulating material 64 may comprise any suitable composition(s); and in some embodiments may comprise one or both of silicon dioxide and silicon nitride.

The gate dielectric material (or gate insulating material) 63 is provided between the word lines and the semiconductor pillars 58.

The word lines 60 are shown corresponding to the word lines WL1, WL2 and WL3.

The construction 10a is subjected to planarization (e.g., CMP) to form a planarized upper surface 65 extending across the insulating materials 42, 50 and 64, and across the semiconductor material 22.

Referring to FIGS. 22 to 22C, the top sections 78 of the semiconductor material pillars 58 are doped. The top sections 78 may be doped with the same type of dopant as is utilized within the bottom sections 26.
The doped top sections 78 form the upper source/drain regions 80 of the transistors 86, and the doped bottom sections 26 form the lower source/drain regions 82 of the transistors. The transistor channel regions 84 are within the semiconductor pillars 58, and extend vertically between the lower source/drain regions 82 and the upper source/drain regions 80. The word lines (e.g., WL3) are adjacent the channel regions, and are spaced from the channel regions by the gate dielectric material 63. The word lines comprise the gates of the transistors 86, and may be utilized to gatedly couple the source/drain regions 80 and 82 of individual transistors with one another through the channel regions 84. FIG. 22B shows the gates 88 along the word lines 60, with such gates corresponding to regions of the word lines adjacent the channel regions 84. In some embodiments, the gates 88 may be considered to correspond to the gate regions of the word lines 60.

The shield lines 66 may be utilized to reduce, and even prevent, undesired parasitic capacitance between neighboring digit lines (e.g., parasitic capacitance between the digit lines DL1 and DL2), in a manner analogous to that described above with reference to FIG. 9.

In the embodiment of FIGS. 14 to 22, the bottom sections 26 of the semiconductor material 22 are doped before forming the word lines 60 (specifically, they are shown doped at the processing stage of FIG. 14), and the top sections 78 of the semiconductor material 22 are doped after forming the word lines 60 (specifically, they are doped at the processing stage of FIG. 22). In other embodiments, the top and bottom sections 26 and 78 may be doped at other processing stages. For instance, both of the top and bottom sections 26 and 78 may be doped within the semiconductor material 22 at the processing stage of FIG. 14.

In the embodiment of FIGS. 14 to 22, the height of the conductive shield material 44 is reduced prior to forming the word lines 60.
In other embodiments, the height of the conductive shield material may be reduced after forming the word lines 60, analogously to the embodiments described above with reference to FIGS. 1 to 10.

Referring to FIG. 23, the construction 10a is shown at a processing stage subsequent to that of FIG. 22B. The storage elements 94 are formed to be conductively coupled with the upper source/drain regions 80. In the illustrated embodiment, the storage elements 94 are capacitors. Each of the capacitors has a node coupled with the reference voltage 96.

The storage elements 94 and transistors 86 may be incorporated into the memory cells 100 of the memory array 98. In some embodiments, the transistors 86 may be referred to as the access transistors of the memory cells. The memory array 98 may be analogous to the memory array described above with reference to FIG. 11.

The reference voltage source 92 (i.e., the reference voltage circuitry) may be provided in any suitable location relative to the reference structure 90; and in some embodiments may be beneath the reference structure, above the reference structure, laterally outward of the reference structure, etc. In some embodiments, one or more dummy word lines may be utilized to provide the reference voltage to the reference structure 90.

In some embodiments, the memory array 98 (e.g., the memory array 98 of FIG. 10, or the memory array 98 of FIG. 23) may be within a memory level of an arrangement of levels stacked one atop another. For instance, FIG. 24 shows a portion of an integrated assembly 10b comprising levels 168, 170, 172 and 174 (also labeled as levels 1 to 4) arranged in a vertically-stacked arrangement. The vertically-stacked arrangement may extend upwardly to include additional levels. The levels 1 to 4 may be considered to be examples of levels stacked one atop another. The levels may be within different semiconductor dies, or at least two of the levels may be within the same semiconductor die.
The bottom level (level 1) may include control circuitry and/or sensing circuitry (and may include, for example, word-line drivers, sense amplifiers, the reference-voltage control circuitry 92, etc.; and in some embodiments may include CMOS circuitry). The upper levels (levels 2-4) may include memory arrays, such as, for example, the memory array 98. The memory arrays within the various levels may be the same as one another (for example, all may be DRAM arrays), or may differ from one another (for example, some may be DRAM arrays while others are NAND arrays). Also, one or more of the upper levels may include control circuitry or other logic circuitry. FIG. 24 diagrammatically shows an upper level (level 2) comprising a memory array and a lower level (level 1) comprising control circuitry, with the control circuitry of the lower level coupled with the circuitry of the upper level through conductive interconnects 175.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate), and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

Unless specified otherwise, the various materials, substances, compositions, etc.
described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms "dielectric" and "insulating" may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term "dielectric" in some instances and the term "insulating" (or "electrically insulating") in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The terms "electrically connected" and "electrically coupled" may both be utilized in this disclosure. The terms are considered synonymous. The utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow.

The particular orientations of the various embodiments in the figures are for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientations of the figures or are rotated relative to such orientations.

Unless indicated otherwise, the cross-sectional views of the accompanying illustrations show features only within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, in order to simplify the drawings.

When a structure is referred to above as being "on", "adjacent" or "against" another structure, it can be directly on the other structure, or intervening structures may also be present.
In contrast, when a structure is referred to as being "directly on", "directly adjacent" or "directly against" another structure, there are no intervening structures present. The terms "directly under", "directly over", etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment.

A structure (e.g., layer, material, etc.) may be referred to as "extending vertically" to indicate that the structure generally extends upwardly from an underlying base (e.g., substrate). The vertically-extending structure may extend substantially orthogonally relative to an upper surface of the base, or not.

Some embodiments include an integrated assembly having digit lines which extend along a first direction. The digit lines are spaced from one another by intervening regions. Each of the digit lines has a first width along a cross-section orthogonal to the first direction. Each of the intervening regions also has the first width along the cross-section. Each of the digit lines has a top surface at a first height. Vertically-extending pillars are over the digit lines. Each of the pillars includes a transistor channel region extending vertically between an upper source/drain region and a lower source/drain region. The lower source/drain regions are coupled with the digit lines. Each of the pillars has the first width along the cross-section. The intervening regions extend upwardly to between the pillars, and have the first width from top surfaces of the upper source/drain regions to bottom surfaces of the digit lines. Storage elements are coupled with the upper source/drain regions. Word lines extend along a second direction which crosses the first direction. The word lines include gate regions adjacent the channel regions. Shield lines are within the intervening regions and extend along the first direction.
Each of the shield lines has a top surface at a second height which is greater than or equal to the first height.

Some embodiments include a method of forming an integrated assembly. A support structure is formed to include an insulative material over a reference structure. The reference structure comprises metal and is configured as a horizontally-extending expanse. A stack is formed over the support structure. The stack includes semiconductor material over digit-line material. The stack is patterned into rails which extend along a first direction. The rails are spaced from one another by first trenches. The patterning extends through the insulative material to leave upper surfaces of the reference structure exposed along bottoms of the first trenches. Each of the rails has a top surface, and has sidewall surfaces extending downwardly from the top surface. The patterning of the stack into the rails forms the digit-line material into digit lines which extend along the first direction. Insulative shells are formed to extend over the top surfaces and sidewall surfaces of the rails. The insulative shells narrow the first trenches. The upper surfaces of the reference structure are exposed along bottoms of the narrowed first trenches. Conductive shield lines are formed within the narrowed first trenches and directly against the exposed upper surfaces of the reference structure along the bottoms of the narrowed first trenches. Second trenches are formed to extend along a second direction. The second direction crosses the first direction. The second trenches pattern upper regions of the rails into pillars, and do not pattern lower regions of the rails. The lower regions of the rails include the digit lines. Word lines are formed within the second trenches. Bottom sections of the semiconductor material are doped to form lower source/drain regions. The lower source/drain regions are coupled with the digit lines. Top sections of the semiconductor material are doped to form upper source/drain regions.
Channel regions are vertically between the lower and upper source/drain regions. The word lines are adjacent the channel regions. Storage elements are formed to be coupled with the upper source/drain regions.

Some embodiments include a method of forming an integrated assembly. A stack is formed to include semiconductor material over digit-line material. The stack is patterned into rails which extend along a first direction. The rails are spaced from one another by first trenches. The rails have top surfaces, and have sidewall surfaces extending downwardly from the top surfaces. The patterning of the stack into the rails forms the digit-line material into digit lines which extend along the first direction. Insulative material is formed to extend over the top surfaces and sidewall surfaces of the rails. The insulative material narrows the first trenches. Conductive shield lines are formed within the narrowed first trenches. Second trenches are formed to extend along a second direction. The second direction crosses the first direction. The second trenches pattern upper regions of the rails into pillars, and do not pattern lower regions of the rails. The lower regions of the rails include the digit lines. Word lines are formed within the second trenches. Bottom sections of the semiconductor material are doped to form lower source/drain regions. The lower source/drain regions are coupled with the digit lines. Top sections of the semiconductor material are doped to form upper source/drain regions. Channel regions are vertically between the lower and upper source/drain regions. The word lines are adjacent the channel regions. Storage elements are formed to be coupled with the upper source/drain regions. The storage elements are comprised by memory cells of a memory array. The digit lines extend along columns of the memory array, and the word lines extend along rows of the memory array.
Each of the shield lines has a first end along a first peripheral edge of the memory array, and has a second end along a second peripheral edge of the memory array which is in opposing relation to the first peripheral edge. At least one of the first and second ends of each of the conductive shield lines is electrically coupled with a reference voltage source.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
An apparatus to verify firmware in a computing system comprises: a non-volatile memory including firmware memory to store agent firmware associated with each of a plurality of interconnect protocol (IP) agents, and version memory to store security version numbers (SVNs) included in the agent firmware; a security controller comprising verifier logic to verify an integrity of the version memory by applying a hash algorithm to contents of the version memory to generate an SVN hash; and a trusted platform module (TPM) to store the SVN hash.
1. An apparatus to verify firmware in a computing system, comprising: non-volatile memory, including: firmware memory to store agent firmware associated with each of a plurality of agents; and version memory to store security version numbers (SVNs) included in the agent firmware; a security controller comprising verifier logic to verify an integrity of the version memory by applying a hash algorithm to contents of the version memory to generate an SVN hash; and a trusted platform module (TPM) to store the SVN hash.

2. The apparatus of claim 1, wherein, upon receiving agent firmware, the verifier logic verifies the integrity of the version memory by applying the hash algorithm to the contents of the version memory to generate a check hash and comparing the check hash with the SVN hash stored in the TPM.

3. The apparatus of claim 2, wherein the verifier logic verifies an integrity of the agent firmware upon a determination that the check hash matches the SVN hash.

4. The apparatus of claim 3, wherein the verifier logic verifies the integrity of the agent firmware by determining whether an SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

5. The apparatus of claim 4, wherein the received agent firmware is stored in the firmware memory upon a determination that the SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

6. The apparatus of claim 5, wherein the security controller further comprises commit logic to store the SVN included in the received agent firmware in the version memory upon storing the received agent firmware in the firmware memory.

7. The apparatus of claim 6, wherein the commit logic stores the SVN in the version memory upon a determination that functionality of the received agent firmware has been confirmed.

8. The apparatus of claim 1, wherein the verifier logic is further to perform an SVN rollback to store refurbished agent firmware.

9. A method to verify firmware in a computing system, comprising: verifying an integrity of version memory included in non-volatile memory storing security version numbers (SVNs) associated with agent firmware, including applying a hash algorithm to contents of the version memory to generate an SVN hash; and storing the SVN hash in a trusted platform module (TPM).

10. The method of claim 9, further comprising: receiving agent firmware; verifying the integrity of the version memory by applying the hash algorithm to the contents of the version memory to generate a check hash; and comparing the check hash with the SVN hash stored in the TPM.

11. The method of claim 10, further comprising verifying an integrity of the agent firmware upon a determination that the check hash matches the SVN hash.

12. The method of claim 11, wherein verifying the integrity of the agent firmware comprises determining whether an SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

13. The method of claim 12, further comprising storing the received agent firmware in the firmware memory upon a determination that the SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

14. The method of claim 13, further comprising storing the SVN included in the received agent firmware in the version memory upon storing the received agent firmware in the firmware memory.

15. A computer-readable medium having stored thereon at least one instruction which, when executed by one or more processors, causes the processors to perform the method of any one of claims 9-14.

16. A system comprising a mechanism to implement or perform the method of any one of claims 9-14.

17. An apparatus comprising means for performing the method of any one of claims 9-14.

18. A computing device arranged to implement or perform the method of any one of claims 9-14.

19. A communications device arranged to implement or perform the method of any one of claims 9-14.
Firmware Verification Mechanism

Technical Field

A system on chip (SOC) is an integrated circuit that integrates all the components of a computer or other electronic system. These components include a central processing unit (CPU), memory, input/output (IO) ports and secondary storage, all included on a single substrate or microchip. Additionally, an SOC enables the integration of third-party components via standardized on-chip interconnect protocols. However, the addition of such components may result in security vulnerabilities.

Background

So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments, and are therefore not to be considered limiting of scope, for the disclosure may admit to other equally effective embodiments.

Brief Description of the Drawings

FIG. 1 illustrates one embodiment of a computing device.

FIGS. 2A-2C illustrate embodiments of a platform.

FIG. 3 illustrates another embodiment of a platform.

FIGS. 4A and 4B are flow diagrams illustrating one embodiment of a verifier process.

FIG. 5 is a flow diagram illustrating one embodiment of a commit process.

FIG. 6 is a flow diagram illustrating one embodiment of a rollback process.

FIG. 7 is a schematic diagram of an illustrative electronic computing device.

Detailed Description

In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the embodiments.

In an embodiment, a mechanism is provided to verify firmware in an SOC platform.
In such an embodiment, a security controller verifies the integrity of version memory by applying a hash algorithm to the contents of the version memory to generate a security version number (SVN) hash. Subsequently, the security controller stores the SVN hash in a trusted platform module (TPM). In a further embodiment, the security controller uses the SVN hash stored in the TPM to verify the integrity of the version memory whenever new agent firmware is detected at the SOC platform. Thus, new agent firmware is not downloaded to the platform unless the integrity of the version memory has been verified.

References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.

In the following description and claims, the term "coupled" along with its derivatives may be used. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.

As used in the claims, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

FIG. 1 illustrates one embodiment of a computing device 100.
According to one embodiment, the computing device 100 comprises a computer platform hosting an integrated circuit ("IC"), such as a system on a chip ("SoC" or "SOC"), that integrates various hardware and/or software components of the computing device 100 on a single chip. As illustrated, in one embodiment, the computing device 100 may include any number and type of hardware and/or software components, such as (without limitation) a graphics processing unit 114 ("GPU" or simply "graphics processor"), a graphics driver 116 (also referred to as "GPU driver", "graphics driver logic", "driver logic", user-mode driver (UMD), user-mode driver framework (UMDF), or simply "driver"), a central processing unit 112 ("CPU" or simply "application processor"), memory 108, network devices, drivers, etc., as well as input/output (I/O) sources 104, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, ports, connectors, etc. The computing device 100 may include an operating system (OS) 106 serving as an interface between hardware and/or physical resources of the computing device 100 and a user.

It is to be appreciated that a lesser or more equipped system than the example described above may be preferred for certain implementations. Therefore, the configuration of the computing device 100 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances.

Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA).
The terms "logic", "module", "component", "engine" and "mechanism" may include, by way of example, software or hardware and/or a combination thereof, such as firmware.

Embodiments may be implemented using one or more memory chips, controllers, CPUs (central processing units), microchips or integrated circuits interconnected using a motherboard, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware.

FIGS. 2A-2C illustrate embodiments of a platform 200 including an SOC 210 similar to the computing device 100 discussed above. As shown in FIG. 2A, the platform 200 includes the SOC 210 communicatively coupled to one or more software components 250 via the CPU 112. Additionally, the SOC 210 includes other computing device components (e.g., memory 108) coupled via a system fabric 205. In one embodiment, the system fabric 205 comprises an integrated on-chip system fabric (IOSF) to provide a standardized on-chip interconnect protocol for coupling interconnect protocol (IP) agents 230 (e.g., IP blocks 230A and 230B) within the SOC 210. In such an embodiment, the interconnect protocol provides a standardized interface to enable third parties to design logic, such as IP agents, to be incorporated into the SOC 210.

According to an embodiment, the IP agents 230 may include general-purpose processors (e.g., in-order or out-of-order cores), fixed-function units, graphics processors, I/O controllers, display controllers, etc. In such an embodiment, each IP agent 230 includes a hardware interface 235 to provide standardization to enable the IP agent 230 to communicate with SOC 210 components.
For example, in an embodiment in which IP agent 230 is a third-party visual processing unit (VPU), the interface 235 provides standardization to enable the VPU to access the memory 108 via the fabric 205.

The SOC 210 also includes a security controller 240 that operates as a security engine to perform various security operations (e.g., security processing, cryptographic functions, etc.) for the SOC 210. In one embodiment, the security controller 240 comprises an IP agent 230 that is implemented to perform the security operations. Additionally, the SOC 210 includes a non-volatile memory 250. The non-volatile memory 250 may be implemented as a Peripheral Component Interconnect Express (PCIe) storage drive, such as a solid state drive (SSD) or a Non-Volatile Memory Express (NVMe) drive.

FIG. 2B illustrates another embodiment of the platform 200, which includes a component 260 coupled to the SOC 210 via IP agent 230A. In one embodiment, IP agent 230A operates as a bridge, such as a PCIe root port, that connects the component 260 to the SOC 210. In this embodiment, the component 260 may be implemented as a PCIe device (e.g., a switch or an endpoint) that includes a hardware interface 235 to enable the component 260 to communicate with SOC 210 components. FIG. 2C illustrates yet another embodiment of the platform 200, which includes a computing device 270 coupled to the platform 200 via a cloud network 210. In this embodiment, the computing device 270 comprises a cloud agent that is provided access to the SOC 210 via software 250.

An IP agent, such as agent 230, typically includes firmware that stores the software implemented to perform the particular functions associated with the agent. Such software needs to be secured from tampering by malicious agents. Secured software typically includes a security version number (SVN) to prevent malicious access to old, vulnerable software. Specifically, the SVN is used to reflect the security level of the agent software.
For maximum security, it is also general practice to save the SVN to non-volatile memory that is replay- and integrity-protected to prevent corruption. A common way of saving an SVN is via one-time programmable (OTP) memory, or fuses. However, for an SOC having a large number of agents, each with its own SVN, the cost of OTP memory is high. Moreover, using OTP or fuses forfeits the flexibility of refurbishing a platform, since neither OTP nor fuses can be rewritten or erased.

According to one embodiment, a trusted platform module (TPM) is implemented to facilitate SVN verification. FIG. 3 illustrates another embodiment of the platform 200, which includes a TPM 300. The TPM 300 is a dedicated microcontroller that secures hardware via integrated cryptographic keys. In one embodiment, the security controller 240 implements the TPM 300 to prevent SVN rollback. In such an embodiment, the security controller 240 includes a verifier agent (verifier 340) to verify the integrity of a version memory 350 within the non-volatile memory 250, as well as to verify agent SVNs. As shown in FIG. 3, the non-volatile memory 250 includes a firmware memory 255 to store firmware associated with the IP agents 230. Additionally, the non-volatile memory 250 includes the version memory 350 to store the firmware SVNs.

According to one embodiment, the verifier 340 receives the SVN associated with the firmware (e.g., software) for each IP agent 230 and stores the SVNs in the version memory 350. Additionally, the verifier 340 verifies the integrity of the SVN version memory 350 via a hash algorithm (e.g., Secure Hash Algorithm 2 (SHA-2)) performed on the contents of the version memory 350 to generate an SVN hash. Subsequently, the verifier 340 stores the SVN hash in non-volatile RAM (NVRAM) 310 within the TPM 300, and locks the NVRAM 310 such that only the verifier 340 has write access to the NVRAM 310.
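The hash-and-lock scheme just described can be sketched roughly as follows. The serialization of the version memory (sorted agent identifiers with fixed-width SVN fields), the `TpmNvram` stand-in class, and the owner-string locking are all illustrative assumptions; the text specifies only that a SHA-2 hash of the version-memory contents is sealed in TPM NVRAM with write access locked to the verifier.

```python
import hashlib


def compute_svn_hash(version_memory: dict) -> bytes:
    """Hash the version-memory contents (agent id -> SVN) with SHA-256.

    The serialization order (sorted agent ids, 4-byte big-endian SVNs)
    is an assumption; any deterministic encoding would serve.
    """
    h = hashlib.sha256()
    for agent_id in sorted(version_memory):
        h.update(agent_id.encode())
        h.update(version_memory[agent_id].to_bytes(4, "big"))
    return h.digest()


class TpmNvram:
    """Minimal stand-in for the TPM NVRAM index holding the SVN hash.

    Models the lock so that, once locked, only the verifier may write,
    mirroring the write-access restriction described in the text.
    """

    def __init__(self):
        self._svn_hash = None
        self._locked_to = None

    def write(self, svn_hash: bytes, writer: str):
        if self._locked_to is not None and writer != self._locked_to:
            raise PermissionError("NVRAM index is locked to the verifier")
        self._svn_hash = svn_hash

    def lock(self, owner: str):
        self._locked_to = owner

    def read(self) -> bytes:
        return self._svn_hash
```

Because the digest depends only on the (sorted) contents, any later recomputation over an unmodified version memory reproduces the sealed value, while any tampering changes it.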
In a further embodiment, the TPM 300 is used to verify the integrity of the version memory 350 prior to storing new (or updated) firmware in the firmware memory 255 and writing the associated SVN (or current SVN) to the version memory 350. In yet a further embodiment, multiple SVNs associated with each IP agent 230 may be stored and committed at the same time (given protected persistent hardware, which is limited).

Prior to writing a firmware SVN to the version memory 350, the verifier 340 verifies the integrity of the firmware against a manifest packaged with the firmware. The manifest is signed, with the key rooted to the platform 200. Once the integrity of the firmware has been verified, the verifier 340 records the current SVN (e.g., the SVN in the manifest) to the version memory 350. According to one embodiment, the firmware is approved only upon a determination that the current SVN number in the manifest is greater than or equal to the SVN number currently stored in the version memory 350. Once approved, the firmware may be stored in the firmware memory 255.

FIGS. 4A and 4B are flow diagrams illustrating one embodiment of a verification process performed upon detection of new firmware for an IP agent 230 at the platform 200. At processing block 405 (FIG. 4A), a platform SVN hash (H) of the version memory 350 is generated (e.g., via a subsequent application of the hash algorithm to the contents of the version memory 350). At processing block 410, the platform SVN hash is read from the NVRAM 310 in the TPM 300. At decision block 415, a determination is made as to whether the hash H is equal to the SVN hash. If not, an error message is generated at processing block 420. As a result, the new firmware is prevented from being downloaded into the firmware memory 255. Upon a determination at decision block 415 that the platform SVN hash matches (e.g., is equal to) the SVN hash, the current SVN of the firmware is recorded at processing block 425 (FIG.
4B) (e.g., after the integrity of the firmware has been verified).

At processing block 430, the platform SVN (e.g., the previous SVN) stored in the version memory 350 is retrieved. At decision block 435, a determination is made as to whether the current SVN is greater than or equal to the platform SVN. If not, control is returned to processing block 420, where an error message is generated. Upon a determination at decision block 435 that the current SVN is greater than or equal to the platform SVN, the firmware is loaded (or stored) into the non-volatile memory 250 at processing block 440. At processing block 450, the IP agent associated with the firmware is notified.

According to one embodiment, the security controller 240 also includes commit logic to perform a commit function that commits the SVN of the currently-loaded firmware (e.g., the current SVN) for storage in the version memory 350. In such an embodiment, the commit function is performed upon a determination that the functionality of the firmware has been confirmed. As a result, the security controller 240 includes commit logic 342 to perform the commit function as a separate process. In one embodiment, the commit logic 342 performs the commit function by checking a commit bit in the interface 235 of an IP agent 230. In an alternative embodiment, the commit logic 342 may perform the commit function by checking a breadcrumb set by untrusted software prior to committing the firmware to the non-volatile memory 250.

In one embodiment, the breadcrumb is a token passed between the untrusted operating system and the trusted BIOS. In such an embodiment, the breadcrumb indicates to the BIOS the identifier of the IP agent whose version number needs to be updated. However, the identifier does not include the actual version number to be updated.
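Stepping back, the FIG. 4A/4B verification flow described above can be sketched as a single routine. The data shapes (a dict for the version memory, a dict for the new firmware with its manifest SVN) and the `repr`-based serialization are illustrative assumptions; the signature check on the manifest is assumed to have already passed, and the SVN commit itself is a separate process (FIG. 5).

```python
import hashlib


def verify_and_load(new_fw, version_memory, tpm_hash, firmware_memory):
    """Sketch of the FIG. 4A/4B flow (names and shapes are illustrative).

    new_fw: dict with 'agent', 'svn' (from the signed manifest), 'image'.
    Returns True if the firmware is accepted into firmware_memory.
    """
    # Blocks 405/410/415: recompute the version-memory hash and compare
    # against the copy previously sealed in TPM NVRAM.
    h = hashlib.sha256(repr(sorted(version_memory.items())).encode()).digest()
    if h != tpm_hash:
        return False  # block 420: error; download is refused

    # Blocks 425/430/435: anti-rollback check -- per FIG. 4B the new SVN
    # must be greater than or equal to the stored platform SVN.
    platform_svn = version_memory.get(new_fw["agent"], 0)
    if new_fw["svn"] < platform_svn:
        return False  # rollback attempt, refuse

    # Blocks 440/450: store the image; the associated agent would then
    # be notified (not modeled here). The SVN commit happens separately.
    firmware_memory[new_fw["agent"]] = new_fw["image"]
    return True
```

Note that the version memory is not updated here; per the text, recording the new SVN is deferred to the separate commit process so that a non-functional image can still be replaced without burning the new SVN.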
In a further embodiment, the breadcrumb may be implemented via a shared register between the OS and the BIOS (e.g., one that survives reset). Alternatively, the breadcrumb may be stored in a persistent storage device accessible by both the BIOS and the OS.

FIG. 5 is a flow diagram illustrating one embodiment of a commit process. At processing block 510, the current SVN is read. At processing block 520, the platform SVN is retrieved from the version memory 350. At decision block 530, a determination is made as to whether the current SVN is unequal to the platform SVN. According to one embodiment, during implementation, the current SVN is read from the SVN storage location into a temporary holding buffer, with the previous SVN then being retrieved into the SVN storage location. As a result, the SVN value stored in the temporary holding buffer is compared with the SVN value stored in the SVN location.

Upon a determination at decision block 530 that the current SVN is equal to the platform SVN, the process is completed. Otherwise, the version memory 350 is updated at processing block 540 by replacing the platform SVN with the current SVN. According to one embodiment, a backup of the version memory 350 is created prior to performing the process. In such an embodiment, the contents of the version memory 350 are stored to a backup version prior to updating the version memory 350 with the current SVN. At processing block 550, an updated platform SVN hash of the version memory 350 is generated. At processing block 560, the updated platform SVN hash is stored to the NVRAM 310 in the TPM 300. In a further embodiment, the backup version memory 350 is deleted once the updated SVN hash has been stored.

According to one embodiment, the verifier 340 may also perform an SVN rollback, which is triggered in a scenario in which an IP agent 230 is being refurbished.
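The FIG. 5 commit sequence above might look like the following sketch. The dict-based TPM stand-in, the `"svn_hash"` key, and the try/except backup-restore are assumptions layered on the block 510-560 description.

```python
import hashlib


def commit_svn(agent, current_svn, version_memory, tpm):
    """Sketch of the FIG. 5 commit flow (data structures are illustrative).

    tpm is any dict-like store; 'svn_hash' is an assumed key name.
    """
    platform_svn = version_memory.get(agent)        # block 520
    if current_svn == platform_svn:                 # decision block 530
        return                                      # nothing to commit
    backup = dict(version_memory)                   # backup before update
    try:
        version_memory[agent] = current_svn         # block 540
        digest = hashlib.sha256(
            repr(sorted(version_memory.items())).encode()).digest()
        tpm["svn_hash"] = digest                    # blocks 550/560
    except Exception:
        version_memory.clear()
        version_memory.update(backup)               # restore on failure
        raise
    # the backup is discarded once the updated hash has been sealed
```

The backup step models the text's requirement that the version-memory contents be preserved until the updated SVN hash is safely stored.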
In such embodiments, a breadcrumb token is received from the original equipment manufacturer (OEM) that provides the IP agent 230 into the basic input/output system (BIOS) firmware volume (e.g., at the non-volatile memory 250). Subsequently, the token is loaded into the memory 108 and sent to the verifier 340, which starts the rollback process after validating the token.

Fig. 6 is a flowchart showing one embodiment of a rollback process. At processing block 610, a token is received. At processing block 620, the token is verified. At processing block 630, the version storage 350 is updated with one or more allowed IP agent software versions. At processing block 640, the contents of the version storage 350 are updated. At processing block 650, an updated SVN hash is generated based on the updated contents. At processing block 660, the updated SVN hash is recorded to the TPM. In other embodiments, however, the SVN hash may be recorded in a replay-protected persistent storage device.

The mechanism described above implements secure version numbering for firmware on an SOC platform, while also providing the ability to refurbish the platform when needed. In addition, the mechanism enables firmware to be added to the platform (e.g., after shipment).

Figure 7 is a schematic diagram of an illustrative electronic computing device that implements enhanced protection against attacks in accordance with some embodiments. In some embodiments, the computing device 900 includes one or more processors 910 having one or more processor cores 918 and a TEE 964, the TEE including a machine learning service enclave (MLSE) 980. In some embodiments, the computing device 900 includes a hardware accelerator 968 that includes a cryptographic engine 982 and a machine learning model 984.
In some embodiments, as described with reference to Figures 1-6, the computing device provides enhanced protection against ML adversarial attacks.

The computing device 900 may additionally include one or more of the following: a cache 962, a graphics processing unit (GPU) 912 (which may be a hardware accelerator in some embodiments), a wireless input/output (I/O) interface 920, a wired I/O interface 930, a memory circuit 940, a power management circuit 950, a non-transitory storage device 960, and a network interface 970 for connecting to a network 972. The following discussion provides a brief, general overview of the components that form the illustrative computing device 900. Non-limiting examples of the computing device 900 include a desktop computing device, blade server device, workstation, or similar device or system.

In an embodiment, the processor core 918 can execute a machine-readable instruction set 914, read data and/or the instruction set 914 from one or more storage devices 960, and write data to the one or more storage devices 960.
Those skilled in the relevant art will appreciate that the illustrated embodiments, as well as other embodiments, can be practiced with other processor-based device configurations, including portable electronic or handheld electronic devices, such as smartphones, portable computers, wearable computers, consumer electronics, personal computers ("PCs"), network PCs, minicomputers, server blades, mainframe computers, and the like.

The processor core 918 may include any number of hard-wired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements disposed partially or wholly in a PC, server, or other computing system capable of executing processor-readable instructions.

The computing device 900 includes a bus or similar communication link 916 that communicatively couples, and facilitates the exchange of information and/or data between, various system components, including the processor core 918, the cache 962, the graphics processor circuit 912, one or more wireless I/O interfaces 920, one or more wired I/O interfaces 930, one or more storage devices 960, and/or one or more network interfaces 970.
The computing device 900 may be referred to herein in the singular, but this is not intended to limit the embodiments to a single computing device 900; in some embodiments, there may be more than one computing device 900 that incorporates, includes, or contains any number of communicatively coupled, collocated, or remotely networked circuits or devices.

The processor core 918 may include any currently available or future-developed device capable of executing machine-readable instruction sets of any number, type, or combination. The processor core 918 may include (or be coupled to), but is not limited to, any current or future single-core or multi-core processor or microprocessor, such as one or more systems on a chip (SOCs), central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), programmable logic units, field programmable gate arrays (FPGAs), and the like. Unless described otherwise, the construction and operation of the various blocks shown in FIG. 7 are of conventional design. Consequently, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art. The bus 916 that interconnects at least some of the components of the computing device 900 may employ any currently available or future-developed serial or parallel bus structure or architecture.

The system memory 940 may include read-only memory ("ROM") 942 and random access memory ("RAM") 946. A portion of the ROM 942 may be used to store or otherwise retain a basic input/output system ("BIOS") 944. The BIOS 944 provides basic functionality to the computing device 900, for example, by causing the processor core 918 to load and/or execute one or more machine-readable instruction sets 914.
In an embodiment, at least some of the one or more machine-readable instruction sets 914 cause at least a portion of the processor core 918 to provide, create, produce, transform, and/or function as a dedicated, specific, and particular machine, for example, a word processor, a digital image acquisition machine, a media player, a gaming system, a communications device, a smartphone, or the like.

The computing device 900 may include at least one wireless input/output (I/O) interface 920. The at least one wireless I/O interface 920 may be communicatively coupled to one or more physical output devices 922 (haptic devices, video displays, audio output devices, hard-copy output devices, etc.). The at least one wireless I/O interface 920 may be communicatively coupled to one or more physical input devices 924 (pointing devices, touch screens, keyboards, haptic devices, etc.). The at least one wireless I/O interface 920 may include any currently available or future-developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to, Near Field Communication (NFC) and the like.

The computing device 900 may include one or more wired input/output (I/O) interfaces 930. The at least one wired I/O interface 930 may be communicatively coupled to one or more physical output devices 922 (haptic devices, video displays, audio output devices, hard-copy output devices, etc.). The at least one wired I/O interface 930 may be communicatively coupled to one or more physical input devices 924 (pointing devices, touch screens, keyboards, haptic devices, etc.). The wired I/O interface 930 may include any currently available or future-developed I/O interface. Example wired I/O interfaces include, but are not limited to, Universal Serial Bus (USB), IEEE 1394 ("FireWire"), and the like.

The computing device 900 may include one or more communicatively coupled non-transitory data storage devices 960.
The data storage devices 960 may include one or more hard disk drives (HDDs) and/or one or more solid-state storage devices (SSDs). The one or more data storage devices 960 may include any currently available or future-developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 960 may include, but are not limited to, any currently available or future-developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof. In some embodiments, the one or more data storage devices 960 may include one or more removable storage devices, such as one or more flash drives, flash memories, flash storage units, or similar appliances or devices capable of being communicatively coupled to and decoupled from the computing device 900.

The one or more data storage devices 960 may include interfaces or controllers (not shown) that communicatively couple the respective storage device or system to the bus 916. The one or more data storage devices 960 may store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logical structures, and/or other data useful to the processor core 918 and/or the graphics processor circuit 912, and/or to one or more applications executing on or by the processor core 918 and/or the graphics processor circuit 912.
In some instances, one or more data storage devices 960 may be communicatively coupled to the processor core 918, for example via the bus 916 or via one or more wired communication interfaces 930 (e.g., Universal Serial Bus or USB), one or more wireless communication interfaces 920 (e.g., Near Field Communication or NFC), and/or one or more network interfaces 970 (IEEE 802.3 or Ethernet, IEEE 802.11 or Wi-Fi, etc.).

Processor-readable instruction sets 914 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 940. Such instruction sets 914 may be transferred, in whole or in part, from the one or more data storage devices 960. The instruction sets 914 may be loaded, stored, or otherwise retained in the system memory 940, in whole or in part, during execution by the processor core 918 and/or the graphics processor circuit 912.

The computing device 900 may include a power management circuit 950 that controls one or more operational aspects of the energy storage device 952. In embodiments, the energy storage device 952 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In embodiments, the energy storage device 952 may include one or more supercapacitors or ultracapacitors. In embodiments, the power management circuit 950 may alter, adjust, or control the flow of energy from an external power source 954 to the energy storage device 952 and/or to the computing device 900.
The power source 954 may include, but is not limited to, a solar power system, a commercial electric grid, a portable generator, an external energy storage device, or any combination thereof.

For convenience, the processor core 918, the graphics processor circuit 912, the wireless I/O interface 920, the wired I/O interface 930, the storage device 960, and the network interface 970 are illustrated as communicatively coupled to each other via the bus 916, thereby providing connectivity between the above-described components. In alternative embodiments, the above-described components may be communicatively coupled in a different manner than illustrated in FIG. 7. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other via one or more intermediary components (not shown). In another example, one or more of the above-described components may be integrated into the processor core 918 and/or the graphics processor circuit 912. In some embodiments, all or a portion of the bus 916 may be omitted, and the components may be coupled directly to each other using suitable wired or wireless connections.

For example, embodiments may be provided as a computer program product, which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, a network of computers, or other electronic devices, may cause the one or more machines to carry out operations in accordance with the embodiments described herein.
A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc Read-Only Memories) and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.

Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection).

Throughout this document, the term "user" may be interchangeably referred to as "viewer", "observer", "speaker", "person", "individual", "end user", and the like. It is to be noted that throughout this document, terms like "graphics domain" may be referenced interchangeably with "graphics processing unit", "graphics processor", or simply "GPU", and similarly, "CPU domain" or "host domain" may be referenced interchangeably with "computer processing unit", "application processor", or simply "CPU".

It is to be noted that terms like "node", "computing node", "server", "server device", "cloud computer", "cloud server", "cloud server computer", "machine", "host", "device", "computing device", "computer", "computing system", and the like, are used interchangeably throughout this document. It is to be further noted that terms like "application", "software application", "program", "software program", "package", "software package", and the like, may be used interchangeably throughout this document. In addition, terms such as "work", "input", "request", "message", etc.
may be used interchangeably throughout this document.

In various embodiments, the computing device may be a laptop computer, netbook, notebook, ultrabook, smartphone, tablet computer, personal digital assistant (PDA), ultra-mobile PC, mobile phone, desktop computer, server, set-top box, entertainment control unit, digital camera, portable music player, or digital video recorder. The computing device may be stationary, portable, or wearable. In further embodiments, the computing device may be any other electronic device that processes data or records data for processing elsewhere.

The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of the processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.

For example, embodiments may be provided as a computer program product that may include a transitory or non-transitory machine-readable storage medium having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, a network of computers, or other electronic devices, may cause the one or more machines to carry out operations in accordance with the embodiments described herein.
A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc Read-Only Memories) and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.

Some embodiments pertain to Example 1, which includes an apparatus comprising a non-volatile memory, including a firmware memory to store agent firmware associated with each of a plurality of interconnection protocol (IP) agents and a version memory to store a security version number (SVN) included in the agent firmware; a security controller, including verifier logic to verify the integrity of the version memory by applying a hash algorithm to the contents of the version memory to generate an SVN hash; and a trusted platform module (TPM) to store the SVN hash.

Example 2 includes the subject matter of Example 1, wherein the verifier logic verifies the integrity of the version memory upon receiving agent firmware by applying the hash algorithm to the contents of the version memory to generate a check hash and comparing the check hash with the SVN hash stored in the TPM.

Example 3 includes the subject matter of Examples 1 and 2, wherein the verifier logic verifies the integrity of the agent firmware upon a determination that the check hash matches the SVN hash.

Example 4 includes the subject matter of Examples 1-3, wherein the verifier logic verifies the integrity of the agent firmware by determining whether an SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

Example 5 includes the subject matter of Examples 1-4, wherein the received agent firmware is stored in the firmware memory upon a determination that the SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

Example 6 includes the subject matter of Examples 1-5, wherein the security controller further includes commit logic to store the SVN included in the received agent firmware in the version memory when the received agent firmware is stored in the firmware memory.

Example 7 includes the subject matter of Examples 1-6, wherein the commit logic stores the SVN in the version memory upon a determination that the functionality of the received agent firmware has been confirmed.

Example 8 includes the subject matter of Examples 1-7, wherein the verifier logic further performs an SVN rollback to store refurbished agent firmware.

Some embodiments pertain to Example 9, which includes at least one computer-readable medium having instructions stored thereon which, when executed by one or more processors, cause the processors to verify the integrity of a version memory, included in a non-volatile memory, that stores a security version number (SVN) associated with agent firmware, including applying a hash algorithm to the contents of the version memory to generate an SVN hash, and to store the SVN hash in a trusted platform module (TPM).

Example 10 includes the subject matter of Example 9, having instructions stored thereon which, when executed by one or more processors, further cause the processors to receive agent firmware; verify the integrity of the version memory by applying the hash algorithm to the contents of the version memory to generate a check hash; and compare the check hash with the SVN hash stored in the TPM.

Example 11 includes the subject matter of Examples 9 and 10, having instructions stored thereon which, when executed by one or more processors, further cause the processors to verify the integrity of the agent firmware upon a determination that the check hash matches the SVN hash.

Example 12 includes the subject matter of Examples 9-11, wherein verifying the integrity of the agent firmware includes determining whether an SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

Example 13 includes the subject matter of Examples 9-12, having instructions stored thereon which, when executed by one or more processors, further cause the processors to store the received agent firmware in the firmware memory upon a determination that the SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

Example 14 includes the subject matter of Examples 9-13, having instructions stored thereon which, when executed by one or more processors, further cause the processors to store the SVN included in the received agent firmware in the version memory when the received agent firmware is stored in the firmware memory.

Some embodiments pertain to Example 15, which includes a method of verifying firmware in a computing system, including verifying the integrity of a version memory, included in a non-volatile memory, that stores a security version number (SVN) associated with agent firmware, including applying a hash algorithm to the contents of the version memory to generate an SVN hash, and storing the SVN hash in a trusted platform module (TPM).

Example 16 includes the subject matter of Example 15, further including receiving agent firmware; verifying the integrity of the version memory by applying the hash algorithm to the contents of the version memory to generate a check hash; and comparing the check hash with the SVN hash stored in the TPM.

Example 17 includes the subject matter of Examples 15 and 16, further including verifying the integrity of the agent firmware upon a determination that the check hash matches the SVN hash.

Example 18 includes the subject matter of Examples 15-17, wherein verifying the integrity of the agent firmware includes determining whether an SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

Example 19 includes the subject matter of Examples 15-18, further including storing the received agent firmware in the firmware memory upon a determination that the SVN included in the received agent firmware is greater than the SVN associated with the agent firmware stored in the version memory.

Example 20 includes the subject matter of Examples 15-19, further including storing the SVN included in the received agent firmware in the version memory when the received agent firmware is stored in the firmware memory.

The exemplary embodiments have been described above with reference to specific embodiments. Persons skilled in the art, however, will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
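The integrity check recited in Examples 2-3 (recompute a check hash over the version memory and compare it with the SVN hash held in the TPM before trusting the stored platform SVN) can be sketched as below. This is a hedged illustration, not the claimed implementation: the use of SHA-256 and a dictionary serialized with JSON are assumptions standing in for the real version-memory contents.

```python
# Sketch of Examples 2-3: the version memory is trusted only if its recomputed
# check hash matches the SVN hash previously stored in the TPM.
import hashlib
import json

def version_memory_intact(version_memory: dict, tpm_svn_hash: str) -> bool:
    contents = json.dumps(version_memory, sort_keys=True).encode()
    check_hash = hashlib.sha256(contents).hexdigest()   # Example 2: check hash
    return check_hash == tpm_svn_hash                   # Example 3: must match

version_memory = {"ip_agent_230": 3}
stored = hashlib.sha256(json.dumps(version_memory, sort_keys=True).encode()).hexdigest()
assert version_memory_intact(version_memory, stored)
assert not version_memory_intact({"ip_agent_230": 0}, stored)  # tampered copy fails
```

Only after this check passes would the SVN comparison of Example 4 be performed against the stored platform SVN.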
Provided are a method, apparatus, and a system in which an initiator node is configured to communicate with a target node that is coupled to a memory. At system initialization time, a memory address map of the initiator node is generated to include addresses corresponding to the memory to which the target node is coupled. The initiator node accesses the memory coupled to the target node, by using the memory address map of the initiator node.
WHAT IS CLAIMED IS:

1. A method for accessing memory, the method comprising: configuring an initiator node to communicate with a target node that is coupled to a memory; generating, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and accessing, by the initiator node, the memory coupled to the target node, by using the memory address map of the initiator node.

2. The method of claim 1, wherein the memory that is coupled to the target node includes at least one of a volatile memory and a non-volatile memory.

3. The method of claim 2, wherein the volatile memory and the non-volatile memory are included in one or more dual inline memory module (DIMM) devices, wherein the addresses corresponding to the memory to which the target node is coupled comprise DIMM device physical addresses, the method further comprising: in response to receiving a request for accessing a system physical address of the initiator node from a core or input/output (I/O) device, converting the system physical address to a DIMM device physical address; converting the DIMM device physical address to a system physical address of the target node; and sending a message to the target node to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node.

4. The method of claim 3, the method further comprising: receiving a response from the target node, wherein the response from the target node is based on accessing the one or more DIMM devices via the system physical address sent to the target node; and forwarding the response to the core or the I/O device from which the request for accessing the system physical address was received.

5. The method of claim 2, wherein in response to a hot plugging of the target node, the initiator node configures or updates the memory address map of the initiator node during runtime, and notifies the configured or updated memory address map of the initiator node to an operating system, and wherein: the initiator node comprises a central processing complex having a plurality of cores; the target node comprises a fabric memory expander; the volatile memory comprises a volatile memory DIMM; and the non-volatile memory comprises a non-volatile memory DIMM.

6. The method of claim 1, wherein the initiator node prioritizes memory access requests over I/O requests.

7. The method of claim 1, wherein different opcodes corresponding to different patterns in data are used for communication between the initiator node and the target node.

8. The method of claim 1, wherein a selected opcode is used for indicating compressed data for communication between the initiator node and the target node.

9. An apparatus for accessing memory, the apparatus comprising: an initiator node; a target node coupled to the initiator node; and a memory coupled to the target node, wherein the initiator node is configurable to: communicate with the target node that is coupled to the memory; generate, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and access the memory coupled to the target node, by using the memory address map of the initiator node.

10. The apparatus of claim 9, wherein the memory that is coupled to the target node includes at least one of a volatile memory and a non-volatile memory.

11. The apparatus of claim 10, wherein the volatile memory and the non-volatile memory are included in one or more dual inline memory module (DIMM) devices, wherein the addresses corresponding to the memory to which the target node is coupled comprise DIMM device physical addresses, wherein the initiator node is further configurable to: in response to receiving a request for accessing a system physical address of the initiator node from a core or input/output (I/O) device, convert the system physical address to a DIMM device physical address; convert the DIMM device physical address to a system physical address of the target node; and send a message to the target node to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node.

12. The apparatus of claim 11, wherein the initiator node is further configurable to: receive a response from the target node, wherein the response from the target node is based on accessing the one or more DIMM devices via the system physical address sent to the target node; and forward the response to the core or the I/O device from which the request for accessing the system physical address was received.

13. The apparatus of claim 10, wherein in response to a hot plugging of the target node, the initiator node is operable to perform operations to configure or update the memory address map of the initiator node during runtime, and notify the configured or updated memory address map of the initiator node to an operating system, and wherein: the initiator node comprises a central processing complex having a plurality of cores; the target node comprises a fabric memory expander; the volatile memory comprises a volatile memory DIMM; and the non-volatile memory comprises a non-volatile memory DIMM.

14. The apparatus of claim 9, wherein the initiator node prioritizes memory access requests over I/O requests.

15. The apparatus of claim 9, wherein different opcodes corresponding to different patterns in data are used for communication between the initiator node and the target node.

16. The apparatus of claim 9, wherein a selected opcode is used for indicating compressed data for communication between the initiator node and the target node.

17. A system for accessing memory, the system comprising: a display; an initiator node coupled to the display; a target node coupled to the initiator node; and a memory coupled to the target node, wherein the initiator node is configurable to: communicate with the target node that is coupled to the memory; generate, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and access the memory coupled to the target node, by using the memory address map of the initiator node.

18. The system of claim 17, wherein the memory that is coupled to the target node includes at least one of a volatile memory and a non-volatile memory.

19. The system of claim 18, wherein the volatile memory and the non-volatile memory are included in one or more dual inline memory module (DIMM) devices, wherein the addresses corresponding to the memory to which the target node is coupled comprise DIMM device physical addresses, wherein the initiator node is further configurable to: in response to receiving a request for accessing a system physical address of the initiator node from a core or input/output (I/O) device, convert the system physical address to a DIMM device physical address; convert the DIMM device physical address to a system physical address of the target node; and send a message to the target node to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node.

20. The system of claim 19, wherein the initiator node is further configurable to: receive a response from the target node, wherein the response from the target node is based on accessing the one or more DIMM devices via the system physical address sent to the target node; and forward the response to the core or the I/O device from which the request for accessing the system physical address was received.

21. The system of claim 18, wherein in response to a hot plugging of the target node, the initiator node is operable to perform operations to configure or update the memory address map of the initiator node during runtime, and notify the configured or updated memory address map of the initiator node to an operating system, and wherein: the initiator node comprises a central processing complex having a plurality of cores; the target node comprises a fabric memory expander; the volatile memory comprises a volatile memory DIMM; and the non-volatile memory comprises a non-volatile memory DIMM.

22. The system of claim 17, wherein the initiator node prioritizes memory access requests over I/O requests.

23. The system of claim 17, wherein different opcodes corresponding to different patterns in data are used for communication between the initiator node and the target node.

24. The system of claim 17, wherein a selected opcode is used for indicating compressed data for communication between the initiator node and the target node.

25. A system for accessing memory, the system comprising: means for configuring an initiator node to communicate with a target node that is coupled to a memory; means for generating, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and means for accessing, by the initiator node, the memory coupled to the target node, by using the memory address map of the initiator node.
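The opcode selection recited in claims 7-8 (and 15-16, 23-24) can be illustrated as below. The opcode values and pattern rules here are hypothetical assumptions for illustration only; the claims do not specify particular opcodes or patterns.

```python
# Illustrative sketch of claims 7-8: the initiator may pick a message opcode
# based on the pattern of the payload (e.g., all-zero data), and a selected
# opcode may indicate compressed data, so the target can reconstruct or
# decompress the payload accordingly. Values are hypothetical.
OP_RAW, OP_ALL_ZEROS, OP_COMPRESSED = 0x1, 0x2, 0x3

def choose_opcode(payload: bytes, compressed: bool) -> int:
    if compressed:
        return OP_COMPRESSED          # claim 8: selected opcode flags compressed data
    if payload and all(b == 0 for b in payload):
        return OP_ALL_ZEROS           # claim 7: pattern-specific opcode
    return OP_RAW

assert choose_opcode(b"\x00" * 64, False) == OP_ALL_ZEROS
assert choose_opcode(b"\x01\x02", False) == OP_RAW
assert choose_opcode(b"\x1f\x8b", True) == OP_COMPRESSED
```

Such pattern-specific opcodes allow an all-zero payload, for example, to be signaled without transmitting the data itself.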
ACCESSING MEMORY COUPLED TO A TARGET NODE FROM AN INITIATOR NODE

BACKGROUND

A dual in-line memory module (DIMM) comprises a series of dynamic random-access memory integrated circuits. Such modules may be mounted on a printed circuit board and may be designed for use in computational devices. A central processing unit (CPU) in a computational device may access the DIMM for performing read or write operations.

Volatile memory is a type of computer memory whose contents are erased when the system's power is turned off or interrupted. For example, dynamic random access memory (DRAM) is a type of volatile memory. Non-volatile memory is a type of computer memory that can retain stored information even after having been power cycled (i.e., turned off and back on). Examples of non-volatile memory include read-only memory (ROM), flash memory, etc. DIMMs may be comprised of volatile or non-volatile memory.

BRIEF DESCRIPTION OF THE DRAWINGS

Referring now to the drawings in which like reference numbers represent corresponding parts throughout:

FIG. 1 illustrates a block diagram of a computing environment in which an initiator node is coupled to a target node, where both volatile memory DIMMs and non-volatile memory DIMMs are coupled to the target node, in accordance with certain embodiments;

FIG. 2 illustrates a block diagram that shows a memory address map of the initiator node, in accordance with certain embodiments;

FIG. 3 illustrates a flowchart that shows operations performed by the initiator node, in accordance with certain embodiments;

FIG. 4 illustrates a block diagram that shows components of the initiator node and the target node, in accordance with certain embodiments;

FIG. 5 illustrates a block diagram that shows a system physical address map of the initiator node and a system physical address map of the target node, in accordance with certain embodiments;

FIG. 6 illustrates a flowchart that shows operations performed by the initiator node and the target node, in accordance with certain embodiments;

FIG. 7 illustrates a block diagram that shows a preference in processing being provided to memory access requests over input/output (I/O) access requests, in accordance with certain embodiments;

FIG. 8 illustrates a block diagram that shows exemplary opcodes corresponding to different types of data, in accordance with certain embodiments;

FIG. 9 illustrates a flowchart that shows operations performed by a controller included within the initiator node, in accordance with certain embodiments; and

FIG. 10 illustrates a block diagram of a device comprising the initiator node and the target node, in accordance with certain embodiments.

DETAILED DESCRIPTION

A computing system may include a CPU complex that is in communication with a plurality of volatile memory DIMMs and a plurality of non-volatile DIMMs. While the data stored in the volatile memory DIMMs is expected to be lost in the event of a power failure or in the event of a replacement of the CPU complex, the data stored in the non-volatile memory DIMMs is expected to be retained in the event of a power failure or in the event of a replacement of the CPU complex.

In certain computing systems in which the memory provided by the volatile memory DIMMs and the non-volatile memory DIMMs are intermixed, the computing systems may become harder to service even though a higher performing computing system may be achieved via the intermixing of the DIMMs.
For example, interleaving of memory that spreads memory addresses evenly across memory banks may require such DIMMs to be replaced in a specific order after removal of the DIMMs, where the removal and replacement of the DIMMs may be necessitated by replacement of components such as a CPU complex of the computing system.

In order to make it easier to service such computing systems, in certain embodiments, volatile memory DIMMs and non-volatile memory DIMMs are mechanically separated from the CPU complex by coupling the volatile memory DIMMs and the non-volatile memory DIMMs to a fabric memory expander and by coupling the fabric memory expander to the CPU complex via a fabric. In such embodiments, the replacement of components, such as a CPU complex, may be performed without removal and replacement of the DIMMs.

In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several embodiments. It is understood that other embodiments may be utilized and structural and operational changes may be made.

FIG. 1 illustrates a block diagram of a computing environment 100 in which an initiator node 102 is coupled to a target node 104, where both volatile memory DIMMs 106 and non-volatile memory DIMMs 108 are coupled to the target node 104, in accordance with certain embodiments. In certain embodiments, the initiator node 102 may be coupled to more than one target node. While FIG. 1 shows the volatile and non-volatile DIMMs separately, in certain embodiments a single DIMM may include both volatile and non-volatile memory.

In certain embodiments, the initiator node 102 may comprise a CPU complex comprising one or more cores. The initiator node 102 may be coupled via a low latency fabric 110 to the target node 104, where the target node 104 may be a fabric memory expander.

Volatile memory is a storage medium that requires power to maintain the state of data stored by the medium.
Examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module (for example, a DIMM) is synchronous dynamic random access memory (SDRAM). In certain embodiments, the volatile memory DIMMs 106 may be comprised of double data rate version 4 (DDR4) DIMMs or any other type of volatile memory technologies. In particular embodiments, DRAM of the volatile memory DIMMs 106 complies with a standard promulgated by JEDEC, such as JESD79F for Double Data Rate (DDR) SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, or JESD79-4A for DDR4 SDRAM (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards. The volatile memory DIMMs 106 may in certain embodiments be comprised of other versions of DDR memory, including DDR memory based on future versions of JEDEC DDR standards.

The non-volatile memory DIMMs 108 may in certain embodiments be comprised of non-volatile memory integrated circuits, where a non-volatile memory is a storage medium that does not require power to maintain the state of data stored by the storage medium. In certain embodiments, the non-volatile memory DIMMs 108 are electronically compatible and pin-compatible with DDR4, whereas in other embodiments the non-volatile memory DIMMs 108 need not be pin-compatible with DDR4 or other technologies and may be based on a different form factor than DDR4 or other technologies. In certain embodiments the non-volatile memory DIMMs 108 may be comprised of a Triple Level Cell (TLC) NAND or any other type of NAND [e.g., Single Level Cell (SLC), Multi Level Cell (MLC), Quad Level Cell (QLC), etc.] or any other type of non-volatile memory complex.
In other embodiments the non-volatile memory DIMMs 108 may be comprised of certain other types of non-volatile memory, such as NOR memory or some other suitable non-volatile memory. Non-limiting examples of non-volatile memory may include any or a combination of: solid state memory [such as planar or three dimensional (3D) NAND flash memory or NOR flash memory], storage devices that use chalcogenide phase change material (e.g., chalcogenide glass), byte addressable non-volatile memory devices, ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory (e.g., ferroelectric polymer memory), three dimensional (3D) crosspoint memory, ferroelectric transistor random access memory (Fe-TRAM), ovonic memory, nanowire memory, electrically erasable programmable read-only memory (EEPROM), other various types of non-volatile random access memories (RAMs), and magnetic storage memory. In some embodiments, the 3D crosspoint memory may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In certain embodiments, a DIMM with non-volatile memory may comply with one or more standards promulgated by the Joint Electron Device Engineering Council (JEDEC), such as JESD218, JESD219, JESD220-1, JESD223B, JESD223-1, or other suitable standard (the JEDEC standards cited herein are available at www.jedec.org).

The initiator node 102 may also have local memory 112 coupled to the initiator node, where the local memory may be volatile memory that is relatively small in size in comparison to the memory made available to the initiator node 102 via the volatile memory DIMMs 106 and non-volatile memory DIMMs 108.
The non-volatile memory DIMMs 108 may act as a remote non-volatile memory and the volatile memory DIMMs 106 may act as a remote volatile memory for the initiator node 102, in contrast to the local memory 112 of the initiator node 102.

In certain embodiments, the initiator node 102, or the initiator node 102 in combination with the target node 104, may be comprised of any suitable computational device, such as a personal computer, a mainframe, a telephony device, a smart phone, a storage controller, a blade computer, a processor with memory, etc. The initiator node 102 may be referred to as a host, a host computing system, or as a computational device. In addition to or instead of using the fabric 110, the initiator node 102 may communicate with the target node 104 over a bus (such as Peripheral Component Interconnect Express (PCIe), Serial Advanced Technology Attachment (SATA), Serial Attached Small Computer System Interface (SAS)) or a network, such as the Internet, a storage area network (SAN), a local area network (LAN), etc. Further details of the SATA specification may be found in the publication titled "Serial ATA Specification, Revision 3.2," released August 2013, by SATA International Organization (SATA-IO), Beaverton, OR. In another example, the interface and/or interconnect protocols for communication between the initiator node 102 and the target node 104 may comply and/or be compatible with NVMe (Non-Volatile Memory Host Controller Interface Express). Further details of NVMe may be found in the publication titled "NVM Express™, Revision 1.2," released November 3, 2014 by the NVM Express™ Work Group, and/or earlier and/or later versions of this specification (NVM Express is a trademark of NVM Express, Inc.).

In certain embodiments, an operating system 114 may execute in the computing environment 100, wherein the operating system 114 may be used to control operations performed by the initiator node 102 and the target node 104. FIG.
2 illustrates a block diagram 200 that shows a memory address map 202 of the initiator node 102, in accordance with certain embodiments. The memory address map 202 of the initiator node 102 shows the address space of memory available to the initiator node 102.

The memory address map 202 includes a range of addresses referred to as the remote non-volatile memory address range 204 that is a logical representation of physical non-volatile memory coupled to the target node 104, where the physical non-volatile memory is provided by the non-volatile memory DIMMs 108 coupled to the target node 104.

The memory address map 202 also includes a range of addresses referred to as the remote volatile memory address range 206 that is a logical representation of physical volatile memory coupled to the target node 104, where the physical volatile memory is provided by the volatile memory DIMMs 106 coupled to the target node 104.

Additionally, the memory address map 202 also includes a range of addresses referred to as the local memory address range 208 that is a logical representation of the physical local memory 112 coupled to the initiator node 102. In certain embodiments, the initiator node's local memory may act as a cache to the target node's memory, and in such embodiments, the local memory address range 208 may not be present as part of the initiator memory address map 202.

It should be noted that the remote non-volatile memory address range 204 and the remote volatile memory address range 206 together represent the memory provided by the target node 104 to the initiator node 102, and the memory that is provided by the target node 104 to the initiator node 102 is referred to as target node memory 210. The target node memory 210 is set in the memory address map 202 during system initialization, where system initialization is the process of initializing the initiator node 102 during bootup, during which the initiator node 102 establishes communications with the target node 104.
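As an illustrative sketch (not part of the embodiments), the memory address map 202 of FIG. 2 may be modeled as contiguous address ranges laid out at system initialization and consulted on each access. All names and sizes below are assumptions chosen for the example.

```python
# Hypothetical model of the initiator memory address map of FIG. 2:
# contiguous ranges for remote non-volatile memory (204), remote
# volatile memory (206), and local memory (208). Sizes are illustrative.

from collections import namedtuple

Range = namedtuple("Range", ["name", "start", "end"])

def build_initiator_map(remote_nvm_size, remote_vm_size, local_size):
    """Lay out the three address ranges back to back."""
    ranges = []
    base = 0
    for name, size in [("remote_nonvolatile", remote_nvm_size),
                       ("remote_volatile", remote_vm_size),
                       ("local", local_size)]:
        ranges.append(Range(name, base, base + size))
        base += size
    return ranges

def lookup(ranges, addr):
    """Return the range containing a given system physical address."""
    for r in ranges:
        if r.start <= addr < r.end:
            return r
    raise ValueError("address not mapped")

m = build_initiator_map(remote_nvm_size=1 << 30,   # 1 GB remote non-volatile
                        remote_vm_size=1 << 29,    # 512 MB remote volatile
                        local_size=1 << 28)        # 256 MB local
print(lookup(m, 100).name)  # an address near 0 falls in the first range
```

In this model, omitting the local range (as in the embodiments where local memory acts only as a cache) is simply a matter of passing a zero size.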
In certain embodiments, in which the initiator node 102 is coupled to a plurality of target nodes, the initiator node 102 may provide support to the plurality of target nodes via the initiator memory address map 202.

FIG. 3 illustrates a flowchart 300 that shows operations performed by the initiator node 102, in accordance with certain embodiments. Control starts at block 302 in which the initiator node 102 is configured to communicate with the target node 104 that is coupled to a memory (e.g., the volatile memory DIMMs 106 and/or the non-volatile memory DIMMs 108). At system initialization time, a memory address map 202 of the initiator node 102 is generated (at block 304) to include addresses corresponding to the memory to which the target node 104 is coupled. The initiator node 102 accesses (at block 306) the memory coupled to the target node 104, by using the memory address map 202 of the initiator node 102.

In certain embodiments, hot plugging of the target node 104 may be supported in the computing environment 100, where hot plugging is the addition of a component to a running computer system without significant interruption to the operation of the computer system, and the hot plugging of the component does not require a restart of the computer system. In such embodiments, when the target node 104 is hot plugged, the initiator node 102 may configure and/or update the initiator memory address map 202 during runtime, and notify the configured and/or updated initiator memory address map 202 to the operating system 114. In certain embodiments, DIMM interfaces or a management controller such as a baseboard management controller (BMC) may be used to notify the operating system 114 that a new memory is available, where the new memory is provided by the target node 104. FIG.
4 illustrates a block diagram 400 that shows components of the initiator node 102 and the target node 104, in accordance with certain embodiments.

The initiator node 102 may comprise a multi-core processor that is comprised of a plurality of cores 402, 404, where a core is an independent processing unit that reads and executes program instructions. The initiator node may also include one or more I/O devices 405. The initiator node 102 may include a plurality of components that are built in hardware (or alternatively via a combination of hardware, software, and/or firmware), where the plurality of components include a caching agent 406, an integrated memory controller 408 (it should be noted that in certain embodiments a memory controller that is not integrated may be used instead of the integrated memory controller 408), a fabric memory controller 412, and a fabric controller 414. The caching agent 406, the integrated memory controller 408, the fabric memory controller 412, and the fabric controller 414 may also be referred to as an initiator caching agent 406, an initiator integrated memory controller 408, an initiator fabric memory controller 412, and an initiator fabric controller 414 respectively. In certain embodiments the caching agent 406 may not be present if there is only a single integrated memory controller, as the caching agent 406 determines which of a plurality of integrated memory controllers to forward a request to, if there is more than one integrated memory controller in the initiator node 102. In certain embodiments, instead of having the integrated memory controller 408 internally within the initiator node 102, the integrated memory controller 408 may be external to the initiator node 102.
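The routing role of the caching agent 406 may be sketched as follows; the cache-line interleaving policy and its granularity are illustrative assumptions, not a required implementation.

```python
# Hypothetical sketch of how a caching agent might choose among several
# integrated memory controllers: interleave consecutive cache lines
# across the controllers by address bits. The 64-byte granularity is an
# illustrative assumption.

INTERLEAVE_SHIFT = 6  # assume 64-byte (cache-line) interleave granularity

def select_memory_controller(addr, num_controllers):
    """Map a system physical address to one of the integrated memory
    controllers by interleaving consecutive cache lines across them."""
    return (addr >> INTERLEAVE_SHIFT) % num_controllers

# Consecutive cache lines alternate between two controllers.
print([select_memory_controller(a, 2) for a in (0x000, 0x040, 0x080, 0x0C0)])
```

With a single controller the function always returns 0, which matches the note above that the caching agent may be omitted in that case.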
While a plurality of components 406, 408, 412, 414 of the initiator node 102 are shown in different blocks, in alternative embodiments the functions performed by one or more of the plurality of components 406, 408, 412, 414 may be performed by a single component.

The target node 104 may comprise a fabric memory expander that is comprised of a plurality of cores 416, 418. The target node 104 may also include a plurality of components that are built in hardware (or alternatively via a combination of hardware, software, and/or firmware), where the plurality of components comprise an integrated memory controller 422, a fabric memory controller 426, and a fabric controller 428. The integrated memory controller 422, the fabric memory controller 426, and the fabric controller 428 are also referred to as a target integrated memory controller 422, a target fabric memory controller 426, and a target fabric controller 428 respectively. While a plurality of components 422, 426 of the target node 104 are shown in different blocks, in alternative embodiments the functions performed by one or more of the plurality of components 422, 426 may be performed by a single component.

The integrated memory controller 408 of the initiator node 102 is shown to have three channels 429, 430, 432, where channels 429, 430 are used to communicate with local memory 112 included in DIMM slots 434, 436, and channel 432 is used to communicate with the fabric memory controller 412 of the initiator node 102, where the fabric memory controller 412 of the initiator node 102 communicates with the target node 104 via the fabric controller 414 of the initiator node 102 to access the memory of the volatile memory DIMM and non-volatile memory DIMMs placed in slots 440, 442 that are coupled to the integrated memory controller 422 of the target node 104.
The integrated memory controller 422 of the target node 104 communicates via channel 444 with the volatile memory DIMM 106 placed in slot 440, and via channel 446 with the non-volatile memory DIMM 108 placed in slot 442.

FIG. 5 illustrates a block diagram 500 that shows a system physical address map 502 of the initiator node 102 and a system physical address map 504 of the target node 104, in accordance with certain embodiments. The system physical address map 502 of the initiator node 102 is also referred to as an initiator physical memory address map or an initiator system physical map. The physical address map 504 of the target node 104 is also referred to as a target physical memory address map or a target system physical map.

The physical memory address map 502 of the initiator node 102 is analogous to that described in block 200 of FIG. 2. The physical memory address map 502 of the initiator node 102 includes addresses for the target node memory (shown via reference numeral 506) and addresses for the local memory of the initiator node (shown via reference numeral 508). The target node memory 506 corresponds to the DIMM device physical address of the target node 104, where the memory of all DIMMs of the target node 104 is in the range of addresses for the target node memory 506. The local memory of the initiator node (reference numeral 508) corresponds to the DIMM device physical address range of the local memory of the initiator node 102.
An offset 510 from the starting address of the target node memory 506 indicates the DIMM device physical address.

The physical memory address map 504 of the target node 104 includes addresses for the volatile memory of the target node (shown via reference numeral 512) and addresses for the non-volatile memory of the target node (shown via reference numeral 514), and they include address ranges corresponding to the volatile memory DIMM 106 and the non-volatile memory DIMM 108 respectively.

Therefore, the physical memory address map 502 of the initiator node 102 includes addresses for the memory of the target node 104, and the physical memory address map 504 of the target node 104 includes the addresses of the volatile and non-volatile memory coupled to the target node 104. The dashed lines 516, 518 show how a target node's memory address range becomes part of the initiator node's memory address map.

The initiator node 102 may secure information on the memory size of the target node 104 and the type of memory of the target node 104 via a variety of mechanisms in different computing environments. In certain embodiments, the initiator node's baseboard management controller (BMC) may communicate with the target node's BMC to secure the information on the memory size and type of memory of the target node 104. In another embodiment, a pod manager (PODM) in a rack scale design architecture may communicate with the pooled system management engine (PSME) and/or BMC of the target node 104 and provide the memory map information to the initiator node 102 via the BMC or PSME of the initiator node 102. In yet another embodiment, a system management bus (SMBus) connection between the initiator node 102 and the target node 104 may be used to secure the memory size and type of memory of the target node 104.
In another embodiment, the target node 104 may submit its information on the memory size and type of memory to a predetermined network storage, and the initiator node 102 may read the information from the predetermined network storage. In other embodiments, the target node 104 may itself allocate a memory region. For example, if the target node 104 uses a target system memory range from a representative region labeled as a "2GB-1MB region" to a representative region labeled as a "2GB region" to contain the memory ranges and memory types (volatile or persistent), then the initiator node 102 may temporarily set aside a 2GB memory region, read from the "2GB-1MB region" to the "2GB region" of the target node 104 to secure the target memory map settings, and reconfigure the initiator node's memory address ranges to cover the target node's memory address ranges.

FIG. 6 illustrates a flowchart 600 that shows operations performed by the initiator node 102 and the target node 104, in accordance with certain embodiments. In FIG. 6, the initiator node operations 602 that are performed by the initiator node 102 are shown to the left of the dashed line 604, and the target node operations 606 that are performed by the target node 104 are shown to the right of the dashed line 604.

Control starts at block 608 in which a core 402 or an I/O device 405 generates a system physical address access request and the system physical address request is sent to the initiator caching agent 406.
The initiator caching agent 406 forwards (at block 610) the system physical address request to the initiator integrated memory controller 408.

The initiator integrated memory controller 408 converts (at block 614) the system physical address request into a DIMM device physical address (that is, an offset 510 in the address range corresponding to the "memory of the target node" in the initiator physical memory address map) and sends the DIMM device physical address to the initiator fabric memory controller 412. The initiator fabric memory controller 412 receives the DIMM device physical address from the initiator integrated memory controller 408 and converts (at block 616) the DIMM device physical address to the system physical address of the target node 104. The system physical address range of the target node may be retrieved via a management controller, by dedicating a system physical address range in the target node 104, or by sending a target system configuration read request. If multiple integrated memory controller channels or initiator integrated memory controllers are used to increase bandwidth, the target node's system physical address range may be divided among the initiator integrated memory controllers.

Control proceeds to block 618 in which the initiator fabric memory controller 412 sends a system physical address access request to the target node 104 through the fabric controller 414. The target fabric memory controller 426 decodes (at block 620) the incoming message into a memory access request for the target system physical address.

Control proceeds to block 622 in which the target fabric memory controller 426 forwards the target system physical address to the target integrated memory controller 422, which sends a response to the target fabric memory controller 426 after securing access to the target system physical address corresponding to the DIMMs coupled to the target node 104.
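The two conversions at blocks 614 and 616 may be sketched as simple base-and-offset arithmetic; the base addresses below are illustrative assumptions, not values from the embodiments.

```python
# Hedged sketch of the address conversions of FIG. 6: the initiator's
# system physical address becomes a DIMM device physical address (the
# offset 510 into the "target node memory" range), which is then rebased
# onto the target node's own system physical address range. Both base
# addresses are illustrative assumptions.

TARGET_MEM_BASE_AT_INITIATOR = 0x1000_0000  # start of "target node memory"
TARGET_SYSTEM_BASE = 0x2000_0000            # start of target's own map

def to_dimm_device_address(initiator_spa):
    """Block 614: initiator system physical address -> DIMM device
    physical address (offset into the target node memory range)."""
    return initiator_spa - TARGET_MEM_BASE_AT_INITIATOR

def to_target_system_address(dimm_dpa):
    """Block 616: DIMM device physical address -> target node system
    physical address."""
    return TARGET_SYSTEM_BASE + dimm_dpa

spa = 0x1000_2040
dpa = to_dimm_device_address(spa)
tpa = to_target_system_address(dpa)
print(hex(dpa), hex(tpa))
```

Dividing the target's range among several initiator integrated memory controllers, as noted above, would amount to giving each controller its own pair of bases.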
The target fabric memory controller 426 then sends (at block 624) the received response to the initiator node 102.

On receiving the system physical address response at the initiator node 102 via the fabric controller 414, the initiator fabric memory controller 412 sends (at block 626) the system physical address response back to the initiator integrated memory controller 408. The initiator integrated memory controller 408 sends (at block 628) the system physical address response back to the initiator caching agent 406. The initiator caching agent 406 sends (at block 630) the system physical address response back to the initiating core 402 or I/O device 405.

Therefore, FIGs. 1-6 illustrate certain embodiments in which an initiator fabric memory controller 412 acts as a messenger to convert an initiator node's 102 system physical address to the target node's 104 system physical address and sends the message to the target node 104. In the target node 104, the target fabric memory controller 426 decodes the system physical address access request and performs a memory access to the system physical address of the target node 104. The resulting response is sent back to the initiator node's 102 initiator fabric memory controller 412.

FIG. 7 illustrates a block diagram 700 that shows the initiator node 102 providing preferential processing to memory access requests over input/output (I/O) access requests, in accordance with certain embodiments.

A plurality of memory access requests 702, 704 and a plurality of I/O requests 706, 708 may be generated for the initiator node 102. Since I/O requests can generally wait while memory requests need to be serviced as soon as possible, the initiator node 102 provides preference in processing to the memory access requests 702, 704 over the I/O requests 706, 708, via the caching agent 406, the integrated memory controller 408, and the initiator fabric memory controller 412. FIG.
8 illustrates a block diagram 800 that shows exemplary opcodes corresponding to different types of data, in accordance with certain embodiments. An opcode is the portion of a machine language instruction that specifies an operation to be performed. Besides the opcode itself, most machine language instructions also specify the data they will process, in the form of operands.

Four exemplary opcodes 802, 804, 806, 808 are shown in FIG. 8. Opcode 802 is used to indicate data having all bits set to zero (reference numeral 810). Opcode 804 is used to indicate data having all bits set to one (reference numeral 812). Opcode 806 is used to indicate data with specific repeated patterns, such as a repeated "01" pattern (reference numeral 814). Opcode 808 is used to indicate data that is compressed (reference numeral 816). The initiator node 102 and the target node 104 interpret the opcodes to reduce the number of operations needed to read or write data. For example, in certain embodiments when an address is sent and followed by 64 bytes of data, then by using the exemplary opcodes 802, 804, 806 that are used for all zeros, all ones, or repeated patterns, the entire 64 bytes of data does not have to be sent. Additionally, if the opcode 808 indicates that compressed data is included in the 64 bytes of data, then when the compressed data is uncompressed, more than 64 bytes of data are received. As a result, the overall read and write times are improved over systems that do not include such exemplary opcodes.

FIG. 9 illustrates a flowchart 900 that shows operations performed by the initiator fabric memory controller 412 included within the initiator node 102, in accordance with certain embodiments.

Control starts at block 902 in which, in response to receiving a request for accessing a system physical address of the initiator node 102 from a core or I/O device, the initiator node 102 converts the system physical address to a DIMM device physical address.
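The exemplary opcodes of FIG. 8 may be sketched as a payload classifier that substitutes an opcode for the 64 bytes of data whenever a recognizable pattern is present. The opcode values and function names below are illustrative assumptions, not values from the embodiments.

```python
# Hypothetical encoder for the opcode scheme of FIG. 8: before sending a
# 64-byte payload, classify it so that all-zeros, all-ones, or a repeated
# two-byte pattern can be signalled with an opcode instead of (or with
# much less than) the full data. Opcode values are assumptions.

OP_ALL_ZEROS, OP_ALL_ONES, OP_REPEATED, OP_RAW = 0x02, 0x03, 0x04, 0x00

def classify(payload: bytes):
    """Return (opcode, data_to_send) for a fixed-size payload."""
    if payload == b"\x00" * len(payload):
        return OP_ALL_ZEROS, b""          # no data bytes need to be sent
    if payload == b"\xff" * len(payload):
        return OP_ALL_ONES, b""
    unit = payload[:2]
    if payload == unit * (len(payload) // 2):
        return OP_REPEATED, unit          # send only the repeating unit
    return OP_RAW, payload                # fall back to the full data

op, data = classify(b"\x01\x00" * 32)     # repeated "01" pattern
print(op == OP_REPEATED, len(data))
```

The receiving side would expand the opcode back into the full 64 bytes, which is what reduces the number of bytes moved for patterned or compressed data.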
Control proceeds to block 904 in which the initiator node 102 converts the DIMM device physical address to a system physical address of the target node 104. The initiator node 102 sends (at block 906) a message to the target node 104 to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node 104.

The initiator node 102 receives (at block 908) a response from the target node 104, wherein the response from the target node 104 is based on accessing the one or more DIMM devices 106, 108 via the system physical address sent to the target node 104. The response is forwarded (at block 910) to the core or the I/O device from which the request for accessing the system physical address was received.

Therefore, FIGs. 1-9 illustrate certain embodiments in which an initiator node 102 accesses the volatile and non-volatile memory DIMMs that are coupled to a target node 104. Preference is provided to memory accesses over I/O accesses. Additionally, specialized opcodes are used for communicating certain data patterns and for compressed data.

The described operations may be implemented as a method, apparatus or computer program product using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a "computer readable storage medium", where a processor may read and execute the code from the computer readable storage medium. The computer readable storage medium includes at least one of electronic circuitry, storage materials, inorganic materials, organic materials, biological materials, a casing, a housing, a coating, and hardware.
A computer readable storage medium may comprise, but is not limited to, a magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash memory, firmware, programmable logic, etc.), Solid State Devices (SSD), etc. The code implementing the described operations may further be implemented in hardware logic implemented in a hardware device (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.). Still further, the code implementing the described operations may be implemented in "transmission signals", where transmission signals may propagate through space or through a transmission media, such as an optical fiber, copper wire, etc. The transmission signals in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The program code embedded on a computer readable storage medium may be transmitted as transmission signals from a transmitting station or computer to a receiving station or computer. A computer readable storage medium is not comprised solely of transmission signals. Those skilled in the art will recognize that many modifications may be made to this configuration, and that the article of manufacture may comprise a suitable information bearing medium known in the art.

Computer program code for carrying out operations for aspects of the certain embodiments may be written in any combination of one or more programming languages. Blocks of the flowchart and block diagrams may be implemented by computer program instructions.

FIG. 10 illustrates a block diagram of a system 1000 that includes both the initiator node 102 and the target node 104, in accordance with certain embodiments.
For example, in certain embodiments the system 1000 may be a computer (e.g., a laptop computer, a desktop computer, a tablet, a cell phone or any other suitable computational device) that has the initiator node 102 and the target node 104 both included in the system 1000. For example, in certain embodiments the system 1000 may be a computer with a plurality of racks where each rack includes the initiator node 102, the target node 104, the volatile memory DIMMs 106, and the non-volatile memory DIMMs 108. The system 1000 may include a circuitry 1002 that may in certain embodiments include at least a processor 1004. The system 1000 may also include a memory 1006 (e.g., a volatile memory device), and storage 1008. The storage 1008 may include a solid state drive or other drives or devices including a non-volatile memory device (e.g., EEPROM, ROM, PROM, flash, firmware, programmable logic, etc.). The storage 1008 may also include a magnetic disk drive, an optical disk drive, a tape drive, etc. The storage 1008 may comprise an internal storage device, an attached storage device and/or a network accessible storage device. The system 1000 may include a program logic 1010 including code 1012 that may be loaded into the memory 1006 and executed by the processor 1004 or circuitry 1002. In certain embodiments, the program logic 1010 including code 1012 may be stored in the storage 1008. In certain other embodiments, the program logic 1010 may be implemented in the circuitry 1002. Therefore, while FIG. 10 shows the program logic 1010 separately from the other elements, the program logic 1010 may be implemented in the memory 1006 and/or the circuitry 1002. The system 1000 may also include a display 1014 (e.g., a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a touchscreen display, or any other suitable display).
The system 1000 may also include one or more input devices 1016, such as a keyboard, a mouse, a joystick, a trackpad, or any other suitable input device. Other components or devices beyond those shown in FIG. 10 may also be found in the system 1000.

Certain embodiments may be directed to a method for deploying computing instruction by a person or automated processing integrating computer-readable code into a computing system, wherein the code in combination with the computing system is enabled to perform the operations of the described embodiments.

The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments" unless expressly specified otherwise. The terms "including", "comprising", "having" and variations thereof mean "including but not limited to", unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.

Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.

A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments.

Further, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders.
In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.

When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments need not include the device itself.

At least certain operations that may have been illustrated in the figures show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.

The foregoing description of various embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to be limited to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.

Examples

The following examples pertain to further embodiments.

Example 1 is a method for accessing memory.
An initiator node is configured to communicate with a target node that is coupled to a memory. At system initialization time, a memory address map of the initiator node is generated to include addresses corresponding to the memory to which the target node is coupled. The initiator node accesses the memory coupled to the target node, by using the memory address map of the initiator node.

In example 2, the subject matter of example 1 may include that the memory that is coupled to the target node includes at least one of a volatile memory and a non-volatile memory.

In example 3, the subject matter of example 2 may include that the volatile memory and the non-volatile memory are included in one or more dual inline memory module (DIMM) devices, wherein the addresses corresponding to the memory to which the target node is coupled comprise DIMM device physical addresses, and wherein the example further comprises: in response to receiving a request for accessing a system physical address of the initiator node from a core or input/output (I/O) device, converting the system physical address to a DIMM device physical address; converting the DIMM device physical address to a system physical address of the target node; and sending a message to the target node to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node.

In example 4, the subject matter of example 3 may include receiving a response from the target node, wherein the response from the target node is based on accessing the one or more DIMM devices via the system physical address sent to the target node; and forwarding the response to the core or the I/O device from which the request for accessing the system physical address was received.

In example 5, the subject matter of example 2 may include that in response to a hot plugging of the target node, the initiator node configures or updates the memory address map of
the initiator node during runtime, and notifies the configured or updated memory address map of the initiator node to an operating system, wherein: the initiator node comprises a central processing complex having a plurality of cores; the target node comprises a fabric memory expander; the volatile memory comprises a volatile memory DIMM; and the non-volatile memory comprises a non-volatile memory DIMM.

In example 6, the subject matter of example 1 may include that the initiator node prioritizes memory access requests over I/O requests.

In example 7, the subject matter of example 1 may include that different opcodes corresponding to different patterns in data are used for communication between the initiator node and the target node.

In example 8, the subject matter of example 1 may include that a selected opcode is used for indicating compressed data for communication between the initiator node and the target node.

Example 9 is a system for accessing memory. The system comprises an initiator node; a target node coupled to the initiator node; a memory coupled to the target node, wherein the initiator node is configurable to: communicate with the target node that is coupled to the memory; generate, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and access the memory coupled to the target node, by using the memory address map of the initiator node.

In example 10, the subject matter of example 9 may include that the memory that is coupled to the target node includes at least one of a volatile memory and a non-volatile memory.

In example 11, the subject matter of example 10 may include that the volatile memory and the non-volatile memory are included in one or more dual inline memory module (DIMM) devices, wherein the addresses corresponding to the memory to which the target node is coupled comprise DIMM device physical addresses, and wherein the initiator node is
further configurable to: in response to receiving a request for accessing a system physical address of the initiator node from a core or input/output (I/O) device, convert the system physical address to a DIMM device physical address; convert the DIMM device physical address to a system physical address of the target node; and send a message to the target node to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node.

In example 12, the subject matter of example 11 may include that the initiator node is further configurable to: receive a response from the target node, wherein the response from the target node is based on accessing the one or more DIMM devices via the system physical address sent to the target node; and forward the response to the core or the I/O device from which the request for accessing the system physical address was received.

In example 13, the subject matter of example 10 may include that in response to a hot plugging of the target node, the initiator node is operable to perform operations to configure or update the memory address map of the initiator node during runtime, and notify the configured or updated memory address map of the initiator node to an operating system, wherein: the initiator node comprises a central processing complex having a plurality of cores; the target node comprises a fabric memory expander; the volatile memory comprises a volatile memory DIMM; and the non-volatile memory comprises a non-volatile memory DIMM.

In example 14, the subject matter of example 9 may include that the initiator node prioritizes memory access requests over I/O requests.

In example 15, the subject matter of example 9 may include that different opcodes corresponding to different patterns in data are used for communication between the initiator node and the target node.

In example 16, the subject matter of example 9 may include that a selected
opcode is used for indicating compressed data for communication between the initiator node and the target node.

Example 17 is a system for accessing memory. The system comprises a display; an initiator node coupled to the display; a target node coupled to the initiator node; and a memory coupled to the target node, wherein the initiator node is configurable to: communicate with the target node that is coupled to the memory; generate, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and access the memory coupled to the target node, by using the memory address map of the initiator node.

In example 18, the subject matter of example 17 may include that the memory that is coupled to the target node includes at least one of a volatile memory and a non-volatile memory.

In example 19, the subject matter of example 18 may include that the volatile and the non-volatile memory are included in one or more dual inline memory module (DIMM) devices, wherein the addresses corresponding to the memory to which the target node is coupled comprise DIMM device physical addresses, and wherein the initiator node is further configurable to: in response to receiving a request for accessing a system physical address of the initiator node from a core or input/output (I/O) device, convert the system physical address to a DIMM device physical address; convert the DIMM device physical address to a system physical address of the target node; and send a message to the target node to access the system physical address of the target node, in response to converting the DIMM device physical address to the system physical address of the target node.

In example 20, the subject matter of example 19 may include that the initiator node is further configurable to: receive a response from the target node, wherein the response from the target node is based on accessing the one or more DIMM devices via the
system physical address sent to the target node; and forward the response to the core or the I/O device from which the request for accessing the system physical address was received.

In example 21, the subject matter of example 18 may include that in response to a hot plugging of the target node, the initiator node is operable to perform operations to configure or update the memory address map of the initiator node during runtime, and notify the configured or updated memory address map of the initiator node to an operating system, wherein: the initiator node comprises a central processing complex having a plurality of cores; the target node comprises a fabric memory expander; the volatile memory comprises a volatile memory DIMM; and the non-volatile memory comprises a non-volatile memory DIMM.

In example 22, the subject matter of example 17 further includes that the initiator node prioritizes memory access requests over I/O requests.

In example 23, the subject matter of example 17 further includes that different opcodes corresponding to different patterns in data are used for communication between the initiator node and the target node.

In example 24, the subject matter of example 17 further includes that a selected opcode is used for indicating compressed data for communication between the initiator node and the target node.

Example 25 is a system for accessing memory, wherein the system includes: means for configuring an initiator node to communicate with a target node that is coupled to a memory; means for generating, at system initialization time, a memory address map of the initiator node to include addresses corresponding to the memory to which the target node is coupled; and means for accessing, by the initiator node, the memory coupled to the target node, by using the memory address map of the initiator node.
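The two-step translation described in the flow and examples above (initiator system physical address to DIMM device physical address, then to a target-node system physical address) can be sketched as follows. This is a minimal illustration, not the patented implementation: the class names, the linear window layout, and the `send_to_target` callback are all assumptions made for the sketch.

```python
# Hypothetical sketch of the two-step address translation: initiator SPA ->
# DIMM device physical address (DPA) -> target-node SPA. The map layout is
# an assumption; the text only says a memory address map is built at
# system initialization time.

class AddressMap:
    """One contiguous initiator-SPA window backed by a target-node DIMM."""
    def __init__(self, spa_base, dpa_base, size, target_spa_base):
        self.spa_base = spa_base                # window base in initiator SPA space
        self.dpa_base = dpa_base                # base DIMM device physical address
        self.size = size                        # window size in bytes
        self.target_spa_base = target_spa_base  # same memory in target SPA space

    def contains(self, spa):
        return self.spa_base <= spa < self.spa_base + self.size

class InitiatorNode:
    def __init__(self, maps):
        self.maps = maps  # built at system initialization time

    def spa_to_dpa(self, spa):
        # Step 1: initiator SPA -> DIMM device physical address.
        for m in self.maps:
            if m.contains(spa):
                return m, m.dpa_base + (spa - m.spa_base)
        raise ValueError("address not backed by a target-node DIMM")

    def access(self, spa, send_to_target):
        # Step 2: DPA -> target SPA; step 3: message the target node and
        # return its response to the requesting core or I/O device.
        m, dpa = self.spa_to_dpa(spa)
        target_spa = m.target_spa_base + (dpa - m.dpa_base)
        return send_to_target(target_spa)
```

A request for initiator address `spa_base + offset` thus reaches the target node as `target_spa_base + offset`, with the DPA as the intermediate form.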
A method of fabricating a quantum well device includes forming a diffusion barrier on sides of a delta layer of a quantum well to confine dopants to the quantum well.
What is claimed is:

1. A method of fabricating a quantum well device comprising forming a diffusion barrier on sides of a delta layer of a quantum well to confine dopants to the quantum well.

2. The method of claim 1 further comprising forming the quantum well with a high mobility, narrow band gap channel layer.

3. The method of claim 1 wherein an electronic band structure at a hetero-junction interface of the quantum well confines electron carriers using conduction band offset.

4. The method of claim 1 wherein an electronic band structure at a hetero-junction interface of the quantum well confines hole carriers using valence band offset.

5. The method of claim 1 further comprising: forming a graded transitional layer over a substrate; and forming a relaxed film epitaxial layer over the transitional layer.

6. The method of claim 5 wherein the forming of the transitional layer and the relaxed film epitaxial layer reduce a dislocation defect of the quantum well layer.

7. The method of claim 5 further comprising forming a first Si1-yGey layer over the relaxed film epitaxial layer.

8. The method of claim 7 further comprising forming the quantum well over the first Si1-yGey layer.

9. The method of claim 8 further comprising: forming a first diffusion barrier over the quantum well; forming a second Si1-yGey layer over the first diffusion barrier; and forming a second diffusion barrier over the second Si1-yGey layer.

10. The method of claim 7 further comprising performing Complementary Metal-Oxide Semiconductor (CMOS) processing to complete the fabrication of the quantum well device.

11. A quantum well semiconductor device comprising: a quantum well; a delta layer; a first diffusion barrier formed below the delta layer; and a second diffusion barrier formed above the delta layer.

12. The device of claim 11 further comprising: a substrate; a graded transitional layer formed over the substrate; and a relaxed film epitaxial layer formed over the transitional layer.

13.
The device of claim 12 wherein the transitional layer and the relaxed film epitaxial layer are formed to reduce a dislocation defect of the quantum well layer.

14. The device of claim 12 further comprising a first Si1-yGey layer formed over the relaxed film epitaxial layer and below the quantum well.

15. The device of claim 11 wherein an electronic band structure at a hetero-junction interface of the quantum well confines electron carriers using conduction band offset.

16. The device of claim 11 wherein an electronic band structure at a hetero-junction interface of the quantum well confines hole carriers using valence band offset.

17. A method of fabricating a quantum well semiconductor device comprising: forming a graded transitional layer over a substrate; forming a relaxed film epitaxial layer over the transitional layer; forming a first Si1-yGey layer over the relaxed film epitaxial layer; forming the quantum well over the first Si1-yGey layer; forming a first diffusion barrier over the quantum well; forming a second Si1-yGey layer over the first diffusion barrier; and forming a second diffusion barrier over the second Si1-yGey layer.

18. The method of claim 17 further comprising performing Complementary Metal-Oxide Semiconductor (CMOS) processing to complete the fabrication of the quantum well semiconductor device.

19. The method of claim 17 wherein an electronic band structure at a hetero-junction interface of the quantum well confines electron carriers using conduction band offset.

20. The method of claim 17 wherein an electronic band structure at a hetero-junction interface of the quantum well confines hole carriers using valence band offset.
FIELD OF THE INVENTION

Embodiments of the present invention relate to semiconductor integrated circuits, and more particularly to field effect transistors, and methods for fabricating the transistors.

BACKGROUND

Quantum wells are formed in semiconductor devices such as diode lasers, High Electron Mobility Transistors (HEMTs) used in low-noise electronics and infrared photodetectors used for infrared imaging. Particularly, a quantum well is a potential well that confines particles, which were originally free to move in three dimensions, to two dimensions, forcing them to occupy a planar region. The effects of quantum confinement take place when the quantum well thickness becomes comparable to the de Broglie wavelength of the carriers (generally electrons and holes), leading to energy levels called "energy subbands", i.e., the carriers can only have discrete energy values.

Quantum wells are formed in semiconductors by having a material, like gallium arsenide, sandwiched between two layers of a material with a wider bandgap, like aluminum arsenide. These structures can be grown by molecular beam epitaxy or chemical vapor deposition with control of the layer thickness down to monolayers.

In order to achieve high mobility quantum well device structures, a key element is the ability to confine dopants in close proximity to the intrinsic quantum well. Such a requirement is not easily met in many cases due to the uncontrolled diffusivity of such dopants. The dopants in a delta doped layer can diffuse or "spill into" the quantum well during the subsequent growth and annealing steps and hence degrade the device mobility/performance.

A partial solution to the problem of dopant out-diffusion from the delta doped layer during subsequent dopant activation annealing steps is the use of ultra fast ramping RTA (rapid thermal annealing). This does not address dopant diffusion/spread entirely, though, since dopants can also diffuse during the formation of subsequent layers, which may not be compatible with the ultra low thermal budget requirements for maintaining the delta doped layer.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:

Figure 1 illustrates one embodiment of a method of fabricating a quantum well device;

Figures 2-6 illustrate one embodiment of various stages in the fabrication of a quantum well device; and

Figures 7A and 7B are graphs illustrating dopant diffusion.

DETAILED DESCRIPTION

A mechanism for forming a doped quantum well structure is described. In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

As Complementary Metal-Oxide Semiconductor (CMOS) devices continue to scale down the gate length, one device parameter that is severely impacted by the continual increase of dopants in the channel is the carrier mobility. Thus, remotely doped quantum well structures are increasingly being implemented.
Such remotely doped structures reduce surface roughness and impurity scattering (e.g., dopants are not present in the quantum well) in the channel, and allow incorporation of strain in the quantum well with strain stabilization from bottom and cap hetero-epitaxial (Epi) layers. However, as discussed above, dopant out-diffusion is a main concern in controlling the high concentration of dopants in the delta doped layer.

According to one embodiment, a quantum well structure is fabricated by forming a diffusion barrier material on either side of a delta doping layer in order to confine the dopants in close proximity to a quantum well. In such an embodiment, a hetero-epitaxial quantum well structure is grown with a high mobility, narrow band gap channel layer that is sandwiched between two wider bandgap layers. The electronic band structure at the hetero-junction interface confines either electron or hole carriers using conduction band offset or valence band offset, respectively.

During the growth of the wide band gap layers, heavily doped delta doping layers are grown sufficiently close to the quantum well layer as a carrier reservoir. Prior to and after growth of the heavily doped delta layer, thin dopant diffusion barrier layers are grown above and below the heavily doped delta doping layer. The dopant diffusion barrier is formed, in one embodiment, by introducing a layer which has low dopant diffusivity (such as Si in a Ge quantum well structure), or by adding impurity in the wide band gap layers to suppress dopant diffusion (e.g., by adding carbon (C) in Si or SiGe to effectively suppress Boron (B) and Phosphorus (P) diffusion).

Figure 1 illustrates one embodiment of fabrication processes of a Ge quantum well and a sharp boundary delta doping layer. At processing block 110, a quantum well structure is formed by grading a transitional SiGe layer and a thick relaxed film Epi layer (e.g., Si1-xGex) to reduce dislocation defect of the Ge quantum well layer.
Figure 2 illustrates one embodiment of the graded SiGe and Si1-xGex layers formed on the Si substrate.

Referring back to Figure 1, a Si1-yGey layer is formed with the Ge composition tailored to have a desired valence band offset with the Ge quantum well valence band, processing block 120. Figure 3 illustrates one embodiment of the Si1-yGey layer. A Ge quantum well layer is then formed over the Si1-yGey layer, processing block 130. Figure 4 illustrates one embodiment of the formed Ge quantum well layer.

Referring back to Figure 1, a Si barrier/heavily doped Si1-yGey/Si barrier sandwich is grown to contain the delta dopants, processing block 140. Figure 5 illustrates one embodiment of the Si barrier/heavily doped Si1-yGey/Si barrier formed over the Ge quantum well layer.

Referring back to Figure 1, industry standard CMOS processing is then carried out to fabricate the remainder of the Ge QW PMOS device on the above substrate, processing block 150. Figure 6 illustrates one embodiment of a quantum well device having a diffusion-barrier-surrounded delta doping area. In other embodiments, the diffusion barrier/delta doping layer stack can also be placed under the quantum well.

Figures 7A and 7B illustrate examples of dopant diffusion barrier layers on blanket wafers for the case of high mobility Germanium (Ge) quantum well layers. The figures show secondary ion mass spectrometry (SIMS) profiles of Phosphorus in a Ge Epi layer grown on a silicon (Si) substrate. A thin 50Å Si or a 50Å 69% SiGe layer is embedded in Ge as a dopant diffusion barrier. Comparing the 50Å Si barrier in Figure 7A and the 50Å 69% SiGe barrier of Figure 7B, the 50Å Si barrier effectively blocked the P in the top n-Ge from diffusing into the undoped i-Ge bottom layer.

Although described above with respect to a Ge quantum well structure, the above-described method may be implemented in other embodiments using any kind of high mobility quantum well structure.
In further embodiments, any kind of diffusion barrier may be implemented, including C-doped Si or SiGe.

Figure 8 illustrates that quantum well devices 800, according to various embodiments of the invention, may be used in an integrated circuit 810 (or another chip, monolith device, semiconductor device, or microelectronic device, as they are generally understood in the field) and incorporated into a computer system 850 (or other electrical system). The computer system, which may be a portable, laptop, desktop, server, mainframe, or other computer system, may also include other conventional computer system components, such as a bus to communicate data and a memory to store data (e.g., main memory, read only memory, and/or a mass storage device). The quantum well devices may also be incorporated into other electrical systems.

Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as essential to the invention.
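The "energy subbands" discussed in the Background can be illustrated with the textbook infinite-square-well approximation, E_n = n²π²ħ²/(2·m_eff·L²). The sketch below is illustrative only: the well width and effective-mass ratio are assumed example values, not parameters taken from this disclosure, and a real Ge quantum well with finite barriers would give somewhat lower energies.

```python
# Illustrative estimate of discrete subband energies in an idealized
# (infinite-barrier) square quantum well: E_n = n^2 * pi^2 * hbar^2 / (2 m L^2).
# Well width and effective mass below are assumed values for the sketch.

import math

HBAR = 1.0545718e-34      # reduced Planck constant, J*s
M_E = 9.1093837e-31       # free electron mass, kg
EV = 1.602176634e-19      # joules per electron-volt

def subband_energy_ev(n, well_width_m, m_eff_ratio):
    """Energy of the n-th subband (in eV) for an infinite square well."""
    m_eff = m_eff_ratio * M_E
    e_joules = (n ** 2) * (math.pi ** 2) * (HBAR ** 2) / (2 * m_eff * well_width_m ** 2)
    return e_joules / EV

# Example: 5 nm well, effective mass 0.1 * m_e (assumed values)
e1 = subband_energy_ev(1, 5e-9, 0.1)
e2 = subband_energy_ev(2, 5e-9, 0.1)
```

Note the quadratic spacing, E_n = n²·E_1, which is why thin wells (thickness comparable to the de Broglie wavelength) push the subbands far apart and make the quantized levels observable.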
Apparatuses, systems and methods associated with causing a physical layer (PHY) device to perform an operation are disclosed herein. In embodiments, an apparatus may include a memory device to store one or more activity lists associated with one or more external PHY devices, external to the apparatus, including a first external PHY device. The apparatus may further include a processor, that executes an engine, to receive a request for performance of the operation by the first external PHY device, identify an activity list associated with the first external PHY device from the one or more activity lists, identify an activity to effectuate performance of the operation from the activity list associated with the first external PHY device, and cause the first external PHY device to perform the operation in accordance with the activity.
Claims

What is claimed is:

1. An apparatus to cause a first external physical layer (PHY) device, external to the apparatus, to perform an operation, comprising: a memory device to store one or more activity lists associated with one or more external PHY devices including the first external PHY device; and a processor, that executes an engine, to: receive a request for performance of the operation by the first external PHY device; identify an activity list associated with the first external PHY device from the one or more activity lists; identify an activity to effectuate performance of the operation from the activity list associated with the first external PHY device; and cause the first external PHY device to perform the operation in accordance with the activity.

2. The apparatus of claim 1, wherein the apparatus is a system on chip (SoC) device, and the engine is located in a firmware driver of the SoC device.

3. The apparatus of any of the claims 1 and 2, wherein the processor is to further perform a checksum operation on data included in a header of the activity list to verify the activity list has not been corrupted, wherein causation of performance of the operation in accordance with the activity occurs in response to verification that the activity list has not been corrupted.

4. The apparatus of claim 3, wherein the checksum operation includes a CRC8 checksum operation, and wherein the data included in the header of the activity list includes a checksum value on which the CRC8 checksum operation is performed.

5. The apparatus of any of the claims 1 and 2, wherein causation of performance of the operation in accordance with the activity includes translation of one or more actions included in the activity into one or more PHY commands that cause the first external PHY device to perform the operation.

6.
The apparatus of any of the claims 1 and 2, wherein causation of performance of the operation in accordance with the activity includes translation of one or more actions included in the activity into one or more media access control commands that cause the first external PHY device to perform the operation.

7. The apparatus of any of the claims 1 and 2, wherein: the memory device is further to store a table of contents associated with the activity list; and the processor is further to access the table of contents, wherein identification of the activity is based on data contained in the table of contents.

8. The apparatus of any of the claims 1 and 2, wherein a header of the activity indicates storage locations of one or more actions included in the activity, wherein the processor is further to retrieve the one or more actions from the storage locations, and wherein causation of performance of the operation in accordance with the activity includes execution of the one or more actions retrieved from the storage locations.

9. The apparatus of claim 8, wherein the processor is further to: identify a first action of the one or more actions, wherein the first action includes an action call to a second action not located in the storage locations of the one or more actions included in the activity; and prevent execution of the first action in response to identification of the first action that includes the action call to the second action not located in the storage locations.

10. The apparatus of any of the claims 1 and 2, wherein the request for performance of the operation includes a PHY device indicator associated with the first external PHY device, and wherein the activity list associated with the first external PHY device is identified based, at least in part, on the PHY device indicator.

11.
A method to cause a physical layer (PHY) device to perform a first PHY-level task, comprising:

receiving, by a system on chip (SoC) device, a request to cause the PHY device to perform the first PHY-level task, the PHY device being external to the SoC device;

retrieving, by the SoC device, an activity associated with the PHY-level task from an activity list associated with the PHY device, the activity list being stored in memory and including one or more activities associated with one or more PHY-level tasks including the first PHY-level task; and

causing, by the SoC device, the PHY device to perform the first PHY-level task in accordance with the activity.

12. The method of claim 11, further comprising performing, by the SoC device, a checksum operation on data included in a header of the activity to verify the activity has not been corrupted, wherein causation of the performance of the first PHY-level task occurs in response to verification that the activity has not been corrupted.

13. The method of any of the claims 11 and 12, wherein causation of the performance of the first PHY-level task includes translating one or more actions included in the activity into one or more PHY commands that cause the external PHY device to perform the first PHY-level task.

14. The method of any of the claims 11 and 12, wherein causation of the performance of the first PHY-level task includes translating one or more actions included in the activity into one or more media access control commands that cause the external PHY device to perform the first PHY-level task.

15. The method of any of the claims 11 and 12, further comprising:

accessing, by the SoC device, a table of contents stored in the memory in response to reception of the request; and

identifying, by the SoC device, the activity based on data included in the table of contents.

16. 
The method of any of the claims 11 and 12, further comprising:

identifying, by the SoC device, storage locations of one or more actions included in the activity based on data included in a header of the activity; and

retrieving, by the SoC device, the one or more actions from the storage locations, wherein causation of the performance of the first PHY-level task includes executing, by the SoC device, the one or more actions retrieved from the storage locations.

17. The method of claim 16, further comprising:

identifying, by the SoC device, a first action of the one or more actions, wherein the first action includes an action call to a second action not located in the storage locations of the one or more actions; and

preventing, by the SoC device, execution of the first action in response to identification of the first action that includes the action call to the second action not located in the storage locations.

18. The method of claim 11, wherein the request includes a PHY device identifier associated with the PHY device, and wherein the method further comprises identifying, by the SoC device, the activity list associated with the PHY device based on the PHY device identifier.

19. One or more computer-readable media having instructions stored thereon, wherein the instructions, in response to execution by a device, cause the device to perform the methods of any of the claims 11-18.

20. 
An apparatus to cause a first external physical layer (PHY) device to perform an operation, comprising:

means for receiving a request to cause the PHY device to perform the first PHY-level task, the PHY device being external to the SoC device;

means for retrieving an activity associated with the PHY-level task from an activity list associated with the PHY device, the activity list being stored in memory and including one or more activities associated with one or more PHY-level tasks including the first PHY-level task; and

means for causing the PHY device to perform the first PHY-level task, through a firmware driver, in accordance with the activity.

21. The apparatus of claim 20, further comprising means for performing a checksum operation on data included in a header of the activity to verify the activity has not been corrupted, wherein causation of the performance of the first PHY-level task occurs in response to verification that the activity has not been corrupted.

22. The apparatus of any of the claims 20 and 21, wherein the means for causing the performance of the first PHY-level task includes means for translating one or more actions included in the activity into one or more PHY commands that cause the external PHY device to perform the first PHY-level task.

23. The apparatus of any of the claims 20 and 21, wherein the means for causing the performance of the first PHY-level task includes means for translating one or more actions included in the activity into one or more media access control commands that cause the external PHY device to perform the first PHY-level task.

24. The apparatus of any of the claims 20 and 21, further comprising:

means for accessing a table of contents stored in the memory in response to reception of the request; and

means for identifying the activity based on data included in the table of contents.

25. 
The apparatus of any of the claims 20 and 21, further comprising:

means for identifying storage locations of one or more actions included in the activity based on data included in a header of the activity; and

means for retrieving the one or more actions from the storage locations, wherein causation of the performance of the first PHY-level task includes executing, by the SoC device, the one or more actions retrieved from the storage locations.
PHYSICAL LAYER DEVICE OPERATION SYSTEM AND METHOD

Related Application

This application claims priority to U.S. Patent Application 14/959,440, entitled "PHYSICAL LAYER DEVICE OPERATION SYSTEM AND METHOD," filed December 4, 2015.

Technical Field

The present disclosure relates to the fields of electronic circuits and computing. More particularly, the present disclosure relates to operation of a physical layer (PHY) device, e.g., within a computing device.

Background

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Enablement of a PHY device within a computing device often involves communication between a media access controller (MAC) and the PHY device. In order for the communication to occur between the components, both the MAC and the PHY device must be communicating using the same programming construct. As no standard programming construct has been defined for communication between MACs and PHY devices, multiple different programming constructs may be utilized for communicating between MACs and PHY devices depending on the MACs and/or the PHY devices implemented.

In legacy computing devices, often the MAC and the PHY device were produced by different manufacturers who would develop the components without interaction between the manufacturers. The lack of interaction between the manufacturers would lead to the MAC and the PHY device communicating in a different programming construct from each other. In order to rectify this issue, the MAC manufacturer, the producer of the computing device and/or motherboard, and/or a third party would reprogram binary code within a driver associated with the PHY device to enable communication between the MAC and the PHY device.
This approach was complicated, time consuming, and costly to perform.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Figure 1 illustrates an example communication flow among components of an example computing device, according to various embodiments.

Figure 2 illustrates an example abstract representation of computing structures utilized for PHY enablement, according to various embodiments.

Figure 3 illustrates an example process of PHY enablement, according to various embodiments.

Figure 4 illustrates an example activity list header structure, according to various embodiments.

Figure 5 illustrates an example table of contents entry structure, according to various embodiments.

Figure 6 illustrates an example table of contents entry header structure, according to various embodiments.

Figure 7 illustrates another example table of contents entry structure, according to various embodiments.

Figure 8 illustrates an example action structure, according to various embodiments.

Figure 9 illustrates an example activity header structure, according to various embodiments.

Figure 10 illustrates an example computing device that may employ the apparatuses and/or methods described herein.

Detailed Description

Apparatuses, methods and storage medium associated with physical layer (PHY) device operation are disclosed herein. In embodiments, an apparatus may include one or more processors, devices, and/or circuitry to identify a PHY device and access an activity list associated with the PHY device.
The apparatus may access activities within the activity list that enable communication between a media access controller (MAC) and the PHY device to initiate PHY-level tasks to be performed by the PHY device. The apparatuses, methods and storage medium disclosed herein may configure and control third party PHY devices without modifications to the driver binary.

In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Aspects of the disclosure are disclosed in the accompanying description. Alternate embodiments of the present disclosure and their equivalents may be devised without departing from the spirit or scope of the present disclosure. It should be noted that like elements disclosed below are indicated by like reference numbers in the drawings.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B).
For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

For the purposes of the present disclosure, the phrase "user" is not limited to an individual. The phrase "user" may be interpreted as an entity, a corporation, a group of individuals, or some combination thereof.

The description may use the phrases "in an embodiment," "in embodiments," or "in some embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

Referring now to Figure 1, wherein an example communication flow among components of an example computing device 100, according to various embodiments, is shown. As illustrated, an apparatus according to embodiments may include a memory device 102, a system on a chip (SoC) device including a driver 106 and SoC hardware, a PHY device 114, or some combination thereof. The memory device 102, the SoC device and the PHY device 114 may be coupled to each other such that each device is able to communicate with the other devices.

The memory device 102 may include a non-volatile memory device, a volatile memory device, memory residing on the SoC device, a memory device external to the SoC device, a random-access memory (RAM) device, a read-only memory (ROM) device, or some combination thereof.
The memory device 102 may store one or more activity lists 104 that enable the SoC device to communicate with the PHY device 114 to cause the PHY device 114 to perform a PHY-level task. Each of the activity lists 104 may be associated with a PHY device, a type of PHY device, a programming construct for communicating with one or more PHY devices, a SoC device, a type of SoC device, a MAC residing on the SoC device, a type of MAC residing on the SoC device, or some combination thereof. Each activity list may include one or more activities, wherein each activity may cause the PHY device to perform a PHY-level task in response to execution of the activity.

A driver 106 may communicate with the memory device 102 and may access the one or more activity lists 104 stored on the memory device 102. The driver 106 may reside on the SoC device. The driver may reside in what would commonly be called firmware and may include a firmware driver. In some embodiments, the driver 106 may reside external to the SoC device and be coupled to the SoC device such that the driver 106 may communicate with the SoC device and/or vice versa.

The driver may be software to enable the SoC device to communicate with the PHY device 114. Execution of a portion of the software may cause the PHY device 114 to perform a PHY-level task. The driver 106 may include a firmware driver, a device driver, or some combination thereof. The driver 106 may be associated with the SoC device, the PHY device, or some combination thereof.

The driver 106 may include a processing engine 110. The processing engine 110 may have been installed on the apparatus with the driver, may have been added to the driver after installation of the driver, may have replaced and/or altered code within the driver to generate the processing engine 110, or some combination thereof.
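The arrangement just described, a memory device holding activity lists, each list associated with a PHY device and containing activities keyed to PHY-level tasks, might be modeled minimally as follows. This is a sketch only; the class names, fields, and sample contents are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Activity:
    task: str  # PHY-level task this activity effectuates
    actions: List[str] = field(default_factory=list)  # opaque action records

@dataclass
class ActivityList:
    phy_id: int  # identifier of the associated PHY device
    activities: Dict[str, Activity] = field(default_factory=dict)

# The memory device 102 sketched as a mapping from PHY identifier to
# activity list; the identifier and action text are illustrative only.
memory_device: Dict[int, ActivityList] = {
    0x8086: ActivityList(
        phy_id=0x8086,
        activities={"link_up": Activity("link_up", ["write reg 0x00 0x1140"])},
    ),
}
```

A lookup such as `memory_device[0x8086].activities["link_up"]` then corresponds to the driver selecting the activity for a requested PHY-level task.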
In some embodiments, the processing engine 110 may be separate from the driver 106 and coupled to the driver 106.

The processing engine 110 may identify the PHY device 114, a programming construct to be utilized for communicating with the PHY device 114, a MAC of the SoC device, or some combination thereof. The identification of the PHY device 114 may include receiving, by the processing engine, a PHY device identifier associated with the PHY device 114. The PHY device identifier may be received in a request for the PHY device to perform some PHY-level task. The processing engine 110 may access the activity lists 104 stored on the memory device 102 and may determine which of the activity lists 104 should be utilized for communicating with the PHY device 114 based on the identified PHY device 114, the programming construct, the MAC, or some combination thereof.

The driver 106 may receive a request for performance of a PHY-level task from the SoC device and/or the PHY device 114. The processing engine 110 may utilize the request to determine the PHY-level task for which performance is being requested. The processing engine 110 may utilize the identified PHY-level task to identify one of the activities associated with the PHY-level task within the activity list that should be utilized for communicating with the PHY device 114. The activity associated with the PHY-level task may cause the PHY device 114 to perform the PHY-level task when executed.

The processing engine 110 may operate in combination with the code portion 108 of the driver 106. The processing engine 110 may utilize the code portion 108 of the driver 106 to translate the activity into one or more commands in a programming construct that can be read by the PHY device 114. The code portion 108 may provide for translation of the activity into a specific programming construct, such as MDI code and/or I2C code.
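The translation step performed by the code portion 108 might be sketched as below. The action tuple format and the MDI/I2C-style command strings are assumptions for illustration; the disclosure states only that actions are translated into a programming construct (such as MDI or I2C code) that the PHY device can read.

```python
def translate_action(action, construct="mdi"):
    """Translate one abstract action into a bus-level command string.

    The ("write"/"read", register, value) tuple shape and the command
    text are hypothetical stand-ins for a real encoding.
    """
    op, reg, value = action
    if construct == "mdi":
        if op == "write":
            return f"MDI_WRITE reg=0x{reg:02X} val=0x{value:04X}"
        return f"MDI_READ reg=0x{reg:02X}"
    if construct == "i2c":
        if op == "write":
            return f"I2C_WRITE addr=0x{reg:02X} data=0x{value:04X}"
        return f"I2C_READ addr=0x{reg:02X}"
    raise ValueError(f"unknown construct: {construct}")

def translate_activity(actions, construct="mdi"):
    """Translate every action of an activity into commands."""
    return [translate_action(a, construct) for a in actions]
```

For example, `translate_activity([("write", 0x00, 0x1140)])` yields a single MDI-style write command for register 0x00.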
In some examples, the activity may be translated into PHY commands, MAC commands, or some combination thereof.

The one or more commands produced through the translation may be communicated by SoC hardware 112 to the PHY device 114. The PHY device 114 may be located external to the SoC device, including the SoC hardware 112. The one or more commands may be communicated from the SoC hardware 112 to the PHY device 114 via one or more busses, including an advanced graphics port, an enhanced integrated drive electronics type bus, an extended industry standard architecture bus, an IEEE 1394 compliant bus, a personal computer memory card international association compliant bus, a peripheral component interconnect bus, a small computer system interface bus, a universal serial bus, a parallel port, a PS/2 port, a serial port, or some combination thereof. The one or more commands may cause the PHY device 114 to perform a PHY-level task associated with the activity. In some examples, the PHY device 114 may transmit a confirmation signal back to the SoC hardware 112 indicating that the PHY-level task has been completed.

Referring now to Figure 2, wherein a representation of computing structures utilized for PHY operation, according to various embodiments, is shown. The computing structures may be stored in a PHY activity lists portion 202 of a memory device, such as the memory device 102 of Figure 1 that stores the one or more activity lists 104 in a portion of the memory device 102.

The PHY activity lists portion 202 may include one or more activity lists, including activity list 204. The one or more activity lists may have similar features to other activity lists described throughout this disclosure, including the activity lists 104 of Figure 1.
Each activity list may be associated with a PHY device, a type of PHY device, a programming construct utilized for communicating with a type of PHY device, a SoC device, a MAC device, or some combination thereof. The activity lists may be utilized for communication between a SoC device and a PHY device.

The PHY activity lists portion 202 may further include a table of contents 205 associated with the one or more activity lists. The table of contents 205 may be utilized to determine which one of the activity lists should be utilized for communication between the SoC device and the PHY device based on the PHY device, the programming construct, the SoC device, and/or the MAC. A processing engine, such as processing engine 110 of Figure 1, may access the table of contents 205 and perform the determination. The table of contents 205 may further indicate a storage location of each of the activity lists in the memory device. In other embodiments, another means of determining which one of the activity lists should be utilized may be performed, such as by comparing some identifying factor of the PHY device with data included in the activity list 204. In some embodiments, in response to the computing device determining which one of the activity lists should be utilized for communication between the SoC device and the PHY device, the activity list, or a copy thereof, may be stored in a different location, from where initially accessed, within the computing device allowing for quicker access to the activity list. For example, when the one or more activity lists are stored on a memory device separate from the SoC device, a copy of the activity list to be utilized may be stored in a memory of the SoC device allowing for quicker access to the activity list by the SoC device.

Expanded activity list representation 206 is a representation of the activity list 204 showing contents of the activity list 204.
The expanded activity list representation 206 may be representative of one, all or some portion of the activity lists stored in the PHY activity lists portion 202.

The activity list 204 may include one or more activities, including activity 208, as shown in expanded activity list representation 206. Each of the activities within the activity list 204 may be associated with a PHY-level task to be performed by a PHY device associated with the activity list 204. In some embodiments, the activity list 204 may be limited to activities for making the PHY device operational, while other activities that are not for making the PHY device operational may be excluded from the activity list 204.

The activity list 204 may include a table of contents 209. The table of contents 209 may include data for identifying which activity is associated with which PHY-level task. The processing engine may access the table of contents 209 and may identify an activity from the activity list 204, associated with a desired PHY-level task, to be performed by the PHY device based on the data included in the table of contents 209. The table of contents 209 may further indicate a location of each of the activities within the activity list 204.

Expanded activity representation 210 is a representation of activity 208. The expanded activity representation 210 may be representative of one, all or some portion of the activities within the activity list 204.

The activity 208 may include one or more actions, including action 212, and a header 211. The header 211 may include one or more of the features described in relation to activity header structure 900 below. Each action may include code to cause a PHY device, such as PHY device 114 of Figure 1, to perform a portion of a PHY-level task associated with the activity 208. The actions may cause the PHY device to perform a PHY-level task associated with the activity 208 in response to a SoC device executing the actions.
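The execution model described for an activity's actions, running them in order from the first action, or running a specified number of actions, could be sketched as follows. The callable actions are stand-ins for whatever commands the driver actually issues to the PHY device.

```python
def execute_activity(actions, action_count=None):
    """Run the actions of an activity in order, as described above.

    Starts at the first action and runs each in turn until the last, or,
    when action_count is given, runs exactly that many actions starting
    from the first regardless of how many actions the activity holds.
    """
    to_run = actions if action_count is None else actions[:action_count]
    results = []
    for action in to_run:
        results.append(action())
    return results
```

For instance, `execute_activity([a, b, c], action_count=2)` runs only the first two actions, mirroring the "specified number of actions" variant.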
The SoC device may begin execution of the activity by executing the first action within the activity and progressively executing the actions within the activity until the last action within the activity is executed. In some embodiments, the SoC device may begin execution at the first action and may execute a specified number of actions after the first action, regardless of the number of actions within the activity 208.

Further, the activity list 204 may include a table of contents identifying a location of the activities within the corresponding activity list, a number of the activities within the corresponding activity list, a length of the activities within the activity list, a PHY-level task associated with each of the activities, or some combination thereof. In some embodiments, the table of contents may be located in a memory location between a header of the activity list and a first activity within the activity list.

In some embodiments, each activity list of the activity lists 104 may include a checksum value used for performing a checksum operation. The checksum operation may be performed on the checksum value to verify that the corresponding activity list has not been corrupted, has not been altered by an unauthorized user, or some combination thereof. The checksum operation may be performed in response to the computing device determining that the activity list is to be utilized for communication with the PHY device, the computing device accessing the activity list, or some combination thereof.

Referring now to Figure 3, wherein a process 300 of PHY operation, according to various embodiments, is shown. The process 300 may start in block 302, wherein a PHY device identifier may be obtained. The PHY device identifier may identify a PHY device, such as PHY device 114 of Figure 1.
The PHY device identifier may be obtained by a SoC device, such as the SoC device of Figure 1 encompassing the driver 106 and the SoC hardware 112, and may be obtained in response to receiving a request for the PHY device to perform a PHY-level task, detecting that the PHY device is newly installed, initialization of the PHY device, the computing device being turned on, initialization of the computing device, driver startup, or some combination thereof.

Once the PHY device identifier has been obtained, the process 300 may proceed to block 304 where the SoC device determines if there is an activity list associated with the PHY device identifier stored in a memory device accessible by the SoC device and which of the activity lists stored within the memory device is associated with the PHY device. Determining whether there is an activity list and which activity list is associated with the PHY device may be performed in accordance with any of the processes described throughout this disclosure, including the process of the SoC device accessing a table of contents, such as table of contents 205 of Figure 2, to determine which activity list is associated with the PHY device.

In response to the SoC device determining that there is not an activity list associated with the PHY device identifier stored within the memory device, the process 300 may proceed to block 306, where a default action is performed. The default action may include causing the computing device to display on a display device, coupled to the computing device, a message that the PHY-level task cannot be performed.
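The branch at blocks 304 and 306, look up an activity list for the obtained PHY device identifier and fall back to the default action when none is stored, might be sketched as below. The dictionary-based memory and the identifier values are hypothetical.

```python
def find_activity_list(memory, phy_id, default_action):
    """Sketch of blocks 304/306: return the activity list associated
    with phy_id, or perform the default action when no list is stored."""
    activity_list = memory.get(phy_id)
    if activity_list is None:
        return default_action()
    return activity_list

# Hypothetical memory contents keyed by PHY device identifier.
stored_lists = {0x1234: ["link_up", "link_down"]}
```

Here `find_activity_list(stored_lists, 0x9999, ...)` would invoke the default action, standing in for, e.g., displaying that the PHY-level task cannot be performed.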
In some embodiments, the default action may include performing a default PHY-level task, including displaying an indication on the PHY device that the PHY-level task cannot be performed, causing the PHY device and/or the SoC device to continue operation with a subsequent PHY-level task rather than waiting for completion of the current PHY-level task, reinitializing the PHY device, proceeding to block 302 to obtain the PHY device identifier and continue operation from block 302, or some combination thereof.

In response to the SoC device identifying which activity list is associated with the PHY device, the process 300 may proceed to block 308, where the activity list associated with the PHY device is accessed by the SoC device. In some embodiments, accessing the activity list may include generating a copy of the activity list in a local memory of the SoC device for quicker access to the activity list.

After accessing the activity list, the process 300 may proceed to block 310, where a signature of the activity list is checked in order to verify that the activity list is valid. The signature of the activity list may be located in a header of the activity list. The signature may indicate a user who created the activity list, a user who modified the activity list, a software program that created and/or modified the activity list, an address (such as an internet protocol address) of a user who created and/or modified the activity list, or some combination thereof. The SoC device may compare the signature to a list of trusted signatures to verify that the activity list was created and/or modified by a trusted party.

In some embodiments, the signature may include a checksum value. Verification that the activity list is valid may include performing a checksum calculation on the checksum value.
If the result of the checksum calculation is that the checksum value is the expected value, the activity list may be determined to be valid.

In other embodiments, the activity list may not include a signature. In these embodiments, the process may bypass block 310 and proceed directly from block 308 to block 312.

If the activity list is determined to be invalid based on the signature, the process 300 may proceed to block 306, where the default action is performed. If the activity list is determined to be valid, the process 300 may proceed to block 312 where the SoC device locates an activity within the activity list that is associated with the PHY-level task. Location of the activity may involve comparing an identifying feature of the PHY-level task with data within each activity to determine which activity is associated with the PHY-level task.

In some embodiments, the activity list may include a table of contents, such as table of contents 209 of Figure 2. The SoC device may access the table of contents and utilize data stored in the table of contents to determine which activity is associated with the PHY-level task. The table of contents may further include an indication of where each activity is located within the memory device, the local memory of the SoC device, or some combination thereof. The SoC device may utilize the indication of where each activity is located to access activity data included in the activity.

The process 300 may proceed to block 314, where the SoC device determines if it is able to locate the activity within the activity list. If the SoC device is unable to locate the activity within the activity list, the process 300 may proceed to block 306 where the default action is performed.

If the SoC device is able to locate the activity within the activity list, the process 300 may proceed to block 316, where the activity data included in the activity is accessed by the SoC device.
The activity data may include an activity header, one or more actions, or some combination thereof. The activity data may include the features described in relation to the expanded activity representation 210 of the activity 208, illustrated in Figure 2.

The SoC device may start accessing the activity by proceeding to a header of the activity. The header of the activity may include data associated with the contents of the activity, including a checksum value.

The process 300 may proceed to block 318, where a checksum operation is performed on the checksum value included in the header of the activity. The checksum operation may be performed to verify that the activity has not been corrupted. The checksum operation may include a parity byte and/or parity word checksum operation, a modular checksum operation, a position-dependent checksum operation, or some combination thereof. In some embodiments, the checksum operation may include a CRC8 checksum operation. The CRC8 checksum operation may utilize the polynomial expression x^8 + x^2 + x^1 + x^0, or 0x07 in normal polynomial representation.

If the checksum operation indicates that the activity has been corrupted (i.e., the checksum value is not an expected value), the process 300 may proceed to block 306 where the default action is performed. If the checksum operation indicates that the activity has not been corrupted (i.e., the checksum value is the expected value), the process 300 may proceed to block 320 where the activity is executed.

In some embodiments, the header may not include a checksum value. Further, in some embodiments, the activity may not include a header. In some embodiments where the activity does not include a header and/or a checksum value, the process 300 may exclude block 318 and may proceed directly from block 316 to block 320.
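A bitwise sketch of the CRC8 checksum operation with the stated polynomial 0x07 follows. The zero initial value and the absence of bit reflection are assumptions beyond what the disclosure specifies; it names only the polynomial.

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """CRC8 over data using the polynomial x^8 + x^2 + x^1 + x^0 (0x07).

    Processes one byte at a time, shifting the register left and XORing
    in the polynomial whenever the top bit is set.
    """
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

With these conventions, `crc8(b"123456789")` yields the commonly catalogued check value 0xF4 for CRC-8 with polynomial 0x07.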
In these embodiments, the SoC device may proceed directly to the first action of the activity list in response to accessing the activity.

In block 320, the SoC device may execute the activity. In response to execution of the activity, the SoC device may cause the PHY device to perform the PHY-level task. Execution of the activity may include any of the processes of executing an activity, including the process described in Figure 1 performed by the driver 106 and the SoC hardware 112. In particular, execution of the activity may involve translating the actions included in the activity list into one or more commands in a programming construct associated with the PHY device, where the PHY device performs the PHY-level task in response to executing the commands. The actions included in the activity may be translated into one or more PHY commands, MAC commands, or some combination thereof.

Referring now to Figure 4, wherein an activity list header structure 400, according to various embodiments, is shown. The activity list header structure 400 may be included in an activity list, such as any of the activity lists in the activity lists 104 of Figure 1 and/or the activity list 204 of Figure 2. The activity list header structure 400 may be located at the beginning of an activity list, prior to a table of contents and activities within the activity list, or some combination thereof. The activity list header structure 400 may be a structure array data type. The activity list header structure 400 may include a unique activity list name 402.

The activity list header structure 400 may include an identifier token 404. The identifier token 404 may be a 16-bit unsigned integer.
The identifier token 404 may be associated with a PHY device, a type of PHY device, a programming construct for communicating with one or more PHY devices, a SoC device, a type of SoC device, a MAC residing on the SoC device, a type of MAC residing on the SoC device, or some combination thereof.

In some embodiments, the identifier token 404 may be utilized for determining which activity list should be utilized in communicating with a PHY device. A SoC device may perform a comparison of a PHY device identifier, such as the PHY device identifier described in relation to block 302 of Figure 3, with the identifier token to determine whether the activity list should be utilized in communicating with the PHY device.

The activity list header structure 400 may include an activity list size indicator 406. The activity list size indicator 406 may be a 16-bit unsigned integer. The activity list size indicator 406 may indicate an amount of memory space occupied by the activity list, an amount of memory space occupied by each activity within the activity list, or some combination thereof.

The activity list header structure 400 may include a number of activities indicator 408. The number of activities indicator 408 may be a 16-bit unsigned integer. The number of activities indicator 408 may indicate a number of activities within the activity list with which the activity list header structure is associated.

The activity list header structure 400 may include a version indicator 410. The version indicator 410 may be a 16-bit unsigned integer. The version indicator 410 may indicate a version of the activity list in which the activity list header structure 400 is included. The version indicator 410 may be incremented and/or updated in another manner in response to the activity list being updated and/or amended.
In some embodiments, the version indicator 410 may further indicate the user that generated the updated version of the activity list, a software program that generated the updated version of the activity list, or some combination thereof.

The activity list header structure 400 may include a checksum value 412. A checksum operation may be performed on the checksum value 412 in order to verify that the activity list has not been corrupted. The checksum operation may be performed in accordance with any of the checksum operations described throughout this disclosure, including the checksum operation described in relation to the activity lists 104 of Figure 1 and/or the checksum operation performed in relation to block 310 of Figure 3.

Referring now to Figure 5, wherein a table of contents entry structure 500, according to various embodiments, is shown. The table of contents entry structure 500 may be included in an activity list, such as any of the activity lists in the activity lists 104 of Figure 1 and/or the activity list 204 of Figure 2. The table of contents entry structure 500 may be located after a header of the activity list and prior to activities within the activity list. The table of contents entry structure 500 may be a structure array data type. The table of contents entry structure 500 may include a unique table of contents entry name 502.

The table of contents entry structure 500 may include an activity name 504. The activity name 504 may be associated with an activity to be performed by a PHY device. The activity name 504 may be a 16-bit unsigned integer.

The table of contents entry structure 500 may include an activity location 506. The activity location 506 may be a 16-bit unsigned integer.
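For illustration, the activity list header fields described above (404 through 412) might be laid out as a C struct, with the identifier token used to select the list matching a given PHY device. The field names, the 16-bit checksum width, and the lookup function are assumptions made for this sketch, not definitions from the disclosure:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout of the activity list header (structure 400). */
struct activity_list_header {
    uint16_t identifier_token;   /* 404: associates the list with a PHY/SoC */
    uint16_t list_size;          /* 406: memory occupied by the list */
    uint16_t num_activities;     /* 408: number of activities in the list */
    uint16_t version;            /* 410: bumped when the list is amended */
    uint16_t checksum;           /* 412: verified before the list is used */
};

/* Select the activity list whose identifier token matches the PHY device
 * identifier (compare block 302 of Figure 3); NULL if none matches. */
static const struct activity_list_header *
find_list(const struct activity_list_header *const lists[], size_t n,
          uint16_t phy_id)
{
    for (size_t i = 0; i < n; i++)
        if (lists[i]->identifier_token == phy_id)
            return lists[i];
    return NULL;
}
```

With all fields 16 bits wide, the header packs into ten contiguous bytes on typical ABIs, which is convenient for a firmware driver reading it directly out of memory.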
The activity location 506 may indicate a location in memory at which the activity is located, a location of the activity relative to the beginning and/or end of the activity list, or some combination thereof.

Referring now to Figure 6, wherein a table of contents entry header structure 600, according to various embodiments, is shown. The table of contents entry header structure 600 may be included in an activity list, such as any of the activity lists in the activity lists 104 of Figure 1 and/or the activity list 204 of Figure 2. The table of contents entry header structure 600 may be a structure array data type. The table of contents entry header structure 600 may include a unique header name 602.

The table of contents entry header structure 600 may include a number of activities indication 604. The number of activities indication 604 may be a 16-bit unsigned integer data type. The number of activities indication 604 may indicate a number of the activities within the activity list.

The table of contents entry header structure 600 may include a table of contents item pointer 606. The table of contents item pointer 606 may define a pointer of the type of the table of contents entry structure 500. Accordingly, the table of contents item pointer 606 may point to a table of contents entry within memory, wherein the table of contents entry includes both an activity name, such as the activity name 504 of Figure 5, and an activity location, such as the activity location 506 of Figure 5.
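A sketch of the table of contents entry (structure 500) and entry header (structure 600), with the item pointer advanced entry by entry to resolve an activity name to its location, might look like this in C; the field names and the search interface are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative table of contents entry (structure 500). */
struct toc_entry {
    uint16_t activity_name;      /* 504: names the PHY-level activity */
    uint16_t activity_location;  /* 506: where the activity is stored */
};

/* Illustrative table of contents entry header (structure 600). */
struct toc_header {
    uint16_t num_activities;          /* 604: entries in the table */
    const struct toc_entry *item;     /* 606: points at the current entry */
};

/* Advance the item pointer through the table of contents until the entry
 * for the desired activity is found, or all entries have been visited. */
static const struct toc_entry *toc_find(struct toc_header *hdr, uint16_t name)
{
    for (uint16_t i = 0; i < hdr->num_activities; i++, hdr->item++)
        if (hdr->item->activity_name == name)
            return hdr->item;
    return 0;   /* accessed every entry without finding a match */
}
```

The returned entry's activity location would then be used to address the activity itself within the activity list.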
As access to the table of contents by a SoC device progresses through the table of contents, the table of contents item pointer 606 may be progressively incremented to a next activity entry in the table of contents until the SoC device identifies the activity associated with a desired PHY-level task to be performed by a PHY device and/or the SoC device has accessed all the activity entries within the table of contents.

Referring now to Figure 7, wherein another table of contents entry structure 700, according to various embodiments, is shown. The table of contents entry structure 700 may be easier to process than the table of contents entry structure 500. Accordingly, the table of contents entry structure 700 may be utilized for computing devices and/or SoC devices with limited abilities. The table of contents entry structure 700 may be included in an activity list, such as any of the activity lists in the activity lists 104 of Figure 1 and/or the activity list 204 of Figure 2. The table of contents entry structure 700 may be located after a header of the activity list and prior to activities within the activity list. The table of contents entry structure 700 may be a structure array data type. The table of contents entry structure 700 may include a unique table of contents entry name 702.

The table of contents entry structure 700 may include a validity identifier 704. The validity identifier 704 may be a 16-bit unsigned integer. The validity identifier 704 may be utilized to verify that a table of contents entry is a valid entry.

The table of contents entry structure 700 may include an identifier offset 706. The identifier offset 706 may be a 16-bit unsigned integer. The identifier offset 706 may indicate an offset amount by which a first activity follows a header of the activity list.

Referring now to Figure 8, wherein an action structure 800, according to various embodiments, is shown. The action structure 800 may define an action, such as the action 212 of Figure 2.
The action structure 800 may be a structure array data type. The action structure 800 may include a unique action name 802.

The action structure 800 may include a command field 804. The command field 804 may be an 8-bit unsigned integer. The command field 804 may include a command to execute a certain function. The command within the command field 804 may be a MAC command and/or a PHY command. The command field 804 may include a hardware-level command, such as a load command, a swap command, a copy command, a test command, a less-than comparison command, a greater-than comparison command, an equal-to comparison command, a jump command, a nested jump command, an unconditional jump command, a read from multi-media domain (MMD) command, a read from I2C bus command, a read from multiple document interface (MDI) command, a write to MMD command, a write to I2C bus command, a write to MDI command, an increment command, a decrement command, a logical OR command, a logical AND command, a logical NOT command, a write to MAC command, a read from MAC command, a mathematical add command, a right shift command, a left shift command, a wait command, a version print command, and/or a RET command (placing a return address in an instruction pointer of the computing device).

The action structure 800 may include a first argument 806 and a second argument 808 associated with the command field 804. The first argument 806 and the second argument 808 may each comprise an 8-bit unsigned integer. The first argument 806 and the second argument 808 may be used to indicate parameter usage for the command field 804. For most actions, arguments may denote the source and destination for the operation being performed by the action. For example, the first argument 806 could be "Use Internal Register A as source" and the second argument 808 could be "Use Internal Register B as destination" if the command field 804 is "Copy".
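The "Copy" example above might be encoded as follows. The struct layout mirrors the fields 804 through 810, while the numeric command and argument encodings are invented purely for illustration, since the disclosure does not define them:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative action record (structure 800). */
struct action {
    uint8_t  command;   /* 804: which operation to perform */
    uint8_t  arg1;      /* 806: typically the source of the operation */
    uint8_t  arg2;      /* 808: typically the destination */
    uint32_t data;      /* 810: immediate data, if an argument selects it */
};

/* Invented encodings -- the disclosure assigns no numeric values. */
enum { CMD_COPY = 0x01 };
enum { ARG_INTERNAL_REG_A = 0x00, ARG_INTERNAL_REG_B = 0x01,
       ARG_IMMEDIATE = 0xFE, ARG_NONE = 0xFF };

/* "Copy Internal Register A to Internal Register B." */
static const struct action copy_a_to_b = {
    .command = CMD_COPY,
    .arg1    = ARG_INTERNAL_REG_A,  /* "Use Internal Register A as source" */
    .arg2    = ARG_INTERNAL_REG_B,  /* "Use Internal Register B as destination" */
    .data    = 0,                   /* unused, kept for structural uniformity */
};
```

Keeping every action the same fixed size, as here, is what lets the action pointer described later step through an activity by a constant increment.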
In other embodiments, either or both of the first argument 806 and the second argument 808 may be "no argument". The value and use of the first argument 806 and the second argument 808 may be dependent on the value and design of the command field 804. Further, some examples of the first argument 806 and/or the second argument 808 may include "Use immediate data" (which may be a reference to data within a data field 810), "Use Internal Register C", or "Use Internal Register D". The command field 804 may determine what the first argument 806 and/or the second argument 808 are in context.

The action structure 800 may include the data field 810. The data field 810 may be a 32-bit unsigned integer. The data within the data field 810 may be used by the command field 804 if directed so by either or both of the first argument 806 and the second argument 808. While other embodiments may use two (or more) data fields, such as the data field 810, one is shown here for clarity and, in practice, may be all that is desired. Data fields may not always be used, although they may be maintained for uniformity of the action structure 800 and/or prospective use at another time.

Referring now to Figure 9, wherein an activity header structure 900, according to various embodiments, is shown. The activity header structure 900 may be included in an activity, such as the activity 208 of Figure 2. The activity header structure 900 may be located at a beginning of the activity, prior to any actions within the activity. The activity header structure 900 may be a structure array data type. The activity header structure 900 may include a unique activity header name 902.

The activity header structure 900 may include an activity name 904. The activity name 904 may be a 16-bit unsigned integer. The activity name 904 may indicate a PHY-level operation to be performed by a PHY device.

The activity header structure 900 may include a size indication 906. The size indication 906 may be a 16-bit unsigned integer.
The size indication 906 may indicate a size of the activity. The size indication 906 may indicate a number of actions (such as the action 212 of Figure 2) within the activity, an amount of memory occupied by the activity, an amount of memory occupied by each action within the activity, or some combination thereof.

The activity header structure 900 may include a checksum value 908. The checksum value 908 may be a 16-bit unsigned integer. A checksum operation may be performed with the checksum value 908 to verify that the activity has not been corrupted. The checksum operation may include any of the checksum operations described throughout this disclosure, such as the checksum operation described in block 318 of Figure 3.

The activity header structure 900 may include an action pointer 910. The action pointer 910 may be a pointer to an action structure as defined by the action structure 800 of Figure 8. The action pointer 910 may be initialized to point to a storage location of a first action within the action list. As the activity is executed, the action pointer 910 may increment to point to a storage location of the next action in response to the current action being completed. Each action within the action list may be the same size, such that each time the action pointer 910 increments to the next action, the action pointer 910 increments by a same number of memory locations. The action pointer 910 may be continually incremented through the actions within the activity until the last action within the activity is executed, at which point the action pointer 910 may be directed to point to the storage location of the first action within the activity.

In some embodiments, the action pointer 910 may be prevented from pointing to a memory location outside of the current activity. The current activity may be defined to be a certain size based on the size indication 906.
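The bounds check just described, which refuses to let the action pointer leave the current activity, might be sketched as follows; the action pointer is modeled as an index, and the jump command encoding is invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct action { uint8_t command, arg1, arg2; uint32_t data; }; /* structure 800 */

enum { CMD_NOP = 0x00, CMD_JUMP = 0x02 };  /* invented encodings */

/* Step the action pointer (here an index, per 910) through the activity,
 * returning -1 if any action would redirect it outside the activity. */
static int run_activity(const struct action *acts, size_t num_actions)
{
    size_t ap = 0;                      /* action pointer 910 */
    while (ap < num_actions) {
        size_t next = ap + 1;           /* default: increment to next action */
        if (acts[ap].command == CMD_JUMP)
            next = acts[ap].data;       /* jump target from data field 810 */
        if (next > num_actions)
            return -1;                  /* outside the activity: corrupted */
        ap = next;
    }
    return 0;
}
```

On the -1 path a SoC device would stop executing actions and could fall back to a default action, consistent with the corruption handling described for block 306.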
The current activity may be defined as being a size indicated by the size indication 906, and may be measured as the indicated size from the first memory location of the activity, from the first memory location of the activity header, from after the last memory location of the activity header, or some combination thereof.

In response to an action within the activity attempting to address the action pointer 910 to a location outside of the current activity, the SoC device may determine that the activity has been corrupted. The SoC device may prevent performance of any further actions, including the contents of the location outside of the current activity to which the action pointer 910 is attempting to be addressed, in response to the determination that the activity has been corrupted. Further, the SoC device may perform a default action, such as the default action described in relation to block 306 of Figure 3, in response to the determination that the activity has been corrupted.

In other embodiments, the SoC device may ignore any attempts to address the action pointer 910 to a location outside of the current activity and continue incrementing the action pointer 910 to the storage location of the next action within the current activity. The SoC device may continue to perform the actions within the current activity as the action pointer 910 is incremented through the actions within the current activity, without addressing the action pointer 910 to the location outside of the current activity and/or performing the activity associated with the location outside of the current activity.

Figure 10 illustrates an example computing device 1000 that may employ the apparatuses and/or methods described herein (e.g., memory 102, driver 106, SoC hardware 112, PHY device 114, and process 300), in accordance with various embodiments.
As shown, computing device 1000 may include a number of components, such as one or more processor(s) 1004 (one shown) and at least one communication chip 1006.

In various embodiments, the one or more processor(s) 1004 each may include one or more processor cores. In various embodiments, the at least one communication chip 1006 may be physically and electrically coupled to the one or more processor(s) 1004. In further implementations, the communication chip 1006 may be part of the one or more processor(s) 1004. In various embodiments, computing device 1000 may include printed circuit board (PCB) 1002. For these embodiments, the one or more processor(s) 1004 and communication chip 1006 may be disposed thereon. In alternate embodiments, the various components may be coupled without the employment of PCB 1002.

Depending on its applications, computing device 1000 may include other components that may or may not be physically and electrically coupled to the PCB 1002. These other components include, but are not limited to, memory controller 1005, volatile memory (e.g., dynamic random access memory (DRAM) 1008), non-volatile memory such as read only memory (ROM) 1010, flash memory 1012, storage device 1011 (e.g., a hard-disk drive (HDD)), an I/O controller 1014, a digital signal processor (not shown), a crypto processor (not shown), a graphics processor 1016, one or more antennas 1018, a display (not shown), a touch screen display 1020, a touch screen controller 1022, a battery 1024, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 1028, a compass 1030, an accelerometer (not shown), a gyroscope (not shown), a speaker 1032, a camera 1034, and a mass storage device (such as a hard disk drive, a solid state drive, a compact disk (CD), or a digital versatile disk (DVD)) (not shown), and so forth.

In some embodiments, the one or more processor(s) 1004, flash memory 1012, and/or storage device 1011 may include associated firmware (not shown) storing
programming instructions configured to enable computing device 1000, in response to execution of the programming instructions by one or more processor(s) 1004, to practice all or selected aspects of the methods described herein. In various embodiments, these aspects may additionally or alternatively be implemented using hardware separate from the one or more processor(s) 1004, flash memory 1012, or storage device 1011.

In various embodiments, one or more components of the computing device 1000 may include the memory device 102, the driver 106, the SoC hardware 112, and/or the PHY device 114 described herein. For example, the memory device 102, the driver 106, the SoC hardware 112, and/or the PHY device 114 may be included in I/O controller 1014, processor 1004, memory controller 1005, and/or another component of computing device 1000. In some embodiments, I/O controller 1014 may interface with one or more external devices to receive a data signal using the memory device 102, the driver 106, the SoC hardware 112, and/or the PHY device 114. Additionally or alternatively, the memory device 102, the driver 106, the SoC hardware 112, and/or the PHY device 114 may be used to receive a data signal transmitted between two components of the computing device 1000.

The communication chips 1006 may enable wired and/or wireless communications for the transfer of data to and from the computing device 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
The communication chip 1006 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 802.20, Long Term Evolution (LTE), LTE Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1006 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

In various implementations, the computing device 1000 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a computing tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console or automotive entertainment unit), a digital camera, an appliance, a portable music player, or a digital video recorder. In further implementations, the computing device 1000 may be any other electronic device that processes data.

It will be apparent to those skilled in the art that various modifications and variations can be made in the disclosed embodiments of the disclosed device and associated methods without departing from the spirit or scope of the disclosure.
Thus, it is intended that the present disclosure covers the modifications and variations of the embodiments disclosed above provided that the modifications and variations come within the scope of any claims and their equivalents.

Example 1 may include an apparatus to cause a first external physical layer (PHY) device, external to the apparatus, to perform an operation. The apparatus may include a memory device to store one or more activity lists associated with one or more external PHY devices including the first external PHY device. The apparatus may further include a processor, that executes an engine, to receive a request for performance of the operation by the first external PHY device, identify an activity list associated with the first external PHY device from the one or more activity lists, identify an activity to effectuate performance of the operation from the activity list associated with the first external PHY device, and cause the first external PHY device to perform the operation in accordance with the activity.

Example 2 may include the apparatus of example 1, wherein the apparatus is a system on chip (SoC) device, and the engine is located in a firmware driver of the SoC device.

Example 3 may include the apparatus of any of the examples 1 and 2, wherein the processor is to further perform a checksum operation on data included in a header of the activity list to verify the activity list has not been corrupted, wherein causation of performance of the operation in accordance with the activity occurs in response to verification that the activity list has not been corrupted.

Example 4 may include the apparatus of example 3, wherein the checksum operation includes a CRC8 checksum operation, and wherein the data included in the header of the activity list includes a checksum value on which the CRC8 checksum operation is performed.

Example 5 may include the apparatus of any of the examples 1 and 2, wherein causation of performance of operation in accordance with
the activity includes translation of one or more actions included in the activity into one or more PHY commands that cause the first external PHY device to perform the operation.

Example 6 may include the apparatus of any of the examples 1 and 2, wherein causation of performance of operation in accordance with the activity includes translation of one or more actions included in the activity into one or more media access control commands that cause the first external PHY device to perform the operation.

Example 7 may include the apparatus of any of the examples 1 and 2, wherein the memory device is further to store a table of contents associated with the activity list and the processor is further to access the table of contents, wherein identification of the activity is based on data contained in the table of contents.

Example 8 may include the apparatus of any of the examples 1 and 2, wherein a header of the activity indicates storage locations of one or more actions included in the activity, wherein the processor is further to retrieve the one or more actions from the storage locations, and wherein causation of performance of the operation in accordance with the activity includes execution of the one or more actions retrieved from the storage locations.

Example 9 may include the apparatus of example 8, wherein the processor is further to identify a first action of the one or more actions, wherein the first action includes an action call to a second action not located in the storage locations of the one or more actions included in the activity and prevent execution of the first action in response to identification of the first action that includes the action call to the second action not located in the storage locations.

Example 10 may include the apparatus of any of the examples 1 and 2, wherein the request for performance of the operation includes a PHY device indicator associated with the first external PHY device, and wherein the activity list associated with the
first external PHY device is identified based, at least in part, on the PHY device indicator.

Example 11 may include one or more computer-readable media having instructions stored thereon, wherein the instructions, in response to execution by a device, cause the device to retrieve an activity, which includes one or more actions to effectuate performance of a PHY-level task by an external physical layer (PHY) device located external to the device, from an activity list stored on a memory device, the activity list associated with the external PHY device, and cause the external PHY device to perform the PHY-level task in accordance with the activity.

Example 12 may include the one or more computer-readable media of example 11, wherein the instructions further cause the device to receive a request for the external PHY device to perform the PHY-level task, wherein the activity is retrieved in response to reception of the request.

Example 13 may include the one or more computer-readable media of any of the examples 11 and 12, wherein the instructions further cause the device to access a table of contents associated with the activity list, the table of contents stored on the memory device, and identify the activity based on data contained in the table of contents.

Example 14 may include the one or more computer-readable media of any of the examples 11 and 12, wherein the instructions further cause the device to perform a checksum operation on data included in a header of the activity to verify that the activity has not been corrupted, wherein causation of performance of the PHY-level task in accordance with the activity occurs in response to verifying the activity has not been corrupted.

Example 15 may include the one or more computer-readable media of any of the examples 11 and 12, wherein causation of performance of the PHY-level task includes translation of the one or more actions into one or more PHY commands that effectuate performance of the PHY-level task by the external
PHY device.

Example 16 may include the one or more computer-readable media of any of the examples 11 and 12, wherein causation of performance of the PHY-level task includes translation of the one or more actions into one or more media access control commands that effectuate performance of the PHY-level task by the external PHY device.

Example 17 may include the one or more computer-readable media of any of the examples 11 and 12, wherein a header of the activity indicates storage locations of the one or more actions, wherein the instructions further cause the device to retrieve the one or more actions from the storage locations, and wherein causation of performance of the PHY-level task in accordance with the activity includes execution of the one or more actions.

Example 18 may include the one or more computer-readable media of example 17, wherein the instructions further cause the device to identify a first action of the one or more actions, wherein the first action includes an action call to a second action not located in the storage locations of the one or more actions and prevent execution of the first action in response to identification of the first action that includes the action call to the second action not located in the storage locations.

Example 19 may include the one or more computer-readable media of example 12, wherein the request includes a PHY device identifier associated with the external PHY device, and wherein the instructions further cause the device to identify the activity list associated with the external PHY device based on the PHY device identifier.

Example 20 may include a method to cause a physical layer (PHY) device to perform a first PHY-level task, comprising receiving, by a system on chip (SoC) device, a request to cause the PHY device to perform the first PHY-level task, the PHY device being external to the SoC device, retrieving, by the SoC device, an activity associated with the PHY-level task from an activity list associated with the PHY
device, the activity list being stored in memory and including one or more activities associated with one or more PHY-level tasks including the first PHY-level task, and causing, by the SoC device, the PHY device to perform the first PHY-level task in accordance with the activity.

Example 21 may include the method of example 20, further comprising performing, by the SoC device, a checksum operation on data included in a header of the activity to verify the activity has not been corrupted, wherein causation of the performance of the first PHY-level task occurs in response to verification that the activity has not been corrupted.

Example 22 may include the method of any of the examples 20 and 21, wherein causation of the performance of the first PHY-level task includes translating one or more actions included in the activity into one or more PHY commands that cause the external PHY device to perform the first PHY-level task.

Example 23 may include the method of any of the examples 20 and 21, wherein causation of the performance of the first PHY-level task includes translating one or more actions included in the activity into one or more media access control commands that cause the external PHY device to perform the first PHY-level task.

Example 24 may include the method of any of the examples 20 and 21, further comprising accessing, by the SoC device, a table of contents stored in the memory in response to reception of the request and identifying, by the SoC device, the activity based on data included in the table of contents.

Example 25 may include the method of any of the examples 20 and 21, further comprising identifying, by the SoC device, storage locations of one or more actions included in the activity based on data included in a header of the activity and retrieving, by the SoC device, the one or more actions from the storage locations, wherein causation of the performance of the first PHY-level task includes executing, by the SoC device, the one or more actions retrieved
from the storage locations.

Example 26 may include the method of example 25, further comprising identifying, by the SoC device, a first action of the one or more actions, wherein the first action includes an action call to a second action not located in the storage locations of the one or more actions and preventing, by the SoC device, execution of the first action in response to identification of the first action that includes the action call to the second action not located in the storage locations.

Example 27 may include the method of example 20, wherein the request includes a PHY device identifier associated with the PHY device, and wherein the method further comprises identifying, by the SoC device, the activity list associated with the PHY device based on the PHY device identifier.

Example 28 may include an apparatus to cause a first external physical layer (PHY) device to perform an operation, comprising means for receiving a request to cause the PHY device to perform the first PHY-level task, the PHY device being external to the SoC device, means for retrieving an activity associated with the PHY-level task from an activity list associated with the PHY device, the activity list being stored in memory and including one or more activities associated with one or more PHY-level tasks including the first PHY-level task, and means for causing the PHY device to perform the first PHY-level task, through a firmware driver, in accordance with the activity.

Example 29 may include the apparatus of example 28, further comprising means for performing a checksum operation on data included in a header of the activity to verify the activity has not been corrupted, wherein causation of the performance of the first PHY-level task occurs in response to verification that the activity has not been corrupted.

Example 30 may include the apparatus of any of the examples 28 and 29, wherein the means for causing of the performance of the first PHY-level task includes means for translating one
or more actions included in the activity into one or more PHY commands that cause the external PHY device to perform the first PHY-level task.Example 31 may include the apparatus of any of the examples 28 and 29, wherein the means for causing of the performance of the first PHY-level task includes means for translating one or more actions included in the activity into one or more media access control commands that cause the external PHY device to perform the first PHY-level task.Example 32 may include the apparatus of any of the examples 28 and 29, further comprising means for accessing a table of contents stored in the memory in response to reception of the request and means for identifying the activity based on data included in the table of contents.Example 33 may include the apparatus of any of the examples 28 and 29, further comprising means for identifying storage locations of one or more actions included in the activity based on data included in a header of the activity and means for retrieving the one or more actions from the storage locations, wherein causation of the performance of the first PHY- level task includes executing, by the SoC device, the one or more actions retrieved from the storage locations.Example 34 may include the apparatus of example 33, further comprising means for identifying a first action of the one or more actions, wherein the first action includes an action call to a second action not located in the storage locations of the one or more actions and means for preventing execution of the first action in response to identification of the first action that includes the action call to the second action not located in the storage locations.
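The flow described across Examples 20 through 27 can be sketched as a short program: look up the activity for a requested PHY-level task via a table of contents, verify the activity's header checksum before running it, refuse actions that call outside the known storage locations, and only then translate actions into PHY commands. The sketch below is purely illustrative; the names (`Activity`, `dispatch`), the use of CRC-32 as the checksum, and the dictionaries standing in for the table of contents and the command translation are assumptions, not structures from the disclosure.

```python
import zlib

# Hypothetical sketch of the activity-dispatch flow of Examples 20-27.
# All names and data structures here are illustrative assumptions.

class Activity:
    def __init__(self, name, actions):
        self.name = name                  # the PHY-level task this activity serves
        self.actions = list(actions)      # ordered action identifiers
        # Header checksum over the action list (Example 21 verifies such a checksum)
        self.checksum = zlib.crc32(repr(self.actions).encode())

def dispatch(requested_task, table_of_contents, phy_commands):
    """Locate the activity for a requested PHY-level task and run its actions."""
    # Example 24: identify the activity via the table of contents.
    activity = table_of_contents.get(requested_task)
    if activity is None:
        return None
    # Example 21: refuse to run an activity whose checksum does not verify.
    if zlib.crc32(repr(activity.actions).encode()) != activity.checksum:
        raise ValueError("activity corrupted")
    results = []
    for action in activity.actions:
        # Example 26: prevent execution of an action that calls outside
        # the known storage locations.
        if action not in phy_commands:
            raise ValueError("action not in storage locations")
        # Example 22: translate the action into a PHY command and execute it.
        results.append(phy_commands[action]())
    return results
```

For example, with a table of contents `{"link_up": Activity("link_up", ["reset", "train"])}` and a command table mapping `"reset"` and `"train"` to callables, `dispatch("link_up", ...)` runs both actions in order; removing `"train"` from the command table makes the dispatch fail before execution, mirroring Example 26's guard.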
Disclosed herein are structures and methods for large integrated circuit (IC) dies, as well as related assemblies and devices. For example, in some embodiments, an IC die may include: a first subvolume including first electrical structures, wherein the first electrical structures include devices in a first portion of a device layer of the IC die; a second subvolume including second electrical structures, wherein the second electrical structures include devices in a second portion of the device layer of the IC die; and a third subvolume including electrical pathways between the first subvolume and the second subvolume; wherein the IC die has an area greater than 750 square millimeters.
1. An integrated circuit, IC, die (100), comprising:
a first subvolume (102-1) including first electrical structures, wherein the first electrical structures include devices in a first portion of a device layer (106) of the IC die (100), wherein electrical structures include power delivery structures, dynamic random access memory, DRAM, static random access memory, SRAM, camera sensors, and high-yield, low-density logic, wherein the first electrical structures provide a processing unit and include first conductive vias;
a second subvolume (102-2) including second electrical structures, wherein the second electrical structures include devices in a second portion of the device layer (106) of the IC die (100), wherein the second electrical structures provide a memory device and include second conductive vias, and the first conductive vias have a material composition different than a material composition of the second conductive vias, wherein the devices in the first portion of the device layer (106) of the IC die (100) include tri-gate transistors having a first fin height, the devices in the second portion of the device layer (106) of the IC die (100) include tri-gate transistors having a second fin height, and the first fin height is different than the second fin height; and
a third subvolume including electrical pathways between the first subvolume (102-1) and the second subvolume (102-2);
wherein the IC die has an area greater than 750 square millimeters.
2. The IC die (100) of claim 1, wherein the IC die (100) has a lateral dimension (116, 118) greater than 33 millimeters.
3. The IC die (100) of claim 2, wherein the lateral dimension (116, 118) is a first lateral dimension (116, 118), and the IC die (100) also has a second lateral dimension (116, 118) greater than 22 millimeters.
4. The IC die (100) of claim 1, wherein the third subvolume includes a first set of metallization layers and a second set of metallization layers, the first set of metallization layers is between the second set of metallization layers and the device layer (106), and the first set of metallization layers does not include any electrical pathways.
5. The IC die (100) of claim 1, wherein the memory device includes static random access memory, SRAM, devices or dynamic random access memory, DRAM, devices.
6. The IC die (100) of claim 1, wherein metallization of the third subvolume connects metallization of the first subvolume (102-1) with metallization of the second subvolume (102-2).
Background
Integrated circuit (IC) dies are typically formed in an array on a semiconductor wafer, then separated by singulation.

US 2017/194248 A1 discloses a multi-layer semiconductor structure and methods for fabricating multi-layer semiconductor structures. The multi-layer semiconductor structure includes at least two semiconductor structures, each of the at least two semiconductor structures having first and second opposing surfaces. Additionally, each of the at least two semiconductor structures includes a first section having first and second opposing surfaces and a plurality of electrical connections extending between select portions of the first and second surfaces. Each of the at least two semiconductor structures also includes a second section having first and second opposing surfaces, with the first surface of the second section disposed over and coupled to the second surface of the first section. Methods for fabricating a multi-layer semiconductor structure from a plurality of semiconductor structures are also provided.

US 2015/179568 A1 discloses a method and an apparatus of a three dimensional integrated circuit. The apparatus includes a first tier and a second tier, wherein the second tier is above the first tier. The first tier includes a first cell. The second tier includes a second cell and a third cell. The third cell includes a first ILV to couple the first cell in the first tier to the second cell in the second tier. The third cell further includes a second ILV, the first ILV and the second ILV are extended along a first direction. The first tier further includes a fourth cell. The second tier further includes a fifth cell. The second ILV of the third cell is arranged to connect the fourth cell of the first tier with the fifth cell of the second tier.
In some embodiments, the second tier further includes a spare cell including a spare ILV for ECO purposes.

US 2009/283898 A1 discloses pass-through 3D interconnects and microelectronic dies and systems of stacked dies that include such interconnects to disable electrical connections. In one embodiment of this document, a system of stacked dies includes a first microelectronic die having a backside, an interconnect extending through the first die to the backside, an integrated circuit electrically coupled to the interconnect, and a first electrostatic discharge (ESD) device electrically isolated from the interconnect. A second microelectronic die has a front side coupled to the backside of the first die, a metal contact at the front side electrically coupled to the interconnect, and a second ESD device electrically coupled to the metal contact. In another embodiment of this document, the first die further includes a substrate carrying the integrated circuit and the first ESD device, and the interconnect is positioned in the substrate to disable an electrical connection between the first ESD device and the interconnect.

US 2011/246746 A1 discloses various embodiments including apparatuses, stacked devices and methods of forming dice stacks on an interface die. In one such apparatus, a dice stack includes at least a first die and a second die, and conductive paths coupling the first die and the second die to the common control die. In some embodiments of this document, the conductive paths may be arranged to connect with circuitry on alternating dice of the stack. In other embodiments of this document, a plurality of dice stacks may be arranged on a single interface die, and some or all of the dice may have interleaving conductive paths.

US 2015/102419 A1 discloses a semiconductor device and a method of manufacturing thereof.
According to one embodiment of this document, a semiconductor device includes a first complementary semiconductor device provided on a semiconductor substrate, and including a CMOS circuit, a metal electrode provided above the first complementary semiconductor device, a semiconductor layer provided above the metal electrode, including an nMOS region and a pMOS region separated from each other, and containing Ge, and a second complementary semiconductor device including an nMOSFET provided on the first portion of the semiconductor layer and a pMOSFET provided on the second portion of the semiconductor layer.

US 2018/182709 A1 discloses integrated circuit interconnect structures having a metal oxide adhesive layer between conductive interconnects and dielectric material, as well as related apparatuses and methods. For example, in some embodiments of this document, an integrated circuit interconnect structure may include a dielectric layer having 60% or more filler, a conductive layer, and a metal oxide adhesive layer between the dielectric and conductive layers. In some embodiments, the metal oxide adhesive layer may include one or more of aluminum oxide, chromium oxide, and nickel oxide.

WO 2016/048753 A1 relates to integration of electronic elements on the backside of a semiconductor die. Systems and methods include a first semiconductor die with a substrate having a first side and a second side opposite to the first side. A first set of electronic elements is integrated on the first side. A second set of electronic elements is integrated on the second side. One or more through-substrate vias through the substrate are used to couple one or more of the first set of electronic elements and one or more of the second set of electronic elements. The through-substrate vias may be through-silicon vias or through-glass vias.
The first semiconductor die may be stacked with a second semiconductor die, with the first side or the second side of the first semiconductor die interfacing an active side of the second semiconductor die.

US 2017/358562 A1 discloses an integrated display system with multi-color light emitting diodes. The display system comprises a light emitting diode (LED) device and a backplane (BP) device. The LED device comprises a plurality of LEDs having LED terminals. An LED bonding surface comprising a dielectric layer with LED bonding surface contact pads is coupled to diode terminals of the LEDs. The BP device comprises a BP substrate having top and bottom surfaces. A plurality of system on chip (SoC) chips are bonded to chip pads disposed on a bottom surface of the BP device. The SoC chips are electrically coupled to the CMOS components of the BP device and LEDs of the LED device.

Summary
The object of the present invention is solved by claim 1. Advantageous embodiments are described by the dependent claims.

Brief Description of the Drawings
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, not by way of limitation, in the figures of the accompanying drawings.
FIGS. 1A-1C are views of a large integrated circuit (IC) die, in accordance with various embodiments.
FIG. 2 is a side, cross-sectional view of an example large IC die, in accordance with various embodiments.
FIG. 3 is a side, cross-sectional view of an example IC die assembly including a large IC die, in accordance with various embodiments.
FIGS. 4A and 4B are views of another example IC die assembly including a large IC die, in accordance with various embodiments.
FIGS. 5A and 5B are views of another example IC die assembly including a large IC die, in accordance with various embodiments.
FIG.
6 is a top view of another example IC die assembly including a large IC die, in accordance with various embodiments.
FIG. 7 is a top view of another example large IC die, in accordance with various embodiments.
FIGS. 8A-8C illustrate stages in an example process of manufacturing a large IC die, in accordance with various embodiments.
FIGS. 9A-9C illustrate stages in an example process of manufacturing a large IC die, in accordance with various embodiments.
FIG. 10 is a flow diagram of an example method of manufacturing a large IC die, in accordance with various embodiments.
FIG. 11 is a top view of a wafer and dies that may include a large IC die, in accordance with any of the embodiments disclosed herein.
FIG. 12 is a side, cross-sectional view of an IC package that may include a large IC die, in accordance with various embodiments.
FIG. 13 is a side, cross-sectional view of an IC device assembly that may include a large IC die, in accordance with any of the embodiments disclosed herein.
FIG. 14 is a block diagram of an example electrical device that may include a large IC die, in accordance with any of the embodiments disclosed herein.

Detailed Description
Disclosed herein are structures and methods for large integrated circuit (IC) dies, as well as related assemblies and devices. For example, an IC die according to the invention as claimed includes the features of claim 1.
The IC die comprises: a first subvolume including first electrical structures, wherein the first electrical structures include devices in a first portion of a device layer of the IC die; a second subvolume including second electrical structures, wherein the second electrical structures include devices in a second portion of the device layer of the IC die; and a third subvolume including electrical pathways between the first subvolume and the second subvolume; wherein the IC die has an area greater than 750 square millimeters.

Complex computing devices may require a large number of different computing components, such as processing devices, memory, sensors, and controllers. Conventionally, each of these components is manufactured and packaged separately, then the separate components are coupled together to form the computing device. However, utilizing separately packaged components may limit how close interacting components may be positioned to each other, and thus limit the speed with which the components can interact. Further, a manufacturer of one component may need to utilize a packaged component from another manufacturer, and thus there may be a limit on how tightly the design and operation of the components may be coupled (and thus an associated limit on performance).

Integrating multiple different ones of such computing components into a single die may reduce latency and allow for tighter coupling during the design phase, but existing photolithographic techniques and related fabrication processes have been limited in the size of dies that can be reliably fabricated. For example, existing photolithographic techniques that are suitable for high volume manufacturing (HVM) utilize photomasks (also called "reticles") that can pattern an area having lateral dimensions no greater than 22 millimeters by 33 millimeters, the limit of currently commonly available lithography tools.
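To make the scale of this limit concrete, the following sketch compares the single-reticle field area implied by the 22 millimeter by 33 millimeter limit with the die areas discussed herein. The dimensions and area thresholds come from this disclosure; the script itself is only an illustrative aid, and the ceiling-division field count ignores real layout and stitching constraints.

```python
# Illustrative arithmetic only: the maximum single-reticle field area
# (22 mm x 33 mm) versus the "large IC die" areas discussed in this disclosure.
RETICLE_W_MM, RETICLE_H_MM = 22, 33
single_field_area_mm2 = RETICLE_W_MM * RETICLE_H_MM  # 726 mm^2 per exposure field

for die_area_mm2 in (750, 1500, 3000, 6000):
    # Ceiling division: the minimum number of reticle fields needed to cover
    # a die of this area (a lower bound that ignores layout constraints).
    min_fields = -(-die_area_mm2 // single_field_area_mm2)
    print(f"{die_area_mm2} mm^2 die needs at least {min_fields} reticle fields")
```

Note that a die of more than 750 square millimeters already exceeds the 726 square millimeter single-field area, which is why subvolumes patterned with separate photomask sets must be stitched together to form such a die.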
This has meant that an IC die fabricated using such techniques may have lateral dimensions no greater than 22 millimeters by 33 millimeters. This limitation in the area of an IC die also limits the number and type of circuits that can be included in a single IC die. Conventionally, an array of such dies is formed on a semiconductor wafer, then separated into individual dies by cutting the wafer along scribe streets between adjacent dies.

Disclosed herein are structures and methods for forming IC dies larger than those conventionally achievable using HVM lithography techniques (referred to herein as "large IC dies"). Such large IC dies may include subvolumes having different functionality and/or structure, reducing latency relative to conventional assemblies of separately packaged dies, and/or providing more computing power in a single die. The large IC dies disclosed herein may be stacked with other dies to form IC die assemblies, further increasing functionality.

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made, without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment.
Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The drawings are not necessarily to scale. Although many of the drawings illustrate rectilinear structures with flat walls and right-angle corners, this is simply for ease of illustration, and actual devices made using these techniques will exhibit rounded corners, surface roughness, and other features.

The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, a "package" and an "IC package" are synonymous. When used to describe a range of dimensions, the phrase "between X and Y" represents a range that includes X and Y. For convenience, the phrase "FIG. 1" may be used to refer to the collection of drawings of FIGS. 1A-1C, the phrase "FIG. 4" may be used to refer to the collection of drawings of FIGS. 4A-4B, etc.

FIG. 1 illustrates an example large IC die 100. In particular, FIG. 1A is a top view of the large IC die 100, and FIG. 1B is a side, cross-sectional view through the section A-A of FIG. 1A. The large IC die 100 includes a subvolume 102-1 and a subvolume 102-2 spaced laterally apart from the subvolume 102-1. The subvolume 102-1 may be formed using a first set of photomasks, and the lateral dimensions 120 and 122 of the subvolume 102-1 may be limited to the lateral dimensions achievable using conventional HVM photolithography.
For example, the lateral dimension 120 may be 22 millimeters or less and the lateral dimension 122 may be 33 millimeters or less (or vice versa). The lateral dimensions of the subvolume 102-2 may be similarly constrained.

As shown in FIG. 1B, the subvolume 102-1 and the subvolume 102-2 may extend through various regions of the large IC die 100. In particular, the large IC die 100 may include top conductive contacts 110, a top metallization stack 108, a device layer 106, a bottom metallization stack 112, and bottom conductive contacts 114. As used herein, a "conductive contact" may refer to a portion of conductive material (e.g., metal) serving as an interface between different components; conductive contacts may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or socket). The subvolume 102-1 (subvolume 102-2) may include a first portion 110-1 (second portion 110-2) of the top conductive contacts 110, a first portion 108-1 (second portion 108-2) of the top metallization stack 108, a first portion 106-1 (second portion 106-2) of the device layer 106, a first portion 112-1 (second portion 112-2) of the bottom metallization stack 112, and a first portion 114-1 (second portion 114-2) of the bottom conductive contacts 114.

The large IC die 100 may include one or more device layers 106. Although only a single device layer 106 is depicted in FIG. 1 (and FIG. 2, discussed below), this is simply for ease of illustration, and the large IC die 100 may include more than one device layer 106. The device layer 106 may include features of one or more transistors (e.g., the transistors 1640 discussed below with reference to FIG. 2) or other devices.
Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from the devices of the device layer 106 and/or other devices embedded in the large IC die 100 through the metallization stacks 108 and 112 disposed on the device layer 106. As discussed further below with reference to FIG. 2, the metallization stacks 108 and 112 may include conductive material arranged (e.g., in conductive vias and lines) to act as electrical pathways through the large IC die 100. The top conductive contacts 110 and the bottom conductive contacts 114 may provide contact points for electrical connections to be made between the large IC die 100 and other components (e.g., other dies, interposers, package substrates, etc.), as discussed further herein.

The large IC die 100 may also include a stitching subvolume 104. The stitching subvolume 104 may include electrical pathways between the subvolume 102-1 and the subvolume 102-2, and thus may electrically "stitch" circuitry of the subvolume 102-1 with circuitry of the subvolume 102-2. As illustrated in FIG. 1B, the stitching subvolume 104 may include a third portion 110-3 of the top conductive contacts 110, a third portion 108-3 of the top metallization stack 108, a third portion 106-3 of the device layer 106, a third portion 112-3 of the bottom metallization stack 112, and a third portion 114-3 of the bottom conductive contacts 114. In some embodiments, the third portion 106-3 of the device layer 106 (part of the stitching subvolume 104) may not include any active devices (e.g., may not include any transistors); in such embodiments, the stitching subvolume 104 may principally provide electrical pathways between the subvolume 102-1 and the subvolume 102-2 via the third portion 108-3 of the top metallization stack 108 and/or the third portion 112-3 of the bottom metallization stack 112. In other embodiments, the third portion 106-3 of the device layer 106 may include active devices.
In some embodiments, the stitching subvolume 104 may not include any top conductive contacts 110 and/or bottom conductive contacts 114.In some embodiments, layers of the metallization stack 108 closest to the device layer 106 may include electrical pathways (e.g., conductive vias and lines) in the first portion 108-1 and the third portion 108-3, but may not include electrical pathways in the second portion 108-2 (the portion of the metallization stack 108 in the stitching subvolume 104); the electrical pathways in the layers of the metallization stack 108 closest to the device layer 106 (in the first portion 108-1 and the third portion 108-3) may be electrical coupled by electrical pathways in the second portion 108-2 in layers "higher up" in the metallization stack 108. For example, FIG. 1C is a side view of an embodiment in which the portions of the top metallization stack 108 are shown as having "upper" and "lower" regions; the first portion 108-1 of the metallization stack 108 has an upper region 108-11 and a lower region 108-12, the second portion 108-2 of the metallization stack 108 has an upper region 108-21 and a lower region 108-22, and the third portion 108-3 of the metallization stack 108 has an upper region 108-31 and a lower region 108-32. The lower regions 108-x2 may include one or more layers of the metallization stack 108, and these one or more layers may be between the device layer 106 and one or more layers of the layers of the metallization stack in the corresponding upper regions 108-x1. In some embodiments, the lower regions 108-12 and 108-32 may include electrical pathways, while the lower region 108-22 (of the stitching subvolume 104) may not include any electrical pathways; electrical pathways between the subvolume 102-1 and the subvolume 102-2 through the stitching subvolume 104 may be made through the upper region 108-21 of the second portion 108-2 of the metallization stack 108. 
In some such embodiments, the second portion 106-2 of the device layer 106 may not include any devices. Such embodiments may be fabricated by first fabricating the device layer 106, then fabricating the electrical pathways in the lower regions 108-12 and 108-32, then fabricating the electrical pathways in the upper region 108-21 (and in the upper regions 108-11 and 108-31, as appropriate).While the subvolume 102-1 and the subvolume 102-2 may have lateral dimensions that are achievable with conventional lithography (e.g., less than or equal to 22 millimeters by 33 millimeters), the subvolume 102-1, the subvolume 102-2, and the stitching subvolume 104 may together form a large IC die 100 whose lateral dimensions are larger than those achievable using conventional lithography. For example, in some embodiments, a large IC die 100 may have a lateral area (i.e., the product of the lateral dimensions 116 and 118) that is greater than 750 square millimeters (e.g., greater than 1500 square millimeters, greater than 3000 square millimeters, or greater than 6000 square millimeters). In some embodiments, the large IC die 100 may have at least one lateral dimension 116 or 118 that is greater than 33 millimeters (e.g., greater than 66 millimeters, greater than 99 millimeters, or greater than 132 millimeters).Different ones of the subvolumes 102 in a large IC die 100 may include different types and/or arrangements of electrical structures. In some embodiments, the subvolume 102-1 may include transistors (e.g., the transistors 1640 discussed below with reference to FIG. 2 ) having a first structure and the subvolume 102-2 may include transistors having a second structure different from the first structure. For example, the subvolume 102-1 may include planar transistors in the device layer (e.g., the device layer 106 or another device layer) and the subvolume 102-2 may include non-planar transistors in the device layer. 
Examples of non-planar transistors may include dual-gate transistors, tri-gate transistors, or all-around gate transistors (e.g., nanoribbon transistors or nanowire transistors). Utilizing two different types of transistors in different ones of the subvolumes 102 of a large IC die may allow the transistor type to be tailored to the functional circuitry of which it is a part. For example, planar transistors may be particularly useful for high voltage I/O or logic circuitry, while non-planar transistors (e.g., dual-gate or tri-gate transistors) may be particularly useful for processing unit logic circuitry (e.g., in a central processing unit (CPU)). In another example, the subvolume 102-1 may include dual-gate transistors and the subvolume 102-2 may include tri-gate transistors.

In another example, the transistors in the subvolume 102-1 and the transistors in the subvolume 102-2 may be of the same type (e.g., planar, dual-gate, tri-gate, etc.), but parameters of those transistors may differ between the subvolumes 102. For example, the transistors (e.g., the transistors 1640 discussed below with reference to FIG. 2) in the subvolume 102-1 and the transistors in the subvolume 102-2 may be planar transistors, but the transistors in the subvolume 102-1 may have a different channel thickness and/or gate length than the transistors in the subvolume 102-2. In another example, the transistors in the subvolume 102-1 and the transistors in the subvolume 102-2 may be dual-gate transistors (or tri-gate transistors), but the transistors in the subvolume 102-1 may have a different gate length, fin height (as claimed in claim 1 according to the invention), and/or fin width than the transistors in the subvolume 102-2. Utilizing the same type of transistors, but with different dimensions, in different ones of the subvolumes 102 of a large IC die allows the transistor characteristics to be tailored to the functional circuitry of which they are a part.
For example, FinFETs having a lower fin height may be well-suited for lower power circuitry (e.g., logic with lower performance) and FinFETs having a higher fin height may be well-suited for higher power circuitry (e.g., logic with higher performance).

In some embodiments, different processing operations may be performed on electrical structures in different ones of the subvolumes 102. For example, in some embodiments, the devices (e.g., the transistors 1640 discussed below with reference to FIG. 2) in the first portion 106-1 of the device layer 106 may be subjected to different local processing conditions (e.g., laser annealing or ion implantation) than the devices in the second portion 106-2. Different types of processing may confer advantages to certain devices (e.g., may modify transistor performance or leakage properties), but may also incur significant process costs; selectively performing such processing in subvolumes 102 in which its advantages may be more fully realized may improve performance without incurring excessive cost.

In some embodiments, different ones of the subvolumes 102 of a large IC die 100 may include different functional circuitry. For example, the subvolume 102-1 may provide a processing unit (e.g., general logic for a CPU, such as a control unit, an arithmetic/logic unit, and/or a register storage area), while the subvolume 102-2 may provide a memory device (e.g., a dynamic random access memory (DRAM) array, including storage cells, sense amplifiers, and word lines, or a static random access memory (SRAM) array).

In some embodiments, the structures of the electrical pathways in the metallization stacks 108 and/or 112 in different ones of the subvolumes 102 of a large IC die 100 may be different. In accordance with the invention as claimed, different materials are used in some of the conductive vias and/or lines (e.g., the conductive lines 1628a and the conductive vias 1628b discussed below with reference to FIG.
2) of the first portion 108-1 of the metallization stack 108 (first portion 112-1 of the metallization stack 112) relative to some of the conductive vias and/or lines of the second portion 108-2 of the metallization stack 108 (second portion 112-2 of the metallization stack 112). In one particular example, some or all conductive vias of the first portion 108-1 (first portion 112-1) may include tungsten (e.g., as a fill material) and some or all conductive vias of the second portion 108-2 (second portion 112-2) may include copper (e.g., as a fill material). In another example, the conductive vias and/or lines in different ones of the subvolumes 102 of a large IC die 100 may have different dimensions; for example, some of the conductive lines in the subvolume 102-1 may be thicker than conductive lines in the corresponding layer of the subvolume 102-2.

In some embodiments, different ones of the subvolumes 102 in a large IC die 100 may share a number of layers having the same structure, and then may have a set of layers that differ. For example, the subvolumes 102-1 and 102-2 may have a first set of layers in the metallization stack 108 (the metallization stack 112) that have the same structure between the subvolumes 102-1 and 102-2 (e.g., the first ten layers) and a second set of layers in the metallization stack 108 (the metallization stack 112) that are different. In such embodiments, a same set of photomasks may be used to pattern the first set of layers of the subvolume 102-1 and the first set of layers of the subvolume 102-2, and different sets of photomasks may be used to pattern the second set of layers of the subvolume 102-1 and the second set of layers of the subvolume 102-2. In some embodiments, the different second set of layers of the subvolume 102-1 and the subvolume 102-2 may be used to pattern special electrical structures, such as a capacitor (e.g., a metal-insulator-metal capacitor), copper bumps, or a magnetic material (e.g., in an inductor).
In some embodiments, the different second set of layers of the subvolume 102-1 and the subvolume 102-2 may be used to achieve different dimensions of the conductive lines and/or vias in the subvolume 102-1 and the subvolume 102-2 (e.g., to form thicker conductive lines in the subvolume 102-1 or the subvolume 102-2, as discussed above).

Although only two subvolumes 102 and one stitching subvolume 104 are depicted in FIG. 1, this is simply for ease of illustration, and the techniques and structures disclosed herein may be used to "stitch" together any desired number and arrangement of subvolumes 102 with stitching subvolumes 104 to form a large IC die 100. Different ones of the subvolumes 102 in a large IC die 100 may have the same structure or different structures (e.g., in accordance with any of the embodiments discussed herein). A number of example large IC dies 100 with various arrangements of subvolumes 102 and stitching subvolumes 104 are illustrated herein. In some embodiments, the techniques and structures disclosed herein may be used to form a large IC die 100 whose lateral dimensions are equal or approximately equal to the lateral dimensions of the semiconductor wafer underlying the large IC die 100.

The large IC die 100 illustrated in FIG. 1B is a "double-sided" die in that the large IC die 100 includes top conductive contacts 110 at one face and bottom conductive contacts 114 at the opposite face, allowing electrical connections to the large IC die 100 to be made at both faces. In some embodiments, the large IC dies 100 disclosed herein may only be "single-sided," having only a set of conductive contacts at a single face (e.g., the conductive contacts 110 or the conductive contacts 114). Double-sided large IC dies 100 may be depicted in various ones of the accompanying drawings for illustrative purposes, but any suitable ones of the large IC dies 100 disclosed herein may be single-sided.

FIG.
2 is a side, cross-sectional view showing example details of a large IC die 100. The elements illustrated in and discussed below with reference to FIG. 2 may be embodiments of any of the corresponding elements discussed above with reference to FIG. 1 (or others of the accompanying figures). FIG. 2 also illustrates a subvolume 102-1, a subvolume 102-2, and a stitching subvolume 104 that provides conductive pathways between the subvolume 102-1 and the subvolume 102-2. In the embodiment of FIG. 2, no transistors 1640 are illustrated in the stitching subvolume 104; in various embodiments, the stitching subvolume 104 may or may not include transistors 1640 or other active devices.

The large IC die 100 may include a substrate 1602 (e.g., the wafer 1500 of FIG. 11). The substrate 1602 may be a semiconductor substrate composed of semiconductor material systems including, for example, n-type or p-type material systems (or a combination of both). The substrate 1602 may include, for example, a crystalline substrate formed using a bulk silicon or a silicon-on-insulator (SOI) substructure. In some embodiments, the substrate 1602 may be formed using alternative materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. Further materials classified as group II-VI, III-V, or IV may also be used to form the substrate 1602. Although a few examples of materials from which the substrate 1602 may be formed are described here, any material that may serve as a foundation for the large IC die 100 may be used. In some embodiments, the substrate 1602 may be glass. The substrate 1602 may be part of a singulated die (e.g., the dies 1502 of FIG. 11) or a wafer (e.g., the wafer 1500 of FIG.
11).

The device layer 106 may include features of one or more transistors 1640 (e.g., metal oxide semiconductor field-effect transistors (MOSFETs)) formed on and/or in the substrate 1602. The device layer 106 may include, for example, one or more source and/or drain (S/D) regions 1620, a gate 1622 to control current flow in the transistors 1640 between the S/D regions 1620, and one or more S/D contacts 1624 to route electrical signals to/from the S/D regions 1620. The transistors 1640 may include additional features not depicted for the sake of clarity, such as device isolation regions, gate contacts, and the like. The transistors 1640 are not limited to the type and configuration depicted in FIG. 2 and may include a wide variety of other types and configurations such as, for example, planar transistors, non-planar transistors, or a combination of both. Planar transistors may include bipolar junction transistors (BJT), heterojunction bipolar transistors (HBT), or high-electron-mobility transistors (HEMT). Non-planar transistors may include FinFET transistors, such as dual-gate transistors or tri-gate transistors, and wraparound or all-around gate transistors, such as nanoribbon transistors or nanowire transistors.

Each transistor 1640 may include a gate 1622 formed of at least two layers, a gate dielectric and a gate electrode. The gate dielectric may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide, silicon carbide, and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc.
Examples of high-k materials that may be used in the gate dielectric include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric to improve its quality when a high-k material is used.

The gate electrode may be formed on the gate dielectric and may include at least one p-type work function metal or n-type work function metal, depending on whether the transistor 1640 is to be a p-type metal oxide semiconductor (PMOS) or an n-type metal oxide semiconductor (NMOS) transistor. In some implementations, the gate electrode may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as a barrier layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, conductive metal oxides (e.g., ruthenium oxide), and any of the metals discussed below with reference to an NMOS transistor (e.g., for work function tuning).
For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide), and any of the metals discussed above with reference to a PMOS transistor (e.g., for work function tuning).

In some embodiments, when viewed as a cross-section of the transistor 1640 along the source-channel-drain direction, the gate electrode may consist of a U-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In other embodiments, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from materials such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps.
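The PMOS/NMOS work-function-metal selection described above can be summarized as a simple lookup. The following sketch is illustrative only: the metal lists follow the text, but the function name and return structure are assumptions, not part of any real process design kit.

```python
# Illustrative sketch of the PMOS/NMOS work-function-metal choice described
# above. Each polarity has its own primary candidates; metals from the other
# polarity's list may additionally be used for work function tuning.

PMOS_WF_METALS = ["ruthenium", "palladium", "platinum", "cobalt", "nickel"]
NMOS_WF_METALS = ["hafnium", "zirconium", "titanium", "tantalum", "aluminum"]

def work_function_metal_candidates(polarity: str) -> dict:
    """Return primary candidates plus metals usable for work function tuning."""
    if polarity == "pmos":
        return {"primary": PMOS_WF_METALS, "tuning": NMOS_WF_METALS}
    if polarity == "nmos":
        return {"primary": NMOS_WF_METALS, "tuning": PMOS_WF_METALS}
    raise ValueError(f"unknown polarity: {polarity!r}")
```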
In some embodiments, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.

The S/D regions 1620 may be formed within the substrate 1602 adjacent to the gate 1622 of each transistor 1640. The S/D regions 1620 may be formed using an implantation/diffusion process or an etching/deposition process, for example. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the substrate 1602 to form the S/D regions 1620. An annealing process that activates the dopants and causes them to diffuse farther into the substrate 1602 may follow the ion-implantation process. In the latter process, the substrate 1602 may first be etched to form recesses at the locations of the S/D regions 1620. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the S/D regions 1620. In some implementations, the S/D regions 1620 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In some embodiments, the S/D regions 1620 may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy. In further embodiments, one or more layers of metal and/or metal alloys may be used to form the S/D regions 1620.

As noted above, electrical signals may be routed to and/or from the devices (e.g., the transistors 1640) of the device layer 106, or other electrical components included in the large IC die 100, through electrically conductive structures 1628 in the metallization stacks 108 and 112.
The electrically conductive structures 1628 may be arranged within the metallization stacks 108 and 112 to route electrical signals according to a wide variety of designs (in particular, the arrangement is not limited to the particular configuration of electrically conductive structures 1628 depicted in FIG. 2). Although a particular number of layers is depicted in each of the metallization stacks 108 and 112 of FIG. 2, embodiments of the present disclosure include large IC dies 100 having more or fewer metallization stack layers than depicted.

In some embodiments, the electrically conductive structures 1628 may include lines 1628a and/or vias 1628b filled with an electrically conductive material such as a metal. The lines 1628a may be arranged to route electrical signals in a direction of a plane that is substantially parallel with a surface of the substrate 1602 upon which the device layer 106 is formed. For example, the lines 1628a may route electrical signals in a direction in and out of the page from the perspective of FIG. 2. The vias 1628b may be arranged to route electrical signals in a direction of a plane that is substantially perpendicular to the surface of the substrate 1602 upon which the device layer 106 is formed. In some embodiments, the vias 1628b may electrically couple lines 1628a of different layers in a metallization stack together. Although the lines 1628a and the vias 1628b are structurally delineated with a line within a layer for the sake of clarity, the lines 1628a and the vias 1628b may be structurally and/or materially contiguous (e.g., simultaneously filled during a dual-damascene process) in some embodiments. In some embodiments, the metallization stack layers that are "higher up" (i.e., farther away from the device layer 106) may be thicker.
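The metallization-stack geometry described above (lines routing in-plane, vias coupling lines on adjacent layers, and layers thickening with distance from the device layer 106) can be sketched as a small data model. The thickness numbers and growth factor below are illustrative assumptions only, not process values from the source.

```python
from dataclasses import dataclass

# Minimal sketch of the stack: layer 0 sits closest to the device layer, and
# thickness grows with index, matching the "higher up may be thicker" note.

@dataclass
class MetalLayer:
    index: int            # 0 = layer closest to the device layer
    thickness_nm: float   # assumed thickness, grows with distance

def build_stack(n_layers: int, base_nm: float = 50.0, growth: float = 1.3) -> list:
    """Build a stack whose layer thickness grows with distance from the devices."""
    return [MetalLayer(i, base_nm * growth**i) for i in range(n_layers)]

def via_connects(lower: MetalLayer, upper: MetalLayer) -> bool:
    """In this model, a via couples lines only on adjacent layers of the stack."""
    return upper.index == lower.index + 1
```

For example, `build_stack(4)` yields four layers with monotonically increasing thickness, and `via_connects` holds only for adjacent layer pairs.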
In some embodiments, a through-substrate via 1628b may extend through the substrate 1602 to connect the device layer 106 and/or the top metallization stack 108 with the bottom metallization stack 112 in embodiments in which the large IC die 100 is double-sided.

The metallization stacks 108 and 112 may include a dielectric material 1626 disposed between the electrically conductive structures 1628, as shown in FIG. 2. In some embodiments, the dielectric material 1626 disposed between the electrically conductive structures 1628 in different ones of the layers of the metallization stacks 108 and 112 may have different compositions; in other embodiments, the composition of the dielectric material 1626 between different layers of the metallization stacks 108 and 112 may be the same.

The top conductive contacts 110 and the bottom conductive contacts 114 may be conductive contacts formed on the metallization stacks 108 and 112, respectively, and spaced apart by a solder resist material 1634 (e.g., polyimide or similar material). In FIG. 2, the conductive contacts are illustrated as taking the form of bond pads. The conductive contacts 110 and/or 114 may be electrically coupled with the electrically conductive structures 1628 and configured to route the electrical signals of the transistor(s) 1640 or other electrical elements of the large IC die 100 to other external devices. For example, solder bonds may be formed on the one or more conductive contacts 110 and/or 114 to mechanically and/or electrically couple the large IC die 100 with another component (e.g., another die or a package substrate, as discussed further below).
The large IC die 100 may include additional or alternate structures to route the electrical signals from the metallization stacks 108 or 112; for example, the conductive contacts 110 or 114 may include other analogous features (e.g., posts) that route the electrical signals to external components.

As noted above, in embodiments in which a large IC die 100 is double-sided, one or more other IC dies may be coupled to the top conductive contacts 110 and/or the bottom conductive contacts 114. For example, FIG. 3 is a side, cross-sectional view of an IC assembly 200 including a large IC die 100 and two other IC dies 150 coupled to the top conductive contacts 110 of the large IC die 100. Although the large IC die 100 of FIG. 3 is illustrated as including two subvolumes 102 and a stitching subvolume 104, the large IC die 100 may take the form of any of the large IC dies 100 disclosed herein.

Conductive contacts 1654 of the IC dies 150 may be coupled to the large IC die 100 by first-level interconnects 1658. The first-level interconnects 1658 illustrated in FIG. 3 are solder bumps, but any suitable first-level interconnects 1658 may be used. First-level interconnects 1665 may be present on the conductive contacts 114 of the large IC die 100; the first-level interconnects 1665 may be used to couple the large IC die 100 to a package substrate (e.g., as discussed further below with reference to FIG. 12), to an interposer, or to another IC die.

Although the IC dies 150 are depicted in various ones of the accompanying figures as coupled to conductive contacts 110 of the subvolumes 102 of the large IC die 100, this is simply illustrative, and IC dies 150 may be coupled to conductive contacts 110 of a stitching subvolume 104, as suitable.
Further, although the large IC die 100 is depicted in various ones of the accompanying figures as having the IC dies 150 coupled to the conductive contacts 110, this is simply illustrative, and IC dies 150 may be coupled to the conductive contacts 114 instead of or in addition to the conductive contacts 110 (and the conductive contacts 110 may be coupled to a package substrate or an interposer, as desired).

In some embodiments, an IC assembly 200 may include the more complex (and therefore lower yield) structures in the smaller IC dies 150 while locating the less complex (and therefore higher yield) structures in the large IC die 100. The size of a large IC die 100 may mean that it is costly to manufacture, and thus a loss of such a die may be expensive; fabricating the large IC die 100 with more reliably manufactured electrical structures may reduce the likelihood that the large IC die 100 will fail to meet performance requirements and will be counted as a loss. Examples of electrical structures that may be suitable for inclusion in the large IC die 100 of an IC assembly 200 may include power delivery structures, DRAM, SRAM, camera sensors, and high yield, low density logic.

FIGS. 4-6 illustrate various arrangements of the large IC die 100 and the other IC dies 150 in example IC assemblies 200. For example, FIG. 4 illustrates an IC assembly 200 having multiple IC dies 150 coupled to a large IC die 100; FIG. 4A is a top view, and FIG. 4B is a side, cross-sectional view through the section A-A of FIG. 4A. In the IC assembly 200 of FIG. 4, the large IC die 100 has four subvolumes 102-1, 102-2, 102-3, and 102-4 arranged in an array and "stitched" together by intervening stitching subvolumes 104-1, 104-2, and 104-3, as shown. The IC dies 150-1, 150-2, 150-3, and 150-4 are coupled to conductive contacts 110 of the subvolumes 102-1, 102-2, 102-3, and 102-4, respectively. Elements of the IC assembly 200 may take the form of corresponding elements of FIG.
2, for example.

In some embodiments of the IC assembly 200 of FIG. 4, the subvolumes 102 of the large IC die 100 may include memory devices, such as SRAM. In some embodiments, one or more of the subvolumes 102 of the large IC die 100 may also include router circuitry. The IC die 150-1 may be a logic die, and the IC dies 150-2 may be artificial intelligence (AI) dies, such as deep neural network (DNN) dies. The IC dies 150-3 may be high bandwidth memory (HBM) dies (e.g., dies in accordance with the HBM or HBM2 standard). Such an embodiment of the IC assembly 200 may provide an AI processing assembly, and may be packaged into an IC package (e.g., as discussed below with reference to the IC package 1650 of FIG. 12). In some embodiments, the lateral dimension 118 of the large IC die 100 of FIG. 4 may be between 40 millimeters and 60 millimeters (e.g., between 44 millimeters and 58 millimeters). In some embodiments, the lateral dimension 116 of the large IC die 100 of FIG. 4 may be between 30 millimeters and 40 millimeters (e.g., between 32 millimeters and 35 millimeters). The area of the IC dies 150-1 and 150-2 may be between 200 square millimeters and 250 square millimeters, and the area of the IC dies 150-3 may be between 80 square millimeters and 100 square millimeters.

FIG. 5 illustrates an IC assembly 200 having multiple IC dies 150 coupled to a large IC die 100; FIG. 5A is a top view of the IC assembly, omitting details of the large IC die 100, and FIG. 5B is a top view of the large IC die 100. In the IC assembly 200 of FIG. 5, the large IC die 100 has many subvolumes 102 "stitched" together by intervening stitching subvolumes 104, as shown. The IC dies 150 are coupled to conductive contacts (not shown) of the large IC die 100 (e.g., as discussed above with reference to FIG. 3). Elements of the IC assembly 200 of FIG. 5 may take the form of corresponding elements of FIG. 2, for example.

In some embodiments of the IC assembly 200 of FIG.
5, the IC dies 150 may be HBM dies, the subvolumes 102-1 may be computing clusters, the subvolumes 102-2 may be serializer/deserializer (SERDES) circuitry, the subvolumes 102-3 may be HBM controller circuitry (e.g., I/O circuitry for the HBM IC dies 150), and the subvolume 102-4 may be bus circuitry (e.g., Peripheral Component Interconnect Express (PCIe) circuitry). Stitching subvolumes 104 may be arranged in any suitable manner between the subvolumes 102 of the large IC die 100 to achieve a desired pattern of connectivity between the subvolumes 102. In some embodiments, the area of the subvolumes 102-1 may be between 4 square millimeters and 6 square millimeters. Although a particular number of subvolumes 102-1 and IC dies 150 are illustrated in FIG. 5, an IC assembly 200 may include more or fewer components (e.g., more than 64 subvolumes 102-1). Such an embodiment of the IC assembly 200 may provide an AI processing assembly, and may be packaged into an IC package (e.g., as discussed below with reference to the IC package 1650 of FIG. 12).

FIG. 6 is a top view of an IC assembly having multiple IC dies 150 coupled to a large IC die 100. In the IC assembly 200 of FIG. 6, the large IC die 100 has many subvolumes 102 "stitched" together by intervening stitching subvolumes 104, as shown. The IC dies 150 are coupled to conductive contacts (not shown) of the large IC die 100 (e.g., as discussed above with reference to FIG. 3). Elements of the IC assembly 200 of FIG. 6 may take the form of corresponding elements of FIG. 2, for example. In some embodiments of the IC assembly 200 of FIG.
6, the IC dies 150 may be HBM dies, the subvolumes 102-1 may be logic circuitry, the subvolumes 102-2 may be memory devices (e.g., SRAM), and the subvolumes 102-3 may include HBM controller circuitry.

As noted above, although particular types and arrangements of subvolumes 102 and stitching subvolumes 104 are illustrated herein, a large IC die 100 may include any suitable types and arrangements of subvolumes 102 and stitching subvolumes 104. For example, FIG. 7 is a top view of another example large IC die 100, including a number of different subvolumes 102 and stitching subvolumes 104. One or more of the subvolumes 102 (and/or stitching subvolumes 104) may be patterned using the same photomask sets or different photomask sets, as discussed herein.

Any suitable manufacturing process may be used to fabricate the large IC dies 100 disclosed herein. For example, FIGS. 8A-8C illustrate stages in an example process of manufacturing a large IC die 100, in accordance with various embodiments. FIG. 8A is a top view of an assembly 500 subsequent to forming features 260 in a first region 160 of an assembly 170 using a first set of photomasks (and any suitable associated processes, such as deposition, polishing, desmear, etc.). The assembly 170 may be the substrate 1602 (e.g., when forming the device layer 106) or any other stage during the fabrication of the large IC die 100.

FIG. 8B is a top view of an assembly 502 subsequent to forming features 262 in a second region 162 of the assembly 500 (FIG. 8A) using a second set of photomasks (and any suitable associated processes). In some embodiments, the first set of photomasks may be the same as the second set of photomasks (and thus the features 260 may be the same as the features 262), while in other embodiments, the first set of photomasks may be different than the second set of photomasks (and thus the features 260 may be different than the features 262).

FIG.
8C is a top view of an assembly 504 subsequent to forming features 264 in a third region 164 of the assembly 502 (FIG. 8B) using a third set of photomasks (and any suitable associated processes). The features 264 may electrically "stitch" some of the features 260 of the first region 160 with some of the features 262 of the second region 162. The operations of FIGS. 8A-8C may be repeated so that the features formed in the first region 160 provide the subvolume 102-1, the features formed in the second region 162 provide the subvolume 102-2, and the features formed in the third region 164 provide the stitching subvolume 104 of a large IC die 100.

FIGS. 9A-9C illustrate stages in another example process of manufacturing a large IC die 100, in accordance with various embodiments. FIG. 9A is a top view of an assembly 510 subsequent to forming features 266 in a first region 160 and in a second region 162 of an assembly 170 using a first set of photomasks (and any suitable associated processes, such as deposition, polishing, desmear, etc.), and forming features 265 in a third region 164 of the assembly 170 using a second set of photomasks (and any suitable associated processes). The features 265 may electrically "stitch" some of the features 266 of the first region 160 with some of the features 266 of the second region 162. The features 266 may be formed in the first region 160 and in the second region 162 in parallel, or in series, as suitable. The assembly 170 may take any of the forms discussed above with reference to FIG. 8A.

FIG. 9B is a top view of an assembly 512 subsequent to forming features 268 in the first region 160 of the assembly 510 (FIG. 9A) using a third set of photomasks (and any suitable associated processes), and forming features 270 in the second region 162 of the assembly 510 (FIG. 9A) using a fourth set of photomasks (and any suitable associated processes).
The third set of photomasks may be different than the fourth set of photomasks (and thus the features 268 may be different than the features 270).

FIG. 9C is a top view of an assembly 514 subsequent to forming features 272 in the third region 164 of the assembly 512 (FIG. 9B) using a fifth set of photomasks (and any suitable associated processes). The features 272 may electrically "stitch" some of the features 268 of the first region 160 with some of the features 270 of the second region 162. The operations of FIGS. 9A-9C may be repeated (and modified as suitable) so that the features formed in the first region 160 provide the subvolume 102-1, the features formed in the second region 162 provide the subvolume 102-2, and the features formed in the third region 164 provide the stitching subvolume 104 of a large IC die 100.

FIG. 10 is a flow diagram of an example method 1000 of manufacturing a large IC die, in accordance with various embodiments. Although the operations of the method 1000 may be illustrated with reference to particular embodiments of the large IC dies 100 disclosed herein, the method 1000 may be used to form any suitable large IC die. Operations are illustrated once each and in a particular order in FIG. 10, but the operations may be reordered and/or repeated as desired (e.g., with different operations performed in parallel when manufacturing multiple electronic components simultaneously). For example, the operations of 1002, 1004, and 1006 may be performed in an interleaved manner, with portions of the first die subvolume, the second die subvolume, and the third die subvolume being fabricated alternatingly.

At 1002, a first die subvolume may be formed. The first die subvolume may take the form of any of the subvolumes 102 disclosed herein, and may be formed using any of the techniques disclosed herein.

At 1004, a second die subvolume may be formed.
The second die subvolume may take the form of any of the subvolumes 102 disclosed herein, and may be formed using any of the techniques disclosed herein.

At 1006, a third die subvolume may be formed. The third die subvolume may include electrical pathways that electrically couple devices in the first die subvolume with devices in the second die subvolume to form a large die. The third die subvolume may take the form of any of the stitching subvolumes 104 disclosed herein, and may be formed using any of the techniques disclosed herein. The large die may take the form of any of the large IC dies 100 disclosed herein, for example.

The large IC dies 100 disclosed herein may be included in any suitable electronic component. FIGS. 11-14 illustrate various examples of apparatuses that may include any of the large IC dies 100 disclosed herein.

FIG. 11 is a top view of a wafer 1500 and dies 1502 that may include one or more large IC dies 100, or may be included in an IC package including one or more large IC dies 100 (e.g., as discussed below with reference to FIG. 12) in accordance with any of the embodiments disclosed herein. The wafer 1500 may be composed of semiconductor material or a non-semiconductor material, such as glass, and may include one or more dies 1502 having IC structures formed on a surface of the wafer 1500. Each of the dies 1502 may be a repeating unit of a semiconductor product that includes any suitable IC. After the fabrication of the semiconductor product is complete, the wafer 1500 may undergo a singulation process in which the dies 1502 are separated from one another to provide discrete "chips" of the semiconductor product. In some embodiments, the die 1502 may be the size of the entire wafer (e.g., when the die 1502 is a large IC die 100), and thus no singulation may be required. The die 1502 may take the form of any of the large IC dies 100 or IC dies 150 disclosed herein.
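The subvolume-and-stitch construction that runs through this description (operations 1002, 1004, and 1006 of the method 1000) can be sketched as a toy process flow. Everything below is a hedged illustration: the function names, the dictionary representation, and the one-to-one pairing rule for "stitched" pathways are assumptions for exposition, not an actual fabrication recipe.

```python
# Toy model of the method-1000 flow: form two die subvolumes, then a stitching
# subvolume whose electrical pathways couple devices in the first subvolume
# with devices in the second, yielding one large die.

def form_subvolume(name: str, devices: list) -> dict:
    """Operations 1002/1004: form a die subvolume holding some devices."""
    return {"name": name, "devices": list(devices)}

def form_stitching_subvolume(first: dict, second: dict) -> dict:
    """Operation 1006: form pathways that 'stitch' the two subvolumes."""
    pathways = [(a, b) for a, b in zip(first["devices"], second["devices"])]
    return {"name": "stitch", "pathways": pathways}

sub1 = form_subvolume("102-1", ["logic"])
sub2 = form_subvolume("102-2", ["sram"])
large_die = [sub1, form_stitching_subvolume(sub1, sub2), sub2]
```

The resulting `large_die` list mirrors the lateral arrangement of FIG. 1: two subvolumes with a stitching subvolume between them.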
In some embodiments, the wafer 1500 or the die 1502 may include a memory device (e.g., a random access memory (RAM) device, such as a static RAM (SRAM) device, a magnetic RAM (MRAM) device, a resistive RAM (RRAM) device, a conductive-bridging RAM (CBRAM) device, etc.), a logic device (e.g., an AND, OR, NAND, or NOR gate), or any other suitable circuit element. Multiple ones of these devices may be combined on a single die 1502. For example, a memory array formed by multiple memory devices may be formed on a same die 1502 as a processing device (e.g., the processing device 1802 of FIG. 14) or other logic that is configured to store information in the memory devices or execute instructions stored in the memory array.

FIG. 12 is a side, cross-sectional view of an example IC package 1650 that may include one or more large IC dies 100. In particular, FIG. 12 illustrates an IC package 1650 including the IC assembly 200 of FIG. 3; other elements (not shown) may also be included in the IC package 1650. In some embodiments, the IC package 1650 may be a system-in-package (SiP).

The package substrate 1652 may be formed of an organic dielectric material (e.g., a ceramic, a buildup film, an epoxy film having filler particles therein, etc.), and may have conductive pathways extending through the dielectric material between the face 1672 and the face 1674, or between different locations on the face 1672, and/or between different locations on the face 1674. These conductive pathways may take the form of any of the electrically conductive structures 1628 discussed above with reference to FIG. 2. In some embodiments, the package substrate 1652 may be formed as a printed circuit board (PCB), as discussed below with reference to FIG.
13.

The package substrate 1652 may include conductive contacts 1663 that are coupled to conductive pathways 1662 through the package substrate 1652, allowing circuitry within the IC dies 150 and/or the large IC die 100 to electrically couple to various ones of the conductive contacts 1664. The large IC die 100 may be coupled to the conductive contacts 1663 of the package substrate 1652 by first-level interconnects 1665. The first-level interconnects 1665 illustrated in FIG. 12 are solder bumps, but any suitable first-level interconnects 1665 may be used.

In some embodiments, an underfill material 1666 may be disposed between the package substrate 1652 and the large IC die 100 around the first-level interconnects 1665, and a mold compound 1668 may be disposed around the IC dies 150 and the large IC die 100 and in contact with the package substrate 1652. In some embodiments, the underfill material 1666 may be the same as the mold compound 1668. Example materials that may be used for the underfill material 1666 and the mold compound 1668 are epoxy mold materials, as suitable. Second-level interconnects 1670 may be coupled to the conductive contacts 1664. The second-level interconnects 1670 illustrated in FIG. 12 are solder balls (e.g., for a ball grid array arrangement), but any suitable second-level interconnects 1670 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). The second-level interconnects 1670 may be used to couple the IC package 1650 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another IC package, as known in the art and as discussed below with reference to FIG. 13.

In embodiments in which the IC package 1650 includes multiple dies 100/150, the IC package 1650 may be referred to as a multi-chip package (MCP). Although the IC package 1650 illustrated in FIG. 12 is a flip chip package, other package architectures may be used.
For example, the IC package 1650 may be a ball grid array (BGA) package, such as an embedded wafer-level ball grid array (eWLB) package. In another example, the IC package 1650 may be a wafer-level chip scale package (WLCSP) or a panel fanout (FO) package. Although a particular number of IC dies 100/150 are illustrated in the IC package 1650 of FIG. 12, an IC package 1650 may include any desired number of dies 100/150. An IC package 1650 may include additional passive components, such as surface-mount resistors, capacitors, and inductors disposed on the first face 1672 or the second face 1674 of the package substrate 1652, or on either face of the large IC die 100. More generally, an IC package 1650 may include any other active or passive components known in the art.

FIG. 13 is a side, cross-sectional view of an IC device assembly 1700 that may include one or more IC packages including one or more large IC dies 100, in accordance with any of the embodiments disclosed herein. The IC device assembly 1700 includes a number of components disposed on a circuit board 1702 (which may be, e.g., a motherboard). The IC device assembly 1700 includes components disposed on a first face 1740 of the circuit board 1702 and an opposing second face 1742 of the circuit board 1702; generally, components may be disposed on one or both faces 1740 and 1742. Any of the IC packages discussed below with reference to the IC device assembly 1700 may take the form of any of the embodiments of the IC package 1650 discussed above with reference to FIG. 12 (e.g., may include one or more large IC dies 100).

In some embodiments, the circuit board 1702 may be a PCB including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias.
Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 1702. In other embodiments, the circuit board 1702 may be a non-PCB substrate.

The IC device assembly 1700 illustrated in FIG. 13 includes a package-on-interposer structure 1736 coupled to the first face 1740 of the circuit board 1702 by coupling components 1716. The coupling components 1716 may electrically and mechanically couple the package-on-interposer structure 1736 to the circuit board 1702, and may include solder balls (as shown in FIG. 13), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.

The package-on-interposer structure 1736 may include an IC package 1720 coupled to a package interposer 1704 by coupling components 1718. The coupling components 1718 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 1716. Although a single IC package 1720 is shown in FIG. 13, multiple IC packages may be coupled to the package interposer 1704; indeed, additional interposers may be coupled to the package interposer 1704. The package interposer 1704 may provide an intervening substrate used to bridge the circuit board 1702 and the IC package 1720. The IC package 1720 may be or include, for example, any of the dies disclosed herein. Generally, the package interposer 1704 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the package interposer 1704 may couple the IC package 1720 (e.g., a die) to a set of BGA conductive contacts of the coupling components 1716 for coupling to the circuit board 1702. In the embodiment illustrated in FIG.
13, the IC package 1720 and the circuit board 1702 are attached to opposing sides of the package interposer 1704; in other embodiments, the IC package 1720 and the circuit board 1702 may be attached to a same side of the package interposer 1704. In some embodiments, three or more components may be interconnected by way of the package interposer 1704.

In some embodiments, the package interposer 1704 may be formed as a printed circuit board (PCB), including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. In some embodiments, the package interposer 1704 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, an epoxy resin with inorganic fillers, a ceramic material, or a polymer material such as polyimide. In some embodiments, the package interposer 1704 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The package interposer 1704 may include metal interconnects 1708 and vias 1710, including but not limited to through-silicon vias (TSVs) 1706. The package interposer 1704 may further include embedded devices 1714, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio frequency devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the package interposer 1704.
The package-on-interposer structure 1736 may take the form of any of the package-on-interposer structures known in the art.

The IC device assembly 1700 may include an IC package 1724 coupled to the first face 1740 of the circuit board 1702 by coupling components 1722. The coupling components 1722 may take the form of any of the embodiments discussed above with reference to the coupling components 1716, and the IC package 1724 may take the form of any of the embodiments discussed above with reference to the IC package 1720.

The IC device assembly 1700 illustrated in FIG. 13 includes a package-on-package structure 1734 coupled to the second face 1742 of the circuit board 1702 by coupling components 1728. The package-on-package structure 1734 may include an IC package 1726 and an IC package 1732 coupled together by coupling components 1730 such that the IC package 1726 is disposed between the circuit board 1702 and the IC package 1732. The coupling components 1728 and 1730 may take the form of any of the embodiments of the coupling components 1716 discussed above, and the IC packages 1726 and 1732 may take the form of any of the embodiments of the IC package 1720 discussed above. The package-on-package structure 1734 may be configured in accordance with any of the package-on-package structures known in the art.

FIG. 14 is a block diagram of an example electrical device 1800 that may include one or more large IC dies 100, in accordance with any of the embodiments disclosed herein. For example, any suitable ones of the components of the electrical device 1800 may include one or more of the IC device assemblies 1700, IC packages 1650, or large IC dies 100 disclosed herein. A number of components are illustrated in FIG. 14 as included in the electrical device 1800, but any one or more of these components may be omitted or duplicated, as suitable for the application.
In some embodiments, some or all of the components included in the electrical device 1800 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system-on-a-chip (SoC) die.

Additionally, in various embodiments, the electrical device 1800 may not include one or more of the components illustrated in FIG. 14, but the electrical device 1800 may include interface circuitry for coupling to the one or more components. For example, the electrical device 1800 may not include a display device 1806, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1806 may be coupled. In another set of examples, the electrical device 1800 may not include an audio input device 1824 or an audio output device 1808, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1824 or audio output device 1808 may be coupled. A housing (not shown) may be disposed around one or more components of the electrical device 1800.

The electrical device 1800 may include a processing device 1802 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 1802 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), CPUs, graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices.
The electrical device 1800 may include a memory 1804, which may itself include one or more memory devices such as volatile memory (e.g., DRAM), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 1804 may include memory that shares a die with the processing device 1802. This memory may be used as cache memory and may include embedded DRAM (eDRAM) or spin transfer torque magnetic random access memory (STT-MRAM).

In some embodiments, the electrical device 1800 may include a communication chip 1812 (e.g., one or more communication chips). For example, the communication chip 1812 may be configured for managing wireless communications for the transfer of data to and from the electrical device 1800. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.

The communication chip 1812 may implement any of a number of wireless standards or protocols, including but not limited to Institute for Electrical and Electronic Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), and the Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., the advanced LTE project, the ultra mobile broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards.
The communication chip 1812 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 1812 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 1812 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 1812 may operate in accordance with other wireless protocols in other embodiments. The electrical device 1800 may include an antenna 1822 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).

In some embodiments, the communication chip 1812 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., Ethernet). As noted above, the communication chip 1812 may include multiple communication chips. For instance, a first communication chip 1812 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1812 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 1812 may be dedicated to wireless communications, and a second communication chip 1812 may be dedicated to wired communications.

The electrical device 1800 may include battery/power circuitry 1814.
The battery/power circuitry 1814 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the electrical device 1800 to an energy source separate from the electrical device 1800 (e.g., AC line power).

The electrical device 1800 may include a display device 1806 (or corresponding interface circuitry, as discussed above). The display device 1806 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display.

The electrical device 1800 may include an audio output device 1808 (or corresponding interface circuitry, as discussed above). The audio output device 1808 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds.

The electrical device 1800 may include an audio input device 1824 (or corresponding interface circuitry, as discussed above). The audio input device 1824 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).

The electrical device 1800 may include a GPS device 1818 (or corresponding interface circuitry, as discussed above). The GPS device 1818 may be in communication with a satellite-based system and may receive a location of the electrical device 1800, as known in the art.

The electrical device 1800 may include an other output device 1810 (or corresponding interface circuitry, as discussed above). Examples of the other output device 1810 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.

The electrical device 1800 may include an other input device 1820 (or corresponding interface circuitry, as discussed above).
Examples of the other input device 1820 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.

The electrical device 1800 may have any desired form factor, such as a handheld or mobile electrical device (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra mobile personal computer, etc.), a desktop electrical device, a server device or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable electrical device. In some embodiments, the electrical device 1800 may be any other electronic device that processes data.
A method is provided for manufacturing an integrated circuit including a short channel (SC) device (16) and a long channel (LC) device (18) each overlaid by an interlayer dielectric (75). The SC device (16) has an SC gate stack (34), and the LC device (18) initially has a dummy gate (50). In one embodiment, the method includes the steps of removing the dummy gate (50) to form an LC device trench (96) and depositing metal gate material (98) over the SC device (16) and the LC device (18). The metal gate material (98) contacts the SC gate stack (34) and substantially fills the LC device trench (96).
CLAIMS

What is claimed is:

1. A method for manufacturing an integrated circuit including a short channel (SC) device (16) and a long channel (LC) device (18) each overlaid by an interlayer dielectric (75), the SC device (16) having an SC gate stack (34) and the LC device (18) initially having a dummy gate (50), the method comprising: removing the dummy gate (50) to form an LC device trench (96); and depositing metal gate material (98) over the SC device (16) and the LC device (18), the metal gate material (98) contacting the SC gate stack (34) and substantially filling the LC device trench (96).

2. A method according to Claim 1 further comprising: covering the LC device (18) with a photoresist mask (84); and etching a selected portion of the interlayer dielectric (75) such that the SC gate stack (34) is exposed through the interlayer dielectric (75) while the dummy gate (50) remains covered by the interlayer dielectric (75).

3. A method according to Claim 2 further comprising the step of oxidizing the SC gate stack (34) after etching the selected portion of the interlayer dielectric (75).

4. A method according to Claim 3 wherein the SC device (16) includes a sidewall spacer (62) adjacent the SC gate stack (34), wherein the SC gate stack (34) includes a gate insulator (42), and wherein the step of oxidizing comprises annealing the gate insulator (42) while exposing the sidewall spacer (62) to an oxygen ambient.

5. A method according to Claim 2 wherein the SC device (16) and the LC device (18) are each P-type devices, wherein the integrated circuit further includes an N-type device, and wherein the step of covering comprises placing a photoresist mask (84) on the integrated circuit covering the LC device (18) and the N-type device.

6.
A method according to Claim 1 wherein the SC device (16) and the LC device (18) are each P-type devices, wherein the integrated circuit further includes an N-type device, and wherein the step of removing comprises: covering the SC device (16) and the N-type device with a photoresist mask (84); and etching the dummy gate (50).

7. A method according to Claim 1 further comprising: forming an etch stop layer (72) over a portion of the integrated circuit including the SC gate stack (34) and the dummy gate (50) such that the etch stop layer (72) includes a first raised etch stop feature (74) above the SC gate stack (34) and a second raised etch stop feature (76) above the dummy gate (50); and depositing the interlayer dielectric (75) over the etch stop layer (72) to cover the first raised etch stop feature (74) and the second raised etch stop feature (76).

8. A method according to Claim 2 wherein the SC gate stack (34) includes a polycrystalline silicon layer (38) having a sidewall (88), and wherein the step of etching comprises creating an opening (86) surrounding the SC gate stack (34) and exposing at least a portion of the sidewall (88).

9. A method according to Claim 8 wherein the step of depositing comprises substantially filling the opening (86) with the metal gate material (98).

10. A method according to Claim 1 wherein the metal gate material (98) comprises a metal having an effective work function of approximately 4.7 to approximately 5.1 electron volts.
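Claim 10 bounds the metal gate material by an effective work function window of approximately 4.7 to 5.1 electron volts. As an illustrative sketch only (the function name, the strict inclusive bounds, and the treatment of "approximately" are assumptions made here, not part of the claims), that window can be expressed as:

```python
def in_bandedge_window(work_function_ev, lo_ev=4.7, hi_ev=5.1):
    """Return True if an effective work function (in electron volts)
    falls within the approximately 4.7-5.1 eV window of claim 10.

    The inclusive bounds are an assumption; the claim says
    "approximately", so a real check might add a tolerance.
    """
    return lo_ev <= work_function_ev <= hi_ev

# Hypothetical example values, not taken from the claims:
print(in_bandedge_window(4.9))  # True
print(in_bandedge_window(4.3))  # False
```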
INTEGRATED CIRCUIT HAVING LONG AND SHORT CHANNEL METAL GATE DEVICES AND METHOD OF MANUFACTURE

TECHNICAL FIELD

[0001] The present invention relates generally to an integrated circuit and, more particularly, to an integrated circuit having both long and short channel metal gate devices and a method for making such a circuit.

BACKGROUND

[0002] The majority of present day integrated circuits (ICs) are implemented utilizing a plurality of interconnected field effect transistors (FETs), also referred to as metal oxide semiconductor field effect transistors (MOSFETs) or simply MOS transistors. A MOS transistor includes a gate electrode, which serves as a control electrode, and source and drain electrodes. A channel extends between the source and drain electrodes. Current flows through this channel upon application of a voltage (referred to as the "threshold voltage" or Vt) to the gate electrode sufficient to form an inversion region in the transistor substrate.

[0003] For MOS transistors employing metal gate stacks and high-k dielectrics, it is desirable that the target Vt (referred to herein as the "bandedge Vt") fall within 100 millivolts of the conduction band or valence band edge, depending on whether the device is NMOS or PMOS. It has, however, proven difficult to construct a metal gate MOS transistor having a bandedge Vt for several reasons. Fixed positive charges due to oxygen vacancies present in the high-k material may shift the transistor's threshold voltage away from the desired bandedge Vt. Furthermore, metals having work functions that yield bandedge threshold voltages (e.g., work functions of approximately 4.7-5.1 electron volts) are typically thermally unstable at temperatures exceeding 400 degrees Celsius. Such thermally unstable metals are generally unable to withstand the high temperatures experienced during source-drain activation annealing.
For this reason, a gate-last approach is typically employed to construct MOS transistors including metal gates formed from thermally unstable metals. For example, a damascene process may be employed wherein a dummy gate is initially installed and subsequently removed via etching to produce a trench. A thermally unstable metal may then be deposited into the trench and polished to define a permanent metal gate.

[0004] While being generally well-suited for use in conjunction with long channel (LC) transistors (e.g., devices wherein the channel length equals or exceeds a predetermined value, which may be, for example, approximately 0.1 μm), the above-described damascene process has certain disadvantages when utilized in conjunction with short channel (SC) transistors (e.g., devices wherein the channel length is less than the predetermined value). For example, due to the small size of the device, the entire dummy gate may not be removed during the etching process. Furthermore, when deposited over the open trench of an SC transistor, the metal gate material may pinch off near the mouth of the trench before the trench is completely filled. Voiding can consequently occur within the body of the trench. Thus, for an IC including both SC transistors and LC transistors, the damascene process is generally unacceptable, and an etching process is generally utilized to construct the metal gates for both types of transistors, thus generally preventing the use of thermally unstable metals in LC transistors to achieve bandedge threshold voltages.

[0005] Accordingly, it would be desirable to provide a method for manufacturing a MOS transistor having short channel devices and long channel devices that permits bandedge threshold voltages to be achieved for both the short and long channel devices.
In particular, it would be desirable for such a method to permit thermally unstable metals to be utilized in the fabrication of the long channel devices, while also permitting oxygen vacancies present in the short channel devices to be repaired. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.

BRIEF SUMMARY

[0006] A method is provided for manufacturing an integrated circuit including a short channel (SC) device and a long channel (LC) device each overlaid by an interlayer dielectric. The SC device has an SC gate stack and the LC device initially has a dummy gate. In one embodiment, the method includes the steps of removing the dummy gate to form an LC device trench, and depositing metal gate material over the SC device and the LC device. The metal gate material contacts the SC gate stack and substantially fills the LC device trench.

[0007] In accordance with another embodiment, an integrated circuit is provided that includes a substrate, a short channel (SC) device, a long channel (LC) device, an etch stop layer deposited over an upper surface of the substrate, and an interlayer dielectric deposited over an upper surface of the etch stop layer. The SC device and the LC device each include a source formed in the substrate, a drain formed in the substrate and spaced apart from the source, and a channel formed in the substrate between the source and drain. The SC device further includes an SC gate stack, which, in turn, includes an SC gate insulator disposed above the channel, an SC metal gate disposed above the gate insulator, a polycrystalline silicon layer disposed above the metal gate, and a silicide layer disposed above the polycrystalline silicon layer.
The LC device further includes an LC gate insulator disposed above the channel, and an LC metal gate contacting the gate insulator. An SC cap is disposed in the interlayer dielectric and contacts the SC gate stack. The SC gate stack and the LC metal gate extend through the etch stop layer, and the SC cap and the LC metal gate are exposed through the upper surface of the interlayer dielectric.

[0008] In accordance with another embodiment, an integrated circuit is provided that includes a substrate, a short channel (SC) device, a long channel (LC) device, an etch stop layer deposited over an upper surface of the substrate, and an interlayer dielectric deposited over an upper surface of the etch stop layer. The SC device includes an SC gate insulator disposed above a first portion of the substrate, an SC metal gate disposed above the gate insulator, a polycrystalline silicon layer disposed above the metal gate, and a silicide layer formed on the polycrystalline silicon layer. The LC device includes an LC gate insulator disposed above a second portion of the substrate, and an LC metal gate overlying the gate insulator. An SC cap is disposed in the interlayer dielectric, contacts the SC gate stack, and is substantially formed from the same metal as is the LC metal gate.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The present invention will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:

[0010] FIGs. 1-9 are simplified cross-sectional views illustrating a first group of steps performed during an exemplary device manufacturing process;

[0011] FIG. 10 is a graph illustrating the effect of the exemplary annealing step illustrated in FIG. 9 on the short channel device threshold voltage; and

[0012] FIGs. 11-14 are simplified cross-sectional views illustrating a second group of steps performed during the exemplary device manufacturing process.
DETAILED DESCRIPTION

[0013] The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. Although the term "MOS device" properly refers to a device having a metal gate electrode and an oxide gate insulator, that term will be used throughout to refer to any semiconductor device that includes a conductive gate electrode that is positioned over a gate insulator (whether oxide or other insulator) which, in turn, is positioned over a semiconductor substrate.

[0014] An exemplary method for the manufacture of an integrated circuit having a P-type short channel (SC) transistor and a P-type long channel (LC) transistor will be described below in conjunction with FIGs. 1-14. However, it is emphasized that alternative embodiments of the inventive method can be utilized to construct an integrated circuit including other types of SC and LC devices. For example, similar method steps are suitable for use in the manufacture of an N-type MOS device with appropriate changes in dopant types. Likewise, similar method steps can be used to manufacture complementary MOS (CMOS) transistors. Furthermore, various steps in the manufacture of MOS transistors are well known and, in the interests of brevity, will only be mentioned briefly herein or will be omitted entirely without providing the well-known process details.

[0015] FIGs. 1-9 and 11-14 are simplified cross-sectional views illustrating various steps of an exemplary method for manufacturing an integrated circuit including a short channel (SC) device and a long channel (LC) device. For the purposes of the present description, a "short channel device" is defined as a device having a channel length less than a predetermined length (L).
Conversely, a "long channel device" is defined as a device having a channel length equal to or greater than the predetermined length (L). The value of the predetermined length (L) will inevitably vary amongst different embodiments; however, as a non-limiting example, the predetermined length (L) may have a value of approximately 0.1 micrometer (μm).

[0016] Referring initially to FIG. 1, the exemplary method of manufacture commences with the step of providing a semiconductor substrate 20 on which an SC transistor 16 and an LC transistor 18 will be constructed. Semiconductor substrate 20 is preferably a silicon substrate (the term "silicon substrate" is used herein to encompass the relatively pure silicon materials typically used in the semiconductor industry as well as silicon admixed with other elements, such as germanium and the like). Silicon substrate 20 can be a bulk silicon wafer. Alternatively, and as shown in FIG. 1, silicon substrate 20 can comprise a thin layer of silicon 22 on an insulating layer 24 (commonly known as a "silicon-on-insulator wafer" or "SOI wafer") that is, in turn, supported by a silicon carrier wafer 26.

[0017] A gate insulator layer 28 is formed on the upper surface of silicon substrate 22. Gate insulator layer 28 may be a thermally grown silicon dioxide formed by heating the silicon substrate in an oxidizing ambient; however, it is preferred that gate insulator layer 28 is formed by the deposition of a high-k dielectric material, such as HfSiO, HfO2, ZrO2, or any other standard high-k dielectric. Any suitable deposition technique may be utilized to form gate insulator layer 28, such as chemical vapor deposition (CVD), low pressure chemical vapor deposition (LPCVD), and plasma enhanced chemical vapor deposition (PECVD). Gate insulator layer 28 is preferably deposited to a thickness of less than approximately 5 nanometers (nm) and ideally to a thickness of less than approximately 3 nm.

[0018] Referring still to FIG.
1, a metal gate layer 30 is deposited on gate insulator layer 28 utilizing a conventional deposition technique. The metal deposited to form metal gate layer 30 will be chosen, in part, to yield a desired threshold voltage (Vt) for SC transistor 16, although it will be appreciated that other factors (e.g., the oxidation process described below) will also affect the final Vt of SC transistor 16. A non-exhaustive list of metals suitable for use in the formation of metal gate layer 30 includes TiN, TaN, HfSi, and TaC. Metal gate layer 30 is preferably deposited to a thickness of approximately 2-10 nm.

[0019] In the illustrated exemplary embodiment, a layer of polycrystalline silicon 32 is deposited onto the upper surface of metal gate layer 30. Polycrystalline silicon layer 32 is preferably deposited as undoped polycrystalline silicon that is subsequently impurity doped by ion implantation, although the polycrystalline silicon may also be doped in situ. In one implementation, polycrystalline silicon layer 32 is deposited utilizing LPCVD and the hydrogen reduction of silane. Polycrystalline silicon layer 32 is preferably deposited to a thickness of approximately 50-100 nm.

[0020] FIG. 2 illustrates SC transistor 16 and LC transistor 18 after the performance of conventional patterning and etching steps. SC transistor 16 is etched to define a first gate stack 34 having a channel length (indicated in FIG. 2 by arrow 33) less than the predetermined length (L) and is consequently referred to herein as a short channel (SC) gate stack. Similarly, LC transistor 18 is etched to define a second gate stack 36 that has a channel length (indicated in FIG. 2 by arrow 35) equal to or greater than the predetermined length (L) and is consequently referred to herein as a long channel (LC) gate stack. As previously stated, the predetermined length (L) may have an exemplary value of approximately 0.1 μm.
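The SC/LC distinction drawn in paragraphs [0015] and [0020] reduces to comparing a stack's channel length against the predetermined length (L). As a minimal illustrative sketch only (the function name is invented here, and the 0.1 μm default merely echoes the exemplary, non-limiting value above), the classification might be expressed as:

```python
def classify_gate_stack(channel_length_um, predetermined_length_um=0.1):
    """Classify a gate stack as short channel (SC) or long channel (LC).

    Per the description, a channel shorter than the predetermined
    length (L) makes the stack an SC gate stack; a channel equal to or
    greater than (L) makes it an LC gate stack.
    """
    if channel_length_um < predetermined_length_um:
        return "SC"  # permanent gate stack, retained as formed
    return "LC"      # contains the dummy gate, replaced during processing

# Hypothetical channel lengths, for illustration only:
print(classify_gate_stack(0.05))  # SC
print(classify_gate_stack(0.25))  # LC
```

Note that the boundary case (channel length exactly equal to L) falls on the LC side, consistent with the definitions in paragraph [0015].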
[0021] SC gate stack 34 comprises a polycrystalline silicon layer 38 formed from polycrystalline silicon layer 32 (FIG. 1), a metal gate 40 formed from metal gate layer 30 (FIG. 1), and a gate insulator 42 formed from gate insulator layer 28 (FIG. 1). LC gate stack 36 likewise comprises a polycrystalline silicon layer 44 formed from polycrystalline silicon layer 32 (FIG. 1), a metal gate 46 formed from metal gate layer 30 (FIG. 1), and a gate insulator 48 formed from gate insulator layer 28 (FIG. 1). As will be described in detail below, SC gate stack 34 serves as a permanent gate stack within SC transistor 16. In contrast, a portion of LC gate stack 36, namely polycrystalline silicon layer 44 and metal gate 46, is replaced during processing. For this reason, polycrystalline silicon layer 44 and metal gate 46 may be collectively referred to herein below as the "LC dummy gate" (labeled dummy gate 50 in FIG. 2).

[0022] As indicated in FIG. 2 by arrow 52, SC transistor 16 is separated from LC transistor 18 by a non-illustrated portion of the integrated circuit. Although not shown in FIG. 2, it will be appreciated by one of ordinary skill in the art that an electrically-isolating element is formed within this non-illustrated portion between SC transistor 16 and LC transistor 18. Any suitable process can be utilized to form the electrically-isolating element; e.g., a conventional shallow trench isolation process can be employed wherein a shallow trench is etched into substrate 20, a thermal oxide liner is grown in the shallow trench, and an oxide is deposited into the trench and over the thermal oxide liner.

[0023] FIG. 3 illustrates SC transistor 16 and LC transistor 18 after the formation of source/drain regions 54, 56 and sidewall spacers 62 near SC gate stack 34 and source/drain regions 58, 60 and sidewall spacers 64 near LC gate stack 36. To create source 54 and drain 56, selected ions are implanted into substrate 20 proximate SC gate stack 34, which serves as an ion implantation mask.
Similarly, to form source 58 and drain 60, selected ions are implanted into substrate 20 proximate LC gate stack 36, which also serves as a mask. By way of example, boron ions can be implanted for a P-type MOS transistor; however, the particular ions selected for implantation will be dependent upon the type of device being constructed (e.g., for an N-type MOS transistor, arsenic or phosphorus ions may be implanted). After ion implantation, an activation anneal is performed to electrically activate the implanted ions and to repair any imperfections in the silicon lattice caused by the ion implantation process.

[0024] Sidewall spacers 62 and sidewall spacers 64 are formed adjacent opposing sidewalls of SC gate stack 34 and LC gate stack 36, respectively. In accordance with one exemplary technique, a spacer-forming material (e.g., SiO2) is deposited over substrate 20, SC gate stack 34, and LC gate stack 36. The spacer-forming material can be deposited to an exemplary thickness of approximately 15 nm utilizing LPCVD. The spacer-forming material is then anisotropically etched utilizing, for example, a reactive ion etching (RIE) technique employing a CHF3, CF4, or SF6 chemistry. This results in the formation of sidewall spacers 62 on opposing sidewalls of SC gate stack 34 and sidewall spacers 64 on opposing sidewalls of LC gate stack 36. Although not shown in FIG. 3, the sidewall spacers may be formed to include an underlying, relatively thin thermally grown oxide layer commonly referred to as a "zero spacer."

[0025] For the purposes of clarity, FIG. 3 illustrates SC transistor 16 and LC transistor 18 as each including only a single set of sidewall spacers and a single source/drain implantation. This notwithstanding, it will be readily appreciated that multiple spacers and multiple implants can, and typically will, be utilized in the manufacture of SC transistor 16 and/or LC transistor 18.
For example, after the performance of the above-described sidewall spacer formation step and shallow implantation step, a second sidewall spacer formation step and a deeper implantation step can be performed.

[0026] Next, as shown in FIG. 4, silicide layers are formed within the upper surfaces of the integrated circuit. In particular, a silicide layer 66 is formed within source/drain regions 54, 56, 58, 60; a silicide layer 68 is formed within polycrystalline silicon layer 38 of SC gate stack 34; and, perhaps, a silicide layer 70 is formed within polycrystalline silicon layer 44 of LC gate stack 36. In one option, these layers of silicide are formed by depositing a layer of silicide-forming metal onto the surface of substrate 20 proximate source/drain regions 54, 56, 58, and 60 and subsequently heating the silicide-forming metal utilizing, for example, rapid thermal annealing (RTA). Preferred silicide-forming metals include cobalt and nickel, although other silicide-forming metals may be employed (e.g., rhenium, ruthenium, palladium, etc.). The silicide-forming metal can be deposited, for example, by sputtering to a thickness of approximately 5-30 nm. Any silicide-forming metal that is not in contact with exposed silicon (e.g., the silicide-forming metal that is deposited on sidewall spacers 62, 64) does not react during the RTA to form a silicide and can subsequently be removed via wet etching in an H2O2/H2SO4 or HNO3/HCl solution. Silicide layers 66 and 68 serve to increase conductivity and provide a convenient contact point. Silicide layer 70, if formed, is ultimately removed along with polycrystalline silicon layer 44 and metal gate 46 (i.e., dummy gate 50 labeled in FIG. 2) as described below in conjunction with FIGs. 11 and 12.

[0027] FIG. 5 illustrates the exemplary integrated circuit after a layer of etch stop material 72 has been deposited over substrate 20, SC transistor 16, and LC transistor 18.
In a preferred embodiment, the layer of etch stop material 72 comprises silicon nitride deposited to a thickness of approximately 50 nanometers utilizing, for example, CVD. The deposition of etch stop material 72 over SC gate stack 34 and sidewall spacers 62 results in the production of a first raised etch stop feature 74 above SC transistor 16, and the deposition of etch stop material 72 over LC gate stack 36 and sidewall spacers 64 results in the production of a second raised etch stop feature 76 above LC transistor 18.

[0028] With reference to FIG. 6, an interlayer dielectric (ILD) 75 is next deposited (e.g., via CVD) over the layer of etch stop material 72 (source/drain regions 54, 56, 58, 60 are not shown in FIG. 6, or in any of the subsequent figures, for clarity). ILD 75 can be deposited from, for example, a TEOS (tetra-ethyl orthosilicate) source. ILD 75 is preferably deposited to a thickness sufficient to completely cover raised features 74 and 76 of etch stop layer 72. The upper surface of ILD 75 is preferably planarized utilizing, for example, a chemical mechanical polishing or planarization (CMP) process. For example, and as shown in FIG. 7, the upper surface of ILD 75 may be planarized beyond the apexes of raised etch stop features 74 and 76 to expose an upper portion of raised etch stop feature 74 and an upper portion of raised etch stop feature 76. Alternatively, the planarization may be discontinued prior to exposing raised etch stop features 74 and 76. In this latter case, the upper surface of ILD 75 may reside at a level slightly above raised etch stop features 74 and 76 after planarization, as indicated in FIG. 7 by dashed line 82. Etching can then be performed to expose the upper portions of raised etch stop features 74 and 76.

[0029] Turning now to FIG. 8, a photoresist mask 84 is placed over the upper surface of the integrated circuit and subsequently patterned.
After patterning, photoresist mask 84 covers LC transistor 18 and any N-type devices included in the integrated circuit. Areas of the integrated circuit exposed through patterned mask 84 are then etched to produce an opening 86 in ILD 75 through which SC gate stack 34 and sidewall spacers 62 are exposed. The depth of the etch is preferably controlled such that the lower extremity of opening 86 is located below the upper surface of polycrystalline silicon layer 38. Stated differently, the etch is preferably performed to a depth sufficient to expose an upper portion of a sidewall 88 of polycrystalline silicon layer 38. In one specific exemplary embodiment, the etch depth is between approximately 200 and approximately 300 Angstroms.

[0030] FIG. 9 illustrates an optional oxidizing step that can be performed after removing photoresist mask 84 (FIG. 8). In a preferred embodiment, the oxidizing step assumes the form of an oxygen annealing process wherein the exposed portions of sidewall spacers 62 are introduced to an oxygen ambient (e.g., approximately 5-10 parts per million O2) at a predetermined temperature (e.g., approximately 400-600 degrees Celsius) for a predetermined time period (e.g., up to 30 minutes or more). During this oxygen annealing process, oxygen molecules diffuse downward through sidewall spacers 62 and into gate insulator 42 to fill oxygen vacancies within insulator 42, as described in more detail below. Notably, the oxygen molecules cannot easily diffuse through etch stop layer 72; thus, oxygen annealing has little to no effect on gate insulator 48 of LC transistor 18.

[0031] As previously explained, it has been discovered that positive fixed charges produced by oxygen vacancies within the gate insulator (e.g., gate insulator 42) may shift the threshold voltage (Vt) of a SC device away from the desired bandedge (BE) Vt. The oxidizing step illustrated in FIG.
9 significantly reduces or entirely eliminates these fixed charges by filling the oxygen vacancies in gate insulator 42, which permits the actual threshold voltage of SC transistor 16 to approach the desired BE Vt. This concept is graphically illustrated in FIG. 10, wherein drain current (Id) is plotted along the horizontal axis and gate voltage (Vg) is plotted along the vertical axis. Two functions are illustrated in FIG. 10, namely, a pre-oxidizing function 92 and a post-oxidizing function 90. As may be appreciated by comparing function 92 to function 90, the oxidation of the gate insulator shifts the drain current-versus-gate voltage function to the left, thus permitting a bandedge threshold voltage to be achieved for a given drain current. This, in turn, permits SC transistor 16 to conduct more current at the same gate voltage.

[0032] After the performance of the above-described oxidization process, a damascene process is utilized to replace silicide layer 70, polycrystalline silicon layer 44, and metal gate 46 (again, collectively referred to as the dummy gate) with a permanent metal gate. With reference to FIG. 11, a photoresist mask 94 is first placed over the integrated circuit to cover SC transistor 16 and any N-channel devices that may be included in the integrated circuit. An etching process is then performed to remove the exposed upper portion of raised etch stop feature 76 (labeled in FIGs. 5-7), an upper portion of sidewall spacers 64, and a surrounding portion of ILD 75. This etching step can be substantially identical to the etching step performed to expose SC gate stack 34 as described above in conjunction with FIG. 8. The etching process forms an opening 95 within the upper surface of the integrated circuit over LC transistor 18, thus exposing an upper portion of LC gate stack 36 and sidewall spacers 64.

[0033] Next, and as shown in FIG.
12, a second etching step is performed to remove silicide layer 70 and polycrystalline silicon layer 44 of LC gate stack 36. While photoresist mask 94 remains over SC transistor 16, an etchant selective to polycrystalline silicon (e.g., tetra-methyl ammonium hydroxide or TMAH) is applied to at least the exposed portion of LC gate stack 36. After polycrystalline silicon layer 44 has been adequately removed, a third etching step may be performed to remove metal gate 46, or a treatment step (e.g., alloying, oxygen annealing, fluorine implanting, etc.) may be used to modify the work function of LC gate stack 36. The particular etchant employed will, of course, depend upon the metal used to form metal gate 46. If, for example, metal gate 46 comprises titanium nitride, an ammonium hydroxide or peroxide-based chemistry can be utilized to remove gate 46. Thus, through the series of etching steps illustrated in FIG. 12, the components of dummy gate 50 (i.e., polycrystalline silicon layer 44 and metal gate 46 as labeled in FIG. 2) are removed to form an LC device trench 96 between sidewall spacers 64.

[0034] FIG. 13 illustrates SC transistor 16 and LC transistor 18 after the deposition of a metal film layer 98 over the integrated circuit and into LC device trench 96. Before the deposition of metal film layer 98, photoresist mask 94 is removed and, in a preferred embodiment, a relatively thin layer of a work function-setting metal (e.g., iridium, platinum, aluminum, ruthenium, etc.) is deposited (not shown). Deposition of the work function-setting metal and metal film layer 98 can be accomplished utilizing, for example, either a conventional electroless or an electrolytic deposition plating process. In a preferred embodiment, metal film layer 98 comprises a metal having an effective work function of approximately 4.7 to approximately 5.1 electron volts.
As explained above, metals having work functions falling within this idealized range tend to be unstable at temperatures exceeding 400 degrees Celsius and are consequently referred to herein as thermally unstable metals. Examples of suitable thermally unstable metals include iridium, platinum, palladium, and ruthenium. After being deposited to a sufficient thickness and substantially filling trench 96, metal film layer 98 is then polished (e.g., via CMP) to produce a substantially planar surface. FIG. 14 illustrates the integrated circuit after polishing. As shown in FIG. 14, polishing results in the production of a cap 100 surrounding and contacting SC gate stack 34 and in the production of a permanent LC gate 102 filling trench 96 (labeled in FIGs. 12 and 13) and contacting gate insulator 48. Additional steps are performed to complete processing of the integrated circuit (e.g., the deposition of a second interlayer dielectric, further etching steps to provide vias to the source and drain regions, deposition of metal plugs, etc.); however, such steps are well-known in the industry and are not described herein in the interests of concision.

[0035] It should thus be appreciated that there has been provided an example of a method suitable for manufacturing an integrated circuit having both short and long channel devices. The damascene-type replacement gate process described above enables thermally unstable metals to be employed in the construction of long channel devices, thus enabling bandedge threshold voltages to be achieved for long channel devices. In addition, the exemplary method repairs oxygen vacancies that may occur within the short channel PFET devices, thereby further permitting bandedge threshold voltages to be achieved for short channel devices.
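The repair described in the preceding paragraphs can be quantified with the standard fixed-charge relation from MOS device physics (the relation, the HfO2 permittivity, and the vacancy density below are assumptions supplied for illustration, not values from this disclosure): a sheet of positive fixed charge Qf at the gate insulator shifts the threshold voltage by roughly ΔVt = -Qf/Cox, where Cox is the areal gate capacitance.

```python
# Hedged illustration (standard MOS physics, not from the text above):
# positive fixed charge Qf from oxygen vacancies shifts the threshold
# voltage by roughly dVt = -Qf / Cox, with Cox = k * e0 / t_ox.
E0 = 8.854e-12          # vacuum permittivity, F/m
K_HFO2 = 20.0           # assumed relative permittivity for an HfO2 insulator
T_OX = 3e-9             # 3 nm gate insulator thickness, per the text

def vt_shift(n_fixed_per_cm2: float) -> float:
    """Threshold-voltage shift (V) caused by a sheet of positive fixed charge."""
    q = 1.602e-19                      # elementary charge, C
    cox = K_HFO2 * E0 / T_OX           # areal capacitance, F/m^2
    qf = q * n_fixed_per_cm2 * 1e4     # convert charge density from cm^-2 to m^-2
    return -qf / cox

# An assumed vacancy density of 5e12 cm^-2 gives a shift of about -0.14 V,
# the kind of departure from the bandedge Vt that the oxygen anneal repairs.
print(round(vt_shift(5e12), 3))  # prints -0.136
```

Filling the vacancies drives Qf toward zero, which is why the post-anneal curve in FIG. 10 moves the device back toward its bandedge threshold voltage.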
In the above-described exemplary embodiment, dummy gate replacement is described as being performed solely for a PFET long channel device (and not for an NFET long channel device); this example notwithstanding, it should be appreciated that dummy gate replacement may be performed for both PFET long channel devices and NFET long channel devices in alternative embodiments.

[0036] While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. Although certain embodiments of the method described above include a thin seed layer and a deposited metal layer, after subsequent heating steps that may take place during further processing the seed layer and the deposited metal layer may merge together so that a separate and distinct seed layer is not discernable. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the invention as set forth in the appended claims and the legal equivalents thereof.
Methods and apparatuses relating to processors that contextually optimize instructions at runtime are disclosed. In one embodiment, a processor includes a fetch circuit to fetch an instruction from an instruction storage, a format of the instruction including an opcode, a first source operand identifier, and a second source operand identifier; wherein the instruction storage includes a sequence of sub-optimal instructions preceded by a start-of-sequence instruction and followed by an end-of-sequence instruction. The disclosed processor further includes a decode circuit to decode the instruction, to detect the start-of-sequence instruction and the end-of-sequence instruction, to buffer the sequence of sub-optimal instructions therebetween, to access a lookup table to identify one or more optimized instructions to substitute for one or more of the sequence of sub-optimal instructions, and to select either the decoded instruction or the sequence of one or more optimized instructions to dispatch to an execution circuit.
1. A processor comprising:
fetch circuitry to fetch an instruction from an instruction storage device, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier, wherein the instruction storage device comprises a sequence of suboptimal instructions, the sequence of suboptimal instructions being preceded by a sequence start instruction and followed by a sequence end instruction;
a decoding circuit to decode the instruction; and
an execution circuit to execute a dispatched instruction received from the decoding circuit;
wherein the decoding circuit is configured to detect the sequence start instruction and the sequence end instruction, to buffer the sequence of suboptimal instructions therebetween, to access a lookup table to identify a sequence of one or more optimized instructions to substitute for one or more of the sequence of suboptimal instructions, and to dispatch either the decoded instruction or the sequence of one or more optimized instructions to the execution circuit.

2. A processor comprising:
fetch circuitry to fetch an instruction from an instruction storage device, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier, wherein the instruction storage device comprises a sequence of suboptimal instructions, the sequence of suboptimal instructions being preceded by a sequence start instruction and followed by a sequence end instruction;
a decoding circuit to decode the instruction, the decoding circuit comprising means for detecting the sequence start instruction and the sequence end instruction and means for buffering the sequence of suboptimal instructions therebetween; and
an execution circuit to execute a dispatched instruction received from the decoding circuit;
wherein the decoding circuit is further to access a lookup table to identify a sequence of one or more optimized instructions to
replace one or more of the sequence of suboptimal instructions, and to select either the decoded instruction or the sequence of one or more optimized instructions to dispatch to the execution circuit.

3. The processor of any of claims 1-2, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises vector instructions.

4. The processor of any of claims 1-2, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises single instruction multiple data (SIMD) instructions.

5. The processor of any of claims 1-4, wherein metadata describes an instruction set architecture associated with the suboptimal instructions, and wherein the sequence of one or more optimized instructions is associated with another instruction set architecture.

6. The processor of any of claims 1-5, wherein the sequence of one or more optimized instructions utilizes registers that are wider than the registers utilized by the sequence of suboptimal instructions.

7. The processor of any of claims 1-6, wherein the decoding circuit is further configured to access the lookup table to identify a plurality of alternate instructions to replace a sequence of a plurality of suboptimal instructions.

8. The processor of any of claims 1-7, wherein the first source operand identifier and the second source operand identifier of the sequence start instruction comprise metadata providing data describing the sequence of suboptimal instructions.

9. The processor of any of claims 1-8, wherein the sequence of suboptimal instructions loops over a sequence of execution iterations.

10. The processor of any of claims 1-9, wherein the first source operand identifier specifies a number of iterations over the sequence of suboptimal instructions, the second source operand identifier specifies the
variable that is incremented during each iteration, and a third source operand identifier specifies the span by which the variable is incremented during each iteration.

11. A method comprising:
fetching an instruction from an instruction storage device, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier, wherein the instruction storage device comprises a sequence of suboptimal instructions, the sequence of suboptimal instructions being preceded by a sequence start instruction and followed by a sequence end instruction;
decoding the instruction; and
executing a dispatched instruction received from the decoding circuit;
wherein the decoding further comprises detecting the sequence start instruction and the sequence end instruction, buffering the sequence of suboptimal instructions therebetween, accessing a lookup table to identify a sequence of one or more optimized instructions to replace one or more of the sequence of suboptimal instructions, and selecting either the decoded instruction or the sequence of one or more optimized instructions to be executed.

12. A method comprising:
fetching an instruction from an instruction storage device, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier, wherein the instruction storage device comprises a sequence of suboptimal instructions, the sequence of suboptimal instructions being preceded by a sequence start instruction and followed by a sequence end instruction;
decoding the instruction, including the steps of detecting the sequence start instruction and the sequence end instruction and buffering the sequence of suboptimal instructions therebetween; and
executing a dispatched instruction received from the decoding circuit;
wherein the decoding further comprises accessing a lookup table to identify a sequence of one or more optimized
instructions to replace one or more of the sequence of suboptimal instructions, and selecting either the decoded instruction or the sequence of one or more optimized instructions to be executed.

13. The method of any of claims 11-12, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises vector instructions.

14. The method of any of claims 11-12, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises single instruction multiple data (SIMD) instructions.

15. The method of any of claims 11-14, wherein metadata describes an instruction set architecture associated with the suboptimal instructions, and wherein the sequence of one or more optimized instructions is associated with another instruction set architecture.

16. The method of any of claims 11-15, wherein the sequence of one or more optimized instructions utilizes registers that are wider than the registers utilized by the sequence of suboptimal instructions.

17. The method of any of claims 11-16, wherein the decoding further comprises accessing the lookup table to identify a plurality of alternate instructions to replace a sequence of a plurality of suboptimal instructions.

18. The method of any of claims 11-17, wherein the first source operand identifier and the second source operand identifier of the sequence start instruction comprise metadata providing data describing the sequence of suboptimal instructions.

19. The method of any of claims 11-18, wherein the sequence of suboptimal instructions loops over a sequence of execution iterations.

20. The method of any of claims 11-19, wherein the first source operand identifier specifies a number of iterations over the sequence of suboptimal instructions, the second source operand identifier specifies the variable that is
incremented during each iteration, and a third source operand identifier specifies the span by which the variable is incremented during each iteration.

21. An article of manufacture comprising a non-transitory machine readable storage medium storing instructions executable by a processor to perform the processes of:
fetching an instruction from an instruction storage device, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier, wherein the instruction storage device comprises a sequence of suboptimal instructions, the sequence of suboptimal instructions being preceded by a sequence start instruction and followed by a sequence end instruction;
decoding the instruction; and
executing a dispatched instruction received from the decoding circuit;
wherein the decoding further comprises detecting the sequence start instruction and the sequence end instruction, buffering the sequence of suboptimal instructions therebetween, accessing a lookup table to identify a sequence of one or more optimized instructions to replace one or more of the sequence of suboptimal instructions, and selecting either the decoded instruction or the sequence of one or more optimized instructions to be executed.

22. An article of manufacture comprising a non-transitory machine readable storage medium storing instructions executable by a processor to perform the processes of:
fetching an instruction from an instruction storage device, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier, wherein the instruction storage device comprises a sequence of suboptimal instructions, the sequence of suboptimal instructions being preceded by a sequence start instruction and followed by a sequence end instruction;
decoding the instruction, including the steps of detecting the sequence start instruction and the sequence end instruction and the step
of buffering the sequence of suboptimal instructions therebetween; and
executing a dispatched instruction received from the decoding circuit;
wherein the decoding further comprises accessing a lookup table to identify a sequence of one or more optimized instructions to replace one or more of the sequence of suboptimal instructions, and selecting either the decoded instruction or the sequence of one or more optimized instructions to be executed.

23. The article of any of claims 21-22, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises vector instructions.

24. The article of any of claims 21-22, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises single instruction multiple data (SIMD) instructions.

25. The article of any of claims 21-24, wherein metadata describes an instruction set architecture associated with the suboptimal instructions, and wherein the sequence of one or more optimized instructions is associated with another instruction set architecture.
System and Method for Context Vectorization of Instructions at Runtime

Technical Field

Embodiments described herein relate generally to processors. In particular, the described embodiments relate to processors that are configured to optimize instructions in a context-dependent manner at runtime.

Background

Parallel processing is generally faster than scalar execution of one data point at a time. A single instruction multiple data (SIMD) computer with multiple processing elements that perform the same operation on multiple data points achieves performance gains by exploiting parallelism and using multiple parallel execution cores simultaneously. A SIMD processor can take advantage of parallelism both when performing mathematical operations and when moving data. A SIMD processor can load or store multiple data items simultaneously, yielding performance gains over a slower scalar processor that loads or stores one data item at a time. Using SIMD instructions therefore provides better performance than using scalar instructions when executing a computer program on a processor with parallel resources.

However, programming with a SIMD instruction set architecture (ISA) can be challenging. For example, a SIMD ISA is typically processor specific. Programs that use SIMD instructions may need to be rewritten and customized to accommodate new processor generations. The work required to adapt scalar code to a SIMD instruction set architecture may need to be partially or completely repeated for each new generation of the instruction set architecture (e.g., MMX, SSE, SSE2, SSE3, SSE4, AVX, AVX2, AVX3.1, and AVX3.2); the required work includes rewriting the code, documenting the code, enabling the compiler to emit the code, training users to use the code, and debugging and collecting records of the code's execution.
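The scalar-versus-SIMD distinction underlying the background above can be sketched with a toy model (illustrative only; a real SIMD unit performs each grouped operation in hardware with a single instruction, and the function names and 4-lane width here are assumptions):

```python
# Toy contrast between scalar and SIMD-style execution.
# Names and the 4-lane width are illustrative assumptions.
def scalar_add(a, b):
    # One data point per "instruction": n element-wise steps for n elements.
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b, lanes=4):
    # One "instruction" per group of `lanes` elements: the same operation
    # applied to multiple data points at once, as a SIMD unit would.
    out = []
    for i in range(0, len(a), lanes):
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
    return out

a, b = list(range(8)), [10] * 8
# Both produce identical results; the SIMD-style version needs only
# len(a) / lanes grouped steps instead of len(a) scalar steps.
assert scalar_add(a, b) == simd_add(a, b) == [10, 11, 12, 13, 14, 15, 16, 17]
```

The point of the model is the step count, not the arithmetic: for 8 elements the scalar path takes 8 element-wise operations while the 4-lane path takes 2 grouped ones, which is the performance gain the background paragraph describes.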
What is needed, therefore, is a way to allow programmers to utilize a SIMD instruction set architecture while avoiding the challenges inherent in traditional solutions.

Moreover, conventional solutions are limited because they optimize code statically rather than dynamically during execution. Compilers attempt to optimize the execution of certain code sequences, but they operate in a static environment without knowing the state of the machine or its registers. Even traditional hand-coded SIMD code cannot be optimized based on the runtime state of the machine and registers. Therefore, there is a need for a method of optimizing instructions at runtime with knowledge of the registers and their contents.

DRAWINGS

FIG. 1 is a block flow diagram showing a process for a processor to contextually optimize instructions at runtime, in accordance with one embodiment.
FIG. 2 is a block diagram showing processing components used by a processor to contextually optimize instructions at runtime, in accordance with one embodiment.
FIG. 3 illustrates an instruction and its various fields in accordance with one embodiment.
FIG. 4 illustrates an exemplary register allocation by a processor that optimizes the Vectorbeam code of Table 2, in accordance with one embodiment.
FIG. 5 illustrates an exemplary allocation of multiple computing resources for parallel processing of the allocated registers of FIG. 4, in accordance with one embodiment.
FIG. 6 shows a portion of a lookup table listing vector-instruction alternatives for scalar instructions, in accordance with one embodiment.
FIG. 7 illustrates a vector processor register file in accordance with one embodiment.

Detailed Description

In the following description, numerous specific details are set forth. However, it is understood that embodiments of the present disclosure may be practiced without these specific details.
In other instances, well-known circuits, structures, and techniques have not been shown in detail so as not to obscure the description.

References in the specification to "one embodiment", "an embodiment", "an example embodiment", or the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment does not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. In addition, when a particular feature, structure, or characteristic is described in connection with an embodiment, combining it with other embodiments is considered within the knowledge of those skilled in the art, whether or not explicitly described.

A (e.g., hardware) processor, or set of processors, executes instructions from an instruction set (e.g., an instruction set architecture (ISA)). The instruction set is the part of the computer architecture related to programming and generally includes native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term instruction herein may refer to a macro-instruction, such as an instruction that is provided to the processor for execution. A processor (e.g., having one or more cores to decode and/or execute instructions) can operate on data, for example, in performing arithmetic, logic, data movement, or other functions.

Context Optimization at Runtime

In accordance with the present disclosure, the processor optimizes instructions in a context-dependent manner at runtime. In particular, the processor dynamically optimizes sequences of suboptimal instructions at runtime. Suboptimal instructions, as used herein, are instructions that do not fully use the available resources of the processor, or that can be optimized to take better advantage of the processor's instruction set architecture.
In one embodiment, the sequence of suboptimal instructions is stored in the instruction store and surrounded by sequence start and sequence end delimiting instructions. For purposes of this disclosure, a suboptimal code sequence surrounded by a sequence start and a sequence end instruction is referred to herein as Vectorbeam code. The claims herein are not limited by the name Vectorbeam; in alternative embodiments, the code can be referenced by a different name, and Vectorbeam instructions are not limited to the choice of any specific opcode or mnemonic.

In accordance with the disclosed embodiments, the processor is configured to detect a Vectorbeam code sequence of suboptimal instructions, buffer the sequence, and access a lookup table to identify one or more instructions to replace and optimize one or more of the suboptimal instructions of the Vectorbeam code sequence.

Delimiters: Sequence Start and Sequence End

In one embodiment, the sequence start and sequence end instructions may be chosen to provide a hint to the processor as to how to optimize the code sequence. In one embodiment, the processor can infer that the sequence of suboptimal instructions should be an iterative loop because the sequence is preceded by the sequence start delimiting instruction "foreach" and followed by the sequence end delimiting instruction "next". Similarly, in alternative embodiments, the sequence start instruction may hint that the sequence of suboptimal instructions is to be iterated by using the delimiting instruction "foreach", "do until", "repeat until", or "do while". The sequence end instruction can be similarly named to suggest its function (e.g., "end", "exit", "stop", "continue", "return").
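One way the detection of a delimited Vectorbeam region could work is sketched below (an illustrative Python model, not part of the disclosure, using "foreach"/"next" as the example delimiters):

```python
START, END = "foreach", "next"  # example delimiters from the text above

def extract_vectorbeam(stream):
    # Scans a decoded instruction stream for a region bounded by the
    # sequence start and sequence end delimiters; returns the suboptimal
    # instruction sequence between them, or [] if no such region exists.
    opcodes = [insn.split()[0] for insn in stream]
    if START in opcodes and END in opcodes:
        return stream[opcodes.index(START) + 1 : opcodes.index(END)]
    return []
```

The extracted region is what the processor would buffer and evaluate for replacement.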
The sequence start and sequence end instructions may also have predetermined values that are not human readable, or are numeric, or have been selected automatically or randomly without human intervention.

Sequence of Suboptimal Instructions

As used herein, a sequence of suboptimal instructions refers to computer instructions that underutilize the parallel resources available in the processor on which those instructions are to execute. For example, the sequence of suboptimal instructions may be an iterative, scalar loop that, as written, uses fewer parallel resources than are available in the processor. As another example, a sequence of suboptimal instructions may have been written to use 64-bit registers, while the processor on which those instructions are to be executed has 128-bit registers available. As another example, the sequence of suboptimal instructions may involve multimedia operations, such as dimming the brightness of a screen of pixels; in such an example, scalar math operations can be replaced with wide vector instructions.

Metadata

In accordance with the present disclosure, a sequence start instruction may include operands that provide metadata to the processor that will implement the Vectorbeam code sequence. In one embodiment, the metadata gives the processor a hint on how to optimize the code. In one embodiment, the sequence of suboptimal instructions is preceded by the sequence start instruction "foreach rax, 0, 64, 1". The metadata "rax, 0, 64, 1" provides the processor with the following hint: register rax should be used as the loop index, rax should be changed 64 times starting from 0, and the stride should be 1 for each iteration.

Description of the Illustrative Embodiments

FIG. 1 is a block flow diagram showing a process for a processor to optimize instructions in a context-dependent manner at runtime, in accordance with one embodiment.
In particular, a processor configured to execute the process of optimizing instructions in context at runtime according to block flow diagram 100 fetches an instruction at 102, decodes the instruction at 104, and tests at 106 whether it is in an optimization mode. In one embodiment, the instructions to be fetched are stored in an instruction buffer. In alternative embodiments, the instructions may be stored in an instruction register, a general-purpose register, a program stack, or a memory, including static and dynamic random access memory. The processor executing the process of flow diagram 100 decodes the fetched instructions (e.g., macro-instructions) at 104 to generate as output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals.

In accordance with the present disclosure, a Vectorbeam code sequence consisting of a series of suboptimal instructions is stored in memory, preceded by a sequence start instruction and followed by a sequence end instruction. In one embodiment, the sequence start instructions may be encoded in human-readable assembly code and have mnemonics defined in a manner that suggests how the suboptimal instructions should be optimized. For example, the sequence start instruction can be "foreach", "do until", "repeat until", or "do while", to name a few. The sequence end instruction can be similarly named to suggest its function (e.g., "end", "exit", "stop", "continue", "return", "abort", "undo", or "commit"). The sequence start and sequence end instructions may also have predetermined values that are not human readable, or are numeric, or have been selected automatically or randomly without human intervention.

In one embodiment, the suboptimal code sequences of the Vectorbeam code are written with scalar mnemonics and appear to have a scalar and/or serial execution flow, making them easy to understand and debug.
Delimiting the suboptimal code sequence with easy-to-understand delimiters enhances the readability of the code and also suggests to the processor how to optimize the code at runtime.

If the processor determines at 106 that it is not running in the optimization mode, it dispatches the decoded instruction to be executed at 114 and commits or retires the instruction at 116.

As noted above, in one embodiment the processor enters the optimization mode at runtime when the instruction decoded at 104 is a sequence start instruction. In another embodiment, the processor enters the optimization mode upon power-up or after reset. In another embodiment, the processor implicitly enters the optimization mode when it fetches a sequence of instructions at 102, decodes them at 104, and detects one or more suboptimal instructions, such as scalar instructions or instructions from an older instruction set architecture. In an alternative embodiment, the processor implicitly enters the optimization mode during runtime in response to a predefined runtime event.

As noted above, in one embodiment the processor exits the optimization mode at runtime when the instruction decoded at 104 is a sequence end instruction. In an alternative embodiment, the processor implicitly exits the optimization mode after a condition occurs, such as a failure to identify an alternate instruction for a period of time. In an alternative embodiment, the processor remains in the optimization mode indefinitely.

If the processor determines at 106 that it is already in the optimization mode, or if the processor detects a sequence start instruction, it queues the decoded instruction into the instruction buffer at 107. The instruction buffer in one embodiment is a series of general-purpose registers.
In other embodiments, the instruction buffer may utilize an instruction register, a scalar register, a shift register, a stack memory, a static or dynamic RAM memory, or a cache.

In operation and during runtime, the processor compares the instructions in the instruction buffer with a lookup table at 108 to determine whether a suitable alternate instruction is available, including an alternate instruction that would result in better performance. The processor in one embodiment is guided by metadata included with the sequence start delimiter when evaluating at 108 whether a suitable alternate instruction is available. The processor in one embodiment is guided by the state and contents of the register file (described further below with respect to FIG. 7) when evaluating at 108 whether a suitable alternate instruction is available.

Examples of the disclosed processor implementing Vectorbeam code and being guided by metadata are discussed below with respect to Tables 1 and 2. In one embodiment, the suboptimal instructions of the Vectorbeam code include scalar instructions, and the alternate instructions include vector instructions or SIMD instructions. In one embodiment, the suboptimal instructions of the Vectorbeam code include instructions from the instruction set architecture of an older-generation processor, and the alternate instructions are selected from a newer processor's instruction set. In one embodiment, the sequence of suboptimal instructions of the Vectorbeam code consists of an iterative loop of instructions whose operation can be completed with fewer cycles or with vector instructions. In another embodiment, the sequence of suboptimal instructions in the instruction buffer consists of a conditional loop, and the alternate instructions complete the operation with fewer cycles or with vector instructions.
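As a concrete illustration of what such a lookup table might contain (a toy sketch: the ADDSS/ADDPS pairing appears in the discussion of FIG. 6, while the remaining pairs are the standard SSE scalar/packed counterparts for the operations FIG. 6 lists, and are assumptions here):

```python
# Scalar mnemonic -> packed (vector) counterpart for the operations of FIG. 6.
SCALAR_TO_VECTOR = {
    "ADDSS":   "ADDPS",    # addition
    "SUBSS":   "SUBPS",    # subtraction
    "MULSS":   "MULPS",    # multiplication
    "DIVSS":   "DIVPS",    # division
    "SQRTSS":  "SQRTPS",   # square root
    "MAXSS":   "MAXPS",    # maximum
    "MINSS":   "MINPS",    # minimum
    "RCPSS":   "RCPPS",    # reciprocal
    "RSQRTSS": "RSQRTPS",  # reciprocal of the square root
}

def vector_alternative(mnemonic):
    # Returns the packed alternative if one exists, else the original mnemonic.
    return SCALAR_TO_VECTOR.get(mnemonic, mnemonic)
```

A mnemonic with no entry simply falls through unchanged, mirroring the case at 108 where no suitable alternate instruction is available.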
If the processor determines at 108 that an alternate instruction that will produce better performance is available, it selects one or more alternate instructions at 110.

At 112, the processor selects for dispatch either the decoded instruction (if no alternate instruction was available at 108) or the one or more alternate instructions selected at 110. In one embodiment, the processor maintains the sequence of instructions in the instruction buffer. In another embodiment, the processor limits the number of instructions stored in the instruction buffer; if the number of instructions exceeds the limit, the processor removes one or more instructions from the instruction buffer and dispatches them to be executed at 114.

At 114, the processor executes the decoded instruction or the one or more alternate instructions. After execution, the processor retires or commits the instruction at 116 to ensure that the execution result is written to, or has been written to, its destination, and that resources are freed or released for later use.

FIG. 2 is a block diagram showing processing components used by a processor to optimize Vectorbeam code instructions in context at runtime, in accordance with one embodiment. In particular, block diagram 200 includes instruction storage 202, fetch circuitry 204, decode circuitry 206, execution circuitry 220, and retirement/commit circuitry 226. Decode circuitry 206 includes decode logic 208, optimization mode detector 210, instruction buffer 212, lookup table 214, alternate evaluation logic 216, and instruction selector 218. Block diagram 200 also includes registers 222 and memory 224.

In operation, the processor fetches instructions from instruction storage 202 using fetch circuitry 204 during runtime. In one embodiment, the instruction storage 202 is a register file.
In alternative embodiments, instruction storage 202 may be an instruction buffer, an instruction register, a general-purpose register, a program stack, or a static or dynamic random access memory.

The processor passes the fetched instructions to decode circuitry 206, which decodes them using decode logic 208. The instructions, including their opcode and optional operands, are described below with respect to FIG. 3. The processor detects whether it is in the optimization mode using optimization mode detector 210. If the processor is not in the optimization mode, it dispatches the decoded instructions to execution circuitry 220.

However, if the optimization mode detector 210 detects that the processor is in the optimization mode, or sets the processor into the optimization mode, the decoded instructions are queued in instruction buffer 212. As shown, the instruction buffer 212 has four entries, but the number of entries is variable: as will be understood by those skilled in the art, it can be less than four, greater than four, or dynamically adjusted. During runtime, the processor uses the alternate evaluation logic 216 to evaluate the decoded instructions in the instruction buffer 212.

In particular, the alternate evaluation logic 216 accesses the lookup table 214 during runtime to determine whether any alternate instructions are available. In one embodiment, the lookup table compares the mnemonics of the suboptimal Vectorbeam instructions in the instruction buffer with a listing of alternate instructions. In one embodiment, the alternate instruction is a vector instruction. In another embodiment, the alternate instruction is a SIMD instruction. In another embodiment, the alternate instruction is from an instruction set architecture that is newer than that of the suboptimal instruction. In evaluating alternatives, in one embodiment, the alternate evaluation logic 216 is guided by metadata provided with the sequence start instruction.
In evaluating alternatives, in one embodiment, the alternate evaluation logic 216 evaluates the runtime state of the processor registers. For example, if multiple registers are used in determining memory addresses, and those memory addresses fall in the same cache line, the alternate evaluation logic 216 replaces the scalar memory accesses associated with those memory addresses with a vector memory access.

In one embodiment, if the alternate evaluation logic 216 determines that an alternate instruction is available, the alternate instruction is passed to the instruction selector 218 to be dispatched for execution. In an alternative embodiment, the alternate instruction is written to, added to, or substituted into the instruction buffer 212. As shown, the decode circuitry 206 uses the instruction selector 218 to select instructions for dispatch from the instruction buffer 212 or from the alternate evaluation logic 216.

In one embodiment, execution circuitry 220 is a vector processor. In alternative embodiments, execution circuitry 220 may include multiple cores and parallel hardware. In one embodiment, execution circuitry 220 utilizes registers 222 and memory 224 to store intermediate results and otherwise support execution. After execution, the retirement/commit circuitry 226 ensures that the execution result is written to, or has been written to, its destination, and that resources are freed or released for later use.

FIG. 3 illustrates an instruction and its various fields, in accordance with one embodiment. In particular, instruction 300 includes an opcode 302 and optional first, second, and third operand identifiers 304, 306, and 308. The opcode 302 identifies the instruction and/or operation to be executed, as well as the type of operand (e.g., "sequence start" or "sequence end"). The first, second, and third operand identifiers 304, 306, and 308 are optional.
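The field layout of FIG. 3 can be illustrated with a toy parser (a Python sketch; the textual syntax, with comma-separated operands, is an assumption based on the "foreach rax, 0, 64, 1" example):

```python
def parse(text):
    # Splits an assembly-style instruction into opcode 302 and up to three
    # optional operand identifiers (304, 306, 308).
    opcode, _, rest = text.partition(" ")
    operands = [op.strip() for op in rest.split(",")] if rest else []
    return opcode, operands
```

An instruction with no operands, such as a bare sequence end delimiter, yields an empty operand list, matching the "not used" cases described for identifiers 304, 306, and 308.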
In one embodiment, opcode 302 is a "sequence start" instruction, and the first, second, and third operand identifiers 304, 306, and 308 represent metadata describing the subsequent suboptimal instructions, providing the processor with hints or suggestions on how to optimize them. In an alternative embodiment, opcode 302 is a "sequence start" instruction that identifies an iterative sequence of subsequent suboptimal instructions, and the first, second, and third operand identifiers 304, 306, and 308 suggest the number of loop iterations, the register to change on each iteration, and the stride by which that variable is incremented on each iteration. In an alternative embodiment, opcode 302 corresponds to a "sequence start" instruction and one or more of the first, second, and third operand identifiers 304, 306, and 308 are not used. In an alternative embodiment, opcode 302 corresponds to a "sequence end" instruction and the first, second, and third operand identifiers 304, 306, and 308 are not used.

Table 1 shows an exemplary sequence of suboptimal instructions being optimized in accordance with one embodiment. In particular, the side-by-side code comparison of Table 1 shows three versions of instructions for implementing the following loop, labeled Code 1: scalar code, AVX code, and Vectorbeam code:

Code 1: for (int i = 0; i < 64; i++) { outp[i] = inp1[i] + inp2[i]; }

Table 1

As shown in Table 1, the scalar code includes a loop to be executed 64 times, incrementing the register rax by one each time. The scalar code is suboptimal because it may not consume all of the processor's parallel resources during its 64-cycle iteration.

The AVX code differs from the scalar code in several ways. The AVX registers, ymm0 and ymm1, have different names and different sizes. The AVX opcodes have different names. Iterating over the data by 8 instead of 1 is unique to the generation of processors associated with AVX.
In general, the work needed to invent scalar instructions (such as the scalar instructions in Table 1), document them, enable the compiler to issue them, train users to use them, and debug and collect execution records may all need to be repeated for each new generation of the instruction set architecture (e.g., MMX, SSE, SSE2, SSE3, SSE4, AVX, AVX2, AVX 3.1, and AVX 3.2).

However, according to the present disclosure, substantially identical scalar instructions having substantially the same format are used in the Vectorbeam code. Specifically, the Vectorbeam code body is surrounded by a sequence start instruction (here "foreach rax, 0, 64, 1") and a sequence end instruction (here "next"). The "foreach" opcode in the sequence start instruction indicates to a processor implementing flow 100 (FIG. 1) that the suboptimal code to follow is intended to be iterative code. The operands "rax", "0", "64", and "1" provide metadata suggesting that a processor implementing process 100 (FIG. 1) use register rax as a loop index, iterating rax from 0 to 63 with a stride of 1 per iteration, to loop the suboptimal code. In other words, the sequence start instruction suggests that the processor implementing process 100 (FIG. 1) execute the suboptimal code 64 times, changing rax by one each time. The author of the original scalar code can prepare the Vectorbeam code sequence with essentially the same opcodes and essentially the same format and leave it to the processor to optimize the code. The processor will optimize the Vectorbeam code to take advantage of its specific hardware capabilities and instruction set architecture. In this way, the code sequence used to implement Code 1 can be ported to a new generation of processors and instruction set architectures without the work of rewriting, re-documenting, re-enabling, re-testing, retraining, re-debugging, and re-deploying the code.
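The loop behavior hinted by the "foreach rax, 0, 64, 1" metadata can be modeled as follows (an illustrative Python sketch, not part of the disclosure):

```python
def foreach_metadata(start, count, stride):
    # Models "foreach <reg>, <start>, <count>, <stride>": the loop index
    # register takes `count` values beginning at `start`, advancing by
    # `stride` on each iteration.
    reg, values = start, []
    for _ in range(count):
        values.append(reg)
        reg += stride
    return values
```

For the example above, foreach_metadata(0, 64, 1) yields the index values 0 through 63, one per iteration of the suboptimal code body.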
As new processors become available, developers can spend less time learning new opcodes, new register files, and other details associated with the new instruction set architecture.

FIG. 4 illustrates exemplary register renaming by a processor that optimizes the Vectorbeam code of Table 1, in accordance with one embodiment. In particular, the Vectorbeam code in this embodiment is to be run by a 128-bit processor (e.g., SSE) with a default policy that widens the Vectorbeam context by 4. Therefore, on the first pass the processor must execute the code for rax with the values {0, 1, 2, 3}. When the processor reaches the Vectorbeam instruction movss (%rsp, %rax, 4), %xmm0, it must perform four loads:

xmm0[0] = (rsp + 0 * 4);
xmm0[1] = (rsp + 1 * 4);
xmm0[2] = (rsp + 2 * 4);
xmm0[3] = (rsp + 3 * 4);

Although the Vectorbeam code of Table 1 refers to architectural registers xmm0 and xmm1, the processor executing Vectorbeam code 400 of FIG. 4 renames the architectural registers to physical registers. (As known to those skilled in the art, a processor executing code may have a large number of physical registers to use as the architectural registers referenced in various program threads, which may run simultaneously or out of order.) As shown, the processor accesses the logical-to-physical register table 404 to determine how to map the architectural registers to physical registers. According to table 404, the processor assigns xmm0 and xmm1 to physical register file locations 4-7 and 12-15.

FIG. 5 illustrates an exemplary allocation of multiple computing resources for parallel processing of the allocated registers of FIG. 4, in accordance with one embodiment. The 128-bit SSE processor here applies its knowledge of available hardware resources to optimize code and rename registers. As shown, the processor in accordance with the present disclosure renames the architectural xmm registers into physical register file 502.
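The four per-pass loads described for FIG. 4 can be modeled as one packed operation (an illustrative Python sketch; `mem` stands for byte-addressed memory and is an assumption of this model):

```python
def packed_load(mem, rsp, rax_values, scale=4):
    # One widened pass of "movss (%rsp,%rax,4), %xmm0": each lane of the
    # result receives mem[rsp + rax*scale] for one rax value, so four
    # scalar loads become a single packed load.
    return [mem[rsp + rax * scale] for rax in rax_values]
```

With rax_values = [0, 1, 2, 3], the four lanes correspond exactly to the loads xmm0[0] through xmm0[3] listed above.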
The register allocation selected in this embodiment allows four ALU additions to proceed in parallel at a time, as shown by 504, 506, 508, and 510 of FIG. 5.

Table 2 shows an exemplary sequence of suboptimal Vectorbeam instructions being optimized in accordance with one embodiment. Specifically, the side-by-side code comparison of Table 2 illustrates four versions of instructions for implementing the conditional loop of Code 2: scalar code, SSE code, AVX3 code, and Vectorbeam code, as follows:

Code 2: for (int i = 0; i < PTS; i++) { if (cond[i]) { outp[i] = inp1[i] + inp2[i]; } }

Table 2

As shown, the scalar code (like the scalar code shown in Table 1) is suboptimal because it requires 64 cycles. If the scalar code were run on a modern vector processor, it would use less parallel hardware than is available.

As with the AVX code (Table 1), programmers who migrate scalar code to SSE code need to learn new opcodes and new register sets, and the SSE code does not provide a jump bypass. Often, the work done to invent scalar instructions, document them, have them issued by the compiler, train users to use them, and debug and collect code records may need to be partially or completely repeated for each generation of the instruction set architecture (e.g., MMX, SSE, SSE2, SSE3, SSE4, AVX, AVX2, AVX 3.1, and AVX 3.2).

In addition, the SSE code has some drawbacks. For example, since the SSE code has no jump bypass, it performs the mathematical operations on all lanes regardless, and then uses the BLEND instruction to properly merge the untouched unconditional values with the values written in the conditional block. Performance is compromised (BLEND is about two-thirds slower than a normal store), and more registers are needed (5 versus 2). As with the AVX code (Table 1), SSE code generation requires a complex compiler.

The AVX3 code, like the AVX code (Table 1), differs from the scalar code in several respects. The AVX3 registers, ymm0...n, have different names and different sizes.
The AVX3 opcodes have different names. AVX3 uses 'k' registers, which are set by a comparison and then used in conjunction with stores and loads. The AVX3 code, like the SSE code, has no branches; the processor executes all opcodes under all conditions. Often, then, the work done to invent scalar instructions, document them, have the compiler issue them, train users to use them, and debug and collect code records may need to be partially or completely repeated for each generation of the instruction set architecture (e.g., MMX, SSE, SSE2, SSE3, SSE4, AVX, AVX2, AVX 3.1, and AVX 3.2).

However, according to the present disclosure, substantially the same scalar instructions as in Table 1 can be used, in substantially the same format, in the Vectorbeam code. Specifically, the Vectorbeam code begins with a sequence start instruction (here "foreach rax, 0, 64, 1") and ends with a sequence end instruction (here "next"). The "foreach" opcode indicates to a processor implementing flow 100 (FIG. 1) that the suboptimal Vectorbeam code to follow is iterative code. The operands "rax", "0", "64", and "1" provide metadata suggesting that a processor implementing process 100 (FIG. 1) use register rax as a loop index, iterating rax from 0 to 63 with a stride of 1 per iteration, to loop the suboptimal Vectorbeam code. In other words, the processor implementing process 100 (FIG. 1) will essentially execute the suboptimal code 64 times, changing rax by one each time. In this way, the code sequence that implements Code 2 can be ported to a next-generation instruction set architecture without the need to rewrite, retest, and republish the code.
Developers can also spend less time learning new opcodes, new register files, and other details associated with the new instruction set architecture.

Moreover, in accordance with the present disclosure, a CPU architect who knows the specific resources, capabilities, and instruction set architecture associated with a particular CPU can choose how to implement the functions described in the sequence of suboptimal Vectorbeam instructions. In addition, opcode 302 (FIG. 3) and metadata 304, 306, and 308 (FIG. 3) can provide hints to the processor when selecting which instructions to use in place of the sequence of suboptimal Vectorbeam instructions. For example, a sequence start instruction can let the processor know that multiple memory loads or stores use the same offset. Alternatively, the sequence start instruction can provide a hint that multiple memory locations to be loaded or stored fall in the same cache line. Alternatively, the sequence start instruction may provide a hint that multiple mathematical operations use the same operand, as may be the case in a multimedia or graphics code routine, such as one that uniformly increases the brightness of pixels.

FIG. 6 illustrates a portion of a lookup table listing vector instructions to be used in place of scalar instructions, in accordance with one embodiment. In particular, table 600 lists scalar instructions and their corresponding vector equivalents for performing the following mathematical operations: addition, subtraction, multiplication, division, square root, maximum, minimum, reciprocal, and reciprocal of the square root. Referring to the side-by-side table 600, in one embodiment, the sequence of suboptimal Vectorbeam instructions stored in the instruction buffer 212 (FIG. 2) includes a scalar addition ADDSS, and the alternate evaluation logic 216 (FIG. 2) replaces one or more suboptimal Vectorbeam instructions with the vector addition ADDPS.

FIG. 7 illustrates a vector processor register file, in accordance with one embodiment.
In particular, register file 700 includes vector registers 702 and general-purpose registers 704. As shown, the vector registers 702 include 32 zmm registers 706, each 512 bits wide; 16 ymm registers 708, each 256 bits wide; and 16 xmm registers 710, each 128 bits wide. As part of evaluating alternatives to the suboptimal Vectorbeam instructions in instruction buffer 212 (FIG. 2), in one embodiment, alternate evaluation logic 216 (FIG. 2) assigns the results of eight 32-bit scalar operations to a 256-bit ymm register.

In an alternative embodiment, the Vectorbeam sequence of suboptimal instructions is replaced with instructions from a newer instruction set architecture. In one embodiment, a sequence of suboptimal SSE instructions utilizing the 128-bit xmm registers 710 is replaced by a sequence of AVX instructions that operate using the 256-bit ymm registers 708 and achieve better performance, without requiring the author to rewrite the suboptimal code. In another embodiment, a sequence of suboptimal AVX instructions utilizing the ymm registers 708 is replaced by a sequence of AVX-512 instructions that utilize the zmm registers 706 and achieve better performance, without requiring the author to rewrite the suboptimal code.

In an alternative embodiment, opcode 302 (FIG. 3) may identify the particular instruction set architecture associated with the suboptimal instructions stored in instruction buffer 212 (FIG. 2). The alternate evaluation logic 216 (FIG. 2) can then replace the suboptimal instructions with optimal instructions associated with the processor's instruction set architecture.

In one embodiment, both the suboptimal instructions and the alternate instructions are vector instructions, but the alternate instructions are from a newer generation and use wider registers.

Claims (as amended under Article 19 of the Treaty)

1. A processor comprising:
fetch circuitry to fetch an instruction from an instruction storage, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier; wherein the instruction storage includes a sequence of suboptimal instructions, the sequence of suboptimal instructions preceded by a sequence start instruction and followed by a sequence end instruction;
decode circuitry to decode the instruction; and
execution circuitry to execute a dispatched instruction received from the decode circuitry;
wherein the decode circuitry is configured to detect the sequence start instruction and the sequence end instruction, to buffer the sequence of suboptimal instructions between them, to access a lookup table to identify a sequence of one or more optimized instructions to replace one or more of the sequence of suboptimal instructions, and to dispatch the decoded instruction or the sequence of one or more optimized instructions to the execution circuitry.

2. A processor comprising:
fetch circuitry to fetch an instruction from an instruction storage, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier; wherein the instruction storage includes a sequence of suboptimal instructions, the sequence of suboptimal instructions preceded by a sequence start instruction and followed by a sequence end instruction;
decode circuitry to decode the instruction, the decode circuitry comprising means for detecting the sequence start instruction and the sequence end instruction, and means for buffering the sequence of suboptimal instructions between them; and
execution circuitry to execute a dispatched instruction received from the decode circuitry;
the decode circuitry further configured to access a lookup table to identify a sequence of one or more optimized instructions to replace one or more of the sequence of suboptimal instructions, and to dispatch the decoded instruction or the sequence of one or more optimized instructions to the execution circuitry.

3. The processor of any of claims 1-2, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises vector instructions.

4. The processor of any of claims 1-2, wherein the sequence of suboptimal instructions comprises scalar instructions, and the sequence of one or more optimized instructions comprises single instruction, multiple data instructions.

5. The processor of any of claims 1-2, wherein metadata provided with the sequence start instruction describes an instruction set architecture associated with the suboptimal instructions, and wherein the sequence of one or more optimized instructions is associated with another instruction set architecture.

6. The processor of any of claims 1-2, wherein the sequence of one or more optimized instructions utilizes registers that are wider than registers utilized by the sequence of suboptimal instructions.

7. The processor of any of claims 1-2, wherein the decode circuitry is further configured to access the lookup table to identify a plurality of alternate instructions to replace the sequence of suboptimal instructions.

8. The processor of any of claims 1-2, wherein the first source operand identifier and the second source operand identifier of the sequence start instruction comprise metadata providing a description of the sequence of suboptimal instructions.

9. The processor of any of claims 1-2, wherein the sequence of suboptimal instructions is executed iteratively in a loop.

10. The processor of any of claims 1-2, wherein the first source operand identifier specifies a number of iterations over the sequence of suboptimal instructions, the second source operand identifier specifies a variable that is incremented during each iteration, and a third source operand identifier specifies a stride by which the variable is incremented during each iteration.

11. A method comprising:
fetching an instruction from an instruction storage, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier; wherein the instruction storage includes a sequence of suboptimal instructions, the sequence of suboptimal instructions preceded by a sequence start instruction and followed by a sequence end instruction;
decoding the instruction by decode circuitry; and
executing a dispatched instruction received from the decode circuitry;
wherein the decoding further comprises detecting the sequence start instruction and the sequence end instruction, buffering the sequence of suboptimal instructions between them, accessing a lookup table to identify a sequence of one or more optimized instructions to replace one or more of the sequence of suboptimal instructions, and selecting for execution the decoded instruction or the sequence of one or more optimized instructions.

12. A method comprising:
fetching an instruction from an instruction storage, a format of the instruction comprising an opcode, a first source operand identifier, and a second source operand identifier; wherein the instruction storage includes a sequence of suboptimal instructions, the sequence of suboptimal instructions preceded by a sequence start instruction and followed by
a sequence end instruction;Decoding the instructions by a decoding circuit, comprising the steps of: detecting a sequence start instruction and the sequence end instruction, and a step of buffering a sequence of the sub-optimal instructions therebetween;Executing an dispatched instruction received from the decoding circuit;Wherein the decoding further comprises accessing a lookup table to identify a sequence of one or more optimization instructions to replace one or more of the sequences of the suboptimal instructions, and selecting the decoded instructions to be executed or the one or A sequence of multiple optimization instructions.13.The method of any of claims 11-12, wherein the sequence of suboptimal instructions comprises a scalar instruction, and the sequence of the one or more optimized instructions comprises a vector instruction.14.The method of any of claims 11-12, wherein the sequence of suboptimal instructions comprises a scalar instruction, and the sequence of the one or more optimized instructions comprises a single instruction multiple data instruction.15.The method of any of claims 11-12, wherein the metadata provided with the sequence start instruction describes an instruction set architecture associated with the sub-optimal instruction, and wherein the one or more optimization instructions The sequence is associated with another instruction set architecture.16.The method of any of claims 11-12, wherein the sequence of one or more optimization instructions utilizes a register that is wider than a register utilized by the sequence of the suboptimal instructions.17.The method of any of claims 11-12, wherein the decoding circuit further accesses the lookup table to identify a plurality of alternate instructions in place of a sequence of the plurality of suboptimal instructions.18.The method of any of claims 11-12, wherein the first source operand identifier and the second source operand identifier of the sequence start instruction comprise providing a 
description of the suboptimal instruction The metadata of the sequence's data.19.The method of any of claims 11-12, wherein the sequence of suboptimal instructions loops over a sequence of execution iterations.20.The method of any of claims 11-12, wherein the first source operand identifier specifies a number of iterations over a sequence of the suboptimal instructions, the second source operand identifier designation The variable that is incremented during each iteration, and the third source operand identifier specify the span in which the variable is incremented during each iteration.21.An article of manufacture comprising a non-transitory machine readable storage medium storing instructions executable by a processor to perform the processes of:Obtaining an instruction from an instruction storage device, the format of the instruction comprising an operation code, a first source operand identifier, and a second source operand identifier; wherein the instruction storage device includes a sequence of suboptimal instructions, the suboptimal The sequence of instructions is preceded by a sequence start instruction, followed by a sequence end instruction;Decoding the instructions by a decoding circuit;Executing an dispatched instruction received from the decoding circuit;Wherein the decoding further comprises detecting the sequence start instruction and the sequence end instruction, buffering a sequence of the sub-optimal instructions between them, accessing a lookup table to identify a sequence of one or more optimization instructions instead of the sequence One or more of the sequences of suboptimal instructions, and selecting the decoded instructions to be executed or the sequence of the one or more optimized instructions.22.An article of manufacture comprising a non-transitory machine readable storage medium storing instructions executable by a processor to perform the processes of:Obtaining an instruction from an instruction storage device, the format of the 
instruction comprising an operation code, a first source operand identifier, and a second source operand identifier; wherein the instruction storage device includes a sequence of suboptimal instructions, the suboptimal The sequence of instructions is preceded by a sequence start instruction, followed by a sequence end instruction;Decoding the instructions by a decoding circuit, comprising the steps of: detecting a sequence start instruction and the sequence end instruction, and a step of buffering a sequence of the sub-optimal instructions therebetween;Executing an dispatched instruction received from the decoding circuit;Wherein the decoding further comprises accessing a lookup table to identify a sequence of one or more optimization instructions to replace one or more of the sequences of the suboptimal instructions, and selecting the decoded instructions to be executed or the one or A sequence of multiple optimization instructions.23.The article of any of claims 21-22, wherein the sequence of suboptimal instructions comprises a scalar instruction, and the sequence of the one or more optimized instructions comprises a vector instruction.24.The article of any of claims 21-22, wherein the sequence of suboptimal instructions comprises a scalar instruction, and the sequence of the one or more optimized instructions comprises a single instruction multiple data instruction.25.The article of any of claims 21-22, wherein the metadata provided with the sequence start instruction describes an instruction set architecture associated with the sub-optimal instruction, and wherein the one or more optimizations The sequence of instructions is associated with another instruction set architecture.
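The replacement behavior recited in the claims above can be illustrated with a small software model. The sketch below is purely hypothetical (the `SEQ_START`/`SEQ_END` markers, the instruction strings, and the lookup-table contents are invented for illustration; the claims describe hardware decode circuitry, not this code): a decoder buffers the instructions between the sequence-start and sequence-end markers, then substitutes an optimized sequence when the lookup table contains a match.

```python
# Hypothetical software model of the claimed decode-circuit behavior:
# buffer the instruction sequence between SEQ_START and SEQ_END, then
# consult a lookup table for an optimized replacement sequence.

def decode_stream(instructions, lookup_table):
    """Yield instructions to dispatch, replacing a buffered suboptimal
    sequence with an optimized one when the lookup table has a match."""
    buffering = False
    buffer = []
    for insn in instructions:
        if insn == "SEQ_START":
            buffering, buffer = True, []
        elif insn == "SEQ_END":
            buffering = False
            # Replace the whole buffered sequence if a match exists,
            # otherwise dispatch the original (decoded) instructions.
            yield from lookup_table.get(tuple(buffer), buffer)
        elif buffering:
            buffer.append(insn)
        else:
            yield insn

# Illustrative table: four scalar adds collapse to one wider vector add.
table = {("ADD r1", "ADD r2", "ADD r3", "ADD r4"): ["VADD zmm0"]}
stream = ["MOV", "SEQ_START", "ADD r1", "ADD r2", "ADD r3", "ADD r4",
          "SEQ_END", "RET"]
print(list(decode_stream(stream, table)))  # ['MOV', 'VADD zmm0', 'RET']
```

Instructions outside a marked sequence pass through unchanged, mirroring the claim language in which either the decoded instructions or the optimized sequence is dispatched to the execution circuit.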
Systems and methods may provide a set of cores capable of parallel execution of threads. Each of the cores may run code that is provided with a progress meter that calculates the amount of work remaining to be performed on threads as they run on their respective cores. The data may be collected continuously, and may be used to alter the frequency, speed or other operating characteristic of the cores as well as groups of cores. The progress meters may be annotated into existing code.
1. A method of controlling computing resources, comprising: globally synchronizing a plurality of tasks across a plurality of computing resources; calculating a workload for completing at least one task of the plurality of tasks; processing the plurality of tasks in parallel to complete the work corresponding to each of the plurality of tasks; iteratively calculating, relative to the workload for completing the at least one task, a work ratio corresponding to one or more of a proportion of work completed or a proportion of work remaining to be completed; calculating a skew of a plurality of measurement values obtained from the plurality of computing resources; and modifying a characteristic of at least one computing resource of the plurality of computing resources based on the work ratio and the skew.

2. The method of claim 1, wherein the plurality of computing resources include a plurality of cores, and wherein a frequency of at least one core of the plurality of cores is changed based on the work ratio.

3. The method of claim 1, wherein the plurality of computing resources include one or more of cores, processors, multi-core processors, nodes, cabinets, clusters, rows, or grids, and wherein at least some of the plurality of computing resources communicate with each other.

4. The method of claim 1, wherein the plurality of tasks includes a plurality of threads, and wherein the plurality of computing resources includes a plurality of cores.

5. The method of claim 1, further comprising: reporting the work ratio through one or more of an application or an application programming interface (API); and receiving an indication of the work ratio at a runtime monitor.

6. The method of claim 1, further comprising: modifying one or more of a number, a distribution, a speed, or a frequency of at least one of the plurality of computing resources.

7. The method of any one of claims 1-6, wherein the characteristic includes speed, and wherein the speed of at least one computing resource of the plurality of computing resources is modified by changing an amount of electrical power provided to the at least one computing resource.

8. The method of any one of claims 1-6, wherein the plurality of computing resources includes a plurality of nodes, and wherein the method further includes: calculating the skew of the plurality of measurement values obtained from the plurality of nodes; and modifying a speed of at least one node of the plurality of nodes based on a comparison of a characteristic of the at least one node with the skew.

9. The method of any one of claims 1-6, further comprising: synchronizing the plurality of tasks at a barrier, wherein each task of the plurality of tasks includes a waiting time at the barrier, and wherein the method further includes repeatedly modifying the characteristic to reduce the waiting time for at least one task.

10. A device for processing tasks, comprising: a plurality of computing resources to process a plurality of tasks in parallel, wherein the plurality of tasks are to be globally synchronized across the plurality of computing resources; progress meter logic, implemented at least partly in fixed-function hardware, to: calculate a workload for completing at least one task of the plurality of tasks; and iteratively calculate, relative to the workload for completing the at least one task, a work ratio corresponding to one or more of a proportion of work completed or a proportion of work remaining to be completed; skew calculator logic to calculate a skew of a plurality of measurement values obtained from the plurality of computing resources; and performance balancer logic, implemented at least partly in fixed-function hardware, to modify a characteristic of at least one of the plurality of computing resources based on the work ratio and the skew.

11. The device of claim 10, wherein the plurality of computing resources include a plurality of cores, and wherein the performance balancer logic is to change a frequency of at least one core of the plurality of cores based on the work ratio.

12. The device of claim 10, wherein the performance balancer logic is to change a speed of at least one computing resource of the plurality of computing resources by changing an amount of power provided to the at least one computing resource.

13. The device of any one of claims 10-12, wherein the performance balancer logic is to change the speed of at least two computing resources of the plurality of computing resources by directing power away from a relatively fast computing resource of the plurality of computing resources toward a relatively slow computing resource of the plurality of computing resources.

14. The device of any one of claims 10-12, wherein the computing resources include a plurality of cores, and wherein the performance balancer logic is to change a speed of at least one core of the plurality of cores by changing an amount of power provided to the at least one core.

15. The device of claim 10, further comprising runtime monitor logic, implemented at least partly in fixed-function hardware, to receive information indicating the work ratio from the progress meter logic.

16. The device of claim 10, wherein the plurality of computing resources include one or more of cores, processors, multi-core processors, nodes, cabinets, clusters, rows, or grids, and wherein at least a part of the plurality of computing resources have a communication channel therebetween.

17. The device of any one of claims 10-12, further comprising: a plurality of nodes; and skew calculator logic to calculate a skew of a plurality of measurement values obtained from the plurality of nodes, wherein the performance balancer logic is to change a speed of at least one of the nodes based on the skew.

18. The device of claim 10, wherein the performance balancer logic is to modify one or more of a number, a distribution, a speed, or a frequency of at least one of the plurality of computing resources.

19. At least one computer-readable storage medium comprising one or more instructions that, when executed on a computing device, cause the computing device to: globally synchronize a plurality of tasks across a plurality of computing resources; calculate a workload for completing at least one task of the plurality of tasks; iteratively calculate, relative to the workload for completing the at least one task, a work ratio corresponding to one or more of a proportion of work completed or a proportion of work remaining to be completed; calculate a skew of a plurality of measurement values obtained from the plurality of computing resources; and modify a characteristic of at least one computing resource of the plurality of computing resources based on the work ratio and the skew.

20. The at least one computer-readable storage medium of claim 19, wherein the plurality of computing resources include a plurality of cores, and wherein the instructions, when executed on a computing device, cause the computing device to modify a frequency of at least one of the plurality of cores to balance the utilization performance of the computing device.

21. The at least one computer-readable storage medium of claim 19, wherein the instructions, when executed on a computing device, cause the computing device to: calculate the work ratio; and receive information indicating the work ratio from a progress meter.

22. The at least one computer-readable storage medium of any one of claims 19-21, wherein the instructions, when executed, cause the computing device to change an operating characteristic of at least one of the plurality of computing resources.

23. The at least one computer-readable storage medium of claim 20, wherein the instructions, when executed, cause the computing device to change an amount of power provided to at least one of the plurality of cores.

24. The at least one computer-readable storage medium of any one of claims 19-21, wherein each of the plurality of tasks includes a waiting time at a barrier, and wherein the instructions, when executed, cause the computing device to repeatedly modify the characteristic to reduce the waiting time for at least one task.

25. A device for processing tasks, comprising means for performing the method according to any one of claims 1-9.
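Claims 1 and 10 above recite calculating a skew of measurement values obtained from the computing resources and modifying a resource characteristic based on the work ratio and the skew. A minimal illustrative sketch follows; the sample-skewness formula and the proportional rebalancing step are assumptions chosen for illustration, since the claims specify neither.

```python
# Illustrative sketch (not the claimed logic): compute the skewness of
# per-resource completion-time measurements, then nudge the frequency of
# resources that lag behind the mean.

def skewness(samples):
    """Sample skewness: E[(x - mean)^3] / stddev^3."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    if var == 0:
        return 0.0
    third = sum((x - mean) ** 3 for x in samples) / n
    return third / var ** 1.5

def rebalance(frequencies, runtimes, step=0.1):
    """Speed up cores whose runtime exceeds the mean; slow down the rest."""
    mean = sum(runtimes) / len(runtimes)
    return [f * (1 + step) if t > mean else f * (1 - step)
            for f, t in zip(frequencies, runtimes)]

runtimes = [1.0, 1.1, 0.9, 2.0]          # the last core lags
print(skewness(runtimes))                # strongly positive: one outlier
print(rebalance([2.0, 2.0, 2.0, 2.0], runtimes))
```

A positive skew signals that a few measurements are far above the rest, which is exactly the load-imbalance case the performance balancer is meant to correct.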
Progress Meter in Parallel Computing

Cross-Reference to Related Applications

This application claims priority to U.S. Non-Provisional Patent Application No. 14/583,254, filed on December 26, 2014.

Technical Field

Embodiments generally relate to a progress meter. More specifically, embodiments relate to a progress meter in parallel computing.

Background

Computer architecture has grown in complexity from architectures using a single processor to architectures using parallel processors. In addition, high-performance computing (HPC) can use groups of processors to process tasks according to various computing topologies and architectures. For example, an HPC application or job can be divided into various tasks, the tasks can be subdivided into groups of related subtasks (usually threads), and the groups of related subtasks can run in parallel on computing resources. In some architectures, related threads can be processed in parallel, and the completion of a task may require the completion of all related parallel threads constituting the task.

Computational efficiency can be enhanced by allowing parallel threads to complete, and/or to reach milestones (e.g., synchronization points, global synchronization barriers, or more simply barriers), before further processing. Generally, a separate thread can perform independent calculations before reaching the synchronization point, and threads can complete their work at different times. However, due to the variability of computational work between various tasks, differences in completion times may occur. Therefore, there may be a load imbalance among the computing resources used, with some threads waiting for other threads to complete.
Load imbalance may lead to inefficient performance and power utilization, because computing resources may sit idle while waiting for the remaining tasks to complete.

Brief Description of the Drawings

The various advantages of the embodiments will become apparent to those skilled in the art by reading the following description and appended claims, and by referencing the accompanying drawings, in which:

FIG. 1 is an illustration of an example of the variation produced in the parallel processing of a group of threads;

FIG. 2 is an illustration of an example of a timeline for processing a thread according to an embodiment;

FIG. 3 is a flowchart of an example of a method of using a progress meter according to an embodiment;

FIG. 4 is a flowchart of an example of a method of using a progress meter in the form of software according to an embodiment;

FIG. 5 is a block diagram of an example of a system using a progress meter according to an embodiment;

FIG. 6 is a flowchart of an example of a method of using a progress meter to change the performance of a core according to an embodiment;

FIGS. 7A and 7B are illustrations of examples of the variation produced in the parallel processing of groups of threads according to an embodiment; and

FIG. 8 is a block diagram of an example of a system using a progress meter at the node level according to an embodiment.

Detailed Description

Computing resources can be considered and/or grouped together at various levels, according to a variety of classifications. For example, at the atomic level there may be a single processor with a single core. Above the atomic level, there can be processors that include multiple cores. A node may refer to a single computer including at least one processor and a network connection, and/or multiple processors each including multiple cores. In one example, a node may include 16 multi-core processors. At a higher level, groups of nodes can be grouped together.
For example, two or more nodes may be arranged in a cabinet (e.g., a rack), and two or more cabinets may be arranged in a row of cabinets. In addition, groups of approximately 1,000 to 10,000 (or more) nodes can be connected together to form a single cluster, multiple clusters can be connected to other clusters, and groups of clusters can form a grid.

In HPC, the nodes that make up a single cluster and/or multiple clusters can be co-located in a common facility. Generally, the common facility can be served by a common power system. Clusters and/or nodes that are co-located in a common facility can be connected to each other through a relatively low-latency, high-bandwidth fabric, whereas communication between remote clusters and/or nodes can be implemented over a network with relatively higher latency and significantly lower bandwidth (e.g., the Internet). In addition, the HPC system can be homogeneous; for example, the hardware composing the nodes may be built to a common specification. Further, the nodes of the HPC system can share a common file system.

Each level (e.g., cores, processors, nodes, cabinets, clusters, grids, etc.) can be referred to as a computing resource. In parallel processing, multiple computing resources can be used in the solution of a problem. Although the discussion below may use cores for illustration, the embodiments presented herein may utilize various levels of computing resources, including processors, nodes, cabinets, clusters, grids, etc., or any combination thereof.

Generally, in HPC, an application can be referred to as a "job", and a job can include multiple tasks that can be decomposed into multiple individual subtasks, which may be referred to as "threads". In parallel computing, a task can be broken down into a group of related individual threads that can run in parallel with each other, where each thread can run on a separate core within a node.
The threads that collectively constitute a given task can run on cores or processors within a given node. When, for example, multiple processors within a node share the same coherent memory space, the threads of a given task can run across those processors. In addition, threads of more than one task may run on a given node, depending on, for example, the number of microprocessors and/or cores in the node, the workflow presented, etc. Other architectures allow further variation; in some variants, multiple threads can share a common core through various forms of multiplexing.

In parallel processing, the code to be processed in parallel can be decomposed into multiple separate instances (copies) of itself. In programming models that use a message passing interface (MPI) based on a communication library and runtime calls, such an instance can be referred to as a "rank".

A thread can be assigned a series of work, or simply "work". Generally, the first portion of work performed in a thread may need to be completed before the rest of the thread's work can begin. The work performed by a group of parallel threads within a task can be considered complete when all threads in the group have reached a common milestone in terms of the work completed by the group. Generally, it may not be desirable to start a new task until the processing of a previous task on which the new task depends has been completed. One way to enforce this is to provide a barrier to be reached by the separate parallel threads, where each parallel thread has completed a certain defined amount of its assigned work at the point represented by the barrier. In this regard, the threads are in a state of synchronization with each other.
Barriers can be scheduled in time (for example, occurring at a specific frequency), and/or can be event-based, occurring when a thread completes some amount of work that was calculated and allocated during initialization and/or when the previous barrier was reached. Providing barriers may be referred to as barrier synchronization, and the barriers themselves may be referred to as synchronization barriers, or simply "barriers".

Parallel processing can use a synchronization barrier as a global barrier, at which all relevant threads are suspended until each thread (for example, each on its corresponding core) has completed the work assigned to it. Again, depending on the architecture, the global barrier may be time-based and/or event-based.

Ideally, all threads would reach a given barrier (e.g., a global barrier) at the same time. In practice, even when the computing resources used appear to be identical (for example, when the cores have been designed to a common specification), and even when the problem has been broken down into parts that appear to be of equal size (for example, in a large sort where each node is given a fixed, equal portion of the data to sort), the threads constituting a task may still take different amounts of time to complete. There can be many reasons for this variation. Generally, causes may be characterized as "static" or "dynamic". In the static case, the cause is more or less constant over time, while in the dynamic case, some variability in operating characteristics occurs over time.

One source of static variability is the variability introduced when the hardware is manufactured.
Even if each processor is nominally identical to the others, the manufacturing process may permit certain variations in processor characteristics, such as frequency, speed, etc.

Examples of sources of dynamic variability include input/output (I/O) interrupts from the operating system (OS), which may slow down a processor. The wake-up time following an I/O call may also change over time, because the frequency and/or times at which a node may be interrupted by the OS can vary. Depending on the task, memory accesses made by tasks executing on the processor may require different amounts of service time. Additional sources of variability include jitter effects, such as the OS interrupting a core and/or processor to perform OS duties (such as updating the clock, running system software to support applications, etc.) differently for one thread than for the other threads. Another dynamic source of variability can come from recoverable hardware errors that occur differently from one node to another.

Yet another source of variability can come from the nature of the job being processed. For example, at the software level, or when allocating hardware (e.g., processors, nodes, etc.) to jobs and/or tasks, the work may not be evenly distributed among resources.

Regardless of the source of the variability, dealing with its consequences may require tasks to wait at global synchronization barriers (or simply "barriers") placed periodically for the cores that process a set of related threads.

Turning now to FIG. 1, an example of the waiting time that can occur between a first global synchronization barrier 12 and a subsequent global synchronization barrier 14 is shown. A series of threads T1, T2, T3, ... Tn (T1 to Tn) may correspond to a set of related subtasks of a task, and begin to be processed at the initial time t0 marked on the time scale 10.
The length of the bar representing each of the threads T1 to Tn corresponds to the duration over which the thread undergoes processing by the corresponding core and/or processor in its given node. For example, thread T1 may include an active processing period and/or run time 16, followed by a waiting time 18 during which the core of thread T1 waits for the other threads T2, T3, ..., Tn, being processed on other cores, to complete their allocated work and thus catch up with thread T1.

While the threads T1 to Tn undergo their corresponding periods of processing (run time 16) on their respective cores, completing the work assigned to them, the threads may each be referred to as active. It should be understood that the active periods associated with each of the n threads (i.e., the corresponding periods of run time 16) may vary relative to one another. In FIG. 1, thread T1 takes the least time to complete (e.g., end), and thread T3 takes the longest.

A global synchronization barrier 14 may be provided, at which further processing of threads on the cores is blocked (e.g., suspended) until the slowest of the threads has completed processing on its corresponding core. As discussed above, the synchronization barrier may be event-based and/or time-based. In addition, the interval between barriers can be fixed or varied, and multiple barriers can appear throughout the life of a thread. Variation in the run time 16 results in variation in the waiting time 18 of each core, during which some threads may be idle and/or their corresponding cores may not be processing threads. Cores may therefore sit idle during the waiting time, which wastes hardware resources.

The total waiting time can be reduced by reallocating computing resources (for example, among the multiple cores, processors, nodes, etc. working on a task).
In some embodiments, core-level waiting time can be reduced overall by speeding up slower cores while slowing down faster cores, allowing threads and/or cores to reach the global synchronization barrier with relatively little spread in time. In one embodiment, speed control of a core may include changing the operating frequency of the core, where the operating frequency determines, under given conditions and for a given metric, the speed at which the core processes a thread. The core frequency can scale with the amount of power provided to the core. In addition, the power can scale with the square of the voltage supplied to the core.

In one embodiment, scaling can be driven by obtaining information about the rate at which a thread is completing its work ahead of the thread's next global synchronization barrier, and using that information to influence the amount of power provided to the core, and thereby the speed of the core. Although scaling has been discussed with cores as the computing resource, similar approaches can be applied to aggregates of cores, processors, nodes, cabinets, clusters, grids, etc., to allow the aggregate to run relatively more efficiently in terms of power and/or time.

A series of progress meters can provide the information about the rate at which a thread is completing its work. In some embodiments, the progress meter can be provided as part of the code running on the core. The progress meter can calculate the amount of work the thread has to complete before the next global synchronization barrier, and can then recalculate the remaining workload at intervals (periodic or aperiodic) thereafter, until the next global synchronization barrier is reached.
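A progress meter of the kind just described can be sketched as a small class. This is a hypothetical illustration (the class, its method names, and the unit counts are all invented; the embodiments describe logic that may live in software or fixed-function hardware): initialized with the total work before the next barrier, it reports the completed and remaining work ratios as the thread advances.

```python
# Hypothetical sketch of a progress meter: track work completed against
# the total work computed at entry to a parallel code region, and report
# the work ratio on demand.

class ProgressMeter:
    def __init__(self, total_units):
        self.total = total_units   # total work before the next barrier
        self.done = 0

    def advance(self, units):
        """Record completed work, clamped to the total."""
        self.done = min(self.total, self.done + units)

    def completed_ratio(self):
        return self.done / self.total

    def remaining_ratio(self):
        return 1.0 - self.completed_ratio()

meter = ProgressMeter(total_units=400)   # work computed at region entry
for _ in range(3):
    meter.advance(100)                   # periodic progress updates
print(meter.completed_ratio())           # 0.75
print(meter.remaining_ratio())           # 0.25
```

In the scheme described above, the remaining ratio would be shared with a runtime monitor, which could then raise or lower the core's frequency so that all related threads converge on the barrier at roughly the same time.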
The information about thread progress can then be used to control the frequency (e.g., speed) of the cores and/or the allocation of computing resources.

FIG. 2 shows an example of an embodiment in which a progress meter is used to track the progress of a single thread executing on a single core. At time Ts1, a first global synchronization barrier 21 marks the start of processing, at which the thread and other related threads are globally synchronized across their corresponding cores. In one example, processing starts in a serial code region 22, where the thread is processed serially. At time 24, the thread reaches a parallel code region 28. At this point, the progress meter (which can be embedded in the parallel code) calculates the total work to be completed by the thread before it reaches the next global synchronization barrier. Although FIG. 2 depicts the serial code region 22 before the parallel code region 28, in other embodiments the serial code region 22 may follow the parallel code region 28 or be interleaved with it. Indeed, there can be multiple serial and parallel code regions between barriers.

At subsequent times 30, the progress meter calculates the percentage of the total work remaining and/or completed at that point in time (i.e., the "work ratio"), and shares the work ratio with other system assets discussed below. At time Ts2, a second synchronization barrier 31 may be provided, followed by a serial code region 32. As the thread enters the next parallel code region 38, a new calculation of the amount of work to be completed may occur at time 34 for further processing of the thread (for example, if the thread has not been fully completed or abandoned).
At multiple subsequent times 40, the percentage of the total work remaining and/or completed at a specific point in time (i.e., the "work ratio") can again be calculated, and the work ratio can be shared with other system assets discussed below. The thread then continues and reaches the next synchronization barrier 41 at time Ts3. The process is repeated for each thread in the thread group until the entire job represented by the thread group has been completed.

Turning now to FIG. 3, a flowchart of an example of a method 50 according to an embodiment is shown, in which a progress meter in the form of software can be used to track the completion of threads in a node. The method 50 can be implemented as a set of logic instructions stored in a machine-readable or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), or complex programmable logic devices (CPLDs), or in fixed-functionality hardware using circuit technology such as, for example, application-specific integrated circuit (ASIC), CMOS, or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code for performing the operations shown in the method 50 may be written in any combination of one or more programming languages, including an object-oriented programming language such as C++ and conventional procedural programming languages such as the "C" programming language or similar. Moreover, the method 50 can be implemented using any of the circuit technologies mentioned herein.

The job can begin at block 52.
At illustrated processing block 54, the core and its companion thread may be globally synchronized with respect to other related threads and cores, thereby giving these threads a common start time. After executing any serial code that may be present, the thread encounters a parallel code region at illustrated processing block 56. At processing block 58, the progress meter calculates the amount of work to be processed before the thread encounters the barrier. At illustrated processing block 60, the code can be executed for a certain period of time, and at the end of that period illustrated processing block 62 calculates how much work remains to be completed on the thread (expressed as an absolute value or as a proportion, e.g., a percentage). Information about the remaining work is shared with a monitor application programming interface (API) at illustrated processing block 64. Block 65 determines whether the thread has completed (i.e., whether all the work to be completed in the thread has been completed). If the work has not been completed, control returns to processing block 60, where additional processing is performed. If block 65 determines that the work has been completed, then illustrated processing block 66 determines whether the entire job has been completed. If so, the process ends at block 68. On the other hand, if there are additional threads for the core to process, control returns to processing block 54 for another synchronization.

The progress meter makes possible multiple evaluations of the work remaining in a thread, and therefore provides information that can change the workflow in a way that uses resources (including time and computing resources) relatively more efficiently. The job can be completed relatively earlier than with traditional methods.

The progress meter can be implemented in software. In an embodiment, the implementation may be a software probe that can be inserted into existing code.
Such a probe may take the form of a call statement that, when first encountered, calculates the work to be completed in the processing of the thread, and that on subsequent encounters calculates the proportion of work remaining in the thread.

Fig. 4 shows an example 70 of an embodiment of a software implementation of a progress meter, showing annotation of pre-existing code with a progress meter. In example 70, the pre-existing code starting at block 72 is a simple loop. At illustrated processing block 74, the software may pass a parameter indicating that the task is to be executed J times. A variable K can be used as a counter to track the number of passes through the code, and is initialized to the integer 1 at illustrated processing block 76. The code can be executed at illustrated processing block 78, and the variable K can be incremented at illustrated processing block 80. Block 82 determines whether K=J. If K is not equal to J, then control loops back to processing block 78. If block 82 determines that K=J, the code can finish running at illustrated processing block 84.

A progress meter 86 may be provided in the form of an API that can be inserted into the existing code or run in parallel with it, as shown in FIG. 4. The progress meter 86 can be passed the value of J, and the progress meter can track the number of loops that have passed through the code and/or have not yet passed through the code. Access to the code to be executed, as well as to the number of iterations already performed through the code (e.g., K) and the number of iterations to be performed (e.g., J), can provide a measure of progress at the granularity of each iteration of the loop. For example, if J=10, then when K=1 it can be determined that 10% of the work on the thread has been completed. In another example, when K=8, it can be determined that 80% of the work has been completed.
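The Figure 4 annotation can be sketched in code. This is an illustrative rendering of the loop described above, not the figure itself; the function name `progress_meter` and the placeholder loop body are assumptions.

```python
# Sketch of the Figure 4 example: pre-existing code that runs a task
# J times, annotated with a progress meter that is passed J and reads
# the iteration counter K to report per-iteration progress.

def progress_meter(k, j):
    # Percentage of the loop's work completed after iteration k of j.
    return 100.0 * k / j

J = 10
reports = []
for K in range(1, J + 1):
    pass                          # execute the body of the pre-existing code
    reports.append(progress_meter(K, J))

print(reports[0])   # after K=1, 10.0% of the work is done
print(reports[7])   # after K=8, 80.0% of the work is done
```

Each reported percentage is what would be handed to the runtime monitor API to influence the processing of the thread.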
Alternatively, these numbers can be expressed as a percentage of work still to be completed (e.g., in the first example, 90% of the work is still to be completed, and in the second example, 20% of the work is still to be completed). The progress meter 86 may pass a number indicating the amount of work completed and/or to be completed to the runtime monitor API discussed below to affect the processing of the thread.

In other embodiments, the progress meter may automatically determine the total work and/or percentage of work completed through dynamic code analysis and/or analysis of processor performance counters. In that case, the application need not pass any other information to the progress meter.

The progress meter can calculate the work and/or the percentage of work on a time basis (i.e., a certain number of updates per unit time, or at a given frequency), or it can be event-based (e.g., calculating on each pass through a loop regardless of elapsed time, as in the example of Figure 4 discussed above). In one embodiment, the progress meter may be updated approximately every 10 microseconds. Faster updates can be adopted. If updates are calculated relatively frequently, and the progress meter is inserted into the application code serially (rather than in parallel with it), then overhead and/or the impact on application performance may need to be balanced and/or considered.

Turning now to FIG. 5, a block diagram of an example of a system using multiple progress meters according to an embodiment is shown. In one example, the computing resources may include cores. For example, a core group may be provided, including a first core 87 through an Nth core 88. The cores 87, ..., 88 can each run threads 90-1, ..., 90-N that can be instances of parallel code, and the parallel code can be the same from core to core. Each core 87, ..., 88 can be provided with a progress meter 92.
In one example, the progress meter 92 of each of the cores 87, ..., 88 may notify a runtime monitor 94 (which may itself be an API) of the progress on the thread through an explicit function call. Alternatively, the progress meter 92 of each of the cores 87, ..., 88 may update a progress value that can be queried by the runtime monitor 94. The runtime monitor 94 may be part of the OS, a standalone program, or part of a relatively comprehensive performance/power optimization framework that combines multiple optimization techniques.

At the first global synchronization point, the progress meter 92 of each core 87, ..., 88 reports the total amount and/or percentage of work to be completed, from start to finish, for a given thread. Then, at subsequent intervals, the progress meter 92 of each of the cores 87, ..., 88 reports the proportion of the work remaining (and/or the proportion of the work that has been completed). The runtime monitor 94 forwards the work ratios to a performance balancer 96, which can use the information provided by the progress meters 92 to modify the frequency of each core 87, ..., 88 and/or otherwise affect the allocation of resources applied at the core level.

The information provided by the respective progress meters 92 of the cores 87, ..., 88 can be used in a variety of ways. In the event that a thread progresses through a given core more slowly than other threads progress through their corresponding cores, the slower core can be accelerated and/or the faster cores decelerated by changing the corresponding frequencies of the cores. One way to effect this control is to redistribute power from the faster cores to the slower cores.
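The monitor-and-forward flow described above can be sketched briefly. This is an illustrative sketch only; the dictionary of per-core work ratios and the function name `runtime_monitor` are assumptions, not APIs from the source.

```python
# Sketch of the runtime monitor's role: collect the latest work ratio
# from each core's progress meter and order the cores so a performance
# balancer can identify which are lagging and which are ahead.

def runtime_monitor(progress_by_core):
    # progress_by_core: {core_id: fraction of work completed}
    # Returns (core_id, ratio) pairs ordered slowest-first.
    return sorted(progress_by_core.items(), key=lambda kv: kv[1])

progress = {"core0": 0.90, "core1": 0.40, "core2": 0.65}
ordered = runtime_monitor(progress)
slowest = ordered[0][0]    # candidate to receive extra power
fastest = ordered[-1][0]   # candidate to give up power
print(slowest, fastest)    # core1 core0
```

A performance balancer would consume this ordering to shift power from the fastest core toward the slowest.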
Similarly, adjustments to the power provided to a core, or other adjustments to the core that affect its operating frequency, can also modify the speed of its corresponding node and of node aggregates as a whole.

Therefore, the core (and/or processor) frequency can be varied within a range by changing the amount of power fed to the core (and/or processor). In situations where power resources may be limited, faster thread processing time can be obtained by transferring power from cores that are faster than the average of the cores in use to cores that are slower than the average. In some cases, it may be advantageous to redirect power from cores that are slower than average to other cores that are even slower. The progress meter provides data that can be used to periodically adjust the power to the cores, thereby relatively reducing the waiting time at the synchronization point. In some embodiments, power transfer can also reduce the power consumed in processing a given job.

FIG. 6 shows a flowchart of an example of a method 100 for controlling power flow between cores within a node using information provided by multiple progress meters. The method 100 can be implemented as a set of logic instructions stored in a machine-readable or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), or complex programmable logic devices (CPLDs), or in fixed-functionality hardware using circuit technology such as, for example, application-specific integrated circuit (ASIC), CMOS, or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code for performing the operations shown in the method 100 may be written in any combination of one or more programming languages.
The programming languages may include an object-oriented programming language such as C++ and conventional procedural programming languages such as the "C" programming language or similar. Moreover, the method 100 can be implemented using any of the circuit technologies mentioned herein.

Illustrated processing block 102 may collect data from the progress meters regarding the amount of work remaining to be performed on the relevant threads at their corresponding cores. Data can be stored in vector or matrix form. It may be desirable to increase the amount of data collected. Accordingly, block 104 determines whether sufficient data has been collected. If not, control returns to processing block 102. If so, illustrated processing block 106 calculates metrics from the numbers provided by the progress meters across the cores. One useful metric is the skew of the collected samples, where skew can refer to the variance of the core progress values (determined from the samples) divided by their mean.

When the skew is within a certain limit, the cores' work can be deemed effective in terms of the time and/or power resources used. Accordingly, block 108 determines whether the skew is within the limit. If so, control loops back to processing block 102 for another round of data collection. If the skew lies outside the set limit, the median of the cores' samples can be calculated at illustrated processing block 110, and the cores can be sorted with respect to the median at illustrated processing block 112 (for example, from high to low).

Illustrated processing block 114 arranges the cores in pairs, starting with the fastest core paired with the slowest core, followed by the second-fastest core paired with the second-slowest core, and so on in round-robin fashion until all cores, and/or all cores lying outside some predetermined band, have been considered.
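The skew metric and the fastest-with-slowest pairing described for blocks 106-114 can be sketched as follows. This is an illustrative sketch under the definitions given above (skew = variance of core progress divided by mean); the sample values and function names are assumptions.

```python
# Sketch of method 100's core-balancing step: compute the skew
# (variance / mean) of sampled per-core progress values, and pair the
# fastest core with the slowest, the second-fastest with the
# second-slowest, and so on.

def skew(samples):
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return var / mean

def pair_cores(progress_by_core):
    # Sort cores from fastest (most progress) to slowest, then pair
    # opposite ends of the ordering.
    ordered = sorted(progress_by_core, key=progress_by_core.get, reverse=True)
    return [(ordered[i], ordered[-1 - i]) for i in range(len(ordered) // 2)]

progress = {"c0": 0.9, "c1": 0.3, "c2": 0.7, "c3": 0.5}
print(skew(list(progress.values())))   # variance 0.05 / mean 0.6
print(pair_cores(progress))            # [('c0', 'c1'), ('c2', 'c3')]
# For each (fast, slow) pair, block 116 would shift power from the
# fast core to the slow one.
```

If the computed skew were within the configured limit, no pairing would occur and sampling would simply continue.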
Illustrated processing block 116 directs power from the faster of the two cores in each pair to the slower of the two. Such power transfer can be accomplished by reducing the operating frequency of the faster core of a pair to slow it down, and/or by increasing the operating frequency of the slower core of a pair to speed it up.

Advantageously, the overall speed of completing parallel processing jobs can be relatively increased. In addition, the total amount of power required to complete the job can be relatively reduced. Furthermore, facilities that house HPC systems may generally require a large amount of air cooling to deal with the heat generated at the cores of the HPC system. Therefore, relatively reducing the power consumed by the cores may result in less heat being generated at the cores, which may allow less intensive use of the air conditioning system in the HPC facility, saving further power.

In an alternative embodiment, processing block 114 may be omitted, and the slowest cores may be boosted by, for example, directing the slowest cores to receive more power at illustrated processing block 116, possibly accompanied by a reduction in the amount of power to the faster cores.

The processing blocks can be implemented in various combinations of the hardware and/or software elements indicated above. Thus, in one embodiment, processing block 106 may be implemented in hardware and/or software and may include a skew calculator for calculating skew. It should be understood that other implementations of the method are possible.

Turning now to FIGS. 7A and 7B, several effects of controlling computing resources (such as cores) using the data provided by progress meters are shown according to an embodiment.
In one example, the core frequency can be changed (e.g., by changing the power provided). FIG. 7A is similar to FIG. 1 discussed above, and shows, along a timeline 120, the time interval between the initialization 122 of the thread group T1, T2, T3, ..., Tn at time t0 and the time tb at which a subsequent synchronization barrier 124 may be encountered.

Each of the threads T1, T2, T3, ..., Tn may have a corresponding active running time 126 during which work occurs, and may have a corresponding waiting time 128 once the work on the thread has been completed, during which the thread and/or the core running the thread waits for the other threads to complete their work on their corresponding cores. In the illustrated example, the waiting times of the threads T1, T2, T3, ..., Tn are denoted WT1, WT2, WT3, ..., WTn, respectively. Some waiting times can be zero, and in general some waiting times can be longer than others. The sum of the waiting times can be given as:

W_Total = WT1 + WT2 + WT3 + ... + WTn

Figure 7B shows a situation in which one of the embodiments discussed herein is used to change the frequencies of individual cores, accelerating those cores that are relatively slow and/or decelerating those cores that are relatively fast. It shows, along a timeline 130, the time interval between the initialization 132 of the thread group T'1, T'2, T'3, ..., T'n at time t'0 and the time t'b at which a subsequent synchronization barrier 134 may be encountered. Each of the threads T'1, T'2, T'3, ..., T'n may have a corresponding active running time 136 during which work occurs, and may have a waiting time 138 once the work on the thread has been completed, during which the thread and/or the core running the thread waits for the other threads to complete their work on their corresponding cores. In the illustrated example, the waiting times of the threads T'1, T'2, T'3, ..., T'n are denoted WT'1, WT'2, WT'3, ..., WT'n, respectively.
Some waiting times can be zero, and in general some waiting times can be longer than others. The sum of the waiting times can be given as:

W'_Total = WT'1 + WT'2 + WT'3 + ... + WT'n

It may be noted that the effect of using the progress meters may be to allow the synchronization barrier 134 to be encountered sooner than in the situation depicted in FIG. 7A. For example:

(tb - t0) > (t'b - t'0)

In addition, when using the data provided by the progress meters, the total waiting time can be relatively reduced:

W_Total > W'_Total

The reduction in waiting time may allow the interval between global barriers to be shortened, and computing resources may be used relatively more efficiently in terms of the time and/or power used to complete the job.

Although the examples of the embodiments presented here use the core as the basic unit of computing resource, the embodiments can also be applied to other levels of computing resources, including processors, multi-core processors, racks of nodes, cabinets, clusters, grids, etc. Embodiments at levels above the core (e.g., nodes) may include aggregating data from the cores of related threads running on a given node.

FIG. 8 shows a block diagram of an example of a system using progress meters at the node level (i.e., the computing resource is a node). A node group can be provided, including a first node at 186 through an Nth node at 188. Each of the nodes 186, ..., 188 can run one or more tasks 190 (which can be instances of parallel code), and the tasks can be the same for a group of related tasks running within a given node. As mentioned earlier, each task can include multiple related threads, and each thread can run on a single core. Each node may include multiple cores on which multiple threads are being processed, and each core may be provided with a progress meter 192 that can report at various times to a runtime monitor 194 (which may be an API).
Thus, embodiments may include aggregates of cores, such as nodes.

At the node level, the progress meter 192 of each node 186, ..., 188 may provide aggregate-based statistical metrics for the different threads and/or tasks being performed in the respective node 186, ..., 188. For example, the progress meter 192 of each of the nodes 186, ..., 188 may report the average work completed and/or to be completed across the cores in a given node. In another example, the progress meter 192 of each of the nodes 186, ..., 188 may report a value indicating the least amount of completed work on any one of the cores in the node.

Other statistical measures of core performance within a given node (e.g., median, variance, standard deviation, skew, etc.) can also be reported. Subsequently, at time-based and/or event-based intervals, the progress meter 192 of each node 186, ..., 188 can continue to report statistics derived from the work completed and/or remaining across the computing resources (e.g., cores) allocated to each corresponding node 186, ..., 188. The runtime monitor 194 forwards the information to a performance balancer 196, which can use the information provided by the respective progress meters 192 of the nodes 186, ..., 188 to modify the allocation of resources applied to the nodes. In addition, the performance balancer can aggregate the per-thread progress meter information provided for individual threads to determine overall node progress.

Node power adjustments that can be used to change node speeds can be achieved through various mechanisms.
For example, the processor may be equipped with a power limit and a monitoring interface exposed by the software, and the runtime system can configure it to adjust the processor power.At a still higher level, it is desirable to track the progress of individual cabinets, clusters, and/or grids, and basic information about work progress may continue to be based on the per-thread data provided by the progress meter at the core level as discussed above. As you move to higher-level computing resources, the progress meter data can be progressively aggregated level by level. For example, when estimating the speed of a node, the slowest thread on any core within a given node can be considered, and it can be used as a proxy for the speed of the node. Similarly, when considering the progress of an aggregation of nodes (for example, in a cluster), the node data can be further aggregated by treating the slowest node in the cluster as a proxy for the speed of this cluster. The speed of the slower computing resources (nodes, clusters, etc.) can then be modified by accelerating the slower executing computing resources and possibly at the same time decelerating the faster executing computing resources. One way to affect speed can be by providing more power to slower resources.In other embodiments, the processing time of the relatively slower threads can be reduced by providing additional resources to the relatively slower threads, for example, by further dividing the work of the threads and then allocating the divided threads to additional cores.The embodiments disclosed herein can alleviate the problem of load imbalance and provide a way to speed up tasks that may take longer in other situations, while allowing tasks that may be completed faster in other situations to be more power efficient. run. It should be noted that slow-running tasks can be accelerated by acquiring additional resources. The resource may include additional electric power provided to the processing core. 
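The level-by-level aggregation described above (slowest thread as node proxy, slowest node as cluster proxy) can be sketched as follows. The nesting, values, and function names are illustrative assumptions.

```python
# Sketch of hierarchical aggregation of progress data: a node's progress
# meter reduces per-core work ratios to statistics, with the slowest core
# standing in for the node, and the slowest node standing in for the
# cluster.

def node_stats(core_ratios):
    return {
        "mean": sum(core_ratios) / len(core_ratios),
        "proxy": min(core_ratios),   # slowest core stands in for the node
    }

def cluster_proxy(nodes):
    # Slowest node (by its own proxy value) stands in for the cluster.
    return min(node_stats(n)["proxy"] for n in nodes)

# Three nodes, each with two per-core work ratios.
cluster = [[0.875, 0.75], [0.75, 0.5], [1.0, 0.875]]
print(node_stats(cluster[0]))   # {'mean': 0.8125, 'proxy': 0.75}
print(cluster_proxy(cluster))   # 0.5 -> the second node is the laggard
```

A performance balancer would direct additional power toward whichever node or cluster this proxy identifies as lagging.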
Such an approach can use task completion metrics. In various embodiments, a metric can be provided by supplying the progress meter as an annotation to a parallel computing region, indicating the proportion of work performed by a particular thread between synchronization points.

In addition, load balancing can be provided in cases where the computing work may not, in general, be evenly balanced between parallel tasks and subtasks (threads). This situation may occur when the available computing resources are not evenly distributed, or when the problem size is tied to powers of two or perfect cubes while the number of cores can be arbitrary. For irregular problems (graphs, adaptive meshes), an optimal work balance may be difficult to achieve, and the work at hand may not divide evenly across the physical resources at hand. Various embodiments may provide dynamic balancing between tasks and threads.

The progress of each task can be expressed in units specific to a particular application. For example, in a loop-based computing region, such as that depicted in FIG. 4 discussed above and as may commonly appear in an HPC application, progress may be expressed as the proportion of loop iterations performed between synchronizations. The practical advantages of using workload-specific metrics to track application progress include an objective representation of completed work that does not depend on code generation or runtime conditions.

Using observable metrics (for example, counts of instructions and/or specific operations) as a proxy for application progress may need to account for a compiler that generates two or more versions (vector and scalar, parallel and serial) of the same code region, one of which is selected dynamically at runtime based on certain conditions. Different runtime choices may distort application progress monitoring based on instruction or operation counts.
Using workload-specific progress metrics can provide better global consistency across multiple nodes.

In some embodiments, a runtime monitor program can be used to track the progress of parallel tasks and identify which tasks fall behind the slowest value across all tasks in the group. The runtime monitor can then apply additional resources to the lagging tasks to balance task progress. Additional resources may include an increased power budget for specific tasks, which may allow the corresponding CPU cores to run at higher frequencies, thereby accelerating progress. In the case of applications with multiple levels of parallelization, such as mixed Message Passing Interface (MPI)/Open Multi-Processing (OpenMP) applications, the monitor program can dynamically increase the number of OpenMP threads used at slow MPI ranks. Similarly, tasks in a parallel workload whose progress exceeds that of other tasks can be slowed down by reducing their power allocation and/or the number of other resources (such as CPU cores) they use, thereby relatively improving operating efficiency without affecting run time or performance.

In cases where processor speed is effectively uniform within a given processor type, individual processors can be assigned different amounts of power as defaults, where the amount of allocated power is less than the amount of power that would be used to run the processor at full speed. For example, two processors that may be nearly identical may be responsible for work that requires different amounts of power. Two processors may require different voltages to achieve correct operation at a given speed, and the allocated power may be sufficient for one processor to attain its required voltage while the other does not.
Various embodiments may be used with such processors to further vary performance in a manner that relatively increases the speed of such processors and/or the efficiency of parallel processing applications.

The embodiments presented herein can be used in customer code and in supplier-provided libraries that are used across multiple applications. In situations where annotating the entirety of the code with progress meters may be impractical, applying the technique partially, to the most commonly used regions of the code, can still produce beneficial results.

To the extent that various operations or functions are described herein, they can be described or defined as hardware circuitry, software code, instructions, configuration, and/or data. The content can be implemented in hardware logic, or as directly executable software ("object" or "executable" form), source code, high-level shader code designed for execution on a graphics engine, or low-level assembly language code in an instruction set for a specific processor or graphics core. The software content of the embodiments described herein may be provided via an article of manufacture on which the content is stored, or via a method of operating a communication interface to send data via the communication interface.

A non-transitory machine-readable storage medium can cause a machine to perform the described functions or operations, and includes any mechanism for storing information in a form accessible to the machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (for example, read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces with any of hard-wired, wireless, optical, or other media to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc.
The communication interface may be configured by providing configuration parameters or by sending a signal that prepares the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.

The described components may be means for performing the described operations or functions. Each component described herein includes software, hardware, or a combination thereof. The components can be implemented as software modules, hardware modules, special-purpose hardware (for example, application-specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hard-wired circuitry, and so on. In addition to what is described herein, various modifications can be made to the disclosed embodiments and implementations without departing from their scope. Therefore, the descriptions and examples herein should be interpreted in an illustrative rather than a restrictive sense. The scope of the present invention should be measured solely by reference to the following claims.

Additional Notes and Examples:

Example 1 may include a method of controlling computing resources, the method comprising: globally synchronizing multiple tasks across multiple computing resources; calculating the workload for completing at least one of the multiple tasks; processing the multiple tasks in parallel to complete the work corresponding to each of the multiple tasks; iteratively calculating, with respect to the workload for completing the at least one of the multiple tasks, a work ratio corresponding to one or more of the proportion of completed work or the proportion of remaining work to be completed; and modifying a characteristic of at least one of the plurality of computing resources based on the work ratio.

Example 2 may include the method of Example 1, wherein the plurality of computing resources include a plurality of cores, and wherein the frequency of at least one core of the plurality of cores is changed based on the work ratio.

Example 3 may include the method of any one of Examples 1 and 2, wherein the plurality of computing resources include a plurality of processors, and wherein, based on the work ratio, the frequency of at least one core of the plurality of processors is changed.

Example 4 may include the method of any one of Examples 1 to 3, wherein the plurality of computing resources include a plurality of nodes, and wherein at least two nodes of the plurality of nodes are used to process tasks in parallel.

Example 5 may include the method of any one of Examples 1 to 4, wherein the plurality of tasks include a plurality of threads, and wherein the plurality of computing resources include a plurality of cores.

Example 6 may include the method of any one of Examples 1 to 5, further comprising receiving an indication of the work ratio at a runtime monitor.

Example 7 may include the method of any one of Examples 1 to 6, the method further comprising modifying one or more of the number, distribution, speed, or frequency of at least one of the plurality of computing resources.
Example 8 may include the method of any one of Examples 1 to 7, wherein the characteristic includes speed, and wherein the speed of at least one of the plurality of computing resources is modified by changing the amount of electric power provided to the at least one computing resource.

Example 9 may include the method of any one of Examples 1 to 8, wherein the plurality of computing resources include one or more of a core, a processor, a multi-core processor, a node, a cabinet, a cluster, a rack, or a grid.

Example 10 may include the method of any one of Examples 1 to 9, wherein the plurality of computing resources include a first computing resource and at least one second set of computing resources, wherein each of the second computing resources has a performance metric, wherein the minimum of the performance metrics of the second computing resources is used as the performance metric of the second set of computing resources, and wherein the second set of computing resources is a subset of the first computing resource, the performance of the first computing resource being the performance metric of the second set of computing resources.

Example 11 may include the method of any one of Examples 1 to 10, further comprising reporting the work ratio through one or more of an application or an application programming interface (API).

Example 12 may include the method of any one of Examples 1 to 11, wherein at least a portion of the plurality of computing resources communicate with each other.

Example 13 may include the method of any one of Examples 1 to 12, wherein the plurality of computing resources include a plurality of core groups, and wherein the method further includes measuring a metric of each of the core groups and modifying the speed of at least one core group of the core groups based on the metric.

Example 14 may include the method of any one of Examples 1 to 13, wherein the operating characteristic is speed, and wherein the method further includes increasing the amount of power
provided to the first core group. The speed of the first core group is thereby increased, and the speed of the second core group is reduced by reducing the amount of power provided to the second core group.

Example 15 may include the method of any one of Examples 1 to 14, the method further comprising synchronizing the multiple tasks at a barrier.

Example 16 may include the method of any one of Examples 1 to 15, wherein each task in the plurality of tasks includes a waiting time at the barrier, and wherein the method further includes repeatedly modifying the characteristic to reduce the waiting time of at least one task.

Example 17 may include the method of any one of Examples 1 to 16, wherein each core group is a node, and wherein the method further includes calculating the skew of a plurality of measurement values of operating characteristics of a plurality of nodes, and modifying the speed of at least one node based on the skew.

Example 18 may include a device for processing tasks, the device including: multiple computing resources to process multiple tasks in parallel, wherein the multiple tasks are globally synchronized across the multiple computing resources; schedule logic, at least partially implemented in fixed-function hardware, to calculate the workload for completing at least one of the multiple tasks and, relative to the workload for completing the at least one task, to iteratively calculate a work ratio corresponding to one or more of the proportion of completed work or the proportion of remaining work to be completed; and performance balancer logic, at least partly implemented in fixed-function hardware, to modify a characteristic of at least one of the plurality of computing resources based on the work ratio.

Example 19 may include the device of Example 18, wherein the plurality of computing resources include a plurality of cores, and
wherein the performance balancer logic is configured to change, based on the work ratio, the frequency of at least one core of the plurality of cores.

Example 20 may include the device of any one of Examples 18 and 19, the device further comprising runtime monitor logic, the runtime monitor logic being at least partially implemented in fixed-function hardware, to receive, from the schedule meter logic, information indicating the work ratio.

Example 21 may include the device of any one of Examples 18 to 20, wherein the performance balancer logic is configured to change the speed of at least one of the plurality of computing resources by changing the amount of power provided to the at least one computing resource.

Example 22 may include the device of any one of Examples 18 to 21, wherein the performance balancer logic is configured to change the speed of at least two computing resources of the plurality of computing resources by shifting power from a relatively faster computing resource of the plurality of computing resources toward a relatively slower computing resource of the plurality of computing resources.

Example 23 may include the device of any one of Examples 18 to 22, wherein the computing resources include a plurality of cores, and wherein the performance balancer logic is used to change the frequency of at least one of the plurality of cores by changing the amount of power provided to the at least one core.

Example 24 may include the device of any one of Examples 18 to 23, wherein the plurality of computing resources include one or more of a core, a processor, a multi-core processor, a node, a cabinet, a cluster, a row, or a grid, and wherein at least a part of the plurality of computing resources have a communication channel therebetween.

Example 25 may include the device of any one of Examples 18 to 24, the device further comprising a plurality of nodes and skew calculator logic, the skew calculator logic being used to calculate the skew of a plurality of measured values obtained from the plurality of nodes, wherein the performance balancer logic is used to modify the speed of at least one of the nodes based on the skew.

Example 26 may include the device of any one of Examples 18 to 25, wherein the performance balancer logic is used to modify one or more of the number, distribution, speed, or frequency of at least one of the plurality of computing resources.
Example 27 may include at least one computer-readable storage medium including one or more instructions that, when executed on a computing device, cause the computing device to: globally synchronize multiple tasks across multiple computing resources; calculate the workload for completing at least one of the multiple tasks; relative to the workload for completing the at least one task of the multiple tasks, iteratively calculate a work ratio corresponding to one or more of the proportion of completed work or the proportion of remaining work to be completed; and modify a characteristic of at least one of the plurality of computing resources based on the work ratio.

Example 28 may include the at least one computer-readable storage medium of Example 27, wherein the plurality of computing resources include a plurality of cores, and wherein the instructions, when executed on the computing device, cause the performance balancer to change the frequency of at least one of the plurality of cores.

Example 29 may include the at least one computer-readable storage medium of any one of Examples 27 and 28, wherein the instructions, when executed on a computing device, cause the computing device to: calculate the work ratio; and receive information indicating the work ratio from the progress meter.

Example 30 may include the at least one computer-readable storage medium of any one of Examples 27 to 29, wherein the instructions, when executed, cause the computing device to change an operational characteristic of at least one of the plurality of computing resources.

Example 31 may include the at least one computer-readable storage medium of any one of Examples 27 to 30, wherein the instructions, when executed, cause the computing device to change the amount of power provided to at least one core of the plurality of cores.
Example 32 may include the at least one computer-readable storage medium of any one of Examples 27 to 31, wherein the instructions, when executed, cause the computing device to allow the multiple tasks to be synchronized at the barrier.

Example 33 may include the at least one computer-readable storage medium of any one of Examples 27 to 32, wherein each task of the plurality of tasks includes a waiting time at the barrier, and wherein the instructions, when executed, cause the computing device to repeatedly modify the characteristic to reduce the waiting time of at least one task.

Example 34 may include a device for controlling computing resources, the device including: a device for globally synchronizing multiple tasks across multiple computing resources; a device for calculating the workload for completing at least one task of the multiple tasks; a device for processing the multiple tasks in parallel to complete the work corresponding to each of the multiple tasks; a device for iteratively calculating, relative to the workload for completing the at least one task, a work ratio corresponding to one or more of the proportion of completed work or the proportion of remaining work to be completed; and a device for modifying a characteristic of at least one of the plurality of computing resources based on the work ratio.

Example 35 may include the method of Example 34, wherein the plurality of computing resources include a plurality of cores, and wherein the frequency of at least one core of the plurality of cores is changed based on the work ratio.

Example 36 may include the method of any one of Examples 34 and 35, wherein the plurality of computing resources include a plurality of processors, and wherein the frequency of at least one core of the plurality of processors is changed based on the work ratio.

Example 37 may include the device of any one of Examples 34 to 36, wherein the plurality of computing resources include a plurality of
nodes, and wherein at least two nodes of the plurality of nodes process tasks in parallel.

Example 38 may include the method of any one of Examples 34 to 37, wherein the plurality of tasks include a plurality of threads, and wherein the plurality of computing resources include a plurality of cores.

Example 39 may include the apparatus of any one of Examples 34 to 38, the apparatus further comprising a device for receiving an indication of the work ratio at a runtime monitor.

Example 40 may include the device of any one of Examples 34 to 39, the device further comprising a device for modifying one or more of the number, distribution, speed, or frequency of at least one of the plurality of computing resources.

Example 41 may include the device of any one of Examples 34 to 40, wherein the characteristic includes speed, and wherein the speed of at least one of the plurality of computing resources is changed by changing the amount of electrical power provided to the at least one computing resource.

Example 42 may include the method of any one of Examples 34 to 41, wherein the plurality of computing resources include one or more of a core, a processor, a multi-core processor, a node, a cabinet, a cluster, a row, or a grid.

Example 43 may include the device of any one of Examples 34 to 42, wherein the plurality of computing resources communicate with each other.

Example 44 may include the device of any one of Examples 34 to 43, wherein the plurality of computing resources include a plurality of core groups, and wherein the device further includes a device for determining an operating characteristic of at least one group of the plurality of core groups, and a device for modifying the speed of at least one of the core groups based on the measurement value.

Example 45 may include the device of any one of Examples 34 to 44, wherein each core group is a node, and wherein the device further includes a device for calculating the skew of a plurality of measurement values of operating characteristics of multiple nodes, and a device for modifying the speed of at least one node based on the skew.

Example 46 may include a device for balancing multiple computing resources, the device including: a plurality of nodes, each node having a schedule meter that can determine progress information, the progress information including the total amount of work to be done to complete a task and the amount of work already done to complete the task; and a performance balancer that uses the progress information to control the behavior of the multiple nodes.

Example 47 may include the device of Example 46, the device further including a runtime monitor for obtaining the progress information and forwarding the progress information to the performance balancer.

Example 48 may include the device of any one of Examples 46 and 47, wherein the runtime monitor obtains the progress information.

Example 49 may include the device of any one of Examples 46 to 48, wherein the runtime monitor includes an application programming interface (API).

Example 50 may include the device of any one of Examples 46 to 49, wherein the performance balancer is configured to balance the multiple nodes by speeding up a first part of the plurality of nodes and slowing down a second part of the plurality of nodes.
Example 51 may include the device of any one of Examples 46 to 50, wherein the performance balancer is used to increase the amount of electric power provided to a part of the plurality of nodes so that the part is accelerated.

Example 52 may include the device of any one of Examples 46 to 51, wherein the performance balancer is used to reduce the amount of electric power provided to a part of the plurality of nodes so that the part is slowed down.

Example 53 may include a method of controlling computing resources, the method comprising: globally synchronizing multiple threads across multiple computing resources; making one or more determinations of the extent to which the threads have been processed; and calculating the workload for completing each thread of the plurality of threads, wherein the one or more determinations are used to control at least one of the plurality of computing resources.

Example 54 may include the method of Example 53, wherein the computing resources include multiple cores.

Example 55 may include the method of any one of Examples 53 and 54, wherein the computing resources include a plurality of nodes.

Example 56 may include the method of any one of Examples 53 to 55, wherein the computing resources include a plurality of cabinets.

Example 57 may include the method of any one of Examples 53 to 56, wherein the computing resources include a plurality of clusters.

Example 58 may include the method of any one of Examples 53 to 57, wherein the computing resources include a plurality of grids.

Example 59 may include a method for enhancing the operating efficiency of multiple computing resources, the method including: globally synchronizing multiple threads across multiple cores; calculating the workload for completing each of the multiple threads; processing the multiple threads in parallel to complete the work corresponding to each of the multiple threads; and, relative to the workload used to complete each of the multiple threads, iteratively calculating the work ratio
corresponding to the ratio of completed work or the ratio of remaining work to be completed; and modifying the core frequency of at least one of the plurality of cores based on the work ratio.Example 60 may include the method of example 59, wherein the cores are grouped into nodes.Example 61 may include the method of any of examples 59 and 60, wherein the nodes are grouped into cabinets.Therefore, the technology and structure described herein can reduce the power consumption of the graphics processor, and are also applicable to other types of processors. As a result, graphics processors and/or other types of processors using these technologies and structures can provide relatively high energy efficiency.The embodiments and modules can be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include: processors, microprocessors, circuits, circuit elements (for example, transistors, resistors, capacitors, inductors, etc.), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD) , Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chipsets, etc. Examples of software may include: software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, Software interface, application programming interface (API), instruction set, calculation code, computer code, code segment, computer code segment, word, value, symbol, or any combination thereof. 
Determining whether an embodiment is implemented using hardware elements and/or software elements can be based on any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.

Example sizes, models, values, and ranges may have been given, but the embodiments are not limited to the same. As manufacturing techniques mature over time, it is expected that devices of smaller size could be manufactured. In addition, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments, well-known electrical or fluidic components may or may not be shown in the figures. Further, arrangements may be shown in block diagram form in order to avoid obscuring the embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that the embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative rather than restrictive.

The term "coupled" is used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, the terms "first", "second", and the like are used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
In addition, it should be understood that the indefinite article "a" or "an" carries the meaning of "one or more" or "at least one". As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrase "one or more of A, B, or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and appended claims.
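As an informal illustration of the work-ratio-driven balancing described in the examples above, the sketch below computes, for each task, the proportion of completed work relative to its total workload, and shifts power from the resource running the fastest task toward the resource running the slowest one so that all tasks reach the barrier at roughly the same time. All identifiers (`Task`, `balance_power`, `step_w`) are hypothetical and are not taken from the patent text.

```python
# Hypothetical sketch of work-ratio-based performance balancing.
# Names are illustrative only, not from the patent.

from dataclasses import dataclass

@dataclass
class Task:
    total_work: float  # total workload for completing the task
    done_work: float   # work completed so far
    power_w: float     # electrical power budget of the task's computing resource

    @property
    def work_ratio(self) -> float:
        """Proportion of completed work (remaining proportion = 1 - this)."""
        return self.done_work / self.total_work

def balance_power(tasks, step_w=1.0):
    """Shift a power increment from the resource running the fastest task
    toward the resource running the slowest task, reducing the waiting
    time of the laggard at the barrier."""
    fastest = max(tasks, key=lambda t: t.work_ratio)
    slowest = min(tasks, key=lambda t: t.work_ratio)
    if fastest is not slowest:
        shift = min(step_w, fastest.power_w)  # never drive power negative
        fastest.power_w -= shift
        slowest.power_w += shift
    return tasks
```

A runtime monitor would invoke something like `balance_power` iteratively as progress reports arrive, so the characteristic (here, the power budget) is modified repeatedly rather than once.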
An embodiment of a method to generate a circuit design comprising a T-coil network can include determining inductance for inductors and a parasitic bridge capacitance of the T-coil network (305-340). The parasitic bridge capacitance can be compared with a load capacitance metric that depends upon parasitic capacitance of a load coupled to an output of the T-coil network (345, 355). An amount of electrostatic discharge (ESD) protection of the circuit design that is coupled to the output of the T-coil network and/or a parameter of the inductors of the T-coil network can be selectively adjusted according to the comparison (350, 360). The circuit design, which can specify inductance of the inductors, the amount of ESD protection, and/or the width of windings of the inductors, can be output (365).
1. A computer-implemented method of generating a circuit design including a T-coil network, the method comprising:

determining an inductance of the inductor and a parasitic bridge capacitance of the T-coil network, wherein the parasitic bridge capacitance is determined according to a parasitic capacitance of a terminating resistor in the T-coil network, labeled CTM, a parasitic capacitance of an input/output pad coupled to an input of the T-coil network, labeled CPD, and an inter-winding capacitance of the inductor, labeled CBI;

comparing the parasitic bridge capacitance with a load capacitance measure, the load capacitance measure depending upon a parasitic capacitance of a load coupled to an output of the T-coil network;

selectively adjusting, in accordance with the comparison of the parasitic bridge capacitance and the load capacitance measure, a magnitude of electrostatic discharge (ESD) protection coupled to the output of the T-coil network or a parameter of the inductor of the T-coil network; and

outputting the circuit design, wherein the circuit design specifies the inductance of the inductor, the magnitude of the electrostatic discharge protection, and a width of a winding of the inductor.

2. The method of claim 1, wherein selectively adjusting comprises adjusting a ratio of the parasitic bridge capacitance to a load capacitance measure that does not include a physical capacitor at an input node of the T-coil network.

3. The method of claim 1, further comprising calculating the parasitic bridge capacitance according to CB = [(CTM * CPD) / (CTM + CPD)] + CBI, wherein the parasitic bridge capacitance is labeled CB.

4. The method of claim 1 or 3, wherein selectively adjusting comprises increasing the width of the winding of the inductor when the parasitic bridge capacitance is less than the load capacitance measure.

5. The method of claim 1 or 3, wherein selectively adjusting comprises increasing the magnitude of the electrostatic discharge protection when the
parasitic bridge capacitance exceeds the load capacitance measure.

6. The method of any one of claims 1 to 3, further comprising selecting the load capacitance measure to be one-twelfth of the parasitic capacitance of the load.

7. A system for generating a circuit design having a T-coil network therein, the system comprising:

a decision module for determining an inductance of the inductor and a parasitic bridge capacitance of the T-coil network, wherein the parasitic bridge capacitance is determined according to a parasitic capacitance of a terminating resistor in the T-coil network, labeled CTM, a parasitic capacitance of an input/output pad coupled to an input of the T-coil network, labeled CPD, and an inter-winding capacitance of the inductor, labeled CBI;

a comparison module for comparing the parasitic bridge capacitance with a load capacitance measure, the load capacitance measure being determined according to a parasitic capacitance of a load coupled to an output of the T-coil network;

an adjustment module for selectively adjusting, according to the comparison of the parasitic bridge capacitance and the load capacitance measure, a magnitude of electrostatic discharge protection coupled to the output of the T-coil network in the circuit design or a parameter of the inductor of the T-coil network; and

an output module for outputting the circuit design, wherein the circuit design specifies the inductance of the inductor, the magnitude of the electrostatic discharge protection, and the width of the winding of the inductor.

8. The system of claim 7, wherein the adjustment module includes means for adjusting a ratio of the parasitic bridge capacitance to the load capacitance measure that does not include a physical capacitor at an input node of the T-coil network.

9. The system of claim 7, wherein the system further comprises means for calculating the parasitic bridge capacitance according to CB = [(CTM * CPD) / (CTM + CPD)] + CBI,
wherein the parasitic bridge capacitance is labeled CB.

10. A system according to claim 7 or claim 9, wherein the adjustment module includes means for increasing the width of the winding of the inductor when the parasitic bridge capacitance is less than the load capacitance measure.

11. A system according to claim 7 or claim 9, wherein the adjustment module includes means for increasing the magnitude of the electrostatic discharge protection when the parasitic bridge capacitance exceeds the load capacitance measure.
T-coil network design for improving bandwidth and electrostatic discharge immunity

Technical field

One or more embodiments disclosed in this patent document relate to integrated circuit devices (ICs). In particular, one or more embodiments are directed to designing circuitry, including a T-coil network, for use with a high-frequency input or output of an IC.

Background

The frequency of input or output (hereinafter "input/output") signals supplied to integrated circuit devices (ICs) has steadily increased over time. As the frequency of input/output signals reaches the radio frequency (RF) range and approaches the gigahertz range, a complex impedance often arises at the input/output nodes. The complex impedance of an IC input/output node can cause an impedance matching problem between the source of the input/output signal and the input/output signal node of the IC. An impedance mismatch generally degrades the performance of the input/output node, if not of the IC as a whole.

The complex impedance is a function of a number of small capacitances and inductances associated with the components coupled to the IC input/output node. These small capacitances and inductances can include gate capacitance, inductance and capacitance associated with interconnect lines, package wire inductance, capacitance associated with the input/output pads, capacitance associated with electrostatic discharge structures, and the like.

An impedance mismatch between the source of the input/output signal and the input/output signal node of the IC results in inefficient signal power delivery to the input/output node, because a percentage of the power in the input/output signal is reflected from the input/output node back to the source of the input/output signal.
In addition, the impedance mismatch reduces the bandwidth of the input/output node, because the small inductances and capacitances become more pronounced at higher frequencies.

To avoid signal power loss, RF systems strive to present a purely resistive impedance at each RF input and RF output. To remove the complex impedance at an IC input/output node, a matching network can be implemented at the input/output node of the IC to seek to cancel the complex impedance. Without a matching network, many IC inputs/outputs would be limited to a maximum operating frequency far below the frequency range of the desired input/output signals.

Summary of the invention

One or more embodiments disclosed in this patent document relate to integrated circuit devices (ICs), and more particularly to designing circuitry that includes a T-coil network for use with a high-frequency input or output of an IC. One embodiment can include a method, performed using a system including a processor and a memory, of generating a circuit design having a T-coil network therein. The method can include determining an inductance of the inductor and a parasitic bridge capacitance of the T-coil network, and comparing the parasitic bridge capacitance to a load capacitance measure, the load capacitance measure depending upon the parasitic capacitance of a load coupled to the output of the T-coil network. According to the comparison, a magnitude of electrostatic discharge (ESD) protection coupled to the output of the T-coil network in the circuit design and/or a parameter of the inductor of the T-coil network is selectively adjusted.
The circuit design can be output specifying the inductance of the inductor, the magnitude of the ESD protection, and/or the width of the winding of the inductor.

In this method, selectively adjusting can include adjusting a ratio of the parasitic bridge capacitance to a load capacitance measure that does not include a physical capacitor at an input node of the T-coil network. Determining the parasitic bridge capacitance can include determining the parasitic bridge capacitance according to a parasitic capacitance of a terminating resistor in the T-coil network, labeled CTM, a parasitic capacitance of an input/output pad coupled to an input of the T-coil network, labeled CPD, and an inter-winding capacitance of the inductor, labeled CBI.

The method can further include calculating the parasitic bridge capacitance according to CB = [(CTM * CPD) / (CTM + CPD)] + CBI, wherein the parasitic bridge capacitance is labeled CB. Selectively adjusting can include increasing the width of the winding of the inductor when the bridge capacitance is less than the load capacitance measure. Selectively adjusting can include increasing the magnitude of the electrostatic discharge protection when the bridge capacitance exceeds the load capacitance measure.
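The bridge-capacitance formula and the two adjustment rules described above can be sketched as follows. The formula CB = (CTM * CPD) / (CTM + CPD) + CBI and the CL/12 load capacitance measure are taken from the text; the helper names and the string return values are hypothetical illustrations, not part of the patented method.

```python
# Sketch of the parasitic bridge capacitance check described in the text.
# Helper names are hypothetical; the formula itself is from the document.

def bridge_capacitance(ctm: float, cpd: float, cbi: float) -> float:
    """CB: series combination of the terminating-resistor capacitance CTM
    and the pad capacitance CPD, plus the inter-winding capacitance CBI."""
    return (ctm * cpd) / (ctm + cpd) + cbi

def choose_adjustment(cb: float, c_load: float) -> str:
    """Compare CB against the load capacitance measure (CL / 12 per the text).
    CB below the measure -> widen the inductor winding (raising CBI);
    CB above the measure -> add ESD protection at the output (raising CL)."""
    measure = c_load / 12.0
    if cb < measure:
        return "increase winding width"
    if cb > measure:
        return "increase ESD protection"
    return "balanced"
```

For illustrative values in femtofarads, CTM = 20, CPD = 60, and CBI = 5 give CB = 1200/80 + 5 = 20 fF; with a 300 fF load the measure is 25 fF, so the winding width would be increased. Iterating this check after each adjustment mirrors the refinement loop the text describes, where updated inductor values yield updated CBI and CB values.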
Determining the inductance of the inductor and the parasitic bridge capacitance of the T-coil network can include: determining a parasitic capacitance of a terminating resistor within the T-coil network, labeled CTM, a parasitic capacitance of an input/output pad coupled to an input of the T-coil network, labeled CPD, and an initial value of the parasitic capacitance of the load; estimating an initial value of the inductor; determining, according to the initial value of the inductor, an initial value of the inter-winding capacitance of the inductor, labeled CBI; and determining an initial value of the parasitic bridge capacitance, labeled CB, where the parasitic bridge capacitance depends upon each of CTM, CPD, and CBI.

The method can further include calculating an updated value of the inductor using the initial value of the parasitic bridge capacitance; determining an updated value of the inter-winding capacitance of the inductor using the updated value of the inductor; and calculating an updated value of the parasitic bridge capacitance using the updated value of the inter-winding capacitance. Additionally, the method can further include selecting the load capacitance measure to be one-twelfth of the parasitic capacitance of the load.

Another embodiment can include a system for generating a circuit design in which a T-coil network is included.
The system can include a decision module for determining an inductance of the inductor and a parasitic bridge capacitance of the T-coil network, wherein the parasitic bridge capacitance is determined according to a parasitic capacitance of a terminating resistor within the T-coil network, labeled CTM, a parasitic capacitance of an input/output pad coupled to an input of the T-coil network, labeled CPD, and an inter-winding capacitance of the inductor, labeled CBI; a comparison module for comparing the parasitic bridge capacitance with a load capacitance measure, the load capacitance measure being determined according to a parasitic capacitance of a load coupled to an output of the T-coil network; an adjustment module for selectively adjusting, according to the comparison of the parasitic bridge capacitance and the load capacitance measure, a magnitude of electrostatic discharge protection coupled to the output of the T-coil network in the circuit design or a parameter of the inductor of the T-coil network; and an output module for outputting the circuit design, wherein the circuit design specifies the inductance of the inductor, the magnitude of the electrostatic discharge protection, and the width of the winding of the inductor.

In this system, the adjustment module can include means for adjusting the ratio of the parasitic bridge capacitance to a load capacitance measure that does not include a physical capacitor at an input node of the T-coil network. The decision module can determine the parasitic bridge capacitance according to the parasitic capacitance of the terminating resistor in the T-coil network, labeled CTM, the parasitic capacitance of the input/output pad coupled to the input of the T-coil network, labeled CPD, and the inter-winding capacitance of the inductor, labeled CBI.
The system can further include means for calculating the parasitic bridge capacitance according to CB = [(CTM * CPD) / (CTM + CPD)] + CBI, wherein the parasitic bridge capacitance is labeled CB. The adjustment module can include means for increasing the width of the winding of the inductor when the parasitic bridge capacitance is less than the load capacitance measure, and means for increasing the magnitude of the electrostatic discharge protection when the parasitic bridge capacitance exceeds the load capacitance measure.

Another embodiment can include an apparatus that includes a data storage medium usable by a system including a processor and a memory. The data storage medium can store program code that, when executed by the system, causes the system to perform a plurality of executable operations. The executable operations can include determining an inductance of the inductor and a parasitic bridge capacitance of the T-coil network, and comparing the parasitic bridge capacitance to a load capacitance measure, the load capacitance measure depending upon the parasitic capacitance of a load coupled to the output of the T-coil network. The executable operations can further include selectively adjusting, in accordance with a comparison of the parasitic bridge capacitance and the load capacitance measure, a magnitude of electrostatic discharge (ESD) protection coupled to the output of the T-coil network in the circuit design or a parameter of the inductor of the T-coil network. Moreover, the executable operations can further include outputting the circuit design. The circuit design can specify the inductance of the inductor, the magnitude of the ESD protection, and/or the width of the winding of the inductor.

In this arrangement, selectively adjusting can include adjusting a ratio of the parasitic bridge capacitance to a load capacitance measure that does not include a physical capacitor at an input node of the T-coil network.
The parasitic bridge capacitance is determined according to a parasitic capacitance of a terminating resistor in the T-coil network, labeled CTM, a parasitic capacitance of an input/output pad coupled to an input of the T-coil network, labeled CPD, and an inter-winding capacitance of the inductor, labeled CBI. The system can perform an executable operation including calculating the parasitic bridge capacitance according to CB = [(CTM * CPD) / (CTM + CPD)] + CBI, wherein the parasitic bridge capacitance is labeled CB. Selectively adjusting can include increasing a width of the winding of the inductor when the bridge capacitance is less than the load capacitance measure, and increasing a magnitude of electrostatic discharge protection when the bridge capacitance exceeds the load capacitance measure.

DRAWINGS

FIG. 1 is a block diagram illustrating a system for designing a T-coil network implemented in an integrated circuit device (IC), in accordance with an embodiment.

FIG. 2 is a circuit diagram illustrating an exemplary circuit including a T-coil network, in accordance with another embodiment.

FIG. 3 is a flow chart illustrating a method of designing a T-coil network for an IC, in accordance with another embodiment.

DETAILED DESCRIPTION

While this specification concludes with claims defining features of one or more embodiments that are regarded as novel, it is believed that the embodiments will be better understood from a consideration of the detailed description in conjunction with the drawings. The specific embodiments disclosed herein are merely illustrative of the inventive arrangements, which can be embodied in various other forms.
Therefore, the specific structural and functional details disclosed herein are not to be construed as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the inventive arrangements in virtually any appropriately detailed structure. Further, the terms and phrases used within this specification are not intended to be limiting, but rather to provide an understandable description of the embodiments.

One or more specific embodiments disclosed within this specification relate to semiconductor integrated circuit devices (ICs). In more detail, one or more embodiments are directed to designing a T-coil network for use at an input/output node of an IC. In accordance with the inventive arrangements disclosed herein, a T-coil network design technique can be provided that properly accounts for capacitances that are neglected in conventional design techniques. One or more embodiments may balance the different capacitance values by adding more electrostatic discharge (ESD) components and/or by modifying parameters of the inductors of the T-coil network, such as the width of the windings of the inductors of the T-coil network, thereby modifying the characteristics of the T-coil network. Both approaches not only help maximize the bandwidth of the T-coil network and minimize distortion, but can also improve the ESD protection provided at the input/output nodes of the IC.

FIG. 1 is a block diagram illustrating a system 100 for designing a T-coil network implemented in an IC, in accordance with an embodiment. In one aspect, the system 100 can generate one or more T-coil network designs for instantiation within the IC.

As shown in FIG. 1, the system 100 includes at least one processor 105 coupled to a memory component 110 via a system bus 115. Accordingly, the system 100 can store program code within the memory component 110. The processor 105 can execute program code accessed from the memory component 110 via the system bus 115.
For example, in one aspect, the system 100 can be implemented as a computer suitable for storing and/or executing program code. It should be understood, however, that the system 100 can be implemented in the form of any system including a processor and memory that is capable of performing the functions described herein.

The memory component 110 can include one or more physical memory devices such as, for example, local memory 120 and one or more bulk storage devices 125. The local memory 120 refers to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. The bulk storage device(s) 125 can be implemented as a hard disk drive or other persistent data storage device. The system 100 also can include one or more caches (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 125 during execution.

Input/output (I/O) devices such as a keyboard 130, a display 135, and a pointing device (not shown) optionally can be coupled to the system 100. The I/O devices can be coupled to the system 100 either directly or through intervening I/O controllers. Network adapters also can be coupled to the system 100 to enable the system 100 to become coupled to other systems, computer systems, remote printers, and/or remote storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapters that can be used with the system 100.

The memory component 110 can store a circuit design module 140. The circuit design module 140, being implemented in the form of executable program code, can be executed by the system 100. The circuit design module 140 can receive a design specification for a circuit in which a T-coil network is included.
The circuit design module 140 can further determine and/or obtain, e.g., read, component values for one or more components or features of a circuit design and/or a T-coil network incorporated within the circuit design, the component values being stored in the memory component 110. In general, a T-coil network includes two series inductors, with an input/output load connected to the T-coil network at a coupling point between the two inductors. The T-coil network can reduce or cancel the complex impedance associated with a capacitive load at an input/output of an IC. Implementing a T-coil network at an input/output node of an IC increases the bandwidth of the input/output node. This improvement can result in better RF system performance at the input/output node by, for example, reducing return loss, reducing bit error rate, or increasing power gain.

The circuit design module 140 can utilize the design specification and the obtained component values to determine a first estimate of the total bridge capacitance across the two inductors in the T-coil network, labeled CB. The circuit design module 140 can utilize the larger of the first value of CB and the load capacitance measure observed at the output node of the T-coil network, labeled CL/12, to calculate the values of the two inductors in the T-coil network, labeled L1 and L2. The circuit design module 140 can determine a second value of CB using the inter-winding capacitance derived from the L1 and L2 values.

The circuit design module 140 can compare the value of CB with a measure based upon the value of CL, e.g., the load capacitance measure, and increase the value of CB or the value of CL until the two values are equal or approximately equal, e.g., within a predetermined range or tolerance of one another. For example, CB can be increased by increasing the inter-winding capacitance of L1 and L2.
For example, CL can be increased by increasing the magnitude of ESD protection applied to the output node of the T-coil network.

The obtained parameters, e.g., the values of CB, CL, L1, and L2, the magnitude of the ESD protection used, and other parameters relating to the inductors L1 and L2, such as the width of the windings of the inductors, can be output and/or incorporated into the circuit design 145 stored in the memory component 110. As used within this disclosure, "output" can mean storing in the memory component 110, e.g., writing to a file stored within the memory component 110, writing to the display 135 or other peripheral output device, playing an audible notification, sending or transmitting to another system, exporting, and the like.

FIG. 2 is a circuit diagram illustrating an exemplary circuit 200 including a T-coil network, in accordance with another embodiment. The circuit 200 illustrates an input/output node of an IC. As illustrated, a T-coil network is implemented to improve matching of the impedance of the input/output node of the IC to the output impedance of a source providing an input/output signal to the IC. The circuit 200 can include an input/output device 205, an input/output pad 210, ESD devices 215 and 220, and a T-coil network 225.

The input/output device 205 can be any input/output device configured within an IC to receive an external high frequency signal as an input/output. The input/output device 205 can be coupled to other input/output circuitry within the IC. The additional input/output circuitry represents additional devices or circuits that can be coupled to the input/output device 205 to process input/output signals received through the input/output pad 210.

An input/output signal is provided to the input/output pad 210. The input/output signal can be a radio frequency (RF) input/output signal such as a high speed digital signal.
The input/output pad 210 can be any pad structure available within the IC process that allows a signal external to the IC to be provided to internal circuitry of the IC. The input/output pad 210 is coupled to the T-coil network 225 at a T-coil input/output node (input/output node) 235. The input/output pad 210 can couple the input/output signal to the input/output device 205 as part of a signal path.

The ESD devices 215 and 220 are coupled to a T-coil output node (output node) 240. The output node 240 can provide a signal to the input/output device 205. In FIG. 2, the ESD devices 215 and 220 are implemented as ESD diodes. It should be understood that the ESD devices 215 and 220 can be any devices within the IC process that can protect the input/output device 205 from ESD events. For example, the ESD devices 215 and 220 can be diodes, although the ESD devices 215 and 220 are not limited to diodes.

The T-coil network 225 can include two inductors, labeled L 250 and L 255, and a terminating resistor, labeled RTM 260. The T-coil network 225 can be associated with multiple parasitic capacitances. The parasitic capacitances, although not actual circuit components, are labeled CL 245, CBI 265, CTM 270, and CPD 275 in FIG. 2.

CL 245 represents the sum of the parasitic capacitances occurring at the output node 240, i.e., at the input node of the input/output device 205. Thus, CL 245 is representative of the load capacitance observed by the T-coil network 225. CL 245 can include parasitic capacitances associated with various devices coupled to the output node 240. For example, CL 245 can include a gate capacitance associated with the input/output device 205, capacitance associated with the interconnect paths connecting the devices to the output node 240, capacitance associated with the ESD devices 215 and 220, and the like.
CL 245, along with various parasitic inductances and capacitances associated with the IC and the IC package, can create a complex impedance as seen by the source providing the high frequency input/output signal to the input/output device 205.

CBI 265 is the inter-winding capacitance associated with the inductors L 250 and L 255. As used within this disclosure, "inter-winding capacitance" refers to the parasitic capacitance caused by capacitive coupling between closely spaced turns of an inductor. The inter-winding capacitance increases as the width of the inductor windings increases. Correspondingly, the inter-winding capacitance decreases as the width of the inductor windings decreases. Therefore, the value of CBI 265 increases as the width of the windings of each of the inductors L 250 and L 255 increases, and decreases as the width of the windings of each of the inductors L 250 and L 255 decreases. Since the values of the inductors L 250 and L 255 are matched, the value of CBI 265 can be increased or decreased by varying the winding width of one or both of the inductors L 250 and L 255.

It should be understood that although the width of the windings of the inductors is described as the parameter of the inductors and T-coil network to be modified, other parameters associated with the arrangement of the inductors may be modified to achieve a change in the inter-winding capacitance CBI 265 of the inductors L 250 and L 255. For example, the spacing, e.g., distance, between the inductors L 250 and L 255 can be varied. In another example, a grounded metal shield can be placed beneath the T-coil. Characteristics of the shield can be further altered to affect the inter-winding capacitance CBI.

CTM 270 can represent various capacitances associated with the terminating resistor RTM 260.
For example, CTM 270 can represent the parasitic capacitance created by capacitive coupling between the polysilicon layer of RTM 260 and the underlying substrate layer of the IC. CPD 275 can represent various capacitances associated with the input/output pad 210. For example, CPD 275 can represent the parasitic capacitance created by capacitive coupling between the metal layer of the input/output pad 210 and the underlying substrate layer of the IC.

The parasitic capacitances CBI 265, CTM 270, and CPD 275 can be collectively referred to as the bridge capacitance of the T-coil network 225. In one embodiment, an approximation of the bridge capacitance, labeled CB, can be determined as the value obtained by combining CPD 275 and CTM 270 in series and placing the result in parallel with CBI 265. This relationship can be written in the following form: CB = [(CTM * CPD) / (CTM + CPD)] + CBI. For purposes of clarity, the reference numbers of FIG. 2 have been excluded from the equation.

When implemented at an input/output node, the T-coil network 225 can cancel the complex impedance associated with the input/output device 205, so that the source generating the high frequency input/output signal driving the input/output device 205 sees a predominantly resistive impedance. In general, the input and output nodes of an RF system are designed to have a matched characteristic impedance of 50 ohms. Therefore, the source resistance (RSOURCE) and RTM 260 can each be approximately 50 ohms to match the characteristic impedance. When properly implemented, the T-coil network 225 has the effect of canceling the complex impedance seen by the output of the source generating the input/output signal, so that the input/output node of the IC appears purely resistive to the source.
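As a numeric illustration of the bridge capacitance relationship above, the brief sketch below computes CB from CTM, CPD, and CBI. The function name and the sample femtofarad values are assumptions for illustration only, not values from the specification.

```python
def bridge_capacitance(ctm, cpd, cbi):
    """CB of the T-coil network: the series combination of CTM and CPD,
    placed in parallel with the inter-winding capacitance CBI."""
    return (ctm * cpd) / (ctm + cpd) + cbi

# Illustrative (assumed) values: CTM = 40 fF, CPD = 60 fF, CBI = 5 fF.
# Series combination of CTM and CPD is 24 fF, so CB = 29 fF.
cb = bridge_capacitance(40e-15, 60e-15, 5e-15)
```

Note that CB reduces to CBI alone only when the series combination of CTM and CPD is negligible, which is the implicit assumption of the conventional techniques discussed below.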
The source resistance (RSOURCE) is then approximately equal to RTM 260.

Conventional T-coil network design techniques evaluate CBI against the bridging requirement to determine whether CBI is lower than required and, based upon that evaluation, add a physical capacitor CBL to meet the bridging requirement. In particular, based upon the evaluation of CBI, conventional T-coil network design techniques incorporate a physical capacitor CBL coupled between the input/output node 235 and the node 298. The techniques seek to reduce CL 245 to a tolerable value so that the source generating the input/output signal can be properly driven. Other considerations that can affect CL 245 include, for example, the desired amount of ESD protection and the maximum allowable loss of bandwidth at the input/output node of the IC. The conventional procedure thus begins from an assumed value of CL 245. The values of L 250 and L 255 are calculated as a function of CL 245. The value k is the coupling coefficient between L 250 and L 255 and is set to 0.5 ± 0.1. An electromagnetic (EM) simulation tool is then used, with L 250 and L 255 set to the previously calculated values, to obtain CBI 265. Using the relationship CB = CBI + CBL, CBL can be increased until CB = CL/12, thereby maximizing the bandwidth.

As previously noted, conventional T-coil network design techniques do not account for the loopback capacitances produced by CTM 270 and CPD 275 as modeled in FIG. 2. The exclusion or absence of CTM 270 and CPD 275 in conventional T-coil network design techniques results in incorrect impedance matching of the T-coil network to the source generating the input/output signal. Hence, in conventional T-coil network design techniques, the bridge capacitance CB is defined as CB = CBI + CBL.
Conventional T-coil network design techniques further determine the values of L 250 and L 255 and the parameters of L 250 and L 255 based upon the value of CL 245. To achieve the condition CB = CL/12, which maximizes the bandwidth of the input/output node of the IC, the physical capacitor CBL is typically incorporated as described.

In accordance with the inventive arrangements disclosed herein, however, CB and CL/12 can be compared in order to design the inductors. The loopback capacitances CPD 275 and CTM 270 are modeled and incorporated into the design technique. CTM 270 and CPD 275 can be determined by calculation based upon silicon data, by a two- or three-dimensional EM simulation derived from a layout database, or by any other method that can be used to derive the parasitic capacitances associated with RTM 260 and the input/output pad 210. Using the technique, an initial estimate of CBI 265 for the inductors L 250 and L 255 can be derived. For example, values for the inductors L 250 and L 255 that are expected to provide the desired bandwidth at the input/output node of the IC can be used to initially estimate CBI 265. Further details relating to the design of the T-coil network are provided with reference to FIG. 3 in accordance with one or more embodiments disclosed within this specification.

FIG. 3 is a flow diagram illustrating a method 300 of designing a T-coil network for an IC in accordance with another embodiment. The method 300 can be implemented using the system described with reference to FIG. 1. In general, the method 300 describes a process for designing a T-coil network to increase bandwidth and ESD performance at an input/output node of an IC. To this end, the method 300 utilizes the circuit design modeled and illustrated with reference to FIG. 2.

Beginning in step 305, the system can determine the value of the parasitic capacitance CTM of the terminating resistor, the parasitic capacitance CPD of the pad of the input/output node, and the load capacitance CL.
This information can be obtained, for example, from a database in which characteristics of the pad, the T-coil terminating resistance, and the capacitance of the input/output device are known. For example, the values can be determined from previous simulation results or from measured characteristics of an IC previously implemented using the same process.

In step 310, the system can estimate a value of L for each inductor of the T-coil network. Initially, the value of L can be estimated according to various factors such as the desired bandwidth of the input/output of the IC, the magnitude of ESD protection, e.g., the number and type of ESD devices used, and the like. In step 315, a physical description of the inductors can be generated. The physical description of the inductors can be modeled as described with reference to FIG. 2 and include a physical model of the various parameters of the inductors. The value of L determined in step 310 can be used to determine the physical description of the inductors. For example, given the initial value of L determined in step 310, the system can automatically generate a physical description of the inductors that is expected, e.g., as determined by performing an EM simulation using an EM simulator, to provide the initial value of L determined in step 310. The generated physical description can, for example, specify a plurality of values including, but not limited to, the number of windings of each inductor, an initial width of the windings, the value of k, and the like. These parameters can be determined based upon the initially determined value of the inductance L.

In step 320, the system can determine an initial value of the inter-winding capacitance CBI. The initial value of CBI, labeled CBI1, can be determined in accordance with the physical description of the T-coil network generated in step 315, wherein the estimate of L from step 310 is placed into a circuit design specifying the physical layout of the T-coil network as described with reference to FIG. 2.
In one embodiment, the initial value of CBI, labeled CBI1, can be determined by an EM simulator and taken from an EM simulation. The EM simulation can be performed by the system or by another electronic design automation tool and then provided to the system. In this regard, the EM simulations described with reference to steps 315 and 320 can be, for example, a single EM simulation from which the physical parameters of the inductors and the initial value of CBI are determined.

In step 325, the system can determine an initial value of CB, designated CB1, using the inter-winding capacitance CBI determined in step 320. As described with reference to FIG. 2, CB = [(CTM * CPD) / (CTM + CPD)] + CBI. In step 330, the system can utilize the value of CB1 determined in step 325 to calculate a target value of L for the inductors of the T-coil network. The target value of L can be determined using the expression L = 4*(Cmax*RTM^2), where Cmax is the larger of the values CB and CL/12. In this case, CB1 can be substituted for CB. The value of CL can be the value determined in step 305.

Using the target value of L determined in step 330, the system can determine an updated value of CBI, labeled CBI2, in step 335. In one embodiment, the value of CBI2 can be calculated using a three-dimensional EM simulator operating upon the circuit design specifying the physical layout of the T-coil network using the value of L determined in step 330. It should be appreciated that because the target value of L is used, the system can modify and/or update one or more other parameters of the inductors within the physical model of the T-coil network, whether in an automated manner or responsive to received inputs specifying the updated parameters, in order to provide the target value of L calculated in step 330. In step 340, the system can determine an updated value of CB, labeled CB2.
CB2 can be determined according to the expression above with CBI2 substituted for CBI, i.e., CB2 = [(CTM * CPD) / (CTM + CPD)] + CBI2.

In step 345, the system can compare the latest value of CB, e.g., CB2, with a load capacitance measure. In one embodiment, the load capacitance measure can be defined as CL/12. Thus, the latest value of CB, e.g., CB2, can be compared with CL/12 to determine whether CB is less than the load capacitance measure. When the value of CB is less than the value of CL/12, the method 300 can proceed to step 350. In step 350, one or more parameters of the inductors can be adjusted. For example, as previously described, the windings of the inductors of the T-coil network specified within the physical layout of the circuit design can be adjusted to change the value of CB. In particular, the winding width of the inductors of the T-coil network can be increased. Increasing the winding width of the inductors of the T-coil network increases the inter-winding capacitance CBI, thereby increasing the value of CB. Increasing the winding width of the inductors of the T-coil network also reduces the series resistance through the inductors L, thereby improving the ESD performance of the T-coil network. Accordingly, after step 350, the method 300 can loop back to step 335 to continue processing.

When the value of CB is greater than or equal to the value of the load capacitance measure, in this example CL/12, the method 300 can proceed to step 355. It should be understood that when the value of CL divided by 12, i.e., the load capacitance measure, equals CB2, the bandwidth of the input/output node of the T-coil network specified by the circuit design is maximized. In particular, the bandwidth of the flat time delay response is maximized.

In step 355, the system can determine whether the value of the load capacitance measure CL/12 is equal to the value of CB.
When the value of CL/12 is equal to the value of CB, the method 300 can proceed to step 365, since the bandwidth of the flat time delay response has been maximized. Maximizing the flat time delay response effectively minimizes distortion of the received digital signal. When the value of CL/12 is not equal to the value of CB, e.g., when the value of CB is greater than the value of CL/12, the method 300 can proceed to step 360. In step 360, the amount of ESD protection provided to the input/output node of the IC can be increased. The circuit design specifying the physical layout of the T-coil network can be updated to incorporate the increased ESD protection. For example, the number of ESD devices can be increased and/or the size of the ESD devices located at the output of the T-coil network can be increased. The aforementioned increases in the amount of ESD protection increase the parasitic capacitance CL. The method 300 can iterate, so that CL continues to increase until the value of CL/12 is equal to, or approximately equal to, e.g., within a predetermined tolerance or range of, the value of CB as determined in step 355.

In step 365, the circuit design can be output. The circuit design can specify the physical layout of the T-coil network and thus include a plurality of parameters such as, but not limited to, the values of the inductors, the width of the windings of the inductors, the load capacitance, the parasitic bridge capacitance, the magnitude of the ESD protection, and the like.

One or more embodiments disclosed within this specification relate to a T-coil network design for use at an input/output node of an IC. The one or more embodiments provide a more accurate model and procedure for determining the bridge capacitance of the T-coil network.
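The comparison-and-adjust iteration of steps 330 through 360 can be sketched as follows. This is a simplified illustration only: the function standing in for EM simulation of the inter-winding capacitance, the adjustment step sizes, and the sample values are assumptions, not taken from the specification.

```python
def target_inductance(cb, cl, rtm):
    """Step 330: target L = 4*(Cmax*RTM^2), where Cmax is the larger
    of CB and CL/12."""
    return 4.0 * max(cb, cl / 12.0) * rtm ** 2

def design_t_coil(ctm, cpd, cl, cbi_of_width, width, esd_step, tol):
    """Iterate steps 335-360: widen the inductor windings while
    CB < CL/12, and increase ESD protection (which raises CL) while
    CB > CL/12, until CB is within tol of CL/12.

    cbi_of_width is a caller-supplied stand-in for EM simulation,
    returning CBI for a given winding width (an assumption of this
    sketch; a real flow would re-run the EM simulator here).
    """
    while True:
        cbi = cbi_of_width(width)                 # step 335: updated CBI
        cb = (ctm * cpd) / (ctm + cpd) + cbi      # step 340: updated CB
        measure = cl / 12.0                       # load capacitance measure
        if abs(cb - measure) <= tol:              # step 355: CB == CL/12
            return width, cl, cb
        if cb < measure:
            width += 0.1e-6                       # step 350: widen winding
        else:
            cl += esd_step                        # step 360: add ESD devices
```

With the illustrative parameters used in the tests below, the loop converges exactly; in practice the step sizes and the behavior of the EM-simulation stand-in determine whether and how quickly convergence occurs.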
The T-coil network design procedure disclosed herein is iterative in nature and seeks to maximize the bandwidth of the T-coil network by varying the winding width of the inductors of the T-coil network and/or increasing the ESD protection provided to the input/output nodes of the IC. One or more embodiments disclosed herein can maximize bandwidth without incorporating a physical capacitor CBL as in conventional T-coil network design techniques.

Moreover, one or more embodiments disclosed herein can be utilized as part of, or within, a design/optimization implementation or technique to provide maximized T-coil network performance. One or more steps can be performed manually and provided to the system as an input. For example, a circuit designer can create a test IC to determine the parasitic capacitances of the T-coil network and/or other parameters in place of simulation. The circuit designer can continue to adjust values over multiple iterations to optimize the inductors and/or T-coil network as described herein, generating further test ICs in place of simulation.

The flowchart in the figures illustrates the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to one or more embodiments. In this regard, each block in the flowchart can represent a module, segment, or portion of program code that comprises one or more portions of executable program code for implementing the specified logical function(s).

It should be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order shown in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It also should be noted that each block of the flowchart illustration, and combinations of blocks in the flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and executable instructions.

One or more embodiments can be realized in hardware or a combination of hardware and software. One or more embodiments can be centralized in a single system or distributed, with different elements spread across several interconnected systems. Any kind of data processing system or other apparatus adapted for carrying out the methods described herein is suitable.

One or more embodiments further can be embedded in an apparatus such as a computer program product, which comprises all the features enabling the implementation of the methods described herein. The apparatus can include a data storage medium, e.g., a computer usable or computer readable medium, storing program code that, when loaded and executed in a system comprising memory and a processor, causes the system to perform the functions described herein. Examples of data storage media can include, but are not limited to, optical media, magnetic media, magneto-optical media, computer memory such as random access memory or hard disk(s), and the like.

The terms "computer program", "software", "application", "computer usable program code", "program code", "executable program code", and variants and/or combinations thereof, mean any expression, in any language, code, or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code, or notation; b) reproduction in a different material form.
For example, program code can include, but is not limited to, a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library/dynamic load library, and/or other sequence of instructions designed for execution on a computer system.

The terms "a" and "an", as used herein, are defined as one or more than one. The term "plurality", as used herein, is defined as two or more than two. The term "another", as used herein, is defined as at least a second or more. The terms "including" and/or "having", as used herein, are defined as comprising, i.e., open language. The term "coupled", as used herein, is defined as connected, whether directly without any intervening elements or indirectly with one or more intervening elements, unless otherwise indicated. Two elements also can be coupled mechanically, electrically, or communicatively linked through a communication channel, pathway, network, or system.

One or more embodiments disclosed herein can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the embodiments.
In an embodiment, a processor includes a plurality of cores and a cache unit reserved for a first core of the plurality of cores. The cache unit may include a first cache slice, a second cache slice, and power logic to switch operation of the cache unit between a first operating mode and a second operating mode. The first operating mode may include use of both the first cache slice and the second cache slice. The second operating mode may include use of the first cache slice and disabling the second cache slice. Other embodiments are described and claimed.
A processor comprising:

a plurality of cores (106a, ..., 106n); and

a cache unit (105a) reserved for a first core (106a) of the plurality of cores, the cache unit comprising a first cache slice (110a), a second cache slice (110b), and power logic (120) configured to switch operation of the cache unit (105a) between a first operating mode and a second operating mode,

wherein the first operating mode comprises use of both the first cache slice and the second cache slice, and wherein the second operating mode comprises use of the first cache slice and disabling the second cache slice,

characterized in that each cache slice comprises a queue (112a), a cache memory (114a) and an interface unit (116a), and

wherein the second operating mode further comprises disabling the cache memory (114a) of the first cache slice.

The processor of any preceding claim, wherein the processor further comprises a power control unit to generate a request to switch the operation of the cache unit.

The processor of any preceding claim, wherein the power control unit is further to, upon switching to the second operating mode, initiate a count in a cache counter.

A method of switching an operating mode of a cache unit, the method comprising:

receiving (312), by power logic included in a cache unit of a processor, a first request to switch the cache unit from a first operating mode to a second operating mode, wherein the cache unit comprises a first cache slice and a second cache slice and wherein the first operating mode comprises use of both the first cache slice and the second cache slice; and

in response to the first request, initiating (320) the second operating mode in the cache unit, the second operating mode including use of the first cache slice and disabling the second cache slice,

characterized in that each cache slice comprises a queue (112a), a cache memory (114a), and an interface unit (116a), and

wherein the second operating mode comprises disabling a cache memory portion (114a) of the first
cache slice (110a).The method of claim 4, wherein initiating the second operating mode comprises:in response to the request, setting at least one configuration register to indicate receipt of the first request from a power control unit; andupon waking (318) from a sleep state, initiating the second operating mode in the cache unit based on the at least one configuration register.The method of either of claims 4 and 5, further comprising, upon initiating the second operating mode in the cache unit:initiating a cache counter to perform a count; andupon reaching a maximum count in the cache counter, switching the cache unit to the first operating mode, the first operating mode comprising use of both the first cache slice and the second cache slice.The method of any of claims 4 to 6, further comprising:generating the first request based on a type of processing task to be performed by a core associated with the cache unit.The method of any of claims 4 to 7, further comprising:generating the first request when a frequency of sleep states expected in a processing task exceeds a threshold level.Apparatus comprising means for performing the method of any one of claims 4 to 8.At least one machine readable medium comprising a plurality of instructions that in response to being executed on a computing device, cause the computing device to carry out a method according to any one of claims 4 to 8.
Technical Field

Embodiments relate generally to power management of electronic devices.

Background

Conventionally, an electronic device may include one or more reduced power modes, meaning an operating mode in which at least one component of the device is placed in a reduced power state. The use of a reduced power mode may decrease the amount of electrical power consumed in comparison to an "awake" or normal operating mode.

US 2006/075192 A1 discloses a plurality of processor cores each including a cache memory coupled to a cache monitor unit and a configuration unit. The configuration unit may selectively disable one or more portions of the cache memory in response to a determination that a current utilization is below a predetermined utilization value.

US 2012/173907 A1 discloses a method to dynamically resize a cache to an optimal cache size based on a comparison of the cache performance parameters to their energy-efficient targets.

Brief Description of the Drawings

FIGS. 1A-1B are block diagrams in accordance with one or more embodiments.
FIG. 2 is a block diagram in accordance with one or more embodiments.
FIGS. 3A-3C are sequences in accordance with one or more embodiments.
FIG. 4 is a block diagram of a processor in accordance with an embodiment of the present invention.
FIG. 5 is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention.
FIG. 6 is a block diagram of an embodiment of a processor including multiple cores.
FIG. 7 is a block diagram of a system in accordance with an embodiment of the present invention.

Detailed Description

The scope of protection of the present invention is defined by independent claims 1 and 4. Optional features of embodiments of the invention are defined by the sub-claims.

A cache unit associated with a core includes a first cache slice, a second cache slice, and power logic to control the operating mode of the cache unit.
In a normal operating mode, the cache unit uses both the first cache slice and the second cache slice. Further, in a reduced power mode, the cache unit uses a portion of the first cache slice, and disables the second cache slice. In some embodiments, this reduced power mode may be requested by a power control unit based on a type of processing task to be performed by the core. Accordingly, embodiments enable reduction in the power consumed by the cache unit.

Although the following embodiments are described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or processors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to any particular type of computer systems, and may also be used in other devices, such as handheld devices, systems on chip (SoCs), and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below.

Moreover, the apparatus, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency.
As will become readily apparent in the description below, the embodiments of methods, apparatus, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future, such as for power conservation and energy efficiency in products that encompass a large portion of the US economy.

Note that embodiments described herein may be independent of and/or complementary to an operating system (OS)-based mechanism, such as the Advanced Configuration and Power Interface (ACPI) standard (e.g., Rev. 3.0b, published October 10, 2006). According to ACPI, a processor can operate at various performance states or levels, namely from P0 to PN. In general, the P1 performance state may correspond to the highest guaranteed performance state that can be requested by an OS. In addition to this P1 state, the OS can further request a higher performance state, namely a P0 state. This P0 state may thus be an opportunistic state in which, when power and/or thermal budget is available, processor hardware can configure the processor or at least portions thereof to operate at a higher than guaranteed frequency. In many implementations a processor can include multiple so-called bin frequencies above a guaranteed maximum frequency, also referred to as a P1 frequency. In addition, according to ACPI, a processor can operate at various power states or levels. With regard to power states, ACPI specifies different power consumption states, generally referred to as C-states: C0, and C1 to Cn states. When a core is active, it runs at a C0 state, and when the core is idle it may be placed in a core low power state, also called a core non-zero C-state (e.g., C1-C6 states), with each C-state being at a lower power consumption level (such that C6 is a deeper low power state than C1, and so forth).

Referring to FIG. 1A, shown is a block diagram of a system 100 in accordance with one or more embodiments.
In some embodiments, the system 100 may be all or a portion of an electronic device or component. For example, the system 100 may be a cellular telephone, a computer, a server, a network device, a controller, an appliance, etc.

As shown in FIG. 1A, the system 100 may include a processor 101 coupled to a memory 108. The memory 108 may be any type of computer memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), etc.). As shown, in some embodiments, the processor 101 may be a multicore processor including multiple execution groups 102a-102n, each including a cache unit 105 and a core 106. For example, in some embodiments, the execution groups 102a-102n may be multiple tiles included within a single die of the processor 101.

In one or more embodiments, each cache unit 105 is private to its associated core 106. Further, in some embodiments, the cache unit 105 may correspond to a single cache level (e.g., a middle level cache of a cache hierarchy). Alternatively, in other embodiments, the cache unit 105 may represent a cache memory hierarchy having multiple cache levels (e.g., a three-level hierarchy with a low level cache, a middle level cache, and a high level cache).

As shown, in some embodiments, the processor 101 may also include a power control unit 107. In one or more embodiments, the power control unit 107 may include functionality to control or manage one or more power states of the processor 101 (or a portion thereof). For example, the power control unit 107 may cause an execution group 102 to enter a "sleep" state (e.g., a C6 state), meaning a power state in which the execution group 102 is not active, but which may require a shorter time to restore full functionality in comparison to a full shutdown of the execution group 102. In some embodiments, such a sleep state may provide a relatively high level of power savings in comparison to a normal power state (e.g., a C0 state).

Referring now to FIG.
1B, shown is an example embodiment of the cache unit 105. As shown, the cache unit 105 includes various components, including a first slice 110a, a second slice 110b, power logic 120, a main interface unit 130, a snoop unit 140, a prefetch unit 150, configuration registers 125, and a cache counter 127.

In one or more embodiments, the main interface unit 130 may include functionality to handle communications between the cache unit 105 and other portions of the processor 101. For example, in some embodiments, the main interface unit 130 may provide one or more In-Die Interface (IDI) datapaths to the uncore portion of the processor 101, to another execution group 102, etc.

In some embodiments, the snoop unit 140 includes functionality to monitor data transfers in order to maintain cache coherency. Further, in some embodiments, the prefetch unit 150 includes functionality to prefetch data for use by the associated core 106.

Each slice 110 is a cache slice, meaning a portion of the cache unit 105 that may be independently written to and/or read from. As shown, each slice 110 includes a super queue 112, cache memory 114, and an interface unit 116. In some embodiments, the super queue 112 may include functionality to control and/or centralize access requests to the slice 110. For example, in some embodiments, the super queue 112 may include sixteen entries to track cache requests to the slice 110.

In some embodiments, the cache memory 114 may include a portion of the cache lines available in the cache unit 105 (e.g., 128K of cache memory). Further, in some embodiments, the cache memory 114 may include a cache controller (or some equivalent functionality).

In one or more embodiments, the interface unit 116 may include functionality to handle communications between the slice 110 and the cache unit 105.
For example, in some embodiments, the interface unit 116 may be an IDI pipe to the main interface unit 130, and may include a data structure to track cache misses.

The power logic 120 includes functionality to switch operation of the cache unit 105 between a two-slice mode and a one-slice mode. The two-slice mode involves using all cache slices of the cache unit 105 (e.g., using both the first slice 110a and the second slice 110b). The two-slice mode may also be referred to as a normal (or full-power) operating mode. The one-slice mode involves using only one cache slice and disabling the other; specifically, it involves using the first slice 110a (or a portion thereof) and disabling the second slice 110b. The one-slice mode may also be referred to as a reduced power operating mode.

In some embodiments, a count may be initiated in the cache counter 127 when the cache unit 105 enters the one-slice mode. Once the cache counter 127 reaches a maximum count, the power logic 120 switches the cache unit 105 from the one-slice mode to the two-slice mode. Alternatively, in some embodiments, the cache counter 127 may be initiated at the maximum level, and may be counted down to zero. The cache counter 127 may be advanced, e.g., for every processing cycle, for every instruction, etc.

Referring now to FIG. 2, shown is an embodiment of the cache unit 105 when operating in one-slice mode. In the example of FIG. 2, the second slice 110b is shown with a cross-hatch pattern, indicating that the second slice 110b is disabled in the one-slice mode. In some embodiments, disabling the second slice 110b may involve gating all power from the second slice 110b. Alternatively, in other embodiments, disabling the second slice 110b may involve freezing the state of the second slice 110b, and providing a relatively low power level to maintain the frozen state of the second slice 110b.

At least some portion of the first slice 110a is not disabled in the one-slice mode.
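As an illustrative aid only (the power logic 120 is hardware; all names below are hypothetical), the counter-based mode switching described above can be modeled with a short Python sketch:

```python
TWO_SLICE, ONE_SLICE = "two-slice", "one-slice"

class PowerLogicModel:
    """Behavioral sketch of the power logic 120 and cache counter 127."""

    def __init__(self, max_count):
        self.mode = TWO_SLICE        # normal (full-power) operating mode
        self.max_count = max_count
        self.count = 0

    def enter_one_slice(self):
        # Entering the one-slice mode initiates a count in the cache counter.
        self.mode = ONE_SLICE
        self.count = 0

    def tick(self):
        # The counter advances (e.g., once per cycle or per instruction);
        # on reaching the maximum count, the power logic switches the
        # cache unit back to the two-slice mode.
        if self.mode == ONE_SLICE:
            self.count += 1
            if self.count >= self.max_count:
                self.mode = TWO_SLICE
```

For instance, with a maximum count of three, the model stays in the one-slice mode for three ticks and then reverts to the two-slice mode on its own, mirroring the fallback behavior described above.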
As shown in FIG. 2, the one-slice mode involves disabling the cache memory 114a of the first slice 110a, but not disabling the super queue 112a and the interface unit 116a of the first slice 110a. Not disabling the super queue 112a and the interface unit 116a enables the core 106 to function properly in the one-slice mode (e.g., to maintain data transfer to/from the core 106).

Because the cache memory 114a is disabled (and the second slice 110b is disabled entirely), operating in the one-slice mode may result in a cache miss for every access to the cache unit 105. Further, in some embodiments, operating in the one-slice mode may reduce the total power consumed by the cache unit 105. In some embodiments, the power logic 120 may include functionality to balance any performance loss due to cache misses against the power savings resulting from using the one-slice mode.

In one or more embodiments, the power logic 120 switches the operating mode of the cache unit 105 based on a request from the power control unit 107. For example, in some embodiments, the power control unit 107 may generate a request to switch the operation of the cache unit 105. In response, the configuration registers 125 may be set to indicate the request from the power control unit 107. For example, a single bit of the configuration registers 125 may be set to "0" to indicate a request for the two-slice mode, and may be set to "1" to indicate a request for the one-slice mode. The power logic 120 then switches the operating mode of the cache unit 105 based on the settings of the configuration registers 125. In some embodiments, the power logic 120 may only switch the operating mode when "waking" (i.e., exiting) from a sleep state (e.g., a C6 state), after a reset of the processor 101, etc.

In one or more embodiments, the power control unit 107 may include functionality to determine a type of processing task expected to be performed by a given core 106.
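A minimal Python sketch of this deferred, register-driven switch (hypothetical names; the single mode-request bit and the switch-on-wake behavior follow the description above):

```python
class CacheUnitModel:
    """Sketch of the configuration-register-driven mode switch."""

    def __init__(self):
        self.mode_request_bit = 0    # 0 -> two-slice mode, 1 -> one-slice mode
        self.mode = "two-slice"

    def request_mode(self, bit):
        # The power control unit sets a configuration-register bit to
        # record the requested operating mode; the mode is not changed yet.
        self.mode_request_bit = bit

    def wake(self):
        # The power logic applies the requested mode only when the unit
        # wakes from a sleep state (e.g., a C6 state) or a reset.
        self.mode = "one-slice" if self.mode_request_bit else "two-slice"
```

Note how a request alone does not change the operating mode; the change takes effect on the next wake, as in the embodiments where switching only occurs on exit from a sleep state or reset.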
The power control unit 107 may then generate a request to switch operating modes of the cache unit 105 based on the determined type of processing task. This request may be provided to the power logic 120.

In one or more embodiments, the performance loss due to a cache miss for the cache unit 105 may be reduced when performing certain types of processing tasks in the core 106. For example, in some embodiments, the performance loss may be minimized when the type of processing task involves a high frequency of sleep states (e.g., a C6 state). Such types of processing tasks may include, e.g., video image processing.

In some embodiments, the power control unit 107 may generate a request to switch the cache unit 105 from two-slice mode to one-slice mode when the frequency of sleep states expected in the processing task meets and/or exceeds a threshold level. For example, assuming a threshold of eight sleep states per second, the power control unit 107 may request a switch to the one-slice mode if the expected frequency of sleep states in a scheduled task is nine or more sleep states per second. Further, in some embodiments, the power control unit 107 may generate a request to switch the cache unit 105 from one-slice mode to two-slice mode when the frequency of sleep states expected in a processing task again drops below the threshold level.

Referring now to FIG. 3A, shown is a sequence 300 for switching to a one-slice mode, in accordance with one or more embodiments. In one or more embodiments, the sequence 300 may be part of the power logic 120 shown in FIG. 1B. The sequence 300 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

At step 310, a cache unit is operated in a two-slice mode. For example, referring to FIG.
1B, the cache unit 105 is operated using both the first slice 110a and the second slice 110b.

At step 312, a request to switch the cache unit to a one-slice mode may be received. For example, referring to FIG. 1B, the power logic 120 may receive a request from the power control unit 107 to switch the cache unit 105 to a one-slice operating mode. In some embodiments, the power control unit 107 may generate this request based on one or more processing tasks expected to be performed by the core 106 associated with the cache unit 105.

At step 314, information related to the request (received at step 312) may be stored in the cache unit. For example, referring to FIG. 1B, one or more configuration registers 125 may be set to indicate that a request to switch the cache unit 105 to one-slice mode has been received.

At step 316, the cache unit may enter a sleep state or may be reset. For example, referring to FIG. 1B, the cache unit 105 may enter a sleep state (e.g., a C6 state), or may be reset.

At step 318, the cache unit may wake from the sleep state or reset. For example, referring to FIG. 1B, the cache unit 105 may enter a normal state (e.g., a C0 state) after waking from a sleep state or being reset.

At step 320, a one-slice mode may be initiated in the cache unit. For example, referring to FIG. 1B, the power logic 120 may read the configuration registers 125 after waking from a sleep state or reset, and then switch the cache unit 105 to operate in the one-slice mode. The one-slice mode involves using only the first slice 110a, and disabling the second slice 110b. The one-slice mode involves disabling the cache memory 114a of the first slice 110a, but not disabling the super queue 112a and the interface unit 116a of the first slice 110a. After step 320, the sequence 300 ends.

Optionally, in some embodiments, steps 314, 316, and 318 may be omitted from the sequence 300.
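Returning briefly to how the power control unit decides to issue the request of step 312: the sleep-state-frequency policy described earlier can be captured as a small helper (hypothetical name; the threshold of eight and the "nine or more" example follow the description above):

```python
def requested_mode(expected_sleep_states_per_sec, threshold=8):
    """Sketch of the power control unit's policy: request the one-slice
    mode when the expected frequency of sleep states in a task exceeds
    the threshold (e.g., nine or more per second against a threshold of
    eight), and the two-slice mode when it drops back below it."""
    if expected_sleep_states_per_sec > threshold:
        return "one-slice"
    return "two-slice"
```

With the example threshold of eight sleep states per second, a scheduled task expecting nine sleep states per second yields a one-slice request, while a task expecting seven yields a two-slice request.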
That is, in some embodiments, the one-slice mode may be initiated (step 320) upon receiving the request from the power control unit 107 (step 312).

Referring now to FIG. 3B, shown is a sequence 330 for switching to a two-slice mode, in accordance with one or more embodiments. In one or more embodiments, the sequence 330 may be part of the power logic 120 shown in FIG. 1B. The sequence 330 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

At step 340, a cache counter may be initiated upon entering a one-slice mode. For example, referring to FIG. 2, assume that the cache unit 105 is switched into the one-slice mode (e.g., after completing sequence 300 shown in FIG. 3A). In this example, a count may be initiated in the cache counter 127 when the cache unit 105 enters the one-slice mode. In some embodiments, the cache counter 127 may count up to a maximum count. Alternatively, in other embodiments, the cache counter 127 may count down to zero. The cache counter 127 may be advanced, e.g., for every processing cycle, for every instruction, etc.

At step 344, a determination is made about whether a request to switch the cache unit to the two-slice mode has been received. For example, referring to FIG. 2, the power logic 120 may determine whether a request to switch to the two-slice mode has been received from the power control unit 107.

If it is determined at step 344 that the request to switch to the two-slice mode has not been received, the sequence 330 may continue at step 348 (described below). However, if it is determined at step 344 that the request to switch to the two-slice mode has been received, then at step 346, information related to the request may be stored in the cache unit. For example, referring to FIG.
2, one or more configuration registers 125 may be set to indicate that a request to switch the cache unit 105 to two-slice mode has been received.

At step 348, a determination is made about whether the cache counter (initiated at step 340) has expired. For example, referring to FIG. 2, the power logic 120 may determine whether the cache counter 127 has reached the maximum count (or has counted down to zero).

If it is determined at step 348 that the cache counter has expired, the sequence 330 may continue at step 352 (described below). However, if it is determined at step 348 that the cache counter has not expired, then at step 350, a determination is made about whether the cache unit has awakened (i.e., exited) from a sleep state or reset. For example, referring to FIG. 2, the power logic 120 may determine whether the cache unit 105 has exited from a sleep state (e.g., a C6 state) or a reset.

If it is determined at step 350 that the cache unit has not awakened from a sleep state or reset, then the sequence 330 may return to step 348 to again determine whether the cache counter has expired. However, if it is determined at step 350 that the cache unit has awakened from a sleep state or reset, then at step 352, the two-slice mode may be initiated in the cache unit. For example, referring to FIG. 2, the power logic 120 switches the cache unit 105 to operate in the two-slice mode (i.e., using both the first slice 110a and the second slice 110b). After step 352, the sequence 330 ends.

Referring now to FIG. 3C, shown is a sequence 360 for initiating a two-slice mode, in accordance with one or more embodiments. In particular, the sequence 360 illustrates an exemplary expansion of step 352 (shown in FIG. 3B). The sequence 360 may be implemented in hardware, software, and/or firmware.
In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

At step 362, a trap event may be set in an uncore portion of a processor (e.g., processor 101 shown in FIG. 1A). At step 364, the trap event may be signaled to a re-order buffer (ROB) of the processor. At step 366, a fence may be initiated based on the trap event. At step 368, a determination is made about whether a drain request is set. If it is determined at step 368 that the drain request is not set, then the sequence 360 may return to step 368 to again determine whether the drain request is set. However, if it is determined at step 368 that the drain request is set, then at step 370, a determination is made about whether the super queue (e.g., super queue 112 shown in FIG. 1B) is empty.

If it is determined at step 370 that the super queue is not empty, then the sequence 360 may return to step 370 to again determine whether the super queue is empty. However, if it is determined at step 370 that the super queue is empty, then at step 372, the cache unit (e.g., cache unit 105 shown in FIG. 1B) and the instruction fetch unit may be stalled.

At step 374, the cache unit (e.g., cache unit 105 shown in FIG. 1B) may be switched to a two-slice mode. At step 376, the cache unit and the instruction fetch unit may be released. After step 376, the sequence 360 ends.

Note that the examples shown in FIGS. 1A-1B, 2, and 3A-3C are provided for the sake of illustration. For instance, while embodiments may be shown in simplified form for the sake of clarity, embodiments may include any number and/or arrangement of processors, cores, and/or additional components (e.g., buses, storage media, connectors, power components, buffers, interfaces, etc.). In particular, it is contemplated that, in some embodiments, the cache unit 105 may include any number of slices 110.
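As a rough software analogue of sequence 360 (hypothetical names; the trap, ROB, and fence steps are elided, and the actual mechanism is hardware), the drain, stall, switch, and release flow of steps 368-376 can be sketched as:

```python
class CacheUnitStub:
    """Minimal stand-in for the cache unit used by the sketch below."""

    def __init__(self):
        self.super_queue = []        # pending cache requests
        self.mode = "one-slice"
        self.stalled = False

def switch_to_two_slice(cache_unit):
    # Steps 368/370: wait until outstanding requests have drained and
    # the super queue is empty (modeled here by draining the list).
    while cache_unit.super_queue:
        cache_unit.super_queue.pop()   # a pending request completes
    # Step 372: stall the cache unit (and, in hardware, the fetch unit).
    cache_unit.stalled = True
    # Step 374: switch the cache unit to the two-slice mode.
    cache_unit.mode = "two-slice"
    # Step 376: release the cache unit.
    cache_unit.stalled = False
```

The key design point mirrored here is ordering: the mode switch only happens after the super queue is empty and the unit is stalled, so no in-flight request observes the slices mid-switch.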
In such embodiments, operating in one-slice mode involves disabling a sub-portion of the slices 110 included in the cache unit 105. It is further contemplated that specifics in the examples shown in FIGS. 1A-1B, 2, and 3A-3C may be used anywhere in one or more embodiments.

Referring now to FIG. 4, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 4, the processor 400 may be a multicore processor including a first die 405 having a plurality of cores 410a-410n of a core domain. The various cores 410a-410n may be coupled via an interconnect 415 to a system agent or uncore domain 420 that includes various components. As seen, the uncore domain 420 may include a shared cache 430. In addition, the uncore may include an integrated memory controller 440, a power control unit (PCU) 470, and various interfaces 450. The PCU 470 may include some or all of the functionality of the power control unit 107 described above with reference to FIG. 1A. Further, although not shown for ease of illustration in FIG. 4, in some embodiments, each of the cores 410a-410n may be associated with a cache unit 105 shown in FIGS. 1A-1B and 2.

With further reference to FIG. 4, the processor 400 may communicate with a system memory 460, e.g., via a memory bus. In addition, by way of interfaces 450, connection can be made to various off-package components such as peripheral devices, mass storage, and so forth. While shown with this particular implementation in the embodiment of FIG. 4, the scope of the present invention is not limited in this regard.

Referring now to FIG. 5, shown is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. As shown in the embodiment of FIG. 5, processor 500 includes multiple domains.
Specifically, a core domain 510 can include a plurality of cores 510a-510n, a graphics domain 520 can include one or more graphics engines, and a system agent domain 550 may further be present. Note that while only shown with three domains, understand that the scope of the present invention is not limited in this regard and additional domains can be present in other embodiments. For example, multiple core domains may be present, each including at least one core.

In general, each core 510 may further include low level caches in addition to various execution units and additional processing elements. In turn, the various cores may be coupled to each other and to a shared cache memory formed of a plurality of units of a last level cache (LLC) 540a-540n. In various embodiments, LLC 540 may be shared amongst the cores and the graphics engine, as well as various media processing circuitry. In some embodiments, each of the LLCs 540a-540n may include some or all of the functionality and/or components of the cache unit 105 shown in FIGS. 1A-1B and 2.

As seen, a ring interconnect 530 thus couples the cores together, and provides interconnection between the cores, graphics domain 520, and system agent circuitry 550. In the embodiment of FIG. 5, system agent domain 550 may include a display controller 552 which may provide control of and an interface to an associated display. As further seen, system agent domain 550 may also include a power control unit 555 to allocate power to the CPU and non-CPU domains. In some embodiments, the power control unit 555 may include some or all of the functionality of the power control unit 107 shown in FIG. 1A.

As further seen in FIG. 5, processor 500 can further include an integrated memory controller (IMC) 570 that can provide for an interface to a system memory, such as a dynamic random access memory (DRAM). Multiple interfaces 580a-580n may be present to enable interconnection between the processor and other circuitry.
For example, in one embodiment at least one direct media interface (DMI) interface may be provided as well as one or more Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) interfaces. Still further, to provide for communications between other agents such as additional processors or other circuitry, one or more interfaces in accordance with an Intel® Quick Path Interconnect (QPI) protocol may also be provided. As further seen, a peripheral controller hub (PCH) 590 may also be present within the processor 500, and can be implemented on a separate die, in some embodiments. Alternatively, in some embodiments, the PCH 590 may be external to the processor 500. Although shown at this high level in the embodiment of FIG. 5, understand that the scope of the present invention is not limited in this regard.

Referring to FIG. 6, an embodiment of a processor including multiple cores is illustrated. Processor 1100 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code. Processor 1100, in one embodiment, includes at least two cores, cores 1101 and 1102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1100 may include any number of processing elements that may be symmetric or asymmetric. Although not shown for ease of illustration in FIG. 6, in some embodiments, each of the cores 1101 and 1102 may be associated with a cache unit 105 shown in FIGS. 1A-1B and 2.

In one embodiment, a processing element refers to hardware or logic to support a software thread.
Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

Physical processor 1100, as illustrated in FIG. 6, includes two cores, cores 1101 and 1102. Here, cores 1101 and 1102 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 1101 includes an out-of-order processor core, while core 1102 includes an in-order processor core.
However, cores 1101 and 1102 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated ISA, a co-designed core, or other known core. Yet to further the discussion, the functional units illustrated in core 1101 are described in further detail below, as the units in core 1102 operate in a similar manner.

As shown, core 1101 includes two hardware threads 1101a and 1101b, which may also be referred to as hardware thread slots 1101a and 1101b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 1100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 1101a, a second thread is associated with architecture state registers 1101b, a third thread may be associated with architecture state registers 1102a, and a fourth thread may be associated with architecture state registers 1102b. Here, each of the architecture state registers (1101a, 1101b, 1102a, and 1102b) may be referred to as processing elements, thread slots, or thread units, as described above.

As illustrated, architecture state registers 1101a are replicated in architecture state registers 1101b, so individual architecture states/contexts are capable of being stored for logical processor 1101a and logical processor 1101b. In core 1101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1130, may also be replicated for threads 1101a and 1101b. Some resources, such as re-order buffers in reorder/retirement unit 1135, ILTB 1120, load/store buffers, and queues may be shared through partitioning.
Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 1115, execution unit(s) 1140, and portions of out-of-order unit 1135, are potentially fully shared.

Processor 1100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 6, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1101 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1120 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 1120 to store address translation entries for instructions.

Core 1101 further includes decode module 1125 coupled to fetch unit 1120 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 1101a and 1101b, respectively. Usually core 1101 is associated with a first ISA, which defines/specifies instructions executable on processor 1100. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode) which references/specifies an instruction or operation to be performed. Decode logic 1125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. As a result of the recognition by decoders 1125, the architecture or core 1101 takes specific, predefined actions to perform tasks associated with the appropriate instruction (e.g., one or more of the actions shown in FIGs. 3A-3C).
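The decode step described above (extract an opcode field, look it up, and dispatch the action it specifies) can be sketched with a toy table. The instruction format, opcode values, and mnemonics below are hypothetical illustrations, not from any real ISA:

```python
# Hypothetical 32-bit instruction format with the opcode in the top byte.
# The table maps each recognized opcode to a predefined action, the way
# decode logic passes recognized instructions on into the pipeline.
OPCODE_TABLE = {
    0x01: "load",
    0x02: "store",
    0x03: "add",
}

def decode(instruction: int) -> str:
    """Recognize an instruction from its opcode field."""
    opcode = (instruction >> 24) & 0xFF
    if opcode not in OPCODE_TABLE:
        raise ValueError(f"unrecognized opcode {opcode:#04x}")
    return OPCODE_TABLE[opcode]

print(decode(0x03000000))  # prints "add"
```

A real decoder emits micro-operations rather than strings, but the recognition structure is the same: a fixed field of the machine word selects a predefined behavior.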
It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single instruction or multiple instructions, some of which may be new or old instructions.

In one example, allocator and renamer block 1130 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 1101a and 1101b are potentially capable of out-of-order execution, where allocator and renamer block 1130 also reserves other resources, such as reorder buffers to track instruction results. Unit 1130 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1100. Reorder/retirement unit 1135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order.

Scheduler and execution unit(s) block 1140, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.

Lower level data cache and data translation buffer (D-TLB) 1150 are coupled to execution unit(s) 1140. The data cache is to store recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations.
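The virtual-to-physical translation cached by a D-TLB can be sketched as splitting an address into a page number and an offset. The 4 KiB page size and the particular page/frame numbers below are illustrative assumptions:

```python
PAGE_SHIFT = 12                      # assume 4 KiB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1

# Toy D-TLB: a small cache of recent virtual-page-number -> physical-frame
# translations (the entries here are made up for illustration).
tlb = {0x7F3A2: 0x00841}

def translate(vaddr: int) -> int:
    vpn = vaddr >> PAGE_SHIFT        # virtual page number
    offset = vaddr & PAGE_MASK       # byte offset within the page
    pfn = tlb[vpn]                   # TLB hit; a miss would walk the page table
    return (pfn << PAGE_SHIFT) | offset

print(hex(translate(0x7F3A2ABC)))    # prints "0x841abc"
```

On a TLB miss, hardware (or, on some architectures, software) walks the page table rooted at the page-table base register to fill in the missing entry.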
As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.

Here, cores 1101 and 1102 share access to higher-level or further-out cache 1110, which is to cache recently fetched elements. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache 1110 is a last-level data cache (the last cache in the memory hierarchy on processor 1100), such as a second or third level data cache. However, higher-level cache 1110 is not so limited, as it may be associated with or include an instruction cache. A trace cache (a type of instruction cache) may instead be coupled after decoder 1125 to store recently decoded traces.

In the depicted configuration, processor 1100 also includes bus interface module 1105 and a power controller 1160, which may perform power sharing control in accordance with an embodiment of the present invention. In some embodiments, the power controller 1160 may include some or all of the functionality of the power control unit 107 shown in FIG. 1A.

Historically, controller 1170 has been included in a computing system external to processor 1100. In this scenario, bus interface 1105 is to communicate with devices external to processor 1100, such as system memory 1175, a chipset (often including a memory controller hub to connect to memory 1175 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus 1105 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.

Memory 1175 may be dedicated to processor 1100 or shared with other devices in a system.
Common examples of types of memory 1175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Note that device 1180 may include a graphics accelerator, processor or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.

Note, however, that in the depicted embodiment, the controller 1170 is illustrated as part of processor 1100. Recently, as more logic and devices are being integrated on a single die, such as an SOC, each of these devices may be incorporated on processor 1100. For example, in one embodiment, memory controller hub 1170 is on the same package and/or die with processor 1100. Here, a portion of the core (an on-core portion) includes one or more controller(s) 1170 for interfacing with other devices such as memory 1175 or a graphics device 1180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, bus interface 1105 includes a ring interconnect with a memory controller for interfacing with memory 1175 and a graphics controller for interfacing with graphics processor 1180. Yet, in the SOC environment, even more devices, such as a network interface, coprocessors, memory 1175, graphics processor 1180, and any other known computer devices/interfaces, may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.

Embodiments may be implemented in many different system types. Referring now to FIG. 7, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 7, multiprocessor system 600 is a point-to-point interconnect system, and includes a first processor 670 and a second processor 680 coupled via a point-to-point interconnect 650. As shown in FIG.
7, each of processors 670 and 680 may be multicore processors, including first and second processor cores (i.e., processor cores 674a and 674b and processor cores 684a and 684b), although potentially many more cores may be present in the processors. Each of these processors can include any part of the central power controller 110 and/or the block power logic 130 described above with reference to FIG. 1. Although not shown for ease of illustration in FIG. 7, in some embodiments, each of the processor cores 674, 684 may be associated with one of the cache units 105 shown in FIGs. 1A-1B and 2.

Still referring to FIG. 7, first processor 670 further includes a memory controller hub (MCH) 672 and point-to-point (P-P) interfaces 676 and 678. Similarly, second processor 680 includes an MCH 682 and P-P interfaces 686 and 688. As shown in FIG. 7, MCHs 672 and 682 couple the processors to respective memories, namely a memory 632 and a memory 634, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor 670 and second processor 680 may be coupled to a chipset 690 via P-P interconnects 652 and 654, respectively. As shown in FIG. 7, chipset 690 includes P-P interfaces 694 and 698.

Furthermore, chipset 690 includes an interface 692 to couple chipset 690 with a high performance graphics engine 638 by a P-P interconnect 639. In turn, chipset 690 may be coupled to a first bus 616 via an interface 696. As shown in FIG. 7, various input/output (I/O) devices 614 may be coupled to first bus 616, along with a bus bridge 618 which couples first bus 616 to a second bus 620. Various devices may be coupled to second bus 620 including, for example, a keyboard/mouse 622, communication devices 626, and a data storage unit 628 such as a disk drive or other mass storage device which may include code 630, in one embodiment. Further, an audio I/O 624 may be coupled to second bus 620.
Embodiments can be incorporated into other types of systems, including mobile devices such as a smart cellular telephone, tablet computer, netbook, Ultrabook™, or so forth.

It should be understood that a processor core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

Any processor described herein may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which are available from Intel Corporation of Santa Clara, Calif. Alternatively, the processor may be from another company, such as ARM Holdings, Ltd., MIPS, etc. The processor may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The processor may be implemented on one or more chips. The processor may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

It is contemplated that the processors described herein are not limited to any system or device. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable.
In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

The following examples pertain to further embodiments.

In an example, in the second operating mode, the first cache slice may return a cache miss.

In an example, the processor may include a power control unit to generate a request to switch the operation of the cache unit.

In an example, the power control unit may be to generate the request based on a type of processing task expected to be performed by the first core.

The power logic may be further to: set the at least one configuration register to indicate that the cache unit is to switch to the second operating mode, and upon exiting a sleep state, switch from the first operating mode to the second operating mode.

The power logic may be further to: set the at least one configuration register to indicate that the cache unit is to switch to the first operating mode, and upon exiting a sleep state, switch from the second operating mode to the first operating mode.

In an example, the power logic may be further to, upon switching to
the second operating mode, initiate a count in a cache counter. In an example, the power logic may be further to, when the cache counter reaches a maximum count, switch from the second operating mode to the first operating mode.

Another example embodiment may be a system including a multicore processor and a dynamic random access memory (DRAM) coupled to the multicore processor. The multicore processor may include a plurality of tiles, each tile including a core and a cache unit, where the cache unit is private to the tile.

In an example, the multicore processor further includes a power control unit to generate a request to switch the operation of the cache unit between the first operating mode and the second operating mode. In an example, the power control unit may be to generate the request when a frequency of sleep states expected in a processing task exceeds a threshold level. In an example, the processing task may be video processing.

In an example, initiating the second operating mode may include: in response to the request, setting at least one configuration register to indicate receipt of the first request from a power control unit; and, upon waking from a sleep state, initiating the second operating mode in the cache unit based on the at least one configuration register.

In an example, the method may further include, upon initiating the second operating mode in the cache unit: initiating a cache counter to perform a count; and upon reaching a maximum count in the cache counter, switching the cache unit to the first operating mode, the first operating mode including use of both the first cache slice and the second cache slice.

In an example, the method may further include generating the request based on a type of processing task to be performed by a core associated with the cache unit.
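The mode-switching behavior described in the examples above — enter the second operating mode, run a cache counter, and return to the first operating mode when the counter reaches its maximum — can be sketched as a small state machine. The class shape and the `MAX_COUNT` value are assumptions for illustration, not details from the source:

```python
# Sketch of the described cache-unit behavior: in the second operating mode
# a counter runs; when it reaches its maximum, the unit switches back to the
# first operating mode (use of both cache slices).
FIRST_MODE, SECOND_MODE = 1, 2
MAX_COUNT = 3  # illustrative assumption

class CacheUnit:
    def __init__(self):
        self.mode = FIRST_MODE
        self.counter = 0

    def switch_to_second_mode(self):
        self.mode = SECOND_MODE
        self.counter = 0             # initiate the count on entry

    def tick(self):
        if self.mode == SECOND_MODE:
            self.counter += 1
            if self.counter >= MAX_COUNT:
                self.mode = FIRST_MODE   # switch back at maximum count

unit = CacheUnit()
unit.switch_to_second_mode()
for _ in range(MAX_COUNT):
    unit.tick()
print(unit.mode)  # back in the first operating mode
```

In the actual design the switch is gated on sleep-state exit via a configuration register; the counter here only captures the time-limited nature of the reduced-capacity mode.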
In an example, the processing task is video processing.

In an example, the method may further include generating the first request when a frequency of sleep states expected in a processing task exceeds a threshold level.

Another example embodiment may be a communication device arranged to perform the method of any of the above examples.

Another example embodiment may be at least one machine readable medium including a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method of any of the above examples.

Another example embodiment may be an apparatus for processing instructions configured to perform the method of any of the above examples.

Another example embodiment may be an apparatus comprising means for performing the method of any of the above examples.

References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase "one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in other suitable forms other than the particular embodiment illustrated, and all such forms may be encompassed within the claims of the present application.

While the present invention has been described with respect to a limited number of embodiments for the sake of illustration, those skilled in the art will appreciate numerous modifications and variations therefrom.
An integrated circuit manufacturing method is provided having a semiconductor substrate with a semiconductor device. A device dielectric layer is formed on the semiconductor substrate. A channel dielectric layer on the device dielectric layer has a channel opening formed therein. A barrier layer lines the channel opening. A conductor core fills the opening over the barrier layer. The conductor core and barrier layer are chemical-mechanically polished. The dielectric layer is then chemical-mechanically polished using a slurry containing ceria, a Ce(IV) oxide. Residual ceria on the conductor core and dielectric layer is then removed using a reducing agent to react the Ce(IV) oxide to the Ce(III) oxide for removal in an aqueous solution.
The invention claimed is:

1. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate having a semiconductor device provided thereon; forming a dielectric layer on the semiconductor substrate; forming an opening in the dielectric layer; depositing a conductor core to fill the opening and connect to the semiconductor device; chemical-mechanical polishing the dielectric layer using a slurry containing ceria, a Ce(IV) oxide; and dissolving residual ceria on the conductor core and dielectric layer using a reducing agent to react the Ce(IV) oxide to Ce(III) oxide.

2. The method of manufacturing an integrated circuit as claimed in claim 1 wherein dissolving residual ceria uses 1% to 5% by weight of the reducing agent in a solution.

3. The method of manufacturing an integrated circuit as claimed in claim 1 wherein dissolving residual ceria uses the reducing agent in a solution having a pH from 1.5 to 2.5.

4. The method of manufacturing an integrated circuit as claimed in claim 1 wherein dissolving residual ceria includes using a complexing agent.

5. The method of manufacturing an integrated circuit as claimed in claim 1 wherein dissolving residual ceria uses 1% to 5% by weight in water of a complexing agent.

6. The method of manufacturing an integrated circuit as claimed in claim 1 wherein dissolving residual ceria uses a complexing agent in a solution having a pH from 1.5 to 2.5.

7. The method of manufacturing an integrated circuit as claimed in claim 1 including water scrub brushing the conductor core and the dielectric layer after dissolving the residual ceria.

8. The method of manufacturing an integrated circuit as claimed in claim 1 including scrub brushing the conductor core and the dielectric layer using the reducing agent in solution.

9.
The method of manufacturing an integrated circuit as claimed in claim 1 wherein depositing the conductor core deposits a metal selected from a group consisting of copper, aluminum, gold, silver, a compound thereof, and a combination thereof.

10. The method of manufacturing an integrated circuit as claimed in claim 1 wherein forming the dielectric layers uses dielectric materials selected from a group consisting of silicon oxide (SiOx), silicon nitride (SixNy), silicon oxynitride (SiON), a dielectric material with a dielectric constant from 4.2 to 3.9, and a low dielectric material with a dielectric constant below 3.9, and a combination thereof.

11. A method of manufacturing an integrated circuit comprising: providing a semiconductor substrate having a semiconductor device provided thereon; depositing a device oxide layer on the semiconductor substrate; depositing a channel oxide layer on the device oxide layer; forming a channel opening in the channel oxide layer; depositing a barrier layer to line the channel opening; depositing a seed layer to line the barrier layer; depositing a conductor core to fill the channel opening and connect to the semiconductor device; chemical-mechanical polishing the barrier layer, seed layer, and conductor core; chemical-mechanical polishing the dielectric layer using a slurry containing ceria, a Ce(IV) oxide; and dissolving residual ceria on the conductor core and dielectric layer using a reducing agent to react the Ce(IV) oxide to Ce(III) oxide.

12. The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria includes using a reducing agent selected from a group consisting of phosphorous acid, hypophosphoric acid, oxalic acid, L-ascorbic acid, and a combination thereof.

13. The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria uses 1% to 5% by weight phosphorous acid as the reducing agent in water.

14.
The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria uses phosphorous acid as the reducing agent in a water solution having a pH from 1.5 to 2.5.

15. The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria includes using a complexing agent selected from a group consisting of ascorbic acid, citric acid, tartaric acid, malic acid, glutamic acid, and a combination thereof.

16. The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria includes using ascorbic acid as a complexing agent.

17. The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria uses 1% to 5% by weight ascorbic acid as a complexing agent in water.

18. The method of manufacturing an integrated circuit as claimed in claim 11 wherein dissolving residual ceria uses ascorbic acid as a complexing agent in a water solution having a pH from 1.5 to 2.5.

19. The method of manufacturing an integrated circuit as claimed in claim 11 including water scrub brushing the conductor core and the dielectric layer after dissolving the residual ceria.

20. The method of manufacturing an integrated circuit as claimed in claim 11 including water scrub brushing the conductor core and the dielectric layer using the reducing agent with a complexing agent in solution.

21. The method of manufacturing an integrated circuit as claimed in claim 11 wherein depositing the conductor core and seed layer deposit materials selected from a group consisting of copper, gold, silver, a compound thereof, and a combination thereof.

22. The method of manufacturing an integrated circuit as claimed in claim 11 wherein depositing the barrier layer deposits a material selected from a group consisting of titanium, tantalum, tungsten, a compound thereof, and a combination thereof.

23.
The method of manufacturing an integrated circuit as claimed in claim 11 wherein forming the dielectric layers uses dielectric materials selected from a group consisting of silicon oxide, silicon nitride, silicon oxynitride, a dielectric material with a dielectric constant from 4.2 to 3.9, and a low dielectric material with a dielectric constant below 3.9, and a combination thereof.
TECHNICAL FIELD

The present invention relates generally to semiconductor technology and more specifically to removal of chemical-mechanical polishing solutions in semiconductor processing.

BACKGROUND ART

In the manufacture of integrated circuits, after the individual devices such as the transistors have been fabricated in and on the semiconductor substrate, they must be connected together to perform the desired circuit functions. This interconnection process is generally called "metalization" and is performed using a number of different photolithographic, deposition, and removal techniques.

Briefly, individual semiconductor devices are formed in and on a semiconductor substrate and a device dielectric layer is deposited. Various techniques are used to form gate and source/drain contacts, which extend up to the surface of the device dielectric layer. In a process called the "damascene" technique, dielectric layers are deposited over the device dielectric layers and openings are formed in the dielectric layers. Conductor materials are deposited on the dielectric layers and in the openings. A process is used to planarize the conductor materials with the surface of the dielectric layers so as to cause the conductor materials to be "inlaid" in the dielectric layers.

More specifically for a single layer of interconnections, a "single damascene" technique is used in which the first channel formation of the single damascene process starts with the deposition of a thin first channel stop layer over the device dielectric layer. The first channel stop layer is an etch stop layer which is subject to a photolithographic processing step which involves deposition, patterning, exposure, and development of a photoresist, and an anisotropic etching step through the patterned photoresist to provide openings to the device contacts. The photoresist is then stripped. A first channel dielectric layer is formed on the first channel stop layer.
Where the first channel dielectric layer is of an oxide material, such as silicon oxide (SiO2), the first channel stop layer is a nitride, such as silicon nitride (SiN), so the two layers can be selectively etched. The first channel dielectric layer is then subject to further photolithographic process and etching steps to form first channel openings in the pattern of the first channels. The photoresist is then stripped. An optional thin adhesion layer is deposited on the first channel dielectric layer and lines the first channel openings to ensure good adhesion of subsequently deposited material to the first channel dielectric layer. Adhesion layers for copper (Cu) conductor materials are composed of compounds such as tantalum nitride (TaN), titanium nitride (TiN), or tungsten nitride (WN). These nitride compounds have good adhesion to the dielectric materials and provide fair barrier resistance to the diffusion of copper from the copper conductor materials to the dielectric material. High barrier resistance is necessary with conductor materials such as copper to prevent diffusion of subsequently deposited copper into the dielectric layer, which can cause short circuits in the integrated circuit. However, these nitride compounds also have relatively poor adhesion to copper and relatively high electrical resistance. Because of the drawbacks, pure refractory metals such as tantalum (Ta), titanium (Ti), or tungsten (W) are deposited on the adhesion layer to line the adhesion layer in the first channel openings. The refractory metals are good barrier materials, have lower electrical resistance than their nitrides, and have good adhesion to copper. In some cases, the barrier material has sufficient adhesion to the dielectric material that the adhesion layer is not required, and in other cases, the adhesion and barrier material become integral. The adhesion and barrier layers are often collectively referred to as a "barrier" layer herein. 
For conductor materials such as copper, which are deposited by electroplating, a seed layer is deposited on the barrier layer and lines the barrier layer in the first channel openings to act as an electrode for the electroplating process. Processes such as electroless, physical vapor, and chemical vapor deposition are used to deposit the seed layer. A first conductor material is deposited on the seed layer and fills the first channel opening. The first conductor material and the seed layer generally become integral, and are often collectively referred to as the conductor core when discussing the main current-carrying portion of the channels. A chemical-mechanical polishing (CMP) process is then used to remove the first conductor material, the seed layer, and the barrier layer above the first channel dielectric layer to form the first channels. When a layer is placed over the first channels as a final layer, it is called a "capping" layer and a "single" damascene process is completed. When the layer is processed further for placement of additional channels over it, the layer is a via stop layer. For more complex integrated circuits, a "dual damascene" technique is used in which channels of conductor materials are separated by interlayer dielectric layers in vertically separated planes and interconnected by vertical connections, or "vias". More specifically, the dual damascene process starts with the deposition of a thin etch stop layer, or the via stop layer, over the first channels and the first channel dielectric layer. A via dielectric layer is deposited on the via stop layer. Again, where the via dielectric layer is of an oxide material, such as silicon oxide, the via stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. Second channel stop and second channel dielectric layers are formed on the via dielectric layer. 
Again, where the second channel dielectric layer is of an oxide material, such as silicon oxide, the second channel stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. The second channel and via stop layers and second channel and via dielectric layers are then subject to further photolithographic process, etching, and photoresist removal steps to form via and second channel openings in the pattern of the second channels and the vias. An optional thin adhesion layer is deposited on the second channel dielectric layer and lines the second channel and the via openings. A barrier layer is then deposited on the adhesion layer and lines the adhesion layer in the second channel openings and the vias. Again, for conductor materials such as copper and copper alloys, a seed layer is deposited by electroless deposition on the barrier layer and lines the barrier layer in the second channel openings and the vias. A second conductor material is deposited on the seed layer and fills the second channel openings and the vias. A CMP process is then used to remove the second conductor material, the seed layer, and the barrier layer above the second channel dielectric layer to form the second channels. When a layer is placed over the second channels as a final layer, it is called a "capping" layer and the dual damascene process is completed. The capping layer may be an etch stop layer and may be processed further for placement of additional levels of channels and vias over it. Individual and multiple levels of single and dual damascene structures can be formed for single and multiple levels of channels and vias, which are collectively referred to as "interconnects". The use of the single and dual damascene techniques eliminates metal etch and dielectric gap fill steps typically used in the metalization process. 
The elimination of metal etch steps is important as the semiconductor industry moves from aluminum (Al) to other metalization materials, such as copper, which are very difficult to etch. One of the problems encountered during the process of forming copper (Cu) interconnects is that CMP is required, and the CMP of the dielectric layers uses a slurry containing ceria, or cerium oxide (CeO2), as an abrasive. Unfortunately, ceria is difficult to remove from dielectric materials, such as silicon dioxide (SiO2) and silicon nitride (SiN). The difficulty is due to the strong chemical bonding interactions of elemental cerium. Conventional removal methods use sulfuric acid (H2SO4) and hydrogen peroxide (H2O2) solutions at a very low pH below zero. These solutions are incompatible with water-brush scrubbers for cleaning semiconductor wafers. A solution to this problem has been long sought but has long eluded those skilled in the art.

DISCLOSURE OF THE INVENTION

The present invention provides a method for manufacturing an integrated circuit having a semiconductor substrate with a semiconductor device. A dielectric layer is formed on the semiconductor substrate and an opening is formed in the dielectric layer. A barrier layer is deposited to line the opening and a conductor core is deposited to fill the channel opening over the barrier layer. The barrier layer and the conductor core are subject to separate CMP processes. The dielectric layer is subject to a chemical-mechanical polishing (CMP) process using a slurry containing ceria, a Ce(IV) oxide. Residual ceria on the conductor core and dielectric layer is then removed after CMP using a reducing agent to react the Ce(IV) oxide to the Ce(III) oxide. The Ce(III) oxidation state is soluble in aqueous solutions compared to Ce(IV) oxide, which is insoluble in most reagents. This allows ceria to be compatible with water brush scrubbing systems.
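The chemistry underlying the removal step can be summarized as a reduction of cerium from the +4 to the +3 oxidation state. The half-reaction below is the standard textbook form; the oxide-level equation is shown only schematically, since in practice the reducing agent (e.g., phosphorous acid) takes up the liberated oxygen rather than releasing O2:

```latex
% Half-reaction: insoluble Ce(IV) is reduced to the soluble Ce(III) state
\mathrm{Ce^{4+}} + e^{-} \longrightarrow \mathrm{Ce^{3+}}

% Oxide-level view (schematic stoichiometry; reducing agent omitted)
4\,\mathrm{CeO_2} \longrightarrow 2\,\mathrm{Ce_2O_3} + \mathrm{O_2}
```

The practical consequence, as stated above, is that the Ce(III) product dissolves in the mildly acidic aqueous solution, so the residue can be rinsed and brush-scrubbed away instead of requiring strongly oxidizing sub-zero-pH chemistries.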
The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 (PRIOR ART) is a plan view of aligned channels with a connecting via;
FIG. 2 (PRIOR ART) is a cross-section of FIG. 1 (PRIOR ART) along line 2--2;
FIG. 3 is a cross-section similar to FIG. 2 (PRIOR ART) without the ceria particles shown in FIG. 2 (PRIOR ART);
FIG. 4 shows the chemical-mechanical polishing process of the present invention; and
FIG. 5 shows the ceria removal step of the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

Referring now to FIG. 1 (PRIOR ART), therein is shown a plan view of a semiconductor wafer 100 having as interconnects first and second channels 102 and 104 connected by a via 106. The first and second channels 102 and 104 are respectively disposed in first and second dielectric layers 108 and 110. The via 106 is an integral part of the second channel 104 and is disposed in a via dielectric layer 112. The term "horizontal" as used herein is defined as a plane parallel to the conventional plane or surface of a wafer, such as the semiconductor wafer 100, regardless of the orientation of the wafer. The term "vertical" refers to a direction perpendicular to the horizontal as just defined. Terms, such as "on", "above", "below", "side" (as in "sidewall"), "higher", "lower", "over", and "under", are defined with respect to the horizontal plane. Referring now to FIG. 2 (PRIOR ART), therein is shown a cross-section of FIG. 1 (PRIOR ART) along line 2--2. A portion of the first channel 102 is disposed in a first channel stop layer 114 and is on a device dielectric layer 116. Generally, metal contacts are formed in the device dielectric layer 116 to connect to an operative semiconductor device (not shown).
This is represented by the contact of the first channel 102 with a semiconductor contact 118 embedded in the device dielectric layer 116. The various layers above the device dielectric layer 116 are sequentially: the first channel stop layer 114, the first channel dielectric layer 108, a via stop layer 120, the via dielectric layer 112, a second channel stop layer 122, the second channel dielectric layer 110, and a next channel stop layer 124 (not shown in FIG. 1). The first channel 102 includes a barrier layer 126, which could optionally be a combined adhesion and barrier layer, and a seed layer 128 around a conductor core 130. The second channel 104 and the via 106 include a barrier layer 132, which could also optionally be a combined adhesion and barrier layer, and a seed layer 134 around a conductor core 136. The barrier layers 126 and 132 are used to prevent diffusion of the conductor materials into the adjacent areas of the semiconductor device. The seed layers 128 and 134 form electrodes on which the conductor material of the conductor cores 130 and 136 is deposited. The seed layers 128 and 134 are of substantially the same conductor material as the conductor cores 130 and 136 and become part of the respective conductor cores 130 and 136 after the deposition. During the manufacturing process, after deposition of the barrier layer 126, the seed layer 128, and the conductor core 130, chemical-mechanical polishing (CMP) processes are applied for planarization. The CMP of the first channel dielectric layer 108 uses a slurry containing ceria, or cerium oxide (CeO2), as an abrasive. Unfortunately, ceria is difficult to remove from dielectric materials, such as silicon oxide (SiO2) and silicon nitride (SiN). The difficulty is due to the strong chemical bonding interactions of elemental cerium. In the past, conventional methods have used sulfuric acid (H2SO4) with hydrogen peroxide (H2O2) solutions at a very low pH, below zero, for removal.
In addition to causing problems with subsequently deposited layers, the H2SO4 and H2O2 solutions corrode the brushes, which are used in the water-brush scrubbing systems for cleaning the wafers after CMP. Referring now to FIG. 3, therein is shown a cross-section similar to that shown in FIG. 2 (PRIOR ART) of a semiconductor wafer 200 of the present invention. The semiconductor wafer 200 has first and second channels 202 and 204 connected by a via 206. The first and second channels 202 and 204 are respectively disposed in first and second dielectric layers 208 and 210. The via 206 is a part of the second channel 204 and is disposed in a via dielectric layer 212. A portion of the first channel 202 is disposed in a first channel stop layer 214 and is on a device dielectric layer 216. Generally, metal contacts (not shown) are formed in the device dielectric layer 216 to connect to an operative semiconductor device (not shown). This is represented by the contact of the first channel 202 with a semiconductor contact 218 embedded in the device dielectric layer 216. The various layers above the device dielectric layer 216 are sequentially: the first channel stop layer 214, the first channel dielectric layer 208, a via stop layer 220, the via dielectric layer 212, a second channel stop layer 222, the second channel dielectric layer 210, and a next channel stop layer 224. The first channel 202 includes a barrier layer 226 and a seed layer 228 around a conductor core 230. The second channel 204 and the via 206 include a barrier layer 232 and a seed layer 234 around a conductor core 236. The barrier layers 226 and 232 are used to prevent diffusion of the conductor materials into the adjacent areas of the semiconductor device. The seed layers 228 and 234 form electrodes on which the conductor material of the conductor cores 230 and 236 is deposited.
The seed layers 228 and 234 are of substantially the same conductor material as the conductor cores 230 and 236 and become part of the respective conductor cores 230 and 236 after the deposition. Again, during the manufacturing process, after deposition of the barrier layer 226, the seed layer 228, and the conductor core 230, a chemical-mechanical polishing (CMP) process is applied for planarization, and the CMP uses a slurry containing ceria as an abrasive. The present invention makes use of the highly oxidizing nature of ceria: chemical reducing agents will react with the Ce(IV) oxide, converting the Ce to the Ce(III) oxidation state, which is soluble in aqueous solutions, unlike Ce(IV) oxide, or CeO2, which is insoluble in most reagents. The reducing agents include, but are not limited to, phosphorous acid (H3PO3), hypophosphorous acid (H3PO2), oxalic acid (H2C2O4), and L-ascorbic acid (C6H8O6). In another embodiment, the present invention makes use of the complexation of Ce ions by ligand species, which form strong chemical bonds to Ce(III) to increase Ce(IV) solubility. Ligand species include, but are not limited to, ascorbic acid, citric acid, tartaric acid, malic acid, and glutamic acid. Referring now to FIG. 4, therein is shown a step in the CMP process in which a pad 240 is used to planarize a first channel surface of the semiconductor wafer 200. Therein is thus shown the planarization of the first channel 202 and the first channel dielectric layer 208. A ceria-containing slurry 242 is used between the pad 240 and the semiconductor wafer 200 for the removal of the material 243. Referring now to FIG. 5, therein is shown the dispensation of a ceria dissolution solution 244 from a nozzle 246, which reacts with the Ce(IV) oxide and converts the Ce to the Ce(III) oxidation state, which is soluble in aqueous solutions, while the ligand species causes complexation of the Ce ions.
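The Ce(IV)-to-Ce(III) reduction described above can be sketched as balanced half-reactions; the two-electron oxidation of ascorbic acid to dehydroascorbic acid is a standard textbook assumption, not a reaction given in the patent:

```latex
% Cerium reduction half-reaction (acidic solution):
\mathrm{CeO_2 + 4\,H^+ + e^- \rightarrow Ce^{3+} + 2\,H_2O}

% Assumed ascorbic acid oxidation half-reaction:
\mathrm{C_6H_8O_6 \rightarrow C_6H_6O_6 + 2\,H^+ + 2\,e^-}

% Overall (charge- and mass-balanced):
\mathrm{2\,CeO_2 + C_6H_8O_6 + 6\,H^+ \rightarrow 2\,Ce^{3+} + C_6H_6O_6 + 4\,H_2O}
```

The net consumption of H+ in the overall reaction is consistent with the acidic operating point of the dissolution solution described in this mode of the invention.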
In one mode of the present invention, the Ce dissolution solution 244 is a solution of phosphorous acid (H3PO3), acting as a reducing agent, and ascorbic acid, acting as both a complexing agent and a reducing agent. The Ce dissolution solution 244 readily dissolves the ceria and Ce. The composition is in the range of 1%-5% of phosphorous acid and ascorbic acid in water. Dissolution of ceria and Ce increases as the phosphorous acid and ascorbic acid concentrations increase. The pH of the best embodiment is approximately 2.0, which allows the solutions to be used on conventional water-brush scrubbing systems. Alternative chemistries (used to clean CeO2 from wafers, but not in copper damascene applications) with comparable etch rates currently used in the industry have a pH equal to or less than 0. In various embodiments, the barrier layers are of materials such as tantalum (Ta), titanium (Ti), tungsten (W), compounds thereof, and combinations thereof. The seed layers (where used) are of materials such as copper (Cu), gold (Au), silver (Ag), compounds thereof, and combinations thereof with one or more of the above elements. The conductor cores, with or without seed layers, are of materials such as copper, aluminum (Al), gold, silver, compounds thereof, and combinations thereof. The dielectric layers are of dielectric materials such as silicon oxide (SiOx), tetraethoxysilane (TEOS), borophosphosilicate glass (BPSG), etc., with dielectric constants from 4.2 to 3.9, or low dielectric constant dielectric materials such as fluorinated tetraethoxysilane (FTEOS), hydrogen silsesquioxane (HSQ), benzocyclobutene (BCB), etc., with dielectric constants below 3.9. The stop layers and capping layers (where used) are of materials such as silicon nitride (SixNy), silicon oxynitride (SiON), or low dielectric constant materials such as silicon carbide (SiC), with dielectric constants below 5.5.
While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters hithertofore set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
The movement of individual wafers in a semiconductor facility is tracked via a set of coordinates that combines rotational reference points on the wafer with the wafer's location in the processing line. In an example embodiment, the method includes imparting angles of rotation on the wafers in different stages of the processing system. The different angles of rotation on each wafer are collected as data along with the wafer location in the processing system and the tool/equipment identification code. The combined angle-of-rotation and wafer-location data is used to map the path the wafer has traveled from the onset of processing. An important advantage of the invention is the increased control and traceability that it brings to wafer processing.
We claim:

1. A method of rotating wafers in a multiple stage wafer processing system, the method comprising: determining an incoming angle of rotation of the wafer as the wafer is presented at a first processing stage; rotating the wafer to an outgoing angle of rotation as the wafer is exiting the first processing stage and moving into a second processing stage; and recording as data the rotation angle and a corresponding wafer location in the processing system as the wafer moves through each stage of wafer processing.

2. The method of claim 1, further including the step of determining and recording a translation angle of rotation and the corresponding wafer location while the wafer is within the first processing stage.

3. The method of claim 1, further including the step of recording an identification code disposed on the wafer and recording the position of the wafer in a carrier slot.

4. The method of claim 1, wherein the step of recording data includes recording the data in connection with a particular stage and tool in the processing system.

5. The method of claim 1, wherein the step of recording data further includes recording angle of rotation wafer data in connection with the wafer moving through a multiple chamber location in the processing system.

6. The method of claim 1, wherein the step of rotating the wafer includes rotating the wafer to the exclusion of a certain processing stage in the system.

7. The method of claim 1, wherein the step of rotating the wafer includes rotating the wafer to the exclusion of certain areas on the wafer.

8. The method of claim 1, wherein the step of rotating the wafer includes randomly rotating the wafer axially and recording the wafer position data by carrier slot and by angle.

9. The method of claim 1, wherein the step of recording rotation angles includes using a computer arrangement to develop a historical wafer movement map composed of a plurality of sets of coordinates, each set of coordinates including the angle of rotation and the corresponding location of the wafer in the processing system.

10. The method of claim 1, wherein the wafer includes a flat panel display substrate.

11. A method of rotating wafers in a multiple stage wafer processing system, the method comprising: determining an incoming angle of rotation of the wafer as the wafer is presented at a first processing stage; rotating the wafer to an initial angle of rotation prior to determining the incoming angle of rotation, the initial angle of rotation measured from a predetermined starting point on the wafer; rotating the wafer to an outgoing angle of rotation as the wafer is exiting the first processing stage and moving into a second processing stage; and recording as data the rotation angle and a corresponding wafer location in the processing system as the wafer moves through each stage of wafer processing.

12. The method of claim 11, wherein the step of rotating the wafer to the initial angle of rotation includes placing more than one set of wafers in a single carrier and arranging each set of wafers with a distinct rotation angle with respect to the adjacent set of wafers within the carrier.

13. The method of claim 12, further including the step of arranging all of the wafers in the carrier such that each of the wafers has the distinct angle of rotation from any other wafer in the carrier.

14. The method of claim 13, further including the step of verifying that all of the wafers have the distinct angle of rotation.

15. A system for rotating wafers in a multiple stage wafer processing system, the system comprising: means for determining an incoming angle of rotation of the wafer as the wafer is presented to a first processing stage of wafer processing; means for rotating the wafer to an outgoing angle of rotation as the wafer is exiting the first processing stage and moving into a second stage of wafer processing; and a computer arrangement for recording as data the wafer rotation angles and a corresponding wafer location in the processing system as the wafer moves through each stage of wafer processing.

16. The system of claim 15, wherein the computer arrangement is adapted to develop a historical wafer movement map composed of a plurality of sets of coordinates, each set of coordinates including the angle of rotation and the corresponding location of the wafer in the processing system.

17. The system of claim 15, wherein the rotation angle determining means includes a scanning arrangement adapted to determine a translation angle of rotation on the wafer while the wafer is within a stage of processing.

18. The system of claim 15, further including a wafer carrier movement detector for determining the rate of rotation of a carrier moving through the processing system, such data being recorded into the computer arrangement.

19. The system of claim 15, further including a multiple chamber subsystem that imparts additional angles of rotation on the wafer that are recorded in the computer arrangement.

20. A method of rotating a wafer in a multiple stage wafer processing system, the method comprising: determining an incoming angle of rotation of the wafer as the wafer is moving into a first stage of processing; determining an angle of translation of the wafer as the wafer is in the first stage of processing; rotating the wafer to an outgoing angle of rotation as the wafer is exiting the first stage of processing and moving into a second stage of processing; and tracking and recording as data the rotation angles and the corresponding wafer location in the processing system as the wafer moves through each stage of wafer processing.

21. A method of rotating wafers in a multiple stage wafer processing system, the method comprising: determining an incoming angle of rotation of the wafer as the wafer is presented at a first processing stage; rotating the wafer to an initial angle of rotation prior to determining the incoming angle of rotation, the initial angle of rotation measured from a predetermined starting point on the wafer; and rotating the wafer to an outgoing angle of rotation.

22. The method of claim 21, further including the step of recording as data the rotation angle and a corresponding wafer location in the processing system as the wafer moves through each stage of wafer processing.
FIELD OF THE INVENTION

The present invention generally relates to processing of material in a manufacturing plant and, more particularly, to methods and systems for tracking the movement of wafers in a semiconductor processing plant.

BACKGROUND OF THE INVENTION

Conventional manufacturing plants move material to be processed through a manufacturing process having several processing areas. Currently these material lots are tracked in larger quantities that may be disposed in a carrier for ease of movement throughout the facility. Some manufacturing processes require that the item being processed be rotated regularly in order to ensure that the item is properly processed, such as when painting an object or when applying a coating to a substrate. In the case of a mechanical process, the object is rotated to ensure that the tooling is being worn evenly or that the tooling is mechanically treating the object evenly. Even though some of these items may be individually processed, or processed in small lots, the items may form part of a larger lot being manufactured, and it is difficult to distinguish the progress of the individual item as it moves through the processing line. As the number of processing steps increases, tracking becomes even more difficult. This is particularly a problem in the processing of wafers in a semiconductor processing plant.

A conventional semiconductor fabrication plant typically includes multiple fabrication areas or bays interconnected by a path, such as a conveyor belt. Each bay generally includes the requisite fabrication tools (interconnected by a subpath) to process semiconductor wafers for a particular purpose, such as photolithography, chemical-mechanical polishing or chemical vapor deposition, for example. Material stockers or stocking tools generally lie about the plant and store semiconductor wafers waiting to be processed. Each material stocker typically services two or more bays and can hold hundreds of cassettes.
The wafers are usually stored in cassettes in groups of about 25 wafers. The wafers are then disposed within a carrier and move from one process step to another in the carrier. The carriers are usually tracked by their carrier code by a computer system as they move through the plant.

Once a lot has been retrieved, and the equipment has been set up, the operation on the wafers by a particular piece of equipment, or "tool," can begin. At this point, the lot is "moved-in" to the operation. An operator on the line then communicates this information to the host computer. The lot remains in this state until the operation is completed. Once the operation is completed, the operator must perform tests and verifications on the wafers. When all tests and verifications have been performed, the host computer application program must be notified. Wafers may have moved from one cassette to another as a result of the operation; therefore the host application and computer have to be notified of these moves. The operator then places the cassette of "moved-out" wafers in the material stocker to await orders as to the location of the next piece of equipment that will perform operations on the wafers.

The semiconductor fabrication plant, including the bays, material stockers and the interconnecting path, typically operates under control of a distributed computer system running a factory management program. In this environment, the automated material handling system (AMHS) may conceptually include the cassettes, the transportation system (e.g., paths) and control system (e.g., the distributed computer system). An empty carriers management system as well as a separate test wafer management system may also form part of the AMHS.

Data gathered during the course of wafer processing is used to diagnose yield problems and forms the basis of yield improvement efforts.
Such data includes parametric electrical test data gathered on individual circuits and test structures fabricated on the wafers, as well as wafer sort data which tests the suitability for use of the wafers once wafer processing is completed. One of the possible sources of yield variation is the order in which wafers in a lot are processed at a given processing step. When the processing is done one wafer at a time per step, yield variation may occur due to a build-up of contaminants, uneven heating of a processing chamber or another physical aspect that changes during the processing of the lot. In a batch operation, the physical location of the wafer in the batch processing equipment may influence uniformity of the processing effects across the lot. In an example where wafers are moving through a contaminated chamber, if the order in which each wafer is processed is known, then the final wafer yield may be plotted against the processing order in this step. For each wafer in a lot, a drop-off in yield versus processing order would be observed due to the contamination problem. This data is used to make adjustments to the line to improve yield; however, this wafer tracking method lacks the level of precision in the data collected that is required by chip plants today.

In tracking the wafer processing order, specialized equipment has been used to read scribed wafer identifiers, either immediately prior to or after critical processing steps, and to store this data for later correlation with device performance. Randomizing the order of the wafers prior to such steps is often done to ensure effects are not compounded. The wafer positional data is fed into a computer system, the device performance metrics for a wafer lot of interest are manually entered, and then all possible graphs of the device metrics for that lot versus wafer processing order at each step are generated. The data is then reviewed to determine those steps at which the processing order may affect performance.
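As a toy illustration of the yield-versus-processing-order analysis described above, a drop-off can be flagged with a simple correlation between processing position and yield. The yield numbers below are hypothetical, purely for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-wafer yields (%), listed in the order the wafers were
# processed at one step of interest -- not data from the patent.
processing_order = list(range(1, 11))
yields = [98.1, 97.6, 97.8, 96.9, 96.2, 95.8, 95.1, 94.3, 93.7, 92.9]

r = pearson(processing_order, yields)
# A strongly negative r suggests yield falls with processing order,
# consistent with e.g. a build-up of contaminants in the chamber.
print(round(r, 3))
```

In practice the same check would be repeated for every step at which the processing order was recorded, and the steps with the strongest correlations reviewed first.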
This type of approach to tracking wafers can be costly in its implementation due to the amount of hardware and software needed to randomize the wafer order and interface with the wafer processing system's main computer database.

SUMMARY OF THE INVENTION

The present invention is directed to addressing the above and other needs in connection with improving wear rates of tooling and equipment in a semiconductor processing line and improving traceability of product as it moves through the manufacturing process.

According to one aspect of the invention, it has been discovered that by rotating a wafer to a defined angle of rotation and recording the rotation angle and the wafer's corresponding location in the processing line, improved accuracy in wafer tracking is achieved. It has also been discovered that tooling wear may be more controlled if the wafer being processed is moved axially prior to processing. Accordingly, a method of rotating a wafer in a multiple stage wafer processing system includes determining an incoming angle of rotation of the wafer as the wafer is presented to a first processing stage of wafer processing. As the wafer is exiting the first processing stage and moving into a second processing stage, the wafer is rotated to an outgoing angle of rotation. The angle of rotation and wafer location is recorded as a set of coordinates for each wafer as the wafer moves through each stage of wafer processing. In a related embodiment, a translation angle of rotation can also be imparted on the wafer while the wafer is within a stage of wafer processing.

According to another aspect of the invention, a system for rotating wafers in a multiple stage wafer processing system includes a scanning arrangement for determining an incoming rotation angle on a wafer as the wafer is presented to a first processing stage.
A rotating apparatus rotates the wafer to an outgoing angle of rotation as the wafer is exiting the first processing stage and moving into a second processing stage. A computer arrangement records and tracks the angle of rotation and the corresponding wafer location as a set of coordinates for each wafer as the wafer moves through each stage of wafer processing.

The above summary of the present invention is not intended to describe each illustrated embodiment or every implementation of the present invention. The figures in the detailed description that follow more particularly exemplify these embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be more completely understood in consideration of the following detailed description of various embodiments of the invention in connection with the accompanying drawings, in which:

FIG. 1 is a carrier having a set of wafers arranged in accordance with one embodiment of the invention;
FIG. 2 is a process flow diagram of an example wafer process line and the angles that the wafer is rotated to as the wafer moves through the process in accordance with one embodiment of the invention;
FIG. 3 is a process flow diagram of an example wafer process line and the three wafers that move through different stages of the process line in accordance with one embodiment of the invention; and
FIG. 4 is a flowchart of the manner in which objects are rotated and tracked in a manufacturing line in accordance with one embodiment of the invention.

While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described.
On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION

The present invention is generally directed to a method and system for rotating an object in a manufacturing plant. The invention is particularly suited for rotating wafers and recording their movement as they progress through a wafer processing system. While the present invention is not necessarily limited to a wafer processing application, the invention will be better appreciated using a discussion of exemplary embodiments in such a specific context.

In an example embodiment, a method of rotating wafers in a multiple stage wafer processing system includes imparting angles of rotation on wafers in different stages as the wafer moves through the processing line. The angles of rotation on each wafer are collected as data along with the corresponding wafer location in the process and the tool/equipment identification code. The data combination will serve as a set of coordinates or markers that can be used to track the movement of the wafer from the onset of processing. In one application, the data is used to develop wafer movement maps that detail the historical movement of each wafer and serve as analysis tools.

Referring now to the figures, FIG. 1 illustrates an example embodiment of a wafer lot 10 arranged in a carrier 12 in accordance with one embodiment of the invention. Carrier 12 has a series of slots 13 that hold individual wafers 14 therein for movement through a wafer processing system. Wafers 14 have a slot or notch 16 located along the circumference that serves as a point of reference. In this example, slot 16 is at 0 degrees and serves as the starting point from which the wafer is rotated axially. The wafers are rotated to different angles of rotation at the onset of processing.
In a related embodiment, the wafers are rotated randomly, as long as each wafer has a distinct angle of rotation before moving through the process. In another related embodiment, where individual wafer data is not desired and only wafer lot data is of interest, the wafers have the same initial angle of rotation at the onset of wafer processing. In one example, each wafer can be given about 360 angles of rotation, of 1 degree each, excluding the slot portion of the wafer and the scribe portion.

In other manufacturing applications, it is important to identify the axis of rotation of the object and the starting or reference point from which the angle of rotation will be measured. For example, where the object is a thin film display panel, the axis of rotation is similar to that of a wafer in that the panel is flat and acts as a substrate while it is being processed. In one instance, the panel has about 4 main angles of rotation due to the panel's square shape.

Where the wafers are to be subjected to common process steps, such as heating in a furnace, the wafers are usually arranged in tubes. Since many tubes include up to 100 wafers, in this example each set of wafers is to have a distinct angle of rotation with respect to the adjacent wafer set. In a related example, each wafer has a distinct angle of rotation with respect to all of the wafers in the tube. Referring briefly to FIG. 1, each wafer also has a scribe or a code 18 located on the wafer for identifying the wafer. In addition, each carrier and cassette in the wafer processing system is also identified and tracked by an identification tag, such as a bar code, which is read by a sensor along the processing path.

Referring now to FIG. 2, a process flow diagram exemplifies a wafer processing line 20 that has a computer arrangement coupled thereto. FIG. 2 also illustrates a wafer having different angles of rotation as it moves through the wafer process.
The different angles of rotation correspond to the various steps of the process. At location 22, the wafer lot is started and wafer 22A has an initial angle of rotation of 0 degrees. The wafer is also identified at this point by its code and slot position in the carrier, and this information is recorded in a computer arrangement 28. The movement of the wafer is tracked with this information, and the successive angles of rotation are used to create a historical map of the movement of the wafer through the process.

In another embodiment, it is advantageous to impart an initial angle of rotation at 22A, either randomly or at a predetermined angle. At location 23, the wafer is rotated to an angle of 45 degrees, now 23A, and scanned for identification. In this example, the rotation of the wafer is done with a wafer sorter that scans and sorts the wafers. The sorter identifies the wafer and the slot location and usually includes a robotic arm that imparts an angle of rotation on the wafer. The data that is generated after the scanning of the wafers is then recorded in the database of computer 28, with the computer being coupled to wafer processing line 20. Wafer 23A has an incoming angle of 45 degrees as it proceeds into the first stage of processing at location 24. A translation angle is added to the wafer due to the pick and place action (possibly by a robotic arm) that occurs as the wafer is removed from the carrier and placed into the first stage at location 24, resulting in wafer 24A. After the translation angle is added, the wafer has an angle of rotation of 90 degrees. The wafer is scanned at 24B and the rotation angle is recorded in the computer database. Wafer 24A exits the first stage at location 24 and is again rotated another 45 degrees at location 25 to result in wafer 25A with an angle of rotation of 135 degrees. The new angle is scanned and recorded at computer 28 as the outgoing angle of the process.
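The walk through FIG. 2 above amounts to appending a (location, angle) coordinate pair to a per-wafer record at each scan. A minimal sketch of such a movement map follows; the function name, wafer identifier, and location labels are illustrative assumptions, not terms from the patent:

```python
# Per-wafer movement map: wafer id -> ordered list of (location, angle) pairs.
movement_map = {}

def record(wafer_id, location, angle_deg):
    """Append one scanned coordinate; angles are normalized to 0-359 degrees."""
    movement_map.setdefault(wafer_id, []).append((location, angle_deg % 360))

# Walk of FIG. 2: lot start at 0 degrees, sorter rotates to 45, the
# pick-and-place translation brings it to 90, the outgoing rotation to 135,
# and the second-stage translation to 180.
for loc, angle in [("lot_start", 0), ("sorter", 45), ("stage1_in", 90),
                   ("stage1_out", 135), ("stage2_in", 180)]:
    record("wafer_22A", loc, angle)

print(movement_map["wafer_22A"][-1])  # ('stage2_in', 180)
```

The ordered list for each wafer is exactly the historical movement map the description refers to: replaying it recovers the path and orientation of the wafer at every recorded point.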
When the wafer moves into a second stage of processing at location 26, another translation angle is added to the wafer, resulting in wafer 26A at 180 degrees.

Referring to FIG. 3, a process flow diagram of an example wafer processing line 30 is illustrated with three wafers (A-C) moving through the processing line. Wafer processing starts at 32 and proceeds through four processing locations 34, 36, 38 and 40 before coming to an end at 42. A computer 44 is coupled to the processing line at about every point in the processing line in order to collect data on the movement of wafers in the system. Wafer A moves through the processing line, as indicated by the numbers under the locations, and wafer A is moved through each stage of the processing line. In this example, angles of rotation are imparted upon every movement of wafer A through the processing line. The angle of rotation data is tied to the corresponding processing stage and tool, and this information is then recorded in the computer.

FIG. 3 is also representative of processing three sets of wafers through a processing line. Although each set of wafers may be required to be processed according to its own processing recipe, the group of wafers may have a common processing step, such as going through the furnace in a tube holding about 100 wafers. In one example, the wafers are arranged such that each set of wafers has a different angle of rotation with respect to the adjacent set of wafers. In another example, each wafer in the tube has a different angle of rotation that is distinct from any other wafer. The processing line includes a scanning device for verifying that the angles of rotation of the wafers are distinct from each other before the wafers proceed through the line.

Wafer sets B and C also move through the processing line but follow different routes due to the lot sizes and the types of processing recipes applied to the wafers. Wafer set B is processed through stages at locations 34, 38 and 40.
The angle of rotation data for wafer set B differs from that of wafer set C in that more angles of rotation are imparted because more processing steps are involved. As a whole, the set of angles for wafer set B versus wafer set C is also different due to the different path taken during processing. In both cases, the angles of rotation are tied to a corresponding processing stage and tool; this data is then recorded in computer 44 in order to create the historical movement map of each wafer or wafer lot.

In a related embodiment, in the second stage of processing, location 36 represents a multiple-chamber subsystem having a number of tools associated therewith and subprocesses that a wafer moves through. Additional angles of rotation are imparted in this subsystem and are then recorded as part of the mapping process. In a related embodiment, one of the chambers includes a rotating table device that is used to create a balanced subprocess, such as wafer coating. The present invention integrates the rate of rotation of the table into the calculations of the angles of rotation and records this data as well in the movement map of the wafer.

Referring to FIG. 4, flowchart 50 illustrates an example of the flow of the method of rotating a wafer in accordance with an embodiment of the present invention. At 52, the axis of rotation of the wafer is defined, as well as the starting point on the wafer for measuring the subsequent angles of rotation. At 54, an optional step in processing includes imparting an initial angle of rotation on the wafer and recording the data in the database of computer arrangement 60. At 56, an incoming angle of rotation is defined for the wafer and this data is recorded at 60. The wafer then arrives at a first processing stage at 58. As the wafer is moved into the first processing stage, the action of picking up the wafer and placing it in the processing stage imparts a translation angle.
The translation angle is then defined at 62 and recorded at 60. Once the processing at the first stage is complete, a mechanical arm or rotating table rotates the wafer to give it an outgoing angle of rotation at 64. The wafer then exits the first stage at 66 and continues to the second stage, the wafer location and rotation angle being recorded at 60. The flow repeats itself at 56 as the wafer is identified and the incoming angle of rotation of the wafer is determined and recorded. The flow is equally applicable to other items, such as flat panel displays. An additional step in the flow can include an angle verification step to ensure that the wafers are at the angle of rotation that was originally intended.

In some parts of the processing system, it is advantageous to stop rotating the wafers, such as in the photolithography area, due to alignment issues. However, upon completion, the wafers can be returned to the angle of rotation that they had prior to arriving at the photolithography area and then moved on to the next processing stage. In a related embodiment, a control system is included that captures wafer-processing data from prior production runs. The control system data is then shared with the computer arrangement of the rotation system and used to make adjustments up and down the line to improve the processing of wafers. For instance, the angles of rotation that are being imparted on the wafers can change due to some change in conditions on the line. The change can be externally or internally driven, but it is now manageable with a feedback control loop that is integrated into the processing system.

As noted above, the present invention is applicable to a number of techniques for rotating and tracking material that is being processed in a manufacturing plant. Accordingly, the present invention is not necessarily limited to the particular examples described above, but is intended to cover all aspects of the invention as fairly set out in the attached claims.
For instance, while the rotation and tracking of wafers in a semiconductor facility are illustrated, other positional adjustments may be made to various objects during processing. These adjustments can lead to improvements in the product, in the manufacturing process, or in the yield of the product. Various modifications, equivalent processes, as well as numerous structures to which the present invention may be applicable, will be readily apparent to those of skill in the art to which the present invention is directed upon review of the present specification. The claims are intended to cover such modifications and devices.
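The tracking scheme described above (the flow of flowchart 50 and the FIG. 2 walk-through) can be sketched in code. This is an illustrative sketch only; the class, method, and field names below are not taken from the disclosure, which describes the recording computer only in general terms.

```python
from collections import defaultdict

class WaferTracker:
    """Records each rotation imparted to a wafer and builds the
    historical movement map described in flowchart 50.
    All names here are illustrative, not from the disclosure."""

    def __init__(self):
        # wafer id -> cumulative angle of rotation (degrees, mod 360)
        self.angle = defaultdict(float)
        # wafer id -> list of (stage, event, resulting angle) records
        self.history = defaultdict(list)

    def rotate(self, wafer_id, stage, event, delta_deg):
        """Impart a rotation (an incoming, translation, or outgoing
        angle) and record it, as the computer arrangement would."""
        self.angle[wafer_id] = (self.angle[wafer_id] + delta_deg) % 360
        self.history[wafer_id].append((stage, event, self.angle[wafer_id]))
        return self.angle[wafer_id]

    def angles_distinct(self, wafer_ids):
        """Verification step: confirm every wafer in a tube has a
        distinct angle of rotation before proceeding through the line."""
        angles = [self.angle[w] for w in wafer_ids]
        return len(set(angles)) == len(angles)

# Walk a wafer through the FIG. 2 example: 0 degrees at the start,
# +45 at the sorter, +45 translation into stage one, +45 outgoing,
# +45 translation into stage two -> 180 degrees.
t = WaferTracker()
t.rotate("W1", "sorter", "initial scan", 45)
t.rotate("W1", "stage 1", "translation", 45)
t.rotate("W1", "stage 1", "outgoing", 45)
final = t.rotate("W1", "stage 2", "translation", 45)
print(final)  # 180.0
```

The `history` list is the "historical map" of the wafer's movement; `angles_distinct` corresponds to the scanning device that verifies the angles in a tube are distinct before the wafers proceed through the line.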
Particular embodiments described herein provide an electronic device, such as a notebook computer or laptop, that includes a circuit board coupled to a plurality of electronic components (which may include any type of components, elements, circuitry, etc.). One particular example implementation of the electronic device may include a low profile hinge design that includes a micro-hinge. The micro-hinge can couple a first element to a second element and can include a first attachment that couples to the first element, a second attachment that couples to the second element, and a plurality of linkages that couple the first attachment to the second attachment. The low profile hinge can further include a plurality of micro-hinges and a plurality of support rods.
1. An apparatus for docking a computing device, the apparatus comprising:
a docking device that receives the computing device, wherein the docking device includes a keyboard and a hinge that connects the computing device to the keyboard,
wherein the hinge is configured to allow the computing device to rotate relative to the keyboard into a laptop orientation when the computing device is connected to the hinge, and
wherein the hinge comprises a plurality of interconnected parallel hinge segments at least partially encapsulated in a flexible cover, each hinge segment rotating about a respective one of a plurality of parallel axes of the hinge.
2. The apparatus of claim 1, wherein the computing device comprises a tablet computer.
3. The apparatus of any of claims 1-2, wherein the hinge facilitates an electrical connection between the computing device and the keyboard.
4. The apparatus of claim 3, wherein the hinge comprises an electrical conduit.
5. The apparatus of any of claims 1-4, wherein the plurality of hinge segments comprises at least four interlocked hinge segments.
6. The apparatus of any of claims 1-5, wherein rotation of the hinge facilitates both an open position and a closed position of the laptop orientation.
7. The apparatus of any of claims 1-6, wherein the hinge further comprises a plurality of support rods.
8. The apparatus of any of claims 1-7, wherein the hinge is connected to a first end of the computing device, a first edge of the first end comprises a length of the first end, and the plurality of parallel axes of the hinge are each parallel to each of the first edge and the second edge.
9. The apparatus of any of claims 1-8, wherein the plurality of parallel hinge segments comprises a plurality of micro-hinges.
10. A method for docking a computing device, the method comprising:
docking the computing device to a docking device, wherein the docking device includes a keyboard and a hinge coupling the computing device to the keyboard, wherein the hinge includes a plurality of interconnected parallel hinge segments at least partially enclosed in a flexible cover, each hinge segment rotating about a respective one of a plurality of parallel axes of the hinge;
rotating the computing device relative to the keyboard into a laptop orientation using the hinge; and
undocking the computing device from the docking device.
11. A system comprising means for performing the method of claim 10.
12. A system comprising:
a computing device; and
a docking device that receives the computing device, wherein the docking device includes a keyboard and a hinge that connects the computing device to the keyboard,
wherein the hinge is configured to allow the computing device to rotate relative to the keyboard into a laptop orientation when the computing device is connected to the hinge, and
wherein the hinge comprises a plurality of interconnected parallel hinge segments at least partially encapsulated in a flexible cover, each hinge segment rotating about a respective one of a plurality of parallel axes of the hinge.
13. The system of claim 12, wherein the computing device comprises a display.
14. The system of claim 13, wherein the display comprises a touch screen display.
15. The system of any of claims 12-14, wherein the computing device comprises a tablet computer.
16. The system of any of claims 12-15, wherein the hinge facilitates an electrical connection between the computing device and the keyboard.
17. The system of claim 16, wherein the hinge comprises an electrical conduit.
18. The system of any of claims 12-17, wherein the plurality of hinge segments comprises at least four interlocked hinge segments.
19. The system of any of claims 12-18, wherein rotation of the hinge facilitates both an open position and a closed position of the laptop orientation.
20. The system of any of claims 12-19, wherein the hinge further comprises a plurality of support rods.
21. The system of any of claims 12-20, wherein the hinge is connected to a first end of the computing device, a first edge of the first end comprises a length of the first end, and the plurality of parallel axes of the hinge are each parallel to each of the first edge and the second edge.
22. The system of any of claims 12-21, wherein the plurality of parallel hinge segments comprises a plurality of micro-hinges.
Micro-hinges for electronic devices

This application is a divisional application of the application of the same title with application number 201580011089.4, filed on February 28, 2015.

Technical Field

The embodiments described herein relate generally to the field of hinges, and more specifically to micro-hinges for electronic devices.

Background

End users have more choices of electronic devices than ever before. A number of notable technology trends are currently underway (for example, more computing devices, more devices that can change into different configurations, etc.), and these trends are changing the landscape of electronic devices. One of these trends is the hybrid laptop computer (e.g., convertible computers, foldable notebooks, etc.). Hybrid laptop computers are single-piece mobile computers that can include both laptop and tablet configurations. To transition from a laptop configuration to a tablet configuration, the display or screen can typically be rotated, twisted, or flipped over the keyboard.
Although hybrid laptop computers are an attractive way of enabling conversions from laptop-to-tablet configurations, in some designs the hinge may be cumbersome and limit the form factor of the device.

Brief Description of the Drawings

The embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

FIG. 1A is a simplified orthographic view showing an embodiment of an electronic device in a closed clamshell configuration according to one embodiment of the present disclosure;
FIG. 1B is a simplified orthographic view showing an embodiment of an electronic device in an opened clamshell configuration according to one embodiment of the present disclosure;
FIG. 1C is a simplified frontal perspective view showing an embodiment of an electronic device in an open planar configuration according to one embodiment of the present disclosure;
FIG. 1D is a simplified frontal perspective view showing an embodiment of an electronic device in a tablet configuration according to one embodiment of the present disclosure;
FIG. 1E is a simplified frontal perspective view showing an embodiment of an electronic device in a tablet configuration according to one embodiment of the present disclosure;
FIG. 2 is a simplified frontal perspective view showing an embodiment of a portion of a hinge in accordance with one embodiment of the present disclosure;
FIG. 3 is a simplified frontal projection showing an embodiment of a portion of a hinge according to one embodiment of the present disclosure;
FIG. 4 is a simplified frontal perspective view showing an embodiment of a portion of a hinge in accordance with one embodiment of the present disclosure;
FIG. 5A is a simplified frontal perspective view showing an embodiment of a portion of an electronic device in accordance with one embodiment of the present disclosure;
FIG. 5B is a simplified frontal perspective view showing an embodiment of a portion of an electronic device in accordance with one embodiment of the present disclosure;
FIG. 6A is a simplified frontal perspective view showing an embodiment of an electronic device in an opened clamshell configuration according to one embodiment of the present disclosure;
FIG. 6B is a simplified frontal perspective view showing an embodiment of an electronic device in an open planar configuration in accordance with one embodiment of the present disclosure;
FIG. 6C is a simplified orthographic view illustrating an embodiment of an electronic device in a closed clamshell configuration according to one embodiment of the present disclosure;
FIG. 6D is a simplified frontal perspective view showing an embodiment of an electronic device in a tablet configuration according to one embodiment of the present disclosure;
FIG. 7A is a simplified frontal perspective view showing an embodiment of an electronic device in an open clamshell configuration according to one embodiment of the present disclosure;
FIG. 7B is a simplified frontal perspective view showing an embodiment of an electronic device in an open planar configuration according to one embodiment of the present disclosure;
FIG. 7C is a simplified frontal perspective view of an embodiment of an electronic device in a closed clamshell configuration according to one embodiment of the present disclosure;
FIG. 7D is a simplified frontal perspective view showing an embodiment of an electronic device in a tablet configuration according to one embodiment of the present disclosure;
FIG. 8A is a simplified frontal perspective view showing an embodiment of an electronic device in an open clamshell configuration according to one embodiment of the present disclosure;
FIG. 8B is a simplified frontal perspective view showing an embodiment of an electronic device in an open planar configuration according to one embodiment of the present disclosure;
FIG. 8C is a simplified orthographic view showing an embodiment of an electronic device in a closed clamshell configuration according to one embodiment of the present disclosure;
FIG. 8D is a simplified frontal perspective view showing an embodiment of an electronic device in a tablet configuration according to one embodiment of the present disclosure;
FIG. 9 is a simplified block diagram associated with an exemplary ARM ecosystem system-on-chip (SOC) of the present disclosure; and
FIG. 10 is a simplified block diagram illustrating example logic that may be used to perform activities associated with the present disclosure.

The figures of the drawings are not necessarily to scale, as their dimensions may vary considerably without departing from the scope of the disclosure.

Detailed Description

Overview

In the examples of the present specification, systems, devices, and methods for low profile hinge designs are disclosed. In an exemplary embodiment, the low profile hinge may include a micro-hinge. The micro-hinge may couple or connect a first element to a second element and may include a first joint coupled to the first element, a second joint coupled to the second element, and a plurality of links coupling the first joint to the second joint. The low profile hinge can rotate about three hundred and sixty degrees. The low profile hinge may also include a plurality of micro-hinges and a plurality of support rods. In addition, the low profile hinge may include a flexible cover. In one example, the low profile hinge extends about the length of the first element and the second element. In addition, the micro-hinge may include an electrical conduit. In some examples, the first element is the base portion of the electronic device and the second element is the display portion of the electronic device.

Exemplary Embodiments of the Present Disclosure

Hybrid laptop computers are single-piece mobile computers that can include both laptop and tablet configurations.
To transition from a laptop configuration to a tablet configuration, the display or screen can typically be rotated, twisted, or flipped over the keyboard. Although hybrid laptop computers are an attractive way of enabling conversions from laptop-to-tablet configurations, in some designs the hinge may be cumbersome and limit the form factor of the device. For example, the z-height of a device typically depends on the hinge design.

Currently, the form factor limitations of electronic devices (e.g., hybrid laptops) are addressed by supporting ultra-low profile and small form factor components such as non-core packaging and motherboards, connectors, batteries, and the like. The development of high-density supercapacitors has also been used to further reduce battery form factor and density to support low-profile platforms. However, the form factor of low-profile devices is often limited by the hinges.

The above description is provided by way of non-limiting example of contexts in which the systems and methods of the present specification may be usefully applied. The following disclosure provides many different embodiments or examples for implementing different features of the disclosure. Specific examples of components and arrangements are described below to simplify the disclosure. Naturally, these are merely examples and are not intended to be limiting. In addition, the present disclosure may repeat reference numerals and/or letters in various examples. This repetition is for the purpose of simplicity and clarity and does not of itself dictate a relationship between the various embodiments and/or configurations discussed. Different embodiments may have different advantages, and no particular advantage is necessarily required of any embodiment.

In the examples in this specification, systems and methods are provided for low profile hinge designs.
In one example, using a micro-hinge design, a device (e.g., an electronic device) may be configured such that the hinge form factor does not limit the scaling of the overall z-height of the device (the height along the z-axis of an X, Y, Z Cartesian coordinate system). The hinges may be low profile, fully foldable, 360 degree (360°) hinges. The overall thickness of the hinge design can be scaled by configuring the dimensions of the hinge segments according to the desired z-height. Therefore, the overall z-height of the device may be scaled based on the components of the device (e.g., display portion, base portion, keyboard portion, etc.) and is not limited by the size of the hinge. For example, with a low profile hinge design, the electronic device can be operated in a low profile clamshell configuration, a low profile planar configuration, and a low profile tablet configuration.

The following is an illustration of an example of a micro-hinge design according to one or more exemplary embodiments of the present specification. It should be understood that the hinge design disclosed herein is given by way of non-limiting example only, and that any suitable technique or configuration is intended to be encompassed within the broad scope of the specification.

Example Embodiments

The detailed description that follows sets forth exemplary embodiments of devices, methods, and systems for micro-hinge configurations for electronic devices. Features such as structures, functions, and/or characteristics may, for convenience, be described with reference to one embodiment; the various embodiments may be implemented with any suitable one or more of the described features.

Turning to FIG. 1A, FIG. 1A is a simplified orthographic view illustrating an embodiment of an electronic device 10 in a closed clamshell configuration according to one embodiment of the present disclosure.
The electronic device 10 may include a base portion 12, a display portion 14, a keyboard portion 16, a display hinge 38, and a keyboard hinge 20. The display hinge 38 may define a rotation axis shared between the base portion 12 and the display portion 14. The keyboard hinge 20 may define a rotation axis shared between the base portion 12 and the keyboard portion 16. In this configuration, the keyboard hinge 20 and the display hinge 38 may have a low, planar, or relatively planar cross-section of low z-height. As used throughout this specification, the z-height is the height along the z-axis of an X, Y, Z Cartesian coordinate system. In embodiments, the keyboard hinge 20 is a different type of hinge than the display hinge 38 and may be a flexible fabric, a molded flexible polymer, or some other similar thin flexible material.

In one or more embodiments, the electronic device 10 is a notebook computer or a laptop computer. In other embodiments, the electronic device 10 may be any suitable electronic device with a display, such as a mobile device, a tablet computer and/or tablet device (e.g., iPad™), a phablet, a personal digital assistant (PDA), a smartphone, an audio system, any type of movie player, a computer docking station, and the like. In another embodiment, most of the electronic components (e.g., processor, memory, etc.) of the electronic device 10 reside in the base portion 12.

Turning to FIG. 1B, FIG. 1B is a simplified orthographic view of the electronic device 10 in an open clamshell configuration according to one embodiment of the disclosure. As shown in FIG. 1B, the display portion 14 has been rotated on the display hinge 38, and the keyboard portion 16 has been rotated on the keyboard hinge 20.

The keyboard portion 16 may include a keyboard 24. The display portion 14 may include a display 22.
In one or more embodiments, the display 22 may be a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a plasma display, or any other suitable display system. The display 22 may be a touch screen that can detect the presence and location of touches within the display area. In another embodiment, the display portion 14 may include a video camera, a microphone, and a speaker.

Turning to FIG. 1C, FIG. 1C is a simplified orthographic view of the electronic device 10 in an open planar configuration according to one embodiment of the present disclosure. As shown in FIG. 1C, the display portion 14 has been rotated on the display hinge 38 such that the display portion 14 is on the same plane as the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 so that the keyboard portion 16 is also on the same plane as the base portion 12. The keyboard hinge 20 and the display hinge 38 are configured to sit relatively flat on a planar surface and allow the electronic device 10 to have a low, planar, or relatively planar cross-section of low z-height when the electronic device 10 is in a planar configuration.

Turning to FIG. 1D, FIG. 1D is a simplified orthographic view of the electronic device in a tablet configuration according to one embodiment of the disclosure. As shown in FIG. 1D, the display portion 14 has been rotated on the display hinge 38 such that the display 22 faces upward and away from the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard 24 (not shown) faces downward and away from the base portion 12. In this configuration, the display 22 faces up while, on the opposite side, the keyboard 24 faces down. The base portion 12 is between the display portion 14 and the keyboard portion 16.
The keyboard hinge 20 and the display hinge 38 are configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section of low z-height when the electronic device 10 is in a tablet configuration.

Turning to FIG. 1E, FIG. 1E is a simplified front elevation view of the electronic device in a tablet configuration according to one embodiment of the present disclosure. As shown in FIG. 1E, the display portion 14 has been rotated on the display hinge 38 such that the display 22 faces upward and away from the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard 24 (not shown) faces downward and toward the base portion 12. In this configuration, the keyboard portion 16 faces downward and is between the base portion 12 and the display portion 14. In another embodiment, the keyboard portion 16 can be rotated on the keyboard hinge 20 such that the keyboard 24 faces downward and toward the display portion 14 and serves as a protective layer for the display 22. In this configuration, the display portion 14 is between the base portion 12 and the keyboard portion 16.

In general, the electronic device 10 may be configured to provide a display portion and a keyboard portion that are coupled to the base portion using a micro-hinge design. The micro-hinges may be configured such that the display portion and the keyboard portion can rotate about 360° around the base portion. The entire system can be configured to operate in a low profile clamshell mode configuration, a low profile planar mode configuration, and a low profile tablet mode configuration.

To illustrate certain example features of the electronic device 10, the following foundational information may be viewed as a basis from which the present disclosure may be properly explained.
With the recent touch-optimized operating systems (OS), the distribution of hybrid laptop computers (e.g., tablets, convertible laptops, clamshell computers, etc.) has become more and more popular. However, convertible hinge designs have usability drawbacks for specific consumer groups. For example, current hinge solutions may have cumbersome hinge components that create large profiles and inhibit the functionality and usability of electronic devices; cumbersome hinge components may also constrain hybrid electronics or two-in-one form factor scaling.

Today, hybrid electronics and convertible form factor limitations are addressed by supporting low profile and small form factor components such as non-core packaging and motherboards, connectors, batteries, and the like. High density supercapacitors have also been developed to further reduce battery form factor and density. In at least one of the exemplary embodiments discussed herein, an electronic device may be configured with a low profile hinge design, wherein the entire system may operate in low profile clamshell configurations, low profile planar configurations, and low profile tablet configurations. The low profile hinge is a fully foldable 360° hinge that, by using a micro-hinge segment design, prevents the hinge form factor from limiting the overall z-height of the system. The overall thickness of the hinge can be scaled by configuring the dimensions of the segments according to the desired system z-height. Therefore, the overall system z-height can be scaled based on the display portion and the keyboard portion, without being limited by the size of the hinge.

Certain embodiments described herein provide electronic devices, such as notebook computers, laptop computers, cell phones, or other mobile devices, that include a circuit board coupled to multiple electronic components (including any type of components, elements, circuitry, etc.).
The electronic device may also include a display portion and a keyboard portion coupled to the base portion with micro-hinges. The micro-hinges can be configured to allow low profile 360° hinge designs for hybrid electronics and two-in-one applications. The micro-hinge includes micro-hinge links. The micro-hinge links can be embedded in or covered with a molded flexible polymer (e.g., polyurethane or some other rubber-like material). The micro-hinge is mechanically coupled or connected to a display portion (e.g., a display panel) and a base portion (e.g., a system board assembly) to form the electronic device.

The micro-hinge linkage is designed to provide guidance and support when the body (e.g., support rods) of the micro-hinge is bent. For example, the body can include a plurality of flexible support rods encapsulated within polymer heat shrink. The micro-hinge linkages (as well as the support rods) can be relatively durable and can withstand repeated conversion cycles without mechanical breakage. The micro-hinge link may include a mechanical support structure. The mechanical support structure may include a metal rod, for example a thin stainless steel rod with a diameter of about 0.5 mm. Polymer-based composites can also be used and can provide improved mechanical reliability and durability.

The electrical connection between the base portion and the display portion can be established by the micro-hinge's embedded or overmolded interconnects. The micro-hinge may include a connector and mechanical retention to provide an electrical connection between the display portion and the base portion. In one embodiment, the electrical connections between the motherboard in the base portion and the display components in the display portion may be made via a conventional wired connection through the micro-hinge. In another embodiment, a printed circuit board (PCB) interconnect may be used to electrically connect the display portion and the keyboard portion.
In other examples, the current and signals may pass through a plug-in connector (e.g., with its raised side connected to the display portion 14 and its recessed side connected to the base portion 12, or vice versa) or a wireless connection (e.g., Wi-Fi, Bluetooth, etc.). Note that any number of connectors may be used (e.g., universal serial bus (USB) connectors (e.g., compatible with the USB 3.0 specification released in November 2008), Thunderbolt™ connectors, non-standard connection points such as plug-in connectors, etc.). [Thunderbolt™ and the Thunderbolt marks are trademarks of Intel Corporation in the United States, other countries, or both.] In fact, any other method of electrical connection may be used and would clearly fall within the scope of the present disclosure.

In an embodiment, major system components (e.g., motherboard, hard drive, battery, communication modules, etc.) remain in the base portion. In a particular embodiment, the display may be a touch screen display. The display portion may also include a camera module, a microphone, a speaker, and/or a wireless module. This design allows the electronic device to operate in a clamshell or tablet configuration. In an embodiment, the display portion includes a plurality of electrical components that allow the display portion to operate as a tablet.

Turning to FIG. 2, FIG. 2 is a simplified orthographic view illustrating an embodiment of a portion of a micro-hinge 26 in accordance with one embodiment of the present disclosure. The micro-hinge 26 may include a base fitting 30, a link 32, a display fitting 34, and an electrical conduit 40. The base fitting 30 may be coupled or connected to the base portion 12. The display fitting 34 may be coupled or connected to the display portion 14. The links 32 allow the micro-hinge 26 to be flexible and to rotate about 360° while having a low profile. The electrical conduit 40 may allow for an electrical connection between the base portion 12 and the display portion 14.

Turning to FIG.
3, FIG. 3 is a simplified front orthographic exploded view showing an embodiment of a portion of the micro-hinge 26. The base connector 30 may include an electrical conduit 40 and a base link joint 54. The link 32 may include an electrical conduit 40, a link joint 52, a joint region 56, and a joint support 58. The joint region 56 may accommodate a base link joint 54 or a link joint 52 from another link 32. The display connector 34 may include an electrical conduit 40, a link connector area 50, and a joint support 58.

When the base link joint 54 on the base connector 30 is inserted into the joint region 56, a pin, rod, or some other fastening member may be inserted through the joint support 58 and through the base link joint 54 to secure the base connector 30 to the link 32. Similarly, when a link joint 52 on another link 32 is inserted into the joint region 56, a pin, rod, or some other fastening member may be inserted through the joint support 58 and through the link joint 52 (on the other link 32) to secure the other link 32 to the link 32. In addition, when the link joint 52 is inserted into the link connector area 50 on the display connector 34, a pin, rod, or some other fastening member may be inserted through the joint support 58 and through the link joint 52 to secure the link 32 to the display connector 34. The base link joint 54 can rotate while being secured in the joint region 56. Similarly, the link joint 52 can also rotate while being secured in the joint region 56 or the link connector area 50. This configuration allows the micro-hinge 26 to be flexible and to rotate about 360° while having a low profile.

Turning to FIG. 4, FIG. 4 is a simplified orthographic view showing an embodiment of a micro-hinge 26. Several links 32 may be stacked together to account for the thickness of the base portion 12, the display portion 14, and/or the keyboard portion 16.
For example, if the base portion 12, the display portion 14, and/or the keyboard portion 16 is relatively thin, fewer links 32 need to be stacked together than if the base portion 12, the display portion 14, and/or the keyboard portion 16 were relatively thick.

Turning to FIG. 5A, FIG. 5A is a simplified orthographic view illustrating an embodiment of a portion of the display hinge 38 in accordance with one embodiment of the present disclosure. The display hinge 38 may include a micro-hinge 26, a base connector 30, a display connector 34, an electrical conduit 40, a plurality of support rods 44, and a support arm 46. The micro-hinge 26 may include a link 32. The support arm 46 may couple or connect the plurality of support rods 44 to the base portion 12 and the display portion 14. As shown in FIG. 5A, the display hinge 38 is in an open, planar configuration (similar to FIG. 6B shown below). In FIG. 5A, the display hinge 38 is shown without the cover 42, which may cover all or a portion of the display hinge 38. In an embodiment, the electrical conduit 40 may be configured to receive a support rod 44 such that the support rod 44 is contained within the micro-hinge 26 and provides support for the micro-hinge 26.

Turning to FIG. 5B, FIG. 5B is a simplified orthographic view illustrating an embodiment of a portion of the display hinge 38 in accordance with one embodiment of the present disclosure. As shown in FIG. 5B, the display hinge 38 may be in a closed clamshell configuration (similar to FIG. 6C shown below) or in a tablet configuration (similar to FIG. 6D shown below). The plurality of support rods 44 are flexible enough to bend and flex with the micro-hinges 26 and strong enough to provide support for the display portion 14 when the electronic device is in an open clamshell configuration.

Turning to FIG. 6A, FIG.
6A is a simplified frontal perspective view showing an embodiment of an electronic device 10 in an open clamshell configuration according to one embodiment of the present disclosure. As shown in FIG. 6A, the display hinge 38 may include a cover 42, a plurality of micro-hinges 26, and a plurality of support rods 44. While only a portion of the cover 42 is shown, the cover 42 may cover the entire display hinge 38, or the display hinge 38 may not include any cover 42. The cover 42 may provide an aesthetic and/or protective cover for the plurality of micro-hinges 26 and the plurality of support rods 44. The cover 42 may be a molded flexible polymer (e.g., polyurethane or some other rubber-like material) or some other material that provides an aesthetic and/or protective cover for the plurality of micro-hinges 26 and the plurality of support rods 44.

Turning to FIG. 6B, FIG. 6B is a simplified frontal perspective view of the electronic device 10 in a planar configuration according to one embodiment of the present disclosure. As shown in FIG. 6B, the display portion 14 has been rotated on the plurality of micro-hinges 26 and the plurality of support rods 44 so that the display portion 14 and the base portion 12 are coplanar. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 so that the keyboard portion 16 is also coplanar with the base portion 12. The plurality of micro-hinges 26, the plurality of support rods 44, and the keyboard hinge 20 are configured to lie relatively flat on a planar surface and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a planar configuration.

Turning to FIG. 6C, FIG. 6C is a simplified frontal perspective view showing an embodiment of an electronic device 10 in a closed clamshell configuration according to one embodiment of the present disclosure. As shown in FIG.
6C, the display portion 14 has been rotated on the plurality of micro-hinges 26 and the plurality of support rods 44 such that the display portion 14 faces the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard portion 16 faces away from the base portion 12. The plurality of micro-hinges 26, the plurality of support rods 44, and the keyboard hinge 20 are configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a closed clamshell configuration.

Turning to FIG. 6D, FIG. 6D is a simplified frontal perspective view of an electronic device in a tablet configuration according to one embodiment of the present disclosure. As shown in FIG. 6D, the display portion 14 has been rotated on the plurality of micro-hinges 26 and the plurality of support rods 44 such that the display 22 faces upward and away from the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard 24 (not shown) faces downward and away from the base portion 12. In this configuration, the base portion 12 is between the display portion 14 and the keyboard portion 16. The plurality of micro-hinges 26, the plurality of support rods 44, and the keyboard hinge 20 are configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a tablet configuration.

Turning to FIG. 7A, FIG. 7A is a simplified frontal perspective view showing an embodiment of an electronic device 10 in an open clamshell configuration according to one embodiment of the present disclosure. As shown in FIG. 7A, the display portion 14 is supported by a single micro-hinge 26. The single micro-hinge 26 may include additional or more supports than an individual micro-hinge of the plurality of micro-hinges 26. For example, the single micro-hinge 26 may include a support rod 44.
In the illustrated example, the electrical conduit 40 may be configured to receive the support rod 44 such that the support rod 44 is contained within the single micro-hinge 26 and provides support for the single micro-hinge 26. In the open clamshell configuration, the additional support may allow the single micro-hinge 26 to support the display portion 14.

Turning to FIG. 7B, FIG. 7B is a simplified frontal perspective view of the electronic device 10 in a planar configuration according to one embodiment of the present disclosure. As shown in FIG. 7B, the display portion 14 has been rotated on the single micro-hinge 26 so that the display portion 14 and the base portion 12 are coplanar. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 so that the keyboard portion 16 is also coplanar with the base portion 12. The single micro-hinge 26 is configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a planar configuration.

Turning to FIG. 7C, FIG. 7C is a simplified orthographic view illustrating an embodiment of an electronic device 10 in a closed clamshell configuration according to one embodiment of the present disclosure. As shown in FIG. 7C, the display portion 14 has been rotated on the single micro-hinge 26 such that the display portion 14 faces the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard portion 16 faces away from the base portion 12. The single micro-hinge 26 is configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a closed clamshell configuration.

Turning to FIG. 7D, FIG. 7D is a simplified frontal perspective view of an electronic device in a tablet configuration according to one embodiment of the present disclosure.
As shown in FIG. 7D, the display portion 14 has been rotated on the single micro-hinge 26 such that the display 22 faces upward and away from the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard 24 (not shown) faces downward and away from the base portion 12. In this configuration, the base portion 12 is between the display portion 14 and the keyboard portion 16. The single micro-hinge 26 is configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a tablet configuration.

Turning to FIG. 8A, FIG. 8A is a simplified frontal perspective view showing an embodiment of an electronic device 10 in an open clamshell configuration according to one embodiment of the present disclosure. As shown in FIG. 8A, the display portion 14 is supported by the micro-hinge strap 28. The micro-hinge strap 28 may include a band of continuous (or nearly continuous) micro-hinges 26. One or more of the micro-hinges 26 in the micro-hinge strap 28 may include additional supports. For example, the one or more micro-hinges 26 may include a support rod 44 (e.g., the electrical conduit 40 may be configured to receive the support rod 44).

Turning to FIG. 8B, FIG. 8B is a simplified frontal perspective view of the electronic device 10 in a planar configuration according to one embodiment of the present disclosure. As shown in FIG. 8B, the display portion 14 has been rotated on the micro-hinge strap 28 such that the display portion 14 is coplanar with the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 so that the keyboard portion 16 is also coplanar with the base portion 12.
The micro-hinge strap 28 is configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a planar configuration.

Turning to FIG. 8C, FIG. 8C is a simplified orthographic view illustrating an embodiment of an electronic device 10 in a closed clamshell configuration according to one embodiment of the present disclosure. As shown in FIG. 8C, the display portion 14 has been rotated on the micro-hinge strap 28 such that the display portion 14 faces the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard portion 16 faces away from the base portion 12. The micro-hinge strap 28 is configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a closed clamshell configuration.

Turning to FIG. 8D, FIG. 8D is a simplified frontal perspective view of an electronic device in a tablet configuration according to one embodiment of the present disclosure. As shown in FIG. 8D, the display portion 14 has been rotated on the micro-hinge strap 28 such that the display 22 faces upward and away from the base portion 12. In addition, the keyboard portion 16 has been rotated on the keyboard hinge 20 such that the keyboard 24 (not shown) faces downward and away from the base portion 12. In this configuration, the base portion 12 is between the display portion 14 and the keyboard portion 16. The micro-hinge strap 28 is configured to have a low profile and allow the electronic device 10 to have a low, planar, or relatively planar cross-section with a low z-height when the electronic device 10 is in a tablet configuration.

Turning to FIG. 9, FIG. 9 is a simplified block diagram associated with an example ARM ecosystem SOC 900 of the present disclosure.
At least one example implementation of the present disclosure may include the micro-hinge features discussed herein and ARM components. For example, the example of FIG. 9 may be associated with an ARM core (e.g., A-9, A-15, etc.). In addition, the architecture may be part of any type of tablet computer, smartphone (including Android™ phones and iPhones™), iPad™, Google Nexus™, Microsoft Surface™, personal computer, server, video processing component, laptop (including any type of notebook), Ultrabook™ system, or any type of touch-enabled input device.

In this example of FIG. 9, the ARM ecosystem SOC 900 may include a plurality of cores 906-907, an L2 cache control 908, a bus interface unit 909, an L2 cache 910, a graphics processing unit (GPU) 915, an interconnect 902, a video codec 920, and a liquid crystal display (LCD) I/F 925, which may be associated with a mobile industry processor interface (MIPI)/high-definition multimedia interface (HDMI) link coupled to an LCD.

The ARM ecosystem SOC 900 may also include a subscriber identity module (SIM) I/F 930, a boot read-only memory (ROM) 935, a synchronous dynamic random access memory (SDRAM) controller 940, a flash controller 945, a serial peripheral interface (SPI) master 950, suitable power control 955, dynamic RAM (DRAM) 960, and flash 965. In addition, one or more example embodiments include one or more communication capabilities, interfaces, and features such as instances of Bluetooth™ 970, a 3G modem 975, a global positioning system (GPS) 980, and 802.11 Wi-Fi 985.

In operation, the example of FIG. 9 may provide processing capabilities along with relatively low power consumption to enable computing of various types (e.g., mobile computing, high-end digital home, servers, wireless infrastructure, etc.). In addition, such an architecture can support any number of software applications (e.g., Android™, Player, Java SE, JavaFX, Linux, Embedded Microsoft Windows, Symbian, and Ubuntu).
In at least one example embodiment, the core processors may implement out-of-order superscalar pipelines with a coupled low-latency level-2 cache.

Turning to FIG. 10, FIG. 10 is a simplified block diagram illustrating potential electronics and logic that may be associated with any of the electronic devices discussed herein. In at least one example embodiment, system 1000 may include a touch controller 1002, one or more processors 1004, system control logic 1006 coupled to at least one processor 1004, system memory 1008 coupled to system control logic 1006, non-volatile memory and/or storage device(s) 1032 coupled to system control logic 1006, a display controller 1012 coupled to system control logic 1006 and to a display device 1010, a power management controller 1018 coupled to system control logic 1006, and/or a communication interface 1016 coupled to system control logic 1006.

In at least one example embodiment, system control logic 1006 may include any suitable interface controllers to provide any suitable interface to at least one processor 1004 and/or to any suitable device or component in communication with system control logic 1006. In at least one example embodiment, system control logic 1006 may include one or more memory controllers to provide an interface to system memory 1008. System memory 1008 may be used to load and store data and/or instructions, for example, for system 1000. In at least one example embodiment, system memory 1008 may include any suitable volatile memory, such as suitable dynamic random access memory (DRAM). In at least one example embodiment, system control logic 1006 may include one or more input/output (I/O) controllers to provide an interface to display device 1010, touch controller 1002, and non-volatile memory and/or storage device(s) 1032.

Non-volatile memory and/or storage device(s) 1032 may be used to store data and/or instructions, for example, within software 1028.
Non-volatile memory and/or storage device(s) 1032 may include any suitable non-volatile memory, such as flash memory, and/or may include any suitable non-volatile storage device(s), such as one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives.

Power management controller 1018 may include power management logic 1030 configured to control various power management and/or power saving functions disclosed herein, or any portion thereof. In at least one example embodiment, power management controller 1018 is configured to reduce the power consumption of components or devices of system 1000 that can be operated at reduced power or turned off when the electronic device is in a closed configuration. For example, in at least one example embodiment, when the electronic device is in the closed configuration, power management controller 1018 performs one or more of the following: powering down the unused portion of the display and/or any backlight associated therewith; allowing one or more of the processors 1004 to enter a lower-power state if less computational power is required in the closed configuration; and shutting down any devices and/or components that are unused when the electronic device is in the closed configuration.

Communication interface 1016 may provide an interface for system 1000 to communicate over one or more networks and/or with any other suitable device. Communication interface 1016 may include any suitable hardware and/or firmware.
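The closed-configuration power-saving steps attributed above to power management logic 1030 can be sketched as a small policy routine. This is an illustrative sketch only; the function name and the dictionary keys (`apply_closed_config_policy`, `in_use`, etc.) are assumptions for the sketch and do not come from the disclosure.

```python
def apply_closed_config_policy(system):
    """Sketch of the closed-configuration power policy described above.

    `system` is a plain dict standing in for the components of system
    1000; every key here is illustrative, not from the disclosure.
    """
    # Step 1: power down the unused portion of the display and its backlight.
    system["display"]["unused_region_powered"] = False
    system["display"]["backlight_on"] = False

    # Step 2: let processors enter a lower-power state, since less
    # computational power is required in the closed configuration.
    for cpu in system["processors"]:
        cpu["state"] = "low_power"

    # Step 3: shut down any devices/components unused while closed.
    for dev in system["devices"].values():
        if not dev["in_use"]:
            dev["powered"] = False
    return system
```

In a real controller these steps would be driven by a lid-state or configuration event rather than called directly; the sketch only captures the three actions the text enumerates.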
In at least one example embodiment, communication interface 1016 may include, for example, a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.

In at least one example embodiment, system control logic 1006 may include one or more input/output (I/O) controllers to provide an interface to any suitable input/output device(s) such as, for example, an audio device to help convert sound into corresponding digital signals and/or to help convert digital signals into corresponding sound, a camera, a camcorder, a printer, and/or a scanner.

For at least one example embodiment, at least one processor 1004 may be packaged together with logic for one or more controllers of system control logic 1006. In at least one example embodiment, at least one processor 1004 may be packaged together with logic for one or more controllers of system control logic 1006 to form a system in package (SiP). In at least one example embodiment, at least one processor 1004 may be integrated on the same die with logic for one or more controllers of system control logic 1006. For at least one example embodiment, at least one processor 1004 may be integrated on the same die with logic for one or more controllers of system control logic 1006 to form a system on chip (SoC).

For touch control, touch controller 1002 may include touch sensor interface circuit 1022 and touch control logic 1024. Touch sensor interface circuit 1022 may be coupled to detect touch input over a first touch surface layer and a second touch surface layer of a display (i.e., display device 1010). Touch sensor interface circuit 1022 may include any suitable circuitry that may depend, for example, at least in part on the touch-sensitive technology used for the touch input device. Touch sensor interface circuit 1022 in one embodiment may support any suitable multi-touch technology.
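As one concrete illustration of how touch sensor circuitry can turn raw sensor readings into location data, the sketch below computes a weighted-centroid touch coordinate from a grid of capacitance deltas. The disclosure does not specify any particular method; the weighted-centroid approach and the function name are assumptions for this sketch.

```python
def touch_centroid(grid):
    """Estimate a touch coordinate from a 2-D grid of sensor readings.

    A common weighted-centroid approach, shown for illustration only.
    Returns (x, y) as float column/row indices, or None when no touch
    is detected (all readings zero).
    """
    total = sum(v for row in grid for v in row)
    if total == 0:
        return None
    x = sum(c * v for row in grid for c, v in enumerate(row)) / total
    y = sum(r * v for r, row in enumerate(grid) for v in row) / total
    return (x, y)
```

Multi-touch detection would first segment the grid into connected regions of activity and then apply the same centroid computation per region.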
In at least one embodiment, touch sensor interface circuit 1022 may include any suitable circuitry to convert analog signals corresponding to the first touch surface layer and the second touch surface layer into any suitable digital touch input data. Suitable digital touch input data for at least one embodiment may include, for example, touch location or coordinate data.

Touch control logic 1024 may be coupled to help control touch sensor interface circuit 1022 in any suitable manner to detect touch input over the first touch surface layer and the second touch surface layer. Touch control logic 1024 for at least one example embodiment may also be coupled to output, in any suitable manner, digital touch input data corresponding to touch input detected by touch sensor interface circuit 1022. Touch control logic 1024 may be implemented using any suitable logic, including any suitable hardware, firmware, and/or software logic (e.g., non-transitory tangible media), that may depend, at least in part, on the circuitry used for touch sensor interface circuit 1022. Touch control logic 1024 for at least one embodiment may support any suitable multi-touch technology.

Touch control logic 1024 may be coupled to output digital touch input data to system control logic 1006 and/or to at least one processor 1004 for processing. At least one processor 1004 for at least one embodiment may execute any suitable software to process the digital touch input data output from touch control logic 1024. Suitable software may include, for example, any suitable driver software and/or any suitable application software. As shown in FIG.
10, suitable software 1026 may be stored in system memory 1008 and/or in non-volatile memory and/or storage device(s).

Note that in some example implementations, the functions outlined herein may be implemented in conjunction with logic encoded in one or more tangible, non-transitory media (e.g., embedded logic provided in an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software (potentially including object code and source code) to be executed by a processor, or other similar machine, etc.). In some of these instances, memory elements can store data used for the operations described herein. This includes memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, a processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor), and the elements identified herein could be some type of a programmable processor or programmable digital logic (e.g., a field programmable gate array (FPGA), a DSP, an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM)), or could include digital logic, software, code, electronic instructions, or any suitable combination thereof.

It is imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., height, width, length, materials, etc.) have been offered only for purposes of example and teaching. Each of these may be varied considerably without departing from the spirit of the present disclosure or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such.
In the foregoing description, example embodiments have been described. Various modifications and changes may be made to these embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. Section 112 (as it existed on the date of the filing hereof) unless the words "means for" or "step for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Example Embodiment Implementations

One particular example implementation of an electronic device includes activities associated with a low profile hinge design. The low profile hinge design allows for a hybrid or convertible laptop hinge without the bulky hinge components that force a large cross-section, limit the functionality and usability of electronic devices, and carry significant industrial design implications. The low profile hinge may be configured with a micro-hinge that couples a first element to a second element. The micro-hinge may include a first joint coupled to the first element, a second joint coupled to the second element, and a plurality of links coupling the first joint to the second joint.
The first element may be a base portion, and the second element may be a display portion. The micro-hinge can rotate about three hundred sixty degrees and can have a flexible cover. The low profile hinge may also include a plurality of micro-hinges and a plurality of support rods. In an embodiment, the low profile hinge extends about the length of the first element and the second element. In addition, the micro-hinge can also include an electrical conduit.

Other Notes and Examples

Example A1 is a low profile hinge that includes a micro-hinge. The micro-hinge may couple a first element to a second element, and includes: a first joint coupled to the first element; a second joint coupled to the second element; and a plurality of links coupling the first joint to the second joint.

In Example A2, the subject matter of Example A1 can optionally include that the low profile hinge is capable of rotating about three hundred sixty degrees.

In Example A3, the subject matter of any of the preceding 'A' Examples can optionally include: a plurality of micro-hinges; and a plurality of support rods.

In Example A4, the subject matter of any of the preceding 'A' Examples can optionally include a flexible cover.

In Example A5, the subject matter of any of the preceding 'A' Examples can optionally include the low profile hinge extending about the length of the first element and the second element.

In Example A6, the subject matter of any of the preceding 'A' Examples can optionally include the micro-hinge further including an electrical conduit.

In Example A7, the subject matter of any of the preceding 'A' Examples can optionally include that the first element is a base portion of an electronic device.

In Example A8, the subject matter of any of the preceding 'A' Examples can optionally include that the second element is a display portion of an electronic device.

Example AA1 may include an electronic device that includes a base portion, a
display portion, and a micro-hinge that couples the base portion to the display portion, the micro-hinge comprising: a first joint coupled to the base portion; a second joint coupled to the display portion; and a plurality of links coupling the first joint to the second joint.

In Example AA2, the subject matter of any of the preceding 'AA' Examples can optionally include the micro-hinge being capable of rotating about three hundred sixty degrees.

In Example AA3, the subject matter of any of the preceding 'AA' Examples can optionally include: a plurality of micro-hinges; and a plurality of support rods.

In Example AA4, the subject matter of any of the preceding 'AA' Examples can optionally include the micro-hinge further including a flexible cover.

In Example AA5, the subject matter of any of the preceding 'AA' Examples can optionally include the micro-hinge extending about the length of the base portion and the display portion.

In Example AA6, the subject matter of any of the preceding 'AA' Examples can optionally include the micro-hinge further including an electrical conduit.

In Example AA7, the subject matter of any of the preceding 'AA' Examples can optionally include that the micro-hinge is a low profile hinge.

Example M1 is a method comprising rotating a display portion about a base portion with a low profile micro-hinge, wherein the low profile micro-hinge includes a first joint coupled to the base portion, a second joint coupled to the display portion, and a plurality of links coupling the first joint to the second joint.

In Example M2, the subject matter of any of the preceding 'M' Examples can optionally include rotating the display portion about the base portion using a plurality of low profile micro-hinges and a plurality of support rods.

In Example M3, the subject matter of any of the preceding 'M' Examples can optionally include the low profile micro-hinge further comprising a flexible
cover.

In Example M4, the subject matter of any of the preceding 'M' Examples can optionally include the low profile micro-hinge further including an electrical conduit.

One example system S1 may include means for rotating a display portion about a base portion with a low profile micro-hinge, wherein the low profile micro-hinge includes a first joint coupled to the base portion, a second joint coupled to the display portion, and a plurality of links coupling the first joint to the second joint.

An example system SS1 may include a processor and a micro-hinge that couples a first element to a second element, wherein the micro-hinge includes: a first joint coupled to the first element; a second joint coupled to the second element; and a plurality of links coupling the first joint to the second joint.

In Example SS2, the subject matter of any of the preceding 'SS' Examples can optionally include the micro-hinge being capable of rotating three hundred sixty degrees.

In Example SS3, the subject matter of any of the preceding 'SS' Examples can optionally include a plurality of micro-hinges and a plurality of support rods.

In Example SS4, the subject matter of any of the preceding 'SS' Examples can optionally include the micro-hinge further including an electrical conduit.

In Example SS5, the subject matter of any of the preceding 'SS' Examples can optionally include that the first element is a base portion and the second element is a display portion.

Example X1 is a machine-readable storage medium including machine-readable instructions to implement or perform a method as in any of Examples A1-A8, AA1-AA7, or M1-M4. Example Y1 is an apparatus comprising means for performing any of the example methods M1-M4. In Example Y2, the subject matter of Example Y1 can optionally include the means for performing the method comprising a processor and a memory.
In Example Y3, the subject matter of Example Y2 can optionally include that the memory includes machine readable instructions.
The invention relates to a high-performance interconnect physical layer. A serial data link is adapted during initialization of the link. Adaptation of the link includes receiving a pseudorandom binary sequence (PRBS) from a remote agent, analyzing the PRBS to identify characteristics of the data link, and generating metric data describing the characteristics.
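The adaptation described above depends on both agents being able to reproduce the same pseudorandom bit stream. A minimal sketch of a PRBS generator follows, using the common PRBS7 polynomial x^7 + x^6 + 1 as an illustrative assumption; the polynomial and seed actually used by the link are not specified here:

```python
def prbs7(seed=0x7F, nbits=254):
    """Fibonacci LFSR generating a PRBS7 stream (polynomial x^7 + x^6 + 1).

    The polynomial and seed are illustrative assumptions; for any non-zero
    seed the output repeats every 127 bits.
    """
    state = seed & 0x7F
    bits = []
    for _ in range(nbits):
        # Feedback taps at stages 7 and 6 (bits 6 and 5 of the state).
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        bits.append(newbit)
        # Shift left and feed the new bit back into stage 1.
        state = ((state << 1) | newbit) & 0x7F
    return bits
```

A receiver that runs the same generator locally can compare its output bit-for-bit against the received stream to estimate the error characteristics of the lane.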
1. A device for transmitting data, the device comprising:
a receiver processor including an agent to support a layered protocol stack, the layered protocol stack including physical layer logic, link layer logic, and protocol layer logic, wherein the agent is to:
receive a link layer data stream in an active link state L0, wherein the link layer data includes a set of flits;
intermittently enter a coordinated link state L0c, wherein the coordinated link state defines an L0c interval in which the physical layer takes control;
receive a control code within the L0c interval; and
initiate a reset of the link based on a control code mismatch, wherein the control code mismatch is based on an identification that the control code fails to match a designated code in a designated code set.

2. The device of claim 1, wherein the control code mismatch is based on a bit error.

3. The device of claim 1 or 2, wherein the physical layer logic is further to attempt to resolve a bit error, and the control code mismatch is identified in response to a failure to resolve the bit error.

4. The device of claim 1 or 2, wherein the control code is from a set including a reset request code, a low power entry request, a partial width entry request, and a partial width exit request.

5. The device of claim 1 or 2, wherein the L0c interval is entered according to a defined interval.

6. The device of claim 5, wherein the L0c interval is entered periodically according to the defined interval.

7. The device of claim 1 or 2, wherein the L0c interval is of a defined length.

8. The device of claim 1 or 2, wherein the L0c interval is to begin and end on a clean flit boundary.

9. The device of claim 1 or 2, wherein the control code comprises an 8-bit code.

10. The device of claim 1 or 2, wherein the control code includes a request control sequence.

11. The device of claim 1 or 2, wherein the agent is further configured to detect that the control code fails to match a designated code in the designated code set.

12. The device of claim 11, wherein the agent includes a comparison circuit to detect that the control code fails to match a designated code in the designated code set.

13. The device of claim 1 or 2, further comprising transmitter logic to send data to indicate entry into L0c and to send one or more control codes in the L0c interval.

14. The device of claim 1 or 2, wherein the reset includes an in-band reset.

15. A method for transmitting data, the method comprising:
receiving a link layer data stream in an active link state L0, wherein the link layer data includes a set of flits;
intermittently entering a coordinated link state L0c, wherein the coordinated link state defines an L0c interval in which the physical layer takes control;
receiving a control code within the L0c interval; and
initiating a reset of the link based on a control code mismatch, wherein the control code mismatch is based on an identification that the control code fails to match a designated code in a designated code set.

16. The method of claim 15, further comprising sending one or more control codes in the L0c interval.

17. The method of claim 15 or 16, further comprising detecting that the control code fails to match a designated code in the designated code set.

18. A system for transmitting data, comprising a device for performing the method of any one of claims 15-17.

19. The system of claim 18, wherein the device comprises a computer-readable medium having instructions stored thereon that, when executed by a machine, cause the machine to perform at least part of the method of any one of claims 15-17.

20. A system for transmitting data, the system comprising:
a first computing device; and
a second computing device coupled to the first computing device through a link, wherein the second computing device includes:
link layer circuitry to receive a link layer data stream in an active link state L0, wherein the link layer data includes a set of flits;
state machine logic to intermittently enter a coordinated link state L0c, wherein the coordinated link state defines an L0c interval in which the physical layer obtains control; and
physical layer circuitry to:
identify a control code received from the first computing device within the L0c interval; and
initiate a reset of the link based on a control code mismatch, wherein the control code mismatch is based on an identification that the control code fails to match a designated code in the designated code set.

21. The system of claim 20, wherein the first computing device includes a first processor, and the second computing device includes a second processor.

22. A computer-readable storage medium having instructions that, when executed, cause a machine to perform the method of any one of claims 15-17.
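The control-code matching recited in claims 1 and 15 can be sketched as a lookup against the designated code set; the 8-bit code values and action names below are illustrative assumptions, not values taken from the claims (only the four request types of claim 4 and the 8-bit width of claim 9 come from the text):

```python
# Illustrative 8-bit designated codes (claim 9 recites an 8-bit code;
# the concrete encodings are assumptions for this sketch).
DESIGNATED_CODES = {
    0x01: "reset_request",
    0x02: "low_power_entry_request",
    0x04: "partial_width_entry_request",
    0x08: "partial_width_exit_request",
}

def handle_l0c_control_code(received: int) -> str:
    """Decode a control code received within an L0c interval.

    If the code fails to match any designated code (e.g. because of a bit
    error), a control code mismatch is identified and an in-band reset of
    the link is initiated, as in claims 1 and 14.
    """
    action = DESIGNATED_CODES.get(received & 0xFF)
    if action is None:
        return "initiate_inband_reset"  # control code mismatch
    return action
```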
High-Performance Interconnect Physical Layer

This application is a divisional application of the Chinese patent application with a filing date of March 15, 2013, application number 201380049203.3, and the invention title "High-Performance Interconnect Physical Layer".

Field

The present disclosure generally relates to the field of computer development, and in particular to software development involving the coordination of mutually dependent constrained systems.

Background

Advances in semiconductor processing and logic design have allowed an increase in the amount of logic that can be present on integrated circuit devices. As a corollary, computer system configurations have evolved from a single or multiple integrated circuits per system to multiple cores, multiple hardware threads, and multiple logical processors present on individual integrated circuits, as well as other interfaces integrated within such processors. A processor or integrated circuit typically comprises a single physical processor die, where the processor die may include any number of cores, hardware threads, logical processors, interfaces, memories, controller hubs, and so on.

As a result of the greater ability to fit more processing power into smaller packages, smaller computing devices have increased in popularity. Smartphones, tablets, ultra-thin notebooks, and other user equipment have grown exponentially. However, these smaller devices rely on servers both for data storage and for complex processing that exceeds their form factor. Consequently, demand in the high-performance computing market (i.e., server space) has also increased. For instance, in modern servers, there is typically not only a single processor with multiple cores, but also multiple physical processors (also referred to as multiple sockets) to increase computing power.
However, as processing power grows along with the number of devices in a computing system, communication between sockets and other devices becomes more important.

In fact, interconnects have evolved from more traditional multi-drop buses that primarily handled electrical communications to fully matured interconnect architectures that facilitate fast communication. Unfortunately, as demand grows for future processors to consume data at even higher rates, corresponding demands are placed on the capabilities of existing interconnect architectures.

Brief Description of the Drawings

Figure 1 illustrates a simplified block diagram of a system including a serial point-to-point interconnect to connect I/O devices in a computer system according to one embodiment;

Figure 2 illustrates a simplified block diagram of a layered protocol stack according to an embodiment;

Figure 3 illustrates an embodiment of a transaction descriptor.

Figure 4 illustrates an embodiment of a serial point-to-point link.

Figure 5 illustrates an embodiment of potential high performance interconnect (HPI) system configurations.

Figure 6 illustrates an embodiment of a layered protocol stack associated with HPI.

Figure 7 illustrates a representation of an example state machine.

Figure 8 illustrates an example control supersequence.

Figure 9 illustrates a flowchart of an example transition into a partial width state.

Figure 10 illustrates a schematic diagram of an example pattern generator.

Figure 11 illustrates an embodiment of a block diagram of a computing system including a multi-core processor.

Figure 12 illustrates another embodiment of a block diagram of a computing system including a multi-core processor.

Figure 13 illustrates an embodiment of a block diagram of a processor.

Figure 14 illustrates another embodiment of a block diagram of a computing system including a processor.

Figure 15 illustrates an embodiment of a block diagram of a computing system including multiple processor sockets.

Figure 16 illustrates another embodiment of a block diagram of a computing system.

In the various figures, like reference numerals and designations indicate like elements.

Detailed Description

In the following description, numerous specific details are set forth, such as examples of specific types of processor and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific processor pipeline stages, specific interconnect layers, specific packet/transaction configurations, specific transaction names, specific protocol exchanges, specific link widths, and specific implementations and operations, in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the subject matter of the present disclosure. In other instances, well-known components or methods have not been described in detail, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, low-level interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expressions of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of computer systems, in order to avoid unnecessarily obscuring the present disclosure.

Although the following embodiments may be described with reference to energy conservation, energy efficiency, processing efficiency, and so on in specific integrated circuits, such as computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices.
Similar techniques and teachings of the embodiments described herein can be applied to other types of circuits or semiconductor devices that can also benefit from such features. For example, the disclosed embodiments are not limited to server computer systems, desktop computer systems, laptop computers, or Ultrabooks™, but can also be used in other devices, such as handheld devices, smartphones, tablets, other thin notebooks, system-on-chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Here, similar techniques for a high-performance interconnect can be applied to increase performance (or even save power) in a low-power interconnect. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. In addition, the apparatus, methods, and systems described herein are not limited to physical computing devices, but may also involve software optimizations for energy saving and efficiency. As will become readily apparent from the description below, the embodiments of the methods, apparatus, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) can be regarded as vital to a "green technology" future balanced with performance considerations.

As computing systems progress, the components therein have become more complex. The complexity of the interconnect architecture that couples the components and enables communication between them has also increased in order to ensure that the bandwidth requirements for optimal component operation are met.
In addition, different market segments demand different aspects of interconnect architectures to suit their respective markets. For example, servers require higher performance, while the mobile ecosystem can sometimes sacrifice overall performance for power savings. Still, the singular purpose of most fabrics is to provide the highest possible performance with maximum power saving, and a variety of different interconnects can potentially benefit from the subject matter discussed here.

According to one or more principles and other examples described herein, the Peripheral Component Interconnect (PCI) Express (PCIe) interconnect fabric and the QuickPath Interconnect (QPI) fabric, among other examples, can potentially be improved. For example, a primary goal of PCIe is to enable components and devices from different vendors to interoperate in an open architecture, spanning multiple market segments: clients (desktop and mobile), servers (standard and enterprise), and embedded and communication devices. PCI Express is a high-performance, general-purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocols to deliver new levels of performance and features. Power management, quality of service (QoS), hot-plug/hot-swap support, data integrity, and error handling are among the advanced features supported by PCI Express.
Although the primary discussion herein is with reference to a new high-performance interconnect (HPI) architecture, the aspects of the invention described herein may be applied to other interconnect architectures, such as a PCIe-compliant architecture, a QPI-compliant architecture, a MIPI-compliant architecture, a high-performance architecture, or other known interconnect architectures.

Referring to Figure 1, an example of a fabric composed of point-to-point links interconnecting a set of components is illustrated. System 100 includes a processor 105 and a system memory 110 coupled to a controller hub 115. The processor 105 may include any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or another processor. The processor 105 is coupled to the controller hub 115 through a front-side bus (FSB) 106. In one embodiment, the FSB 106 is a serial point-to-point interconnect as described below. In another embodiment, the link 106 includes a serial, differential interconnect architecture that is compliant with different interconnect standards.

System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in the system 100. The system memory 110 is coupled to the controller hub 115 through a memory interface 116. Examples of memory interfaces include a double data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.

In one embodiment, the controller hub 115 may include a root hub, root complex, or root controller, such as in a PCIe interconnection hierarchy. Examples of the controller hub 115 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub.
Often, the term chipset refers to two physically separate controller hubs, e.g., a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with the processor 105, while the controller 115 communicates with I/O devices in a manner similar to that described below. In some embodiments, peer-to-peer routing is optionally supported through the root complex 115.

Here, the controller hub 115 is coupled to a switch/bridge 120 through a serial link 119. Input/output modules 117 and 121, which may also be referred to as interfaces/ports 117 and 121, may include/implement a layered protocol stack to provide communication between the controller hub 115 and the switch 120. In one embodiment, multiple devices can be coupled to the switch 120.

The switch/bridge 120 routes packets/messages from a device 125 upstream (i.e., up the hierarchy toward the root complex) to the controller hub 115, and downstream (i.e., down the hierarchy away from the root controller) from the processor 105 or system memory 110 to the device 125. In one embodiment, the switch 120 is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. The device 125 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a FireWire device, a universal serial bus (USB) device, a scanner, and other input/output devices. Often in PCIe terminology, for example, such a device is referred to as an endpoint.
Although not specifically shown, the device 125 may include a bridge (e.g., a PCIe-to-PCI/PCI-X bridge) to support legacy or other versions of devices, or interconnect fabrics supported by such devices.

The graphics accelerator 130 may also be coupled to the controller hub 115 through a serial link 132. In one embodiment, the graphics accelerator 130 is coupled to an MCH, which is coupled to an ICH. The switch 120, and accordingly the I/O device 125, is then coupled to the ICH. The I/O modules 131 and 118 also implement a layered protocol stack for communication between the graphics accelerator 130 and the controller hub 115. Similar to the MCH discussion above, a graphics controller or the graphics accelerator 130 itself may be integrated in the processor 105.

Turning to Figure 2, an embodiment of a layered protocol stack is illustrated. The layered protocol stack 200 may include any form of layered communication stack, such as a QPI stack, a PCIe stack, a next-generation high-performance computing interconnect (HPI) stack, or another layered stack. In one embodiment, the protocol stack 200 may include a transaction layer 205, a link layer 210, and a physical layer 220. Interfaces, such as interfaces 117, 118, 121, 122, 126, and 131 in Figure 1, may be represented as the communication protocol stack 200. A module or interface that implements/includes a protocol stack may also be referred to as a communication protocol stack.

Packets can be used to communicate information between components. Packets can be formed in the transaction layer 205 and the data link layer 210 to carry information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information used to handle packets at those layers.
At the receiving side, the reverse process occurs, and packets are transformed from their physical layer 220 representation to the data link layer 210 representation, and finally (for transaction layer packets) to a form that can be processed by the transaction layer 205 of the receiving device.

In one embodiment, the transaction layer 205 can provide an interface between a device's processing core and the interconnect architecture, such as the data link layer 210 and the physical layer 220. In this regard, a primary responsibility of the transaction layer 205 can include the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). The transaction layer 205 can also manage credit-based flow control for TLPs. In some implementations, split transactions can be utilized, i.e., transactions with the request and the response separated in time, allowing a link to carry other traffic while the target device gathers data for the response, among other examples.

Credit-based flow control can be used to realize virtual channels and networks that utilize the interconnect fabric. In one example, a device can advertise an initial amount of credits for each of the receive buffers in the transaction layer 205. An external device at the opposite end of the link, such as the controller hub 115 in Figure 1, can count the number of credits consumed by each TLP. A transaction can be transmitted if the transaction does not exceed the credit limit. Upon receiving a response, an amount of credit is restored. One advantage of such a credit scheme is that, provided the credit limit is not encountered, the latency of credit return does not affect performance, among other potential advantages.

In one embodiment, the four transaction address spaces can include a configuration address space, a memory address space, an input/output address space, and a message address space.
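The credit-based flow control described above can be sketched as a small counter on the transmit side; the class and method names are illustrative and are not part of the PCIe or HPI specifications:

```python
class CreditFlowControl:
    """Minimal sketch of transmit-side credit accounting.

    The initial credit amount is what the receiver advertises for its
    receive buffers; credit units are simplified to one per TLP.
    """

    def __init__(self, initial_credits: int):
        self.credits = initial_credits

    def try_send(self, tlp_cost: int = 1) -> bool:
        # A transaction is transmitted only if it does not exceed
        # the credit limit.
        if tlp_cost > self.credits:
            return False
        self.credits -= tlp_cost
        return True

    def on_completion(self, tlp_cost: int = 1) -> None:
        # Credit is restored when the receiver frees its buffer and
        # returns the credit.
        self.credits += tlp_cost
```

As the text notes, as long as the credit limit is never reached, the latency of the credit return path never stalls the sender.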
Memory space transactions include one or more of read requests and write requests to transfer data to/from a memory-mapped location. In one embodiment, memory space transactions are able to use two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions can be used to access the configuration spaces of various devices connected to the interconnect. Configuration space transactions can include read requests and write requests. Message space transactions (or, simply, messages) can also be defined to support in-band communication between interconnect agents. Therefore, in one example embodiment, the transaction layer 205 can assemble the packet header/payload 206.

Quickly referring to Figure 3, an example embodiment of a transaction layer packet descriptor is illustrated. In one embodiment, the transaction descriptor 300 can be a mechanism for carrying transaction information. In this regard, the transaction descriptor 300 supports the identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and the association of transactions with channels. For instance, the transaction descriptor 300 can include a global identifier field 302, an attributes field 304, and a channel identifier field 306. In the illustrated example, the global identifier field 302 is depicted as comprising a local transaction identifier field 308 and a source identifier field 310. In one embodiment, the global transaction identifier 302 is unique for all outstanding requests.

According to one implementation, the local transaction identifier field 308 is a field generated by the requesting agent, and can be unique for all outstanding requests that require a completion for that requesting agent.
Furthermore, in this example, the source identifier 310 uniquely identifies the requester agent within the interconnect hierarchy. Accordingly, together with the source ID 310, the local transaction identifier field 308 provides global identification of a transaction within the hierarchy domain.

The attributes field 304 specifies characteristics and relationships of the transaction. In this regard, the attributes field 304 can be used to provide additional information that allows modification of the default handling of transactions. In one embodiment, the attributes field 304 includes a priority field 312, a reserved field 314, an ordering field 316, and a no-snoop field 318. Here, the priority subfield 312 can be modified by an initiator to assign a priority to the transaction. The reserved attribute field 314 is left reserved for future use or vendor-defined usage. Possible usage models employing priority or security attributes can be implemented using the reserved attribute field.

In this example, the ordering attribute field 316 is used to supply optional information conveying the type of ordering that can modify default ordering rules. According to one example implementation, an ordering attribute of "0" denotes that default ordering rules are to apply, while an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. The no-snoop attribute field 318 is utilized to determine whether transactions are snooped. As shown, the channel ID field 306 identifies the channel with which the transaction is associated.

Returning to the discussion of Figure 2, the link layer 210, also referred to as the data link layer 210, can act as an intermediate stage between the transaction layer 205 and the physical layer 220.
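The Figure 3 descriptor fields above can be sketched as a simple record; the field widths and attribute encodings are illustrative assumptions rather than values taken from a specification:

```python
from dataclasses import dataclass

@dataclass
class TransactionDescriptor:
    """Sketch of the Figure 3 transaction descriptor (fields 302-318)."""
    local_txn_id: int  # 308: unique per outstanding request of the requester
    source_id: int     # 310: uniquely identifies the requester agent
    priority: int      # 312: set by the initiator
    ordering: int      # 316: 0 = default ordering rules, 1 = relaxed ordering
    no_snoop: int      # 318: whether the transaction is snooped
    channel_id: int    # 306: channel the transaction is associated with

    def global_id(self) -> tuple:
        # Source ID together with the local transaction ID globally
        # identifies the transaction within the hierarchy domain.
        return (self.source_id, self.local_txn_id)
```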
In one embodiment, a responsibility of the data link layer 210 is to provide a reliable mechanism for exchanging transaction layer packets (TLPs) between two components over a link. One side of the data link layer 210 accepts TLPs assembled by the transaction layer 205, applies a packet sequence identifier 211 (i.e., an identification number or packet number), calculates and applies an error detection code (i.e., CRC 212), and submits the modified TLPs to the physical layer 220 for transmission across the physical medium to an external device.

In one example, the physical layer 220 includes a logical sub-block 221 and an electrical sub-block 222 to physically transmit a packet to an external device. Here, the logical sub-block 221 is responsible for the "digital" functions of the physical layer 220. In this regard, the logical sub-block can include a transmit section to prepare outgoing information for transmission by the electrical sub-block 222, and a receiver section to identify and prepare received information before passing it to the link layer 210.

The electrical sub-block 222 includes a transmitter and a receiver. The transmitter is supplied with symbols by the logical sub-block 221, which the transmitter serializes and transmits to the external device. The receiver is supplied with serialized symbols from the external device and transforms the received signals into a bit stream. The bit stream is de-serialized and supplied to the logical sub-block 221. In one example embodiment, an 8b/10b transmission code is employed, where ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet with frames 223.
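The serialize/deserialize step described above can be sketched as follows; this shows only the flattening of ten-bit symbols into a serial bit stream and back, and omits the actual 8b/10b code tables, which are not reproduced in the text:

```python
def serialize(symbols, symbol_width=10):
    """Flatten symbols into a serial bit stream, least significant bit first.

    The ten-bit symbol width follows the 8b/10b example above; the bit
    ordering is an illustrative assumption.
    """
    bits = []
    for sym in symbols:
        for i in range(symbol_width):
            bits.append((sym >> i) & 1)
    return bits

def deserialize(bits, symbol_width=10):
    """Recover the symbols from the serial bit stream."""
    symbols = []
    for off in range(0, len(bits), symbol_width):
        chunk = bits[off:off + symbol_width]
        symbols.append(sum(b << i for i, b in enumerate(chunk)))
    return symbols
```

Round-tripping through the pair recovers the original symbols, which is the property the transmitter's serializer and the receiver's de-serializer must preserve.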
In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.

As stated above, although the transaction layer 205, the link layer 210, and the physical layer 220 are discussed in reference to a specific embodiment of a protocol stack (e.g., a PCIe protocol stack), a layered protocol stack is not so limited. In fact, any layered protocol can be included/implemented and adopt the features discussed herein. As an example, a port/interface represented as a layered protocol can include: (1) a first layer to assemble packets, i.e., a transaction layer; (2) a second layer to sequence packets, i.e., a link layer; and (3) a third layer to transmit the packets, i.e., a physical layer. As a specific example, the high-performance interconnect layered protocol described herein is utilized.

Referring next to Figure 4, an example embodiment of a serial point-to-point fabric is illustrated. A serial point-to-point link can include any transmission path for transmitting serial data. In the embodiment shown, the link can include two low-voltage, differentially driven signal pairs: a transmit pair 406/411 and a receive pair 412/407. Accordingly, the device 405 includes transmission logic 406 to transmit data to the device 410 and receiving logic 407 to receive data from the device 410. In other words, two transmitting paths (i.e., paths 416 and 417) and two receiving paths (i.e., paths 418 and 419) are included in some implementations of a link.

A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or another communication path. A connection between two devices, such as device 405 and device 410, is referred to as a link, such as link 415.
A link can support one lane, with each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link can aggregate multiple lanes, denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider.

A differential pair can refer to two transmission paths, such as lines 416 and 417, that transmit differential signals. As an example, when line 416 toggles from a low voltage level to a high voltage level (i.e., a rising edge), line 417 drives from a high logic level to a low logic level (i.e., a falling edge). Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e., cross-coupling, voltage overshoot/undershoot, ringing, among other example advantages. This allows for a better timing window, which in turn enables faster transmission frequencies.

In one embodiment, a new high-performance interconnect (HPI) is provided, which can include a next-generation cache-coherent, link-based interconnect. As one example, HPI can be utilized in high-performance computing platforms, such as workstations or servers, where each system includes PCIe or another interconnect protocol to connect processors, accelerators, I/O devices, and so on. However, HPI is not so limited. Instead, HPI can be utilized in any of the systems or platforms described herein. Furthermore, the individual ideas developed can be applied to other interconnects and platforms, such as PCIe, MIPI, QPI, and so on.

To support multiple devices, in one example implementation, HPI can be instruction set architecture (ISA) agnostic (i.e., HPI is able to be implemented in multiple different devices). In another scenario, HPI can also be utilized to connect high-performance I/O devices, not just processors or accelerators.
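The differential-pair behavior described above can be sketched as follows; the voltage levels and function names are illustrative assumptions. The point of the sketch is that the receiver decides on the sign of the difference between the two lines, so noise common to both lines cancels out:

```python
def drive_differential(bit: int) -> tuple:
    """Drive the two lines of a differential pair to opposite levels.

    A rising edge on one line coincides with a falling edge on the other,
    as in the lines 416/417 example above. Voltage levels are illustrative.
    """
    high, low = 1.0, 0.0
    return (high, low) if bit else (low, high)

def receive_differential(line_a: float, line_b: float) -> int:
    # The bit is recovered from the sign of the difference, which
    # rejects noise that appears identically on both lines.
    return 1 if (line_a - line_b) > 0 else 0
```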
For example, a high-performance PCIe device can be coupled to HPI through an appropriate translation bridge (i.e., HPI to PCIe). Moreover, the HPI links can be utilized by many HPI-based devices, such as processors, in various ways (e.g., stars, rings, meshes, etc.). Figure 5 illustrates example implementations of multiple potential multi-socket configurations. As depicted, a two-socket configuration 505 can include two HPI links; however, in other implementations, one HPI link can be utilized. For larger topologies, any configuration can be utilized as long as an identifier (ID) is assignable and there is some form of virtual path, among other additional or substitute features. As shown, in one example, a four-socket configuration 510 has an HPI link from each processor to another processor. But in the eight-socket implementation shown in configuration 515, not every socket is directly connected to every other socket through an HPI link. However, if a virtual path or channel exists between the processors, the configuration is supported. A range of supported processors includes 2-32 in a native domain. Higher numbers of processors can be reached through the use of multiple domains or other interconnects between node controllers, among other examples.

The HPI architecture includes a definition of a layered protocol architecture, including in some examples protocol layers (coherent, non-coherent, and, optionally, other memory-based protocols), a routing layer, a link layer, and a physical layer. Furthermore, HPI can also include enhancements related to power managers (such as power control units (PCUs)), design for test and debug (DFT), fault handling, registers, security, among other examples. Figure 6 illustrates an embodiment of an example HPI layered protocol stack. In some implementations, at least some of the layers illustrated in Figure 6 may be optional.
Each layer deals with its own level of granularity or quantum of information (the protocol layer 620a,b with packets 630, the link layer 610a,b with flits 635, and the physical layer 605a,b with phits 640). Note that a packet, in some embodiments, may include partial flits, a single flit, or multiple flits based on the implementation.

As a first example, a width of a phit 640 includes a 1-to-1 mapping of link width to bits (e.g., a 20-bit link width includes a 20-bit phit, etc.). Flits may have a greater size, such as 184, 192, or 200 bits. Note that if phit 640 is 20 bits wide and the size of flit 635 is 184 bits, then it takes a fractional number of phits 640 to transmit one flit 635 (e.g., 9.2 phits at 20 bits to transmit a 184-bit flit 635, or 9.6 phits at 20 bits to transmit a 192-bit flit, among other examples). Note that widths of the fundamental link at the physical layer may vary. For example, the number of lanes per direction may include 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, and so on. In one embodiment, the link layer 610a,b is capable of embedding pieces of multiple different transactions in a single flit, and one or multiple headers (e.g., 1, 2, 3, 4) may be embedded within the flit. In one example, HPI splits the headers into corresponding slots to enable multiple messages in the flit destined for different nodes.

In one embodiment, the physical layer 605a,b can be responsible for the fast transfer of information on the physical medium (electrical or optical, etc.). The physical link can be point-to-point between two link layer entities, such as layers 605a and 605b. The link layer 610a,b can abstract the physical layer 605a,b from the upper layers and provide the ability to reliably transfer data (as well as requests) and manage flow control between two directly connected entities.
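The flit-to-phit arithmetic above can be made concrete with a short sketch. The widths are the ones named in the text; the helper function itself is purely illustrative, not part of any HPI specification:

```python
from fractions import Fraction

def phits_per_flit(flit_bits: int, link_width: int) -> Fraction:
    """Number of phits needed to carry one flit, where a phit's
    width maps 1-to-1 to the link width in bits."""
    return Fraction(flit_bits, link_width)

# Examples from the text: a 20-lane link carrying 184- and 192-bit flits.
print(float(phits_per_flit(184, 20)))  # 9.2 phits per flit
print(float(phits_per_flit(192, 20)))  # 9.6 phits per flit
```

The fractional results show why flit boundaries do not generally coincide with phit boundaries on a 20-bit link.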
The link layer can also be responsible for virtualizing the physical channel into multiple virtual channels and message classes. The protocol layer 620a,b relies on the link layer 610a,b to map protocol messages into the appropriate message classes and virtual channels before handing them to the physical layer 605a,b for transfer across the physical links. The link layer 610a,b may support multiple messages, such as a request, snoop, response, writeback, non-coherent data, among other examples.

The physical layer 605a,b (or PHY) of HPI can be implemented above the electrical layer (i.e., the electrical conductors connecting two components) and below the link layer 610a,b, as illustrated in FIG. 6. The physical layer and corresponding logic can reside on each agent and connect the link layers on two agents (A and B) separated from each other (e.g., on devices on either side of a link). The local and remote electrical layers are connected by physical media (e.g., wires, conductors, optical fiber, etc.). The physical layer 605a,b, in one embodiment, has two major phases, initialization and operation. During initialization, the connection is opaque to the link layer, and signaling may involve a combination of timed states and handshake events. During operation, the connection is transparent to the link layer, signaling is at a speed, and all lanes operate together as a single link. During the operation phase, the physical layer transfers flits from agent A to agent B and from agent B to agent A. The connection is also referred to as a link and abstracts some physical aspects, including media, width, and speed, from the link layers while exchanging flits and control/status of the current configuration (e.g., width) with the link layer. The initialization phase includes minor phases, e.g., polling and configuration.
The operation phase also includes minor phases (e.g., link power management states).

In one embodiment, the link layer 610a,b can be implemented so as to provide reliable data transfer between two protocol or routing entities. The link layer can abstract the physical layer 605a,b from the protocol layer 620a,b, can be responsible for flow control between two protocol agents (A, B), and can provide virtual channel services to the protocol layer (message classes) and the routing layer (virtual networks). The interface between the protocol layer 620a,b and the link layer 610a,b can typically be at the packet level. In one embodiment, the smallest transfer unit at the link layer is referred to as a flit, which has a specified number of bits, such as 192 bits or some other denomination. The link layer 610a,b relies on the physical layer 605a,b to frame the physical layer's 605a,b unit of transfer (phit) into the link layer's 610a,b unit of transfer (flit). In addition, the link layer 610a,b may be logically broken into two parts, a sender and a receiver. A sender/receiver pair on one entity may be connected to a receiver/sender pair on another entity. Flow control is often performed on both a flit and a packet basis. Error detection and correction is also potentially performed on a flit-level basis.

In one embodiment, the routing layer 615a,b can provide a flexible and distributed method to route HPI transactions from a source to a destination. The scheme is flexible since routing algorithms for multiple topologies may be specified through programmable routing tables at each router (the programming, in one embodiment, is performed by firmware, software, or a combination thereof).
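A per-router programmable routing table of the kind just described might be sketched as follows. The node IDs, router names, and table contents here are invented for illustration; real HPI tables and their programming are implementation-defined:

```python
# Each router maps a destination node ID to an output port. Routing is a
# series of per-hop lookups: inject at the source, forward at intermediate
# routers, and target the protocol agent at the destination.
routing_tables = {
    "router0": {"nodeA": "port1", "nodeB": "port2"},
    "router1": {"nodeA": "port0", "nodeB": "port3"},
}

def route_step(router: str, dest: str) -> str:
    """One routing step: look up the output port for a destination."""
    return routing_tables[router][dest]

# A packet for nodeB injected at router0 egresses on port2; arriving at
# router1, the next lookup selects port3.
hops = [route_step("router0", "nodeB"), route_step("router1", "nodeB")]
print(hops)  # ['port2', 'port3']
```

Because the algorithm lives entirely in the tables, firmware or software can retarget the same hardware to star, ring, or mesh topologies by reprogramming the entries.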
The routing functionality may be distributed; the routing may be done through a series of routing steps, with each routing step being defined through a lookup of a table at either the source, intermediate, or destination routers. The lookup at a source may be used to inject an HPI packet into the HPI fabric. The lookup at an intermediate router may be used to route an HPI packet from an input port to an output port. The lookup at a destination port may be used to target the destination HPI protocol agent. Note that the routing layer, in some implementations, can be thin since the routing tables, and hence the routing algorithms, are not specifically defined by specification. This allows for flexibility and a variety of usage models, including flexible platform architectural topologies to be defined by the system implementation. The routing layer 615a,b relies on the link layer 610a,b for providing the use of up to three (or more) virtual networks (VNs): in one example, two deadlock-free VNs, VN0 and VN1, with several message classes defined in each virtual network. A shared adaptive virtual network (VNA) may be defined in the link layer, but this adaptive network may not be exposed directly in routing concepts, since each message class and virtual network may have dedicated resources and guaranteed forward progress, among other features and examples.

In some implementations, HPI can utilize an embedded clock. A clock signal can be embedded in data transmitted using the interconnect. With the clock signal embedded in the data, distinct and dedicated clock lanes can be omitted. This can be useful, for instance, as it allows more pins of a device to be dedicated to data transfer, particularly in systems where space for pins is at a premium.

A link can be established between two agents on either side of an interconnect. An agent sending data can be a local agent, and the agent receiving the data can be a remote agent.
Both agents can employ state machines to manage various aspects of the link. In one embodiment, the physical layer datapath can transmit flits from the link layer to the electrical front-end. The control path, in one implementation, includes a state machine (also referred to as a link training state machine or the like). The state machine's actions and exits from states may depend on internal signals, timers, external signals, or other information. In fact, some of the states, such as a few initialization states, may have timers to provide a timeout value to exit a state. Note that detect, in some embodiments, refers to detecting an event on both legs of a lane, though not necessarily simultaneously. However, in other embodiments, detect refers to detection of an event by an agent of reference. Debounce, as one example, refers to sustained assertion of a signal. In one embodiment, HPI supports operation in the event of non-function lanes. Here, lanes may be dropped at specific states.

States defined in the state machine can include reset states, initialization states, and operational states, among other categories and subcategories. In one example, some initialization states can have a secondary timer which is used to exit the state on a timeout (essentially an abort due to failure to make progress in the state). An abort may include updating of registers, such as a status register. Some states can also have primary timer(s) which are used to time the primary functions in the state. Other states can be defined such that internal or external signals (such as handshake protocols) drive transition from the state to another state, among other examples.

A state machine may also support debug through single step, freeze on initialization abort, and use of testers. Here, state exits can be postponed/held until the debug software is ready.
In some instances, the exit can be postponed/held until a secondary timeout. Actions and exits, in one embodiment, can be based on the exchange of training sequences. In one embodiment, the link state machine is to run in the local agent clock domain, and transition from one state to the next is to coincide with a transmitter training sequence boundary. Status registers may be utilized to reflect the current state.

Figure 7 illustrates a representation of at least a portion of a state machine used by agents in one example implementation of HPI. It should be appreciated that the states included in the state table of FIG. 7 include a non-exhaustive listing of possible states. For instance, some transitions are omitted to simplify the diagram. Also, some states may be combined, split, or omitted, while others might be added. Such states can include:

Event reset state: entered on a warm or cold reset event. Restores default values. Initializes counters (e.g., sync counters). May exit to another state, such as another reset state.

Timed reset state: a timed state for in-band reset. May drive a predefined electrically ordered set (EOS) so that remote receivers are capable of detecting the EOS and entering the timed reset as well. The receiver has lanes holding electrical settings. May exit to an agent's calibrate reset state.

Calibrate reset state: calibration without signaling on the lane (e.g., receiver calibration state) or with drivers turned off. May be a predetermined amount of time in the state based on a timer. May set an operational speed. May act as a wait state when a port is not enabled. May include a minimum residency time. Receiver conditioning or staggering off may occur based on design.
May exit to a receiver detect state after a timeout and/or completion of calibration.

Receiver detect state: detects the presence of a receiver on lane(s). May look for receiver termination (e.g., receiver pull-down insertion). May exit to the calibrate reset state when a specified value is set or when another specified value is not set. May exit to the transmitter calibrate state if a receiver is detected or a timeout is reached.

Transmitter calibrate state: for transmitter calibrations. May be a timed state allocated for transmitter calibrations. May include signaling on a lane. May continuously drive an EOS, such as an EIEOS. May exit to a compliance state when done calibrating or on expiration of a timer. May exit to the transmitter detect state if a counter has expired or a secondary timeout has occurred.

Transmitter detect state: qualifies valid signaling. May be a handshake state in which an agent completes actions and exits to a next state based on remote agent signaling. A receiver may qualify valid signaling from the transmitter. The receiver, in one embodiment, looks for a wake detect and, if debounced on one or more lanes, looks for it on the others. The transmitter drives a detect signal. May exit to a polling state in response to completion of debounce on all lanes and/or a timeout, or if debounce on all lanes is not complete and there is a timeout. Here, one or more monitor lanes may be kept awake to debounce a wake signal. And if debounced, the other lanes are potentially debounced. This can enable power savings in low power states.

Polling state: the receiver adapts, initializes the drift buffer, and locks on bits/bytes (e.g., identifies symbol boundaries). Lanes may be deskewed.
A remote agent may cause an exit to a next state (e.g., a link width state) in response to an acknowledgment message. Polling can additionally include a training sequence lock by locking to an EOS and a training sequence header. The lane-to-lane skew at the remote transmitter may be capped at a first length for top speed and a second length for slow speed. Deskew can be performed in a slow mode as well as in an operational mode. The receiver may have a specific maximum for deskewing lane-to-lane skew, such as 8, 16, or 32 intervals of skew. Receiver actions may also include latency fixing. Receiver actions, in one embodiment, can be completed on successful deskew of a valid lane map. A successful handshake can be achieved, in one example, when a number of consecutive training sequence headers are received with acknowledgments and a number of training sequences with an acknowledgment are transmitted after the receiver has completed its actions.

Link width state: the agent communicates a final lane map to the remote transmitter. The receiver receives the information and decodes it. The receiver may record a configured lane map in a structure after a checkpoint of a previous lane map value in a second structure. The receiver may also respond with an acknowledgment ("ACK"). An in-band reset may be initiated. As one example, a first state initiates the in-band reset. In one embodiment, exit to a next state, such as a flit configuration state, is performed in response to the ACK. Further, prior to entering a low power state, a reset signal may also be generated if the frequency of wake detect signal occurrence drops below a specified value (e.g., once per number of unit intervals (UIs), such as 4K UI). The receiver may hold the current and previous lane maps. The transmitter may use different groups of lanes based on training sequences having different values.
Lane mapping, in some embodiments, may not modify some status registers.

Flit lock configuration state: entered by a transmitter, but the state is considered exited (i.e., a secondary timeout is moot) when both the transmitter and receiver have exited to a blocking link state or other link state. An exit by the transmitter to a link state, in one embodiment, includes a start of data sequence (SDS) and training sequence (TS) boundary after receiving a planetary alignment signal. Here, a receiver exit may be based on receiving an SDS from the remote transmitter. This state may be a bridge from the agent to the link state. The receiver identifies the SDS. The receiver may exit to the blocking link state (BLS) (or a control window) if the SDS is received after a descrambler is initialized. If a timeout occurs, exit may be to a reset state. The transmitter drives lanes with a configuration signal. Transmitter exit may be to reset, BLS, or other states based on conditions or timeouts.

Transmitting link state: a link state. Flits are sent to a remote agent. May be entered from a blocking link state, and returns to a blocking link state on an event, such as a timeout. The transmitter transmits flits. The receiver receives flits. May also exit to a low power link state. In some implementations, the transmitting link state (TLS) can be referred to as the L0 state.

Blocking link state: a link state. The transmitter and receiver operate in a unified manner. May be a timed state during which the link layer flits are held off while physical layer information is communicated to the remote agent. May exit to a low power link state (or other link states based on design). A blocking link state (BLS), in one embodiment, occurs periodically. The period is referred to as the BLS interval; it may be timed and may differ between slow speed and operational speed.
Note that the link layer may be periodically blocked from sending flits so that a physical layer control sequence of a certain length may be sent, such as during a transmitting link state or a partial width transmitting link state. In some implementations, the blocking link state (BLS) can be referred to as the L0 control, or L0c, state.

Partial width transmitting link state: a link state. May save power by entering a partial width state. In one embodiment, asymmetric partial width refers to each direction of a two-direction link having different widths, which may be supported in some designs. An example of the partial width transmitting link state is shown in the example of FIG. 9. Here, a partial width indication is sent while transmitting on the link at a first width, so as to transition the link to transmit at a second, new width. A mismatch may result in a reset. Note that speeds may not be altered but widths may be. Therefore, flits are potentially sent at different widths. This state may be logically similar to the transmitting link state; however, since there is a smaller width, it may take longer to transmit flits. May exit to other link states, such as a low power link state, based on certain received and sent messages, or an exit of the partial width transmitting link state or a link blocking state based on other events. In one embodiment, the transmitter port may turn idle lanes off in a staggered manner to provide better signal integrity (i.e., noise mitigation). Here, non-retryable flits, such as null flits, may be utilized during the period when the link width is changing. A corresponding receiver may drop these null flits and turn idle lanes off in a staggered manner, as well as record the current and previous lane maps in one or more structures. Note that the status and associated status registers may remain unaltered.
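The width-change behavior described above can be sketched very roughly as follows. The 8-UI stagger step, the lane numbers, and the flit-stream contents are invented for illustration; only the two mechanisms (dropping null flits, staggering idle lanes off) come from the text:

```python
# Sketch: the link narrows from 20 to 8 lanes. Non-retryable null flits
# fill the transition window, and idle lanes switch off one by one.
def receive(flits):
    """Receiver side: drop null flits; deliver the rest to the link layer."""
    return [f for f in flits if f != "NULL"]

def stagger_off(idle_lanes, step_ui=8):
    """Per-lane turn-off times, staggered to mitigate noise."""
    return {lane: i * step_ui for i, lane in enumerate(idle_lanes)}

stream = ["DATA0", "NULL", "NULL", "DATA1"]
print(receive(stream))             # ['DATA0', 'DATA1']
print(stagger_off(range(8, 20)))   # lanes 8..19 turned off 8 UI apart
```

Staggering avoids the simultaneous switching noise that turning twelve drivers off in the same UI would inject into the still-active lanes.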
In some implementations, the partial width transmitting link state can be referred to as a partial L0, or L0p, state.

Exit partial width transmitting link state: exits the partial width state. May or may not use a blocking link state in some implementations. The transmitter, in one embodiment, initiates the exit by sending partial width exit patterns on the idle lanes to train and deskew them. As one example, an exit pattern begins with an EIEOS, which is detected and debounced to signal that the lane is ready to start entry into a full transmitting link state, and may end with an SDS or a fast training sequence (FTS) on the idle lanes. Any failure during the exit sequence (receiver actions, such as deskew not completed prior to a timeout) stops flit transfers to the link layer and asserts a reset, which is handled by resetting the link on the next occurrence of the blocking link state. The SDS may also initialize the scrambler/descrambler on the lanes to appropriate values.

Low power link state: a lower power state. In one embodiment, it is lower power than the partial width link state, since signaling in this embodiment is stopped on all lanes and in both directions. Transmitters may use a blocking link state to request the low power link state. Here, a receiver may decode the request and respond with an ACK or a NAK; otherwise, a reset may be triggered. In some implementations, the low power link state can be referred to as the L1 state.

In some implementations, state transitions can be facilitated to allow states to be bypassed, for instance, when state actions of the states, such as certain calibrations and configurations, have already been completed. Previous state results and configurations of a link can be stored and reused in subsequent initializations and configurations of the link. Rather than repeating such configurations and state actions, the corresponding states can be bypassed.
Traditional systems implementing state bypasses, however, often involve complex designs and expensive validation escapes. Rather than using a traditional bypass, in one example, HPI can utilize short timers in certain states (e.g., states where the state actions do not need to be repeated). This can potentially allow for more uniform and synchronized state machine transitions, among other potential advantages.

In one example, a software-based controller (e.g., through an external control point for the physical layer) can enable a short timer for one or more particular states. For instance, for a state whose actions have already been performed and stored, the state can be short-timed to facilitate a quick exit from the state to a next state. If, however, the previous state action fails or cannot be applied within the short timer duration, a state exit can be performed. Further, the controller can disable the short timer, for instance, when the state actions should be re-performed. A long, or default, timer can be set for each respective state. A state exit can occur if configuration actions at the state cannot be completed within the long timer. The long timer can be set to a reasonable duration so as to allow completion of the state actions. The short timer, by contrast, may be considerably shorter, making it impossible, in some cases, to perform the state actions without reference back to previously-performed state actions, among other examples.

In some instances, during initialization (or re-initialization) of a link, as agents progress through a state machine toward an operational link state, one or more failures or state exits can occur that cause the state to reset (e.g., to a reset state or other state). In effect, the initialization of the link can loop through one or more states without completing the initialization and entering a link state.
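The short-timer versus default-timer behavior can be modeled, very roughly, as follows. The state names, durations, and the three-way outcome are placeholders, not the actual HPI state set or timer values:

```python
# Minimal model: a state exits forward when its action completes within
# the active timer, and exits to reset otherwise. A "short timer" only
# succeeds when the state's result was stored from a prior run, so the
# action itself need not be repeated.
def run_state(action_time_ui, stored_result, short_timer_enabled,
              short_ui=100, long_ui=4000):
    if short_timer_enabled and stored_result:
        return "next"          # bypass-like quick exit on stored results
    timeout = short_ui if short_timer_enabled else long_ui
    if action_time_ui <= timeout:
        return "next"          # action completed in time
    return "reset"             # timeout: state exit toward reset

print(run_state(2500, stored_result=False, short_timer_enabled=False))  # next
print(run_state(2500, stored_result=False, short_timer_enabled=True))   # reset
print(run_state(2500, stored_result=True, short_timer_enabled=True))    # next
```

The middle case is the failure mode the text warns about: with the short timer enabled but no stored result to fall back on, the state cannot finish its action in time and exits.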
In one example, a count can be maintained of the number of unproductive loops in state transitions within the initialization of a link. For instance, each time an initialization returns to a reset state without reaching a link state, a counter can be incremented. The counter can be reset for the link once the link successfully enters a link state. Such counters can be maintained by agents on both sides of the link. Further, a threshold can be set, for instance, by a software-based controller utilizing one or more external control points. When the count of unproductive loops meets (or exceeds) the defined threshold, initialization of the link can be suspended (e.g., set and held at or before the reset state). In some implementations, in order to recommence initialization and release it from the suspended state, a software-based controller can trigger a restart or re-initialization of the link. In some instances, the software-based tools can analyze the nature of the suspended initializations and perform diagnostics, set register values, and perform other operations so as to guard against further looping of the initialization. Indeed, in some implementations, a controller can set a higher counter threshold or even override the counter, among other examples, in connection with restarting a suspended link initialization.

In some implementations of HPI, supersequences can be defined, each supersequence corresponding to a respective state or entry/exit to/from the respective state. A supersequence can include a repeating sequence of data sets and symbols. The sequences can repeat, in some instances, until completion of a state or state transition, or communication of a corresponding event, among other examples. In some instances, the repeating sequence of a supersequence can repeat according to a defined frequency, such as a defined number of unit intervals (UIs).
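The unproductive-loop counter and threshold mechanism described earlier in this passage could be sketched as follows (the threshold value of 3 is arbitrary; the method names are invented):

```python
class LinkInit:
    """Track unproductive initialization loops; suspend at a threshold."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.count = 0
        self.suspended = False

    def returned_to_reset(self):
        """Called each time init falls back to reset without a link state."""
        self.count += 1
        if self.count >= self.threshold:
            self.suspended = True   # held at reset until software intervenes

    def entered_link_state(self):
        self.count = 0              # successful init clears the counter

    def software_restart(self, new_threshold=None):
        """A software-based controller releases the suspension, optionally
        raising (or effectively overriding) the threshold."""
        if new_threshold is not None:
            self.threshold = new_threshold
        self.suspended = False
        self.count = 0

link = LinkInit(threshold=3)
for _ in range(3):
    link.returned_to_reset()
print(link.suspended)   # True: three unproductive loops hit the threshold
link.software_restart(new_threshold=10)
print(link.suspended)   # False
```

Holding the link in reset rather than looping forever gives diagnostic software a stable point at which to inspect registers and adjust parameters before retrying.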
A unit interval (UI) can correspond to the interval of time for transmitting a single bit on a lane of the link or system. In some implementations, the repeating sequence can begin with an electrically ordered set (EOS). Accordingly, an instance of the EOS can be expected to repeat in accordance with the predefined frequency. Such ordered sets can be implemented as defined 16-byte codes that may be represented in hexadecimal format, among other examples. In one example, the EOS of a supersequence can be an electric idle exit ordered set (or EIEOS). In one example, an EIEOS can resemble a low-frequency clock signal (e.g., a predefined number of repeating FF00 or FFF000 hexadecimal symbols, etc.). A predefined set of data can follow the EOS, such as a predefined number of training sequences or other data. Such supersequences can be utilized in state transitions, including link state transitions and initialization, among other examples.

As introduced above, initialization, in one embodiment, can be done initially at slow speed followed by initialization at fast speed. Initialization at slow speed uses the default values for the registers and timers. Software then uses the slow speed link to set up the registers, timers, and electrical parameters, and clears the calibration semaphores to pave the way for fast initialization. As one example, initialization can consist of such states or tasks as reset, detect, polling, and configuration, among potentially others.

In one example, a link layer blocking control sequence (i.e., a blocking link state (BLS) or L0c state) can include a timed state during which the link layer flits are held off while the PHY information is communicated to the remote agent. Here, the transmitter and receiver may start a blocking control sequence timer.
And upon expiration of the timers, the transmitter and receiver can exit the blocking state and may take other actions, such as exit to reset or exit to a different link state (or other state), including states that allow for the sending of flits across the link.

In one embodiment, link training can be provided and include the sending of one or more of scrambled training sequences, ordered sets, and control sequences, such as in connection with a defined supersequence. A training sequence symbol may include one or more of a header, reserved portions, a target latency, a pair number, a physical lane map code reference lane or group of lanes, and an initialization state. In one embodiment, the header can be sent with an ACK or NAK, among other examples. As an example, training sequences may be sent as part of supersequences and may be scrambled.

In one embodiment, ordered sets and control sequences are not scrambled or staggered, and are transmitted identically, simultaneously, and completely on all lanes. A valid reception of an ordered set may include checking of at least a portion of the ordered set (or the entire ordered set for partial ordered sets). Ordered sets may include an electrically ordered set (EOS), such as an electric idle ordered set (EIOS) or an EIEOS. A supersequence may include a start of data sequence (SDS) or a fast training sequence (FTS). Such sets and control supersequences can be predefined and may have any pattern or hexadecimal representation, as well as any length. For example, ordered sets and supersequences may be a length of 8 bytes, 16 bytes, or 32 bytes, etc. FTS, as an example, can additionally be utilized for fast bit lock during exit of a partial width transmitting link state.
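The training-sequence fields listed above might be gathered into a record like the following. The field names follow the list in the text, but the types, example values, and any implied widths are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class TrainingSequence:
    header: int          # may carry an ACK or a NAK
    reserved: int        # reserved portions
    target_latency: int  # target latency field
    pair_number: int
    lane_map_code: int   # references a lane or a group of lanes
    init_state: int      # initialization state

# Hypothetical instance; none of these values come from a specification.
ts = TrainingSequence(header=0x1, reserved=0, target_latency=12,
                      pair_number=0, lane_map_code=0b0001, init_state=2)
print(ts.target_latency)  # 12
```

A receiver parsing such symbols would use the header for the ACK/NAK handshake and the lane map code and init state to track where the remote agent is in training.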
Note that the FTS definition can be per lane, and a rotated version of the FTS may be utilized.

Supersequences, in one embodiment, can include the insertion of an EOS, such as an EIEOS, in a training sequence stream. When signaling starts, lanes, in one implementation, power on in a staggered manner. This can result, however, in initial supersequences being seen truncated at the receiver on some lanes. The supersequences can be repeated, however, over short intervals (e.g., approximately one thousand unit intervals (or ~1KUI)). The training supersequences may additionally be used for one or more of deskew, configuration, and communicating initialization targets, lane maps, etc. The EIEOS can be used for one or more of transitioning a lane from an inactive to an active state, screening for good lanes, and identifying symbol and TS boundaries, among other examples.

Turning to Figure 8, representations of example supersequences are shown. For instance, an exemplary Detect supersequence 805 can be defined. The Detect supersequence 805 can include a repeating sequence of a single EIEOS (or other EOS) followed by a predefined number of instances of a particular training sequence (TS). In one example, the EIEOS can be transmitted, immediately followed by seven repeated instances of TS. When the last of the seven TSes is sent, an EIEOS can be sent again, followed by seven additional instances of TS, and so on. This sequence can be repeated according to a particular predefined frequency. In the example of FIG. 8, the EIEOS can reappear on the lanes approximately once every one thousand UIs (~1KUI), followed by the remainder of the Detect supersequence 805.
A receiver can monitor lanes for the presence of the repeating Detect supersequence 805 and, upon validating the supersequence, can conclude that a remote agent is present, has been added on the lanes (e.g., hot plugged), has awoken, or is reinitializing, and so on.

In another example, another supersequence 810 can be defined to indicate a polling, configuration, or loopback condition or state. As with the example Detect supersequence 805, lanes of a link can be monitored by a receiver for such a Poll/Config/Loop supersequence 810 to identify a polling state, configuration state, or loopback state or condition. In one example, a Poll/Config/Loop supersequence 810 can begin with an EIEOS followed by a predefined number of repeated instances of a TS. For instance, in one example, the EIEOS can be followed by thirty-one (31) instances of TS, with the EIEOS repeating approximately every four thousand UI (e.g., ~4KUI).

Further, in another example, a partial width transmitting state (PWTS) exit supersequence 815 can be defined. In one example, a PWTS exit supersequence can include an initial EIEOS to repeat to pre-condition lanes in advance of the sending of the first full sequence in the supersequence. For instance, the sequence to be repeated in supersequence 815 can begin with an EIEOS (to repeat approximately once every 1KUI). Further, fast training sequences (FTS) can be utilized in lieu of other training sequences (TS), the FTS being configured to assist in quicker bit lock, byte lock, and deskew. In some implementations, an FTS can be unscrambled to further assist in bringing idle lanes back to active as quickly and non-disruptively as possible. As with other supersequences preceding an entry into a link transmitting state, the supersequence 815 can be interrupted and ended through the sending of a start of data sequence (SDS).
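As a toy illustration of the cadences above (one EIEOS plus 7 TSes for Detect, one EIEOS plus 31 TSes for Poll/Config/Loop), the symbol strings and repeat counts below are placeholders for the real on-wire patterns:

```python
def supersequence(ts_count, repeats):
    """Each repetition: one EIEOS followed by ts_count training sequences."""
    return (["EIEOS"] + ["TS"] * ts_count) * repeats

detect = supersequence(ts_count=7, repeats=3)    # Detect: EIEOS + 7 TS (~1KUI period)
poll   = supersequence(ts_count=31, repeats=3)   # Poll/Config/Loop: EIEOS + 31 TS (~4KUI)

# A receiver screening a lane can key on the periodic EIEOS boundary to
# distinguish which supersequence, and hence which state, it is seeing.
print(detect.count("EIEOS"), len(detect))  # 3 24
print(poll.count("EIEOS"), len(poll))      # 3 96
```

Because the EIEOS period differs between the two (roughly 1KUI versus 4KUI), a receiver can classify the supersequence from the EIEOS spacing alone, even if individual symbols arrive truncated on late-waking lanes.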
Further, a partial FTS (FTSp) can be sent to assist in synchronizing a new channel to the active channels, for instance by allowing bits to be subtracted from (or added to) the FTSp, among other examples. Supersequences, such as the detection supersequence 805 and the polling/configuration/loopback supersequence 810, can potentially be sent substantially throughout the initialization or reinitialization of a link. Once a particular supersequence is received and detected, the receiver can, in some instances, respond by sending that same supersequence back to the transmitter on the channel. The receiving and validating of a particular supersequence by both the transmitter and the receiver can serve as a handshake acknowledging the state or condition communicated through the supersequence. For instance, such a handshake (e.g., using the detection supersequence 805) can be used to identify reinitialization of a link. In another example, such a handshake can be used to indicate the end of an electrical reset or a low power state, resulting in the corresponding channels being brought back up, among other examples. The end of an electrical reset can be identified, for example, from a handshake in which both the transmitter and the receiver transmit the detection supersequence 805. In another example, channels can be monitored for supersequences, and the supersequences can be used in connection with the screening of channels for detection, wake, state exits and entries, and other events. The predefined and predictable nature and form of supersequences can further be used to perform such initialization tasks as bit lock, byte lock, debouncing, descrambling, de-skew, modification, latency fixing, negotiated delays, and other potential uses.
Indeed, channels can be monitored substantially continuously for such events in order to speed up the system's ability to react to and handle such conditions. In one embodiment, the clock can be embedded in the data, so there is no separate clock channel. The flits sent over the channels can be scrambled to facilitate clock recovery. As an example, the receiver clock recovery unit can deliver a sampling clock to the receiver (i.e., the receiver recovers the clock from the data and uses it to sample the incoming data). In some implementations, the receiver continuously adapts to the incoming bit stream. By embedding the clock, pin count can potentially be reduced. However, embedding the clock in the in-band data can alter the manner in which in-band reset is approached. In one embodiment, a blocking link state (BLS) can be utilized after initialization. Also, electrical ordered set supersequences can be utilized during initialization to facilitate reset, among other considerations. The embedded clock can be common among the devices on a link, and the common operational clock can be set during calibration and configuration of the link. For instance, HPI links can reference a common clock with drift buffers. Such an implementation can realize lower latency than the elastic buffers used in non-common reference clock implementations, among other potential advantages. Further, the reference clock distribution segments can be matched to within specified limits. As noted above, an HPI link can be capable of operating at multiple speeds, including a "slow mode" for default power-up, initialization, and so on. The operational (or "fast") speed or mode of each device can be statically set by BIOS. The common clock on the link can be configured based on the respective operational speeds of the devices on either side of the link. For instance, the link speed can be based on the slower of the two device operational speeds, among other examples.
Any operational speed change can be accompanied by a warm or cold reset. In some examples, upon power-on, the link initializes to slow mode with a transfer rate of, for example, 100 MT/s. Software then sets up the two ends for the operational speed of the link and begins the initialization. In other instances, such as where slow mode is absent or unavailable, a sideband mechanism can be used to set up the link, including the common clock on the link. In one embodiment, the slow mode initialization phase can use the same encoding, scrambling, training sequences (TS), states, etc. as operational speed, but with potentially fewer features (e.g., no electrical parameter setup, no modification, etc.). The slow mode operating phase can also potentially use the same encoding, scrambling, etc. (although other implementations may not), but can have fewer states and features relative to operational speed (e.g., no low power states). Further, slow mode can be implemented using the device's native phase-locked loop (PLL) clock frequency. For instance, HPI can support an emulated slow mode without changing the PLL clock frequency. While some designs can use separate PLLs for slow and fast speed, in some implementations of HPI an emulated slow mode can be achieved by allowing the PLL clock to keep running at the fast operational speed during slow mode. For instance, a transmitter can emulate a slower clock signal by repeating each bit multiple times so as to emulate a slow high clock signal and then a slow low clock signal. The receiver can then oversample the received signal to locate the edges emulated by the repeating bits and identify each bit. In such implementations, ports sharing a PLL can coexist at slow and fast speeds. In some implementations of HPI, modification of the channels on a link can be supported.
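The emulated slow mode described above can be sketched as follows; the 8x repetition factor is an illustrative assumption, and a majority vote stands in for the receiver's oversampling and edge detection:

```python
# Minimal sketch of emulated slow mode: the transmitter repeats each bit
# N times at the fast PLL rate, and the receiver oversamples and
# recovers each bit from the repeated runs. REPEAT=8 is an assumption.

REPEAT = 8  # assumed ratio of fast UIs per emulated slow UI

def tx_emulate_slow(bits):
    """Repeat each bit REPEAT times, emulating a slower clock signal."""
    return [b for b in bits for _ in range(REPEAT)]

def rx_oversample(stream):
    """Recover bits by majority vote over each REPEAT-sized window."""
    out = []
    for i in range(0, len(stream), REPEAT):
        window = stream[i:i + REPEAT]
        out.append(1 if sum(window) * 2 >= len(window) else 0)
    return out

data = [1, 0, 1, 1, 0]
recovered = rx_oversample(tx_emulate_slow(data))  # recovers the original bits
```

A real receiver would hunt for the emulated edges rather than rely on fixed window alignment, but the repetition-and-oversampling principle is the same.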
The physical layer can support both receiver modification and transmitter (or sender) modification. With receiver modification, the transmitter on a channel can send sample data to the receiver, which the receiver logic can process to identify the electrical characteristics of the channel and shortcomings in signal quality. The receiver can then make modifications to the calibration of the channel to optimize the channel based on the analysis of the received sample data. In the case of transmitter modification, the receiver can again receive sample data and develop metrics describing the quality of the channel, but in this case communicate the metrics to the transmitter (e.g., using a backchannel, such as a software, hardware, embedded, sideband, or other channel) to allow the transmitter to make modifications to the channel based on the feedback. In an example implementation, supersequences can be scrambled. For instance, portions of a supersequence, such as the TS payload, can be scrambled by XORing those portions with a random or pseudo-random sequence. Other portions of the supersequence (e.g., the EIEOS, TS header, FTS, etc.) can remain unscrambled. In one example, a pseudo-random binary sequence of at least 23 bits (PRBS23) can be utilized. The PRBS can be generated according to a selected polynomial. In one example, the PRBS can be generated by a similarly sized, self-seeded storage element, such as a linear feedback shift register (LFSR). The LFSR can be a 23-bit Fibonacci LFSR capable of generating a PRBS of a length longer than 8 Mb. The PRBS can repeat following the end of the sequence.
In some implementations, the full length of the PRBS23 sequence can be used to scramble the training sequences included in the supersequences used, for example, during link initialization and during modification. Although a full-length PRBS sequence can be used, in some implementations HPI can support the use of PRBS sequences of varying lengths (e.g., utilizing only a portion of the PRBS23 sequence). In some examples, a controller of a device can specify that only a portion of the full-length PRBS sequence be utilized. This can be desirable, for instance, in test applications where repeatability of the bit sequence is desired, among potentially other applications. A software-based controller can specify the varying lengths of PRBS to apply. For instance, BIOS of a device can specify the PRBS length to be applied on the link. In some implementations, utilizing the full-length PRBS sequence can be the default, for example, to maximize the benefits of a very long PRBS sequence. Channel traffic in the transmitting link state (TLS) and in the training sequences can be scrambled with a PRBS of a particular minimum length (e.g., 23 bits). The starting seed applied to the stream can be varied between the channels to enhance the electrical benefits of the PRBS on the link. In one example implementation, the PRBS can be generated by the 23-bit Fibonacci LFSR implementing a 6-tap generator polynomial, such as x^23 + x^21 + x^16 + x^8 + x^5 + x^2 + 1. In modification, the transmitter of an agent can send a random or pseudo-random pattern to the remote receiver. In some instances, scrambled supersequences can be used as the pattern. Logic at the receiver can determine characteristics of one or more channels of the link and generate metric data describing those characteristics.
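A sketch of the 6-tap Fibonacci LFSR and XOR scrambling described above, using the stated polynomial x^23 + x^21 + x^16 + x^8 + x^5 + x^2 + 1. The tap orientation and the seed value are assumptions (LFSR conventions vary), so the exact bit stream is illustrative; the self-inverse scrambling property holds regardless:

```python
# Hedged sketch of a 23-bit Fibonacci LFSR for PRBS23 and of XOR
# scrambling of a payload with its output. Tap positions follow the
# stated polynomial; the shift direction and seed are assumed.

TAPS = (23, 21, 16, 8, 5, 2)

def prbs23_bits(seed, n):
    """Yield n PRBS bits from a 23-bit Fibonacci LFSR (nonzero seed)."""
    state = seed & 0x7FFFFF
    for _ in range(n):
        fb = 0
        for t in TAPS:
            fb ^= (state >> (t - 1)) & 1     # XOR the tapped stages
        yield state & 1                       # emit the low bit
        state = (state >> 1) | (fb << 22)     # shift in the feedback bit

def scramble(payload_bits, seed):
    """XOR payload bits with the PRBS23 stream (self-inverse)."""
    return [p ^ b for p, b in zip(payload_bits,
                                  prbs23_bits(seed, len(payload_bits)))]

msg = [1, 0, 0, 1, 1, 1, 0, 1]      # stand-in for a TS payload
scrambled = scramble(msg, 0x1ABCDE)  # arbitrary nonzero seed
```

Applying scramble a second time with the same seed descrambles, which is why the unscrambled portions (EIEOS, TS header, FTS) can simply bypass this step.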
In the case of receiver modification, the receiver can attempt to determine the optimal configuration for a channel based on the metrics and apply that configuration at the receiver. In the case of transmitter modification, the receiver can communicate the metrics to the transmitter for use by the sending agent in configuring and modifying the channel based on the metrics. In either instance, in some implementations, hardware or software can step through the different transmitter settings in an algorithmic order to determine the optimal settings. Receiver modification can be initiated at the start of the polling state using the polling supersequence sent from the remote transmitter. Similarly, transmitter modification can be done by repeating the following for each transmitter parameter: both agents enter the loopback pattern state as masters and send the specified pattern; both receivers measure a metric (e.g., BER) for that particular transmitter setting at the remote agent; both agents go to the loopback marker state and then to reset, and use the backchannel (slow mode TLS or sideband) to exchange metrics; and, based on these metrics, the next transmitter setting is identified. Eventually the optimal transmitter settings can be identified and saved for subsequent use. In some implementations, a timer can be used during modification, with modification concluding upon the expiration of a predefined timer value, the value being assumed long enough to permit the transmitter and the receiver to conclude the modification tasks and successfully modify the channels. In other implementations, alternatives can be used to improve the efficiency of link modification. For instance, in one example, a handshake can be used to shorten the time spent in modification to the time actually needed to complete the modification.
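The per-parameter loop above amounts to a search over transmitter settings for the lowest measured metric. A hypothetical sketch, in which measure_ber stands in for the loopback-pattern measurement and backchannel metric exchange:

```python
# Hedged sketch of the transmitter-modification loop: for each candidate
# setting, a pattern is sent in loopback, the remote receiver measures a
# metric (here, bit error rate), metrics are exchanged over the
# backchannel, and the best setting is retained. `measure_ber` is a
# hypothetical stand-in for the real per-setting measurement.

def pick_best_tx_setting(settings, measure_ber):
    """Return the (setting, BER) pair with the lowest measured BER."""
    best_setting, best_ber = None, float("inf")
    for s in settings:
        ber = measure_ber(s)          # loopback pattern + remote Rx metric
        if ber < best_ber:            # compare metrics from the backchannel
            best_setting, best_ber = s, ber
    return best_setting, best_ber

# Toy example: pretend the BER bottoms out at an equalizer value of 3.
best, ber = pick_best_tx_setting(range(8), lambda s: abs(s - 3) * 1e-9)
```

Real implementations can also prune the search algorithmically rather than sweep every setting, as the text notes.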
In one example, the receiver at a first agent, responsible for generating metrics from the samples sent by the transmitter, can send a signal indicating that, whether the modification is performed by the receiver or the transmitter, the receiver has approved the configuration of the link (or channel(s)). Upon receiving the signal, the transmitter can complete the handshake by sending an acknowledgment signal. In some instances, the acknowledgment can indicate similar approval of the link configuration at the sending agent, among other examples. In connection with the modification of a link, metric information and other feedback can be communicated from the receiving agent to the sending agent through a variety of mechanisms. In the case of transmitter modification, the transmitter can identify changes that can be made to one or more properties of the channel to improve the characteristics of the channel. The transmitter can make these changes and send additional sample data over the channel reflecting the changes. The receiver can then, in some instances, provide additional metric data or feedback reporting the quality of the changes. In one example, the receiver can provide the metric information over a backchannel. In one example, such a backchannel can be implemented as a software-based backchannel, for instance by placing the link (or one or more channels) in a slow mode that permits software tools to analyze the quality of the samples received from the transmitter. The software tools can cause metric information or configuration recommendations to be communicated to the sending agent. This can be accomplished through in-band communication, software-to-software messaging, or other means. In another example, a sideband channel (where available on the device(s)) can be utilized as the backchannel.
In still another example, a hardware-based channel can be used as the backchannel, for instance by reserving one channel between the two agents for communicating samples and a second channel for communicating feedback metric data (at least for the duration of the modification event). In yet another example, an embedded channel can be used, employing control or BLS windows for sending the feedback metric data. In some examples, the control windows can be set to slow mode (e.g., to allow for software analysis) while the samples are communicated at operating speed outside the control intervals, among other potential examples. In some examples, modification can include the transmitter sending a PRBS (or the PRBS-scrambled portions of a supersequence) to the receiver in a master-master loopback state. Both agents on the channel can lock to the PRBS and use the sequence as the reference against which changes are measured. One or both of the agents can receive the reference sequence and determine whether the agent's receiver properly reproduces the reference sequence. One or both agents can then assess the quality of the channel based on a comparison of the received sequence against the expected reference sequence. For instance, a bit error rate can be determined for the channel based on this comparison. Additionally, logic at the transmitter (or receiver) can deliberately inject jitter, noise, or other characteristics into the signal prior to transmission during loopback in order to test the quality of the channel (e.g., whether the receiver can still make sense of the signal in the presence of the noise), among other features. The metric data used in modifying the link can include the results of such assessments, including the determined bit error rate. Self-test (e.g., interconnect built-in self-test (IBIST)) can be performed through functionality provided in some implementations of HPI. Supersequences can be utilized in such self-tests.
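The bit error rate assessment described above reduces to comparing the received bits against the locked reference sequence; a minimal sketch with two artificially injected errors:

```python
# Illustrative sketch of channel-quality assessment: the receiver locks
# to the expected reference sequence and computes a bit error rate by
# comparing it against the received bits. Values are toy data.

def bit_error_rate(received, reference):
    """Fraction of bit positions where received differs from reference."""
    assert len(received) == len(reference)
    errors = sum(r != e for r, e in zip(received, reference))
    return errors / len(reference)

ref = [1, 0, 1, 1, 0, 0, 1, 0] * 4       # expected reference sequence
rx = list(ref)
rx[5] ^= 1                               # injected bit error
rx[20] ^= 1                              # injected bit error
# 2 errors over 32 bits gives a BER of 0.0625
```

In the text's jitter-injection case, the impairment is added at the transmitter and the same comparison reveals whether the receiver can still recover the signal.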
For instance, the transmitter, or master, can transmit a pattern including all or a portion of a supersequence, a PRBS sequence, or another sequence. In some instances, the length and repeatability of such sequences can be controlled, allowing a full-length particular sequence to be applied in some instances while only a portion (and repetitions of that portion) of the sequence is applied in others. In some examples, PRBS23, or sequences scrambled using PRBS23, can be used in self-testing of the link. Additionally, the start and end points of the sequence can be specifically selected and used in self-testing and other functionality. Further, through some implementations of HPI, multiple uncorrelated data sequences can be made available, allowing different data sequences to be applied on adjacent channels. In one example, multiple uncorrelated versions of the PRBS can be provided, such as four or more sequences, among other examples. As noted above, loopback can be used in a variety of tasks, including testing, modification, initialization, and so on. In some instances, synchronizing the two agents in loopback can be difficult. For instance, the agent of the receiver may itself be sending data, such as particular training sequences, supersequences, and so on. Further, upon entering loopback, the receiver can splice the data it has originated with the data it is to send back, such as the training sequences it is looping back. In one example, the transmitter, or master, in loopback can include logic to switch from locking on the TSes originated by the receiver agent to locking on the looped-back TSes. Such TS locking can be subject to aliasing risks, among other issues. In one example, the TS, such as the TS payload, can be formatted to help remedy the risk of aliasing, or of confusing a previously sent TS with a newly looped-back TS.
For instance, in one example, the TS can be provided with a suffix of zeroed data, which can include bytes used for descrambling as well as other dual-purpose reserved bytes. Such zeroed bytes can additionally serve to reduce or (statistically) eliminate the risk of missing a newly looped-back TS among the data spliced by the receiver and the data originated by the receiver, among other examples. In loopback, the master can check the integrity of its pattern and relock after loopback, for example through a NAK-ACK handshake in which NAK TSes carry an unchanged payload and the in-band handshake (ACK) carries a parameter payload. Further, master-master loopback can also be supported, with the TS format being used for TS lock at each side of the master-master loopback. In some implementations of HPI, design-for-test features can be provided. In one embodiment, HPI includes hooks to allow post-design testing, debug, and validation. A non-exhaustive, exemplary list of such features is included below. Note that the following features are provided by way of example, as some can be omitted and others added, etc.: Single step: Single step includes a debug feature in which software can cause an agent to step from an initialization state to a link state such as TLS. Software-accessible storage elements, registers, or signals can enable this mode. In this mode, the agent can set a semaphore upon entering a state and perform the state actions, but when the exit conditions (including secondary timeouts) are met, the semaphore can cause the next state transition not to be taken. Here, the actual transition can occur at the direction of a software-based controller, for example by clearing the semaphore. This can potentially allow software to inspect the physical layer as it progresses toward the transmitting state or loopback.
Note that this can be extended to substates, for instance by setting substate semaphores at substate entry, among other examples. As long as a semaphore, such as a bit in a register, is set, the agent can remain in its current state. The transition out of each state can be delayed until an external agent clears the hold bit. Other than in cases involving timeouts, the exit criteria defined by the state rules can be maintained, and so on. The secondary timer can be disabled (e.g., omitted). Here, the clearing of the hold bit can be regarded as an alternative stimulus to the secondary timer timeout, emulating single-step operation, among other examples. Further, software-assisted single step can be performed in a manner that supports the integrity of forward progress. Freeze on initialization abort: This is a debug feature in which, when an initialization aborts, the agent does not immediately transition to the reset state, the transition instead being delayed or suspended so that software-based tools can identify the cause of the abort. For instance, software-based tools can be used to detect the cause of the abort while supporting the integrity of subsequent reset and reinitialization. One or more fields of one or more bit registers, such as a control register, can control this behavior. This feature complements single step by giving software control over exits declared due to failures (single step doing so for normal progress). In one embodiment, by default, the physical layer state machine can retry by transitioning to the reset state immediately after any initialization abort. However, by setting an initialization abort freeze bit in a register, the state machine can be frozen (i.e., held in the same state) at the point of failure without transitioning to the reset state.
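The hold-bit gating described above, in which an otherwise-satisfied exit condition is blocked until software clears a semaphore, and in which a timer timeout can serve as an alternative stimulus, can be sketched as follows. The function and state names are hypothetical, not taken from any specification:

```python
# Hedged sketch of state-transition gating: in normal operation the
# machine leaves a state when its exit conditions are met, unless a
# software-set hold bit (the single-step semaphore) keeps it in place;
# in a timer-driven mode the transition instead occurs on a timeout.

def next_state(current, target, exit_ready, hold_bit,
               timer_mode, timer_expired):
    """Return the state the machine should occupy on this evaluation."""
    if timer_mode:
        # timer-driven mode: transition only on the programmed timeout
        return target if timer_expired else current
    if exit_ready and not hold_bit:
        return target          # normal forward progress
    return current             # held by the semaphore, or not yet ready
```

Clearing hold_bit then acts as the software stimulus permitting the pending transition, mirroring the single-step behavior described above.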
As an example, in freeze-on-initialization-abort mode, when an initialization abort occurs, the state machine can be frozen by setting a state machine hold bit in a register, such as the semaphore described above. In one embodiment, software can access registers to read the halted state and other frozen resources and use the frozen state to debug the state machine. Clearing the hold bit in this frozen state can cause the state machine to exit to reset. In one embodiment, an in-band reset does not release the hold. Automated Test Equipment (ATE): Automated test equipment (ATE) can be used to characterize links in various states, including TLS. In this case, the ATE can act as a proxy agent and use a set of predetermined transmit patterns to bring the device under test (DUT) into TLS. In ATE mode, an ATE mode field holding one or more bits can be set in a register. The DUT performs the same state actions, but when the exit conditions are met, the next state transition is not taken; the actual transition occurs upon a secondary timeout. This mode is thus similar to single step, except that the transitions occur at pre-programmed timeouts rather than through software intervention. For instance, ATE mode can manage progress through each state based on programmable timers. Longer timers set during this mode can allow the handshakes in each state to complete while still exiting at times managed by software or otherwise used in ATE mode. In some instances, high-volume manufacturing (HVM) testing can be performed by connecting the transmitter of a DUT port to its own receiver, bringing this link pair to TLS, and having each initialization mode (other than loopback or compliance slave) send and check a signature pattern in order to pass or fail the DUT.
This can be done without a dedicated pattern, with latency fixing performed at the correct cycle in order to check the signature. IBIST (interconnect built-in self-test): IBIST uses the compliance and loopback states to test the interconnect with a built-in pattern generator and checker. Compliance: For validation purposes, an agent can be made a compliance master or slave. An agent enters compliance from the transmitter calibration state (TCS). The slave retimes the incoming data from the master to its local clock and sends it back (without undoing any polarity inversion or channel reversal). The master sends a compliance pattern and receives the compliance pattern looped back from the slave. The master can enter the loopback pattern state to check more specialized patterns. The master can also be used without a slave so that its transmitter can be characterized. A typical use of compliance is to characterize the operation of the analog front end on some subset of channels when loopback is not functional. The compliance state can be used for jitter or noise investigation, debug, exploring a link, and so on. In the compliance state, the transmitter from the master can drive a supersequence. The receiver looks for a wake on a monitored channel, debounces the wake, drops bad channels, performs modification and bit lock, and so on. The slave transmitter can drive the compliance pattern until its receiver actions are complete, with the loopback then retimed and non-de-skewed. The slave receiver performs similar monitoring and debounce actions. The exit can be to a reset state, such as a timed reset, or to the loopback pattern state to begin a test, among other examples. Loopback: An agent can be made a loopback master in order to test a subset of the channels in detail.
After successful polling, the master enters loopback with the subset of the channels, with the other agent also entering loopback, but as a slave. The loopback master can communicate its intent to enter loopback using a loopback master bit in the polling training sequences (TS). An agent that is not a loopback master and receives this bit in the polling TSes can become a loopback slave. At the end of polling, both connected ports enter the loopback marker state (LMS). From there, the master takes the slave into the loopback pattern state, where the master sends patterns and checks them after they are looped back from the slave. The loopback slave returns de-skewed data (unlike a compliance slave). The state machine can remain in loopback indefinitely, executing test after test. This allows serial tests without loss of bit lock. Tx modification can also use the loopback pattern generation and checking capabilities. During Tx modification, both agents act as masters, but in one scenario the Tx sends patterns and the Rx checks for bit errors. Pattern generation: The pattern generator can be activated in the compliance and loopback states. In one embodiment, a pattern generator, such as the example pattern generator illustrated in the simplified block diagram of Figure 10, can include one or more pattern buffers, each of a specified size (e.g., 128 bits), along with multiple 23-bit (or other length) LFSR seed buffers, accessed through structures such as registers, with each word of the pattern generator selected through indirect addressing of a pattern buffer. In an example implementation, the contents of a pattern buffer are sent serially on each allowed channel, least significant bit first. Each channel can select any of the buffers using a register mechanism. All channels selecting the same pattern buffer send the same data in a given UI.
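The LSB-first serialization and per-channel buffer selection just described can be illustrated with a toy sketch (4-bit values here stand in for the 128-bit pattern buffers in the text; the two-buffer setup is an assumption):

```python
# Sketch of pattern buffer behavior: each channel selects a buffer and
# its contents are sent serially, least significant bit first; channels
# selecting the same buffer emit identical data in a given UI.

def serialize_lsb_first(buffer_value, width):
    """Serialize a pattern buffer value LSB-first into a list of bits."""
    return [(buffer_value >> i) & 1 for i in range(width)]

buffers = {0: 0b1011, 1: 0b0110}     # toy stand-ins for 128-bit buffers
channel_select = [0, 0, 1]            # per-channel buffer choice
streams = [serialize_lsb_first(buffers[s], width=4) for s in channel_select]
# channels 0 and 1 chose buffer 0, so they emit identical bits each UI
```

Per-channel inversion or independent PRBS scrambling, as described next, would be applied to these serialized streams before transmission.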
Each pattern buffer can also be independently scrambled by a 23-bit pseudo-random generator, enabled through bits in a register such as a pattern control register. For instance, using a pattern inversion selection register, the transmission on any channel can be individually inverted. An auto-inversion feature can be enabled, using an auto-inversion enable bit of the pattern generator control register, in order to generate crosstalk patterns, among other examples. For transmitter modification using loopback, a staggered PRBS23 pattern can be selected. This pattern can also be used for scrambling flits in low power states. The number of patterns sent can exceed the loop count in the pattern generator control register, since the loop count refers to the total number of 128-bit patterns received. The master can send an integer multiple of 128 UIs of pattern. The contents of the pattern generator can be sent continuously until at least one of three exit conditions occurs: (i) the loop count status equals the exponential loop count; (ii) "stop on error" is set in a register and an error has occurred on any channel; or (iii) "stop test" is set in a register. By default, dropped transmitter channels, as indicated in the Transmitter Data Lane Dropped Status Register, and dropped receiver channels, as indicated in the Receiver Data Lane Dropped Status Register, neither transmit nor compare any pattern. If an "include dropped lanes" bit is set in the pattern generator control register, the dropped lanes are also driven and pattern-checked in the loopback pattern state. Disabled channels may not participate in testing.
Further, the channel content of the slave transmitter can be controlled via a slave loopback path selection register, so as to either return the content from the Rx channel or select the pattern generator. In some instances, there may be no alignment requirements between the looped-back data and the slave-generated pattern, among other features, structures, and examples. Pattern checking and error counting: Pattern checking can be enabled in loopback. Each receiver channel can compare the data received with the data sent on the corresponding transmitter channel. Slave-side checking can be achieved by programming exactly the same pattern generation values in both the loopback master and slave. The start of checking, and of pattern buffer scrambling, can be marked by the end of the SDS. Each channel can be selected to compare or not, depending on register values. The number of patterns checked can be controlled by the loop count, with each count indicating 128 bits of pattern buffer data. The loop counter can have a 5-bit exponential count to allow for long tests. A loop count value of zero corresponds to an infinite count; in this case, in some implementations, the test can be terminated only by setting the stop test bit. To accommodate the settling of electrical parameters applied on entry into the loopback pattern state, checking can be masked for a duration specified by a time value in the Pattern Checker Control Register. Selective error checking can be controlled using a selective error check start and a selective error check interval in the pattern checker control register, making checking selective for any bits within the interval. During transmitter modification in loopback, both agents can act as masters, but the transmitter sends the patterns and the receiver checks for bit errors.
Another difference is that the start of the test can be set up prior to entering loopback, with a structure used to delay the actual start of the test to the loopback marker (the sending of the SDS). In the loopback pattern state, when the loop count expires, the transmitter modification test ends, and the agents can return to the loopback marker, wait for a timeout, and then exit to reset for backchannel operation. While trying out a series of transmitter parameters, the agents can return to the loopback pattern state rather than to reset until the last parameter has been tried, among other examples. Error counting can be performed with per-channel counters together with a global counter. The error counters can be accessed through a channel error counter register. The channel observed and selected for the global counter can be indicated by a receiver error counter channel selection field in the pattern checker control register. The least significant 8 bits of the error counter can be maintained for each channel; when the state machine enters the loopback pattern state, the most significant 23 bits of the channel error counter register can be used only for the selected channel indicated by the receiver error counter channel selection. The channel error counter register does not saturate at its maximum value but instead rolls over to all 0s, flagged by the setting of an overflow flag (e.g., bit 31 of the channel error counter register) on a per-channel basis. The per-channel counters on unselected channels can freeze at the maximum error count, flagged as an overflow. Initial masking, selective error checking, and loop count termination can also apply to the error counters.
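One reading of the counter widths described above (8 bits per channel, widened to 31 bits with a bit-31 overflow flag for the selected channel, and unselected channels freezing at their maximum) can be sketched as follows; the class is illustrative, not a register-accurate model:

```python
# Hedged sketch of the per-channel error counter behavior: the selected
# channel gets a wide 31-bit count that rolls over to zero and sets an
# overflow flag; unselected channels keep an 8-bit count that freezes
# at its maximum, which is also flagged as an overflow.

class LaneErrorCounter:
    def __init__(self, selected):
        self.selected = selected
        self.count = 0
        self.overflow = False

    def record_error(self):
        if self.selected:
            self.count += 1
            if self.count > 0x7FFFFFFF:   # 31-bit rollover to all 0s
                self.count = 0
                self.overflow = True      # e.g., bit 31 overflow flag
        else:
            if self.count < 0xFF:
                self.count += 1           # 8-bit per-channel count
            else:
                self.overflow = True      # frozen at max, flag overflow

sel = LaneErrorCounter(selected=True)
for _ in range(5):
    sel.record_error()
uns = LaneErrorCounter(selected=False)
for _ in range(300):
    uns.record_error()
```

Writing all 1s to the register, as described next, would correspond to resetting count and overflow for the channel.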
Software can manually clear the channel error counter registers, for example by writing all 1s to bits 31:0, among other examples. Channel reversal: If channel reversal or polarity inversion is detected at the receiver during polling, pattern checking can be performed after the channel reversal and polarity inversion are undone (with the data then looped back, in the case of a slave). Agent loopback marker state: The loopback marker is an agent state but, unlike other agent states, the master and slave actions and exits can differ. The loopback slave can undo any polarity inversion and/or channel reversal, but may not descramble or re-scramble the looped-back bits. Since the slave is looping data back, the acknowledgment exchange may not apply to it. Since the slave can de-skew before looping back on symbol boundaries, the master may not be forced to re-lock bytes or re-de-skew, but the master can re-lock on the training sequences to avoid locking to an alias. Means for doing so can include re-seeding of the LFSR, comparing TSes and/or EIEOSes, or some combination of these. The end of the SDS marks the end of loopback setup and the beginning of pattern generation, checking, and counting. Agent loopback pattern state (or blocking link state): In this state, rather than control patterns, the master transmitter can send the IBIST pattern and its receiver can check for errors in the received pattern. For transmitter modification, both agents can be masters. For a predetermined period, the transmitter can send a pattern and the remote receiver can compare this pattern and determine a figure of merit or metric for the received pattern, recorded in a storage element such as a register. The comparison methods and metrics can be design dependent (e.g., BER with jitter injection).
At the end of the period, both agents can exit to reset for the backchannel, in order to examine the metrics and set up the next iteration of transmitter adaptation. Channel enable/disable: channels can be disabled at the transmitter, the receiver, or both in order to let the link operate at a reduced width. If channels are reversed, disabling the correct channels may be the responsibility of a software-based controller or tool. As mentioned above, both timers and controls (e.g., control signals, handshakes, etc.) can be used to facilitate transitions within the state machine defined on an agent in the HPI environment. For example, timers can be used for some state transitions and signaling for others. Further, mechanisms for assisting state transitions can be provided. For example, as described above, in some implementations an ATE mode or other test modes can be provided, and these modes can override some state transition mechanisms, for example to assist in the management and observation of system testing. In an example test mode, all state transitions can be governed by their respective timers as set by the test or test administrator. Logic can also be provided that allows a state that would otherwise transition when a control signal appears to instead transition based on a defined timer, among other examples. Such other examples may include software-controlled state transitions, such as single stepping (e.g., halting via a freeze-on-initialization abort), among other examples. As described above, the BLS or L0c window can be used to transmit various control code signals and other data used in testing, initialization, and error-checking applications. A group of predefined BLS codes can be defined and transmitted within the short window of UIs provided by the BLS.
However, transients, transmission line irregularities, and other factors can cause bit errors, which can potentially cause control codes to be corrupted or misinterpreted. A degree of error detection and correction logic can be provided on the agents on the link, so that minor errors can be tolerated when interpreting and processing control codes. If such logic still cannot unambiguously and decisively resolve a control code error, a mismatch can result. In some implementations of HPI, features can be provided that respond to the potentially catastrophic side effects of a mismatch. For example, in one embodiment, once a mismatch is detected, the link can be suspended, including the sending of potentially corrupted flits, adaptation traffic, and other communications. Then, at the end of the next BLS (or L0c) interval, the link can automatically transition to reset, among other examples. Since both devices on the link can be driven from the same reference clock (for example, ref clk), elastic buffers can be omitted (any elastic buffer can be bypassed or used as a drift buffer with the lowest possible delay). However, a phase-adjustment or drift buffer can be used on each channel to transfer the respective receiver bit stream from the remote clock domain to the local clock domain. The delay of the drift buffer must be sufficient to handle the sum of the drift from all sources in the electrical specification (for example, voltage, temperature, residual SSC introduced by reference clock routing mismatch, etc.), but as small as possible in order to reduce transport delay. If the drift buffer is too shallow, drift errors can result and appear as a burst of CRC errors. Therefore, in some implementations, a drift alarm can be provided that initiates a physical layer reset before an actual drift error occurs, among other examples. Some implementations of HPI can support the two sides running at the same nominal reference clock frequency but with a ppm difference.
In this case, frequency-adjustment (or elastic) buffers may be needed, and these may be re-centered during an extended BLS window or during a dedicated sequence that occurs periodically, among other examples. Assuming that the delay does not cause delay-repair errors or timeouts at the link layer, and subject to other considerations, the operation of the HPI PHY logical layer can be independent of the underlying transmission medium. An external interface can be provided in HPI to assist in managing the physical layer. For example, external signals (from pins, fuses, other layers), timers, and control and status registers can be provided. Input signals can change relative to the PHY state at any time, but are observed by the physical layer at specific points in the respective states. For example, a changing alignment signal (as described below) can be received, but it has no effect after the link has entered the transmitting link state, among other examples. Similarly, command register values are observed by physical layer entities only at specific points in time. For example, physical layer logic can take a snapshot of a value and use it for subsequent operations. Therefore, in some implementations, updates of the command registers may be restricted to a limited subset of specific periods (for example, in the transmitting link state, or while held in reset calibration in the slow-mode transmitting link state) in order to avoid anomalous behavior. Since status values track hardware changes, the values read can depend on when they are read. Some status values, however, such as link map, delay, speed, etc., may not change after initialization.
For example, re-initialization (or exit from a low power link state (LPLS) or L1 state) is the only thing that changes them (for example, a hard channel failure in TLS may not cause link reconfiguration until re-initialization is triggered, among other examples). Interface signals can include signals that are external to, but affect the behavior of, the physical layer. As an example, such interface signals may include encoding and timing signals. Interface signals can be design-specific. These signals can be inputs or outputs. Some interface signals, such as the so-called semaphores and those prefixed EO, among other examples, can be active once per assertion; i.e., they can be de-asserted and then re-asserted to take effect again, among other examples. For example, Table 1 includes an example list of example functions:

Table 1

CSR timer default values can be provided in pairs: one for slow mode and one for operational speed. In some instances, a value of 0 disables the timer (i.e., a timeout never occurs). The timers may include those shown in Table 2 below. Primary timers can be used to time expected actions in a state. Secondary timers are used to abort initializations that are not making progress, or to make state transitions at precise times in automated test equipment (ATE) mode. In some cases, the secondary timer can be much larger than the primary timer for a state. Exponential timer sets can be suffixed with exp, and the timer value is 2 raised to the value of the field. For linear timers, the timer value is the value of the field. Either type of timer can use different granularities. Additionally, some timers in the power management section can belong to a set called a timing profile; these can be associated with a timing diagram of the same name.

Table 2

Command and control registers can be provided. Control registers can be late-action and, in some instances, can be read or written by software.
Late-action values can take effect continuously, once released at a reset (for example, transitioning from a software-facing stage to a hardware-facing stage). Control semaphores (prefixed CP) are RW1S and can be cleared by hardware. Control registers can be used to perform any of the items described herein. They can be modified and accessed by hardware, software, firmware, or a combination thereof. Status registers can be provided to track hardware changes (written and used by hardware) and can be read-only (though debugging software may also be able to write to them). Such registers may not affect interoperability and are typically complemented by a variety of private status registers. Status semaphores (prefixed SP) can be mandated, since they can be cleared by software to redo the actions that set the status. Default means that initial (at-reset) values can be provided as a subset of these status bits related to initialization. On an initialization abort, this register can be copied into a storage structure. A toolbox register can be provided. For example, testability toolbox registers in the physical layer can provide pattern generation, pattern checking, and loopback control mechanisms. Advanced applications can use these registers together with electrical parameters to determine margins. For example, interconnect built-in test can utilize this toolbox to determine margins. For transmitter adaptation, these registers can be used in conjunction with the specific registers described in the previous sections, among other examples. In some implementations, HPI supports reliability, availability, and serviceability (RAS) capabilities utilizing the physical layer. In one embodiment, HPI supports hot plug and remove with one or more layers, which may include software. Hot remove can include disabling the link, and the initialization begin state/signal can be cleared for the agent to be removed.
The remote agent (i.e., the agent not being removed (for example, the host agent)) can be set to slow, and its initialization signal can also be cleared. An in-band reset (for example, via BLS) can cause both agents to wait in a reset state such as the calibration reset state (CRS); the agent to be removed can then be removed (or held in a targeted pin reset, or powered down), among other examples and features. Indeed, some of the above events can be omitted, and additional events can be added. Hot add can include defaulting the initialization speed on the agent to be added to slow, with the initialization signal set. Software can set the speed to slow and can clear the initialization signal on the remote agent. The link can come up in slow mode, and software can determine the operational speed. In some cases, no PLL re-lock of the remote agent is performed at this point. The operational speed can be set on both agents, and enables can be set for adaptation (if not done previously). The initialization begin indicator can be cleared on both agents, and an in-band BLS reset can cause both agents to wait in CRS. Software can assert a warm reset (for example, a targeted reset or self-reset) of the agent to be added, which may cause its PLL to re-lock. Software can also set the initialization begin signal by any known logic and set it on the remote agent (thus advancing it to the receiver detect state (RDS)). Software can de-assert the warm reset of the added agent (thus advancing it to RDS). The link can then initialize at operational speed to the transmitting link state (TLS), or to loopback if an adaptation signal is set, among other examples. Indeed, some of the above events can be omitted, and additional events can be added. Data channel failure recovery can be supported.
In one embodiment, a link in HPI is resilient to a hard error on a single channel by configuring itself to less than full width (for example, less than half the full width), which can exclude the faulty channel. As an example, the configuration can be done by the link state machine, and unused channels can be turned off in the configuration state. As a result, flits can be sent across a narrower width, among other examples. In some implementations of HPI, channel reversal can be supported on some links. Channel reversal can mean, for example, that channels 0/1/2... of the transmitter are connected to channels n/n-1/n-2... of the receiver (for example, n can equal 19 or 7, etc.). Channel reversal can be detected at the receiver, as identified in a field of the TS header. The receiver can handle the channel reversal by starting in the polling state and using physical channels n...0 for logical channels 0...n. Therefore, references to channels can refer to logical channel numbers. Circuit board designers can therefore lay out the physical or electrical design more efficiently, and HPI can operate on virtual channel assignments, as described herein. Furthermore, in one embodiment, polarity can be inverted (i.e., when a differential transmitter +/- is connected to receiver -/+). In one embodiment, the receiver can likewise detect and process polarity inversion in the polling state from one or more TS header fields. Referring to FIG. 11, an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor 1100 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code.
In one embodiment, processor 1100 includes at least two cores, cores 1101 and 1102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1100 may include any number of processing elements, which may be symmetric or asymmetric. In one embodiment, a processing element refers to hardware or logic that supports a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element capable of holding state for a processor, such as an execution state or an architectural state. In other words, in one embodiment, a processing element refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads. A core refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and a core overlaps. Yet a core and a hardware thread are often viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor. As illustrated in FIG.
11, physical processor 1100 includes two cores, cores 1101 and 1102. Here, cores 1101 and 1102 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 1101 includes an out-of-order processor core, while core 1102 includes an in-order processor core. However, cores 1101 and 1102 may be individually selected from any type of core, such as a native core, a software-managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated instruction set architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e., asymmetric cores), some form of translation, such as binary translation, can be utilized to schedule or execute code on one or both cores. For further discussion, the functional units illustrated in core 1101 are described in further detail below; the units in core 1102 operate in a similar manner in the depicted embodiment. As depicted, core 1101 includes two hardware threads 1101a and 1101b, which may also be referred to as hardware thread slots 1101a and 1101b. Therefore, in one embodiment, a software entity such as an operating system potentially views processor 1100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 1101a, a second thread is associated with architecture state registers 1101b, a third thread may be associated with architecture state registers 1102a, and a fourth thread may be associated with architecture state registers 1102b.
Here, each of the architecture state registers (1101a, 1101b, 1102a, and 1102b) may be referred to as a processing element, thread slot, or thread unit, as described above. As illustrated, architecture state registers 1101a are replicated in architecture state registers 1101b, so individual architecture states/contexts are capable of being stored for logical processor 1101a and logical processor 1101b. In core 1101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1130, may also be replicated for threads 1101a and 1101b. Some resources, such as the re-order buffers in reorder/retirement unit 1135, ILTB 1120, load/store buffers, and queues, may be shared through partitioning. Other resources, such as general-purpose internal registers, page-table base register(s), low-level data cache and data TLB 1115, execution unit(s) 1140, and portions of out-of-order unit 1135, are potentially fully shared. Processor 1100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated to individual processing elements. In FIG. 11, a purely exemplary processor embodiment with illustrative logical units/resources of a processor is illustrated. Note that a processor may include or omit any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1101 includes a simplified, representative out-of-order (OOO) processor core; an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1120 to predict branches to be executed/taken and an instruction translation buffer (I-TLB) 1120 to store address translation entries for instructions. Core 1101 further includes decode module 1125 coupled to fetch unit 1120 to decode fetched elements.
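The role of a decode module that maps fetched instruction bytes to predefined operations can be sketched, in highly simplified form, as a lookup from opcode to micro-operations. All opcodes and micro-op names below are invented for illustration; they do not correspond to any real ISA or to decoder 1125 itself.

```python
# Illustrative sketch of decode logic that recognizes instructions by
# opcode and maps them to predefined micro-operations, including a
# specially recognized instruction class (here, hypothetical
# transactional instructions). Encodings are invented.
MICRO_OPS = {
    0x01: ["load"],
    0x02: ["store"],
    0x10: ["tx_begin"],      # "specific instruction" the decoder is
    0x11: ["tx_commit"],     # adapted to recognize (hypothetical)
}

def decode(opcode: int) -> list[str]:
    try:
        return MICRO_OPS[opcode]
    except KeyError:
        raise ValueError(f"opcode {opcode:#x} not in this ISA") from None

print(decode(0x10))   # ['tx_begin']
```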
In one embodiment, the fetch logic includes individual sequencers associated with thread slots 1101a and 1101b, respectively. Usually, core 1101 is associated with a first ISA, which defines/specifies instructions executable on processor 1100. Machine code instructions that are part of the first ISA often include a portion of the instruction (referred to as an opcode) that references/specifies an instruction or operation to be performed. Decode logic 1125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below, in one embodiment decoders 1125 include logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoders 1125, the architecture or core 1101 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions. Note that decoders 1126, in one embodiment, recognize the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoders 1126 recognize a second ISA (either a subset of the first ISA or a distinct ISA). In one example, allocator and renamer block 1130 includes an allocator to reserve resources, such as register files, to store instruction processing results. However, threads 1101a and 1101b are potentially capable of out-of-order execution, where allocator and renamer block 1130 also reserves other resources, such as reorder buffers, to track instruction results.
Unit 1130 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1100. Reorder/retirement unit 1135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out of order. In one embodiment, scheduler and execution unit block 1140 includes a scheduler unit to schedule instructions/operations on the execution units. For example, a floating-point instruction is scheduled on a port of an execution unit that has an available floating-point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating-point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units. Lower-level data cache and data translation buffer (D-TLB) 1150 are coupled to execution unit(s) 1140. The data cache stores recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB stores recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages. Here, cores 1101 and 1102 share access to a higher-level or further-out cache, such as a second-level cache associated with on-chip interface 1110. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, the higher-level cache is a last-level data cache, the last cache in the memory hierarchy on processor 1100, such as a second- or third-level data cache.
However, the higher-level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache, a type of instruction cache, may instead be coupled after decoder 1125 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e., a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations). In the depicted configuration, processor 1100 also includes on-chip interface module 1110. Historically, a memory controller, described in more detail below, has been included in a computing system external to processor 1100. In this scenario, on-chip interface 1110 communicates with devices external to processor 1100, such as system memory 1175, a chipset (often including a memory controller hub to connect to memory 1175 and an I/O controller hub to connect to peripheral devices), a memory controller hub, a northbridge, or another integrated circuit. And in this scenario, bus 1105 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus. Memory 1175 may be dedicated to processor 1100 or shared with other devices in the system. Common examples of types of memory 1175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Note that device 1180 may include a graphics accelerator, a processor or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or another known device. Recently, however, as more logic and devices are being integrated on a single die, such as an SOC, each of these devices may be incorporated on processor 1100.
For example, in one embodiment, a memory controller hub is on the same package and/or die as processor 1100. Here, a portion of the core (an on-core portion) 1110 includes one or more controllers for interfacing with other devices such as memory 1175 or a graphics device 1180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface 1110 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 1105 for off-chip communication. Yet, in the SOC environment, even more devices, such as a network interface, co-processors, memory 1175, graphics processor 1180, and any other known computer devices/interfaces, may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption. In one embodiment, processor 1100 is capable of executing compiler, optimization, and/or translator code 1177 to compile, translate, and/or optimize application code 1176 to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single-pass compilers may still be utilized for simple compilation.
A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization. Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front-end, i.e., generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back-end, i.e., generally where analysis, transformations, optimizations, and code generation take place. Some compilers refer to a middle end, which illustrates the blurring of delineation between a front-end and a back-end of a compiler. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc. in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation, with the calls/operations then transformed into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof. Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate the code.
Therefore, reference to execution of code, application code, program code, or another software environment may refer to: (1) execution of a compiler program, optimization code optimizer, or translator, either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software-related operations, or to optimize code; or (4) a combination thereof. Referring now to FIG. 12, shown is a block diagram of an embodiment of a multicore processor. As shown in the embodiment of FIG. 12, processor 1200 includes multiple domains. Specifically, a core domain 1230 includes a plurality of cores 1230A-1230N, a graphics domain 1260 includes one or more graphics engines having a media engine 1265, and a system agent domain 1210 is also present. In various embodiments, system agent domain 1210 handles power control events and power management, such that individual units of domains 1230 and 1260 (e.g., cores and/or graphics engines) are independently controllable to dynamically operate at an appropriate power mode/level (e.g., active, turbo, sleep, hibernate, deep sleep, or another Advanced Configuration Power Interface-like state) in light of the activity (or inactivity) occurring in the given unit. Each of domains 1230 and 1260 may operate at a different voltage and/or power, and, furthermore, the individual units within the domains each potentially operate at an independent frequency and voltage.
Note that while only shown with three domains, it is to be understood that the scope of the present invention is not limited in this regard, and additional domains may be present in other embodiments. As shown, each core 1230 further includes low-level caches in addition to various execution units and additional processing elements. Here, the various cores are coupled to each other and to a shared cache memory that is formed of a plurality of units or slices of a last-level cache (LLC) 1240A-1240N; these LLCs often include storage and cache controller functionality and are shared among the cores, as well as potentially among the graphics engine too. As seen, a ring interconnect 1250 couples the cores together and provides interconnection between core domain 1230, graphics domain 1260, and system agent circuitry 1210 via a plurality of ring stops 1252A-1252N, each at a coupling between a core and an LLC slice. As seen in FIG. 12, interconnect 1250 is used to carry various information, including address information, data information, acknowledgement information, and snoop/invalidate information. Although a ring interconnect is illustrated, any known on-die interconnect or fabric may be utilized. As an illustrative example, some of the fabrics discussed above (e.g., another on-die interconnect, an On-chip System Fabric (OSF), an Advanced Microcontroller Bus Architecture (AMBA) interconnect, a multi-dimensional mesh fabric, or another known interconnect architecture) may be utilized in a similar fashion. As depicted, system agent domain 1210 includes display engine 1212, which provides control of and an interface to an associated display.
System agent domain 1210 may include other units, such as an integrated memory controller 1220 that provides an interface to a system memory (e.g., a DRAM implemented with multiple DIMMs) and coherence logic 1222 that performs memory coherence operations. Multiple interfaces may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment, at least one direct media interface (DMI) 1216 is provided, as well as one or more PCIe™ interfaces 1214. The display engine and these interfaces typically couple to memory via a PCIe™ bridge 1218. Still further, to provide for communications with other agents, such as additional processors or other circuitry, one or more other interfaces may be provided. Referring now to FIG. 13, shown is a block diagram of a representative core; specifically, the logical blocks of a back-end of a core, such as core 1230 of FIG. 12. In general, the structure shown in FIG. 13 includes an out-of-order processor having a front-end unit 1370 used to fetch incoming instructions, perform various processing (e.g., caching, decoding, branch prediction, etc.), and pass instructions/operations along to an out-of-order (OOO) engine 1380, which performs further processing on the decoded instructions. Specifically, in the embodiment of FIG. 13, out-of-order engine 1380 includes an allocate unit 1382 that receives decoded instructions, which may be in the form of one or more micro-instructions or micro-ops, from front-end unit 1370 and allocates them to appropriate resources such as registers and so forth. Next, the instructions are provided to a reservation station 1384, which reserves resources and schedules them for execution on one of a plurality of execution units 1386A-1386N.
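Scheduling through a reservation station, as just described, allows instructions to finish execution out of their original order, after which results must be restored to program order before retirement. The toy model below illustrates that in-order-retirement step; it is a generic sketch of out-of-order design, not a description of the actual units in the figure, and its entry layout is an assumption.

```python
# Minimal sketch of in-order retirement of out-of-order results: results
# may complete in any order, but are retired strictly in program order.
class ReorderBuffer:
    def __init__(self, program_order):
        self.pending = list(program_order)   # instruction ids, in order
        self.done = {}                       # id -> completed result

    def complete(self, instr_id, result):
        """Record a result; completions may arrive out of order."""
        self.done[instr_id] = result

    def retire(self):
        """Return results in program order, stopping at the first
        instruction that has not completed yet."""
        retired = []
        while self.pending and self.pending[0] in self.done:
            retired.append(self.done.pop(self.pending.pop(0)))
        return retired

rob = ReorderBuffer(["i0", "i1", "i2"])
rob.complete("i2", 30)      # finishes first, but cannot retire yet
rob.complete("i0", 10)
print(rob.retire())         # [10] -- i1 still blocks i2
rob.complete("i1", 20)
print(rob.retire())         # [20, 30]
```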
Various types of execution units may be present, including, for example, arithmetic logic units (ALU), load and store units, vector processing units (VPU), and floating point execution units, among others. Results from these different execution units are provided to a reorder buffer (ROB) 1388, which takes the unordered results and returns them to correct program order. Still referring to FIG. 13, note that both the front-end unit 1370 and the out-of-order engine 1380 are coupled to different levels of a memory hierarchy. Specifically shown is an instruction level cache 1372, which in turn couples to a mid-level cache 1376, which in turn couples to a last level cache 1395. In one embodiment, the last level cache 1395 is implemented in an on-chip (sometimes referred to as uncore) unit 1390. As an example, the unit 1390 is similar to the system agent 1210 of FIG. 12. As described above, the uncore 1390 communicates with system memory 1399, which in the illustrated embodiment is implemented via eDRAM. Note also that the various execution units 1386 within the out-of-order engine 1380 are in communication with a first level cache 1374, which also is in communication with the mid-level cache 1376. Note further that additional cores 1330N-2-1330N can couple to the LLC 1395. Although shown at this high level in the embodiment of FIG. 13, it should be understood that various alterations and additional components may be present.

Turning to FIG. 14, a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction is illustrated, where one or more of the interconnects implement one or more features in accordance with an embodiment of the present invention.
The system 1400 includes a component, such as a processor 1402, to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiments described herein. System 1400 is representative of processing systems based on the Pentium III™, Pentium 4™, Xeon™, Itanium, XScale™ and/or StrongARM™ microprocessors, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In one embodiment, the sample system 1400 executes a version of the Windows™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software. Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. According to at least one embodiment, embedded applications can include a microcontroller, a digital signal processor (DSP), a system on a chip, a network computer (NetPC), a set-top box, a network hub, a wide area network (WAN) switch, or any other system that can perform one or more instructions. In this illustrated embodiment, the processor 1402 includes one or more execution units 1408 to implement an algorithm that is to perform at least one instruction. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 1400 is an example of a 'hub' system architecture.
The computer system 1400 includes a processor 1402 to process data signals. The processor 1402, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 1402 is coupled to a processor bus 1410 that transmits data signals between the processor 1402 and other components in the system 1400. The elements of the system 1400 (e.g. graphics accelerator 1412, memory controller hub 1416, memory 1420, I/O controller hub 1424, wireless transceiver 1426, flash BIOS 1428, network controller 1434, audio controller 1436, serial expansion port 1438, I/O controller 1440, etc.) perform their conventional functions that are well known to those skilled in the art. In one embodiment, the processor 1402 includes a level 1 (L1) internal cache memory 1404. Depending on the architecture, the processor 1402 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches, depending on the particular implementation and needs. A register file 1406 stores different types of data in various registers, including integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and an instruction pointer register. An execution unit 1408, including logic to perform integer and floating point operations, also resides in the processor 1402. In one embodiment, the processor 1402 includes a microcode (ucode) ROM to store microcode, which when executed is to perform algorithms for certain macroinstructions or handle complex scenarios.
Here, the microcode is potentially updateable to handle logic bugs/fixes for the processor 1402. For one embodiment, the execution unit 1408 includes logic to handle a packed instruction set 1409. By including the packed instruction set 1409 in the instruction set of the general-purpose processor 1402, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in the general-purpose processor 1402. Thus, many multimedia applications are accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus to perform one or more operations, one data element at a time. Alternative embodiments of the execution unit 1408 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. The system 1400 includes a memory 1420. The memory 1420 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or another memory device. The memory 1420 stores instructions and/or data, represented by data signals, that are to be executed by the processor 1402. Note that any of the aforementioned features or aspects of the present invention may be utilized on one or more of the interconnects illustrated in FIG. 14. For example, an on-die interconnect (ODI), which is not shown, for coupling internal units of the processor 1402 implements one or more aspects of the present invention described above.
Alternatively, the present invention may be associated with the processor bus 1410 (e.g., another known high performance computing interconnect), the high bandwidth memory path 1418 to the memory 1420, the point-to-point link to the graphics accelerator 1412 (e.g., a fabric compliant with Peripheral Component Interconnect express), the controller hub interconnect 1422, or the I/O or other interconnects (e.g., USB, PCI, PCIe) for coupling the other illustrated components. Some examples of such components include an audio controller 1436, a firmware hub (flash BIOS) 1428, a wireless transceiver 1426, data storage 1424, a legacy I/O controller 1410 containing user input and a keyboard interface 1442, a serial expansion port 1438 such as Universal Serial Bus (USB), and a network controller 1434. The data storage device 1424 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or another mass storage device.

Referring now to FIG. 15, shown is a block diagram of a second system 1500 in accordance with an embodiment of the present invention. As shown in FIG. 15, the multiprocessor system 1500 is a point-to-point interconnect system, and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. Each of the processors 1570 and 1580 may be some version of a processor. In one embodiment, 1552 and 1554 are part of a serial, point-to-point coherent interconnect fabric, such as a high-performance architecture. As a result, the present invention may be implemented within the QPI architecture. While shown with only two processors 1570, 1580, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor. The processors 1570 and 1580 are shown including integrated memory controller units 1572 and 1582, respectively.
The processor 1570 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1576 and 1578; similarly, the second processor 1580 includes P-P interfaces 1586 and 1588. The processors 1570 and 1580 may exchange information via a point-to-point (P-P) interface 1550 using P-P interface circuits 1578 and 1588. As shown in Figure 15, the IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors. The processors 1570 and 1580 each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, and 1598. The chipset 1590 also exchanges information with a high-performance graphics circuit 1538 via an interface circuit 1592 and a high-performance graphics interconnect 1539. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. The chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, the first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited. As shown in FIG. 15, various I/O devices 1514 are coupled to the first bus 1516, along with a bus bridge 1518 which couples the first bus 1516 to a second bus 1520. In one embodiment, the second bus 1520 includes a low pin count (LPC) bus. In one embodiment, various devices are coupled to the second bus 1520.
These devices include, for example, a keyboard and/or mouse 1522, communication devices 1527, and a storage unit 1528 such as a disk drive or other mass storage device, which often includes instructions/code and data 1530. Further, an audio I/O 1524 is shown coupled to the second bus 1520. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multi-drop bus or another such architecture.

Turning next to FIG. 16, an embodiment of a system-on-chip (SOC) design in accordance with the inventions is depicted. As a specific illustrative example, the SOC 1600 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a handheld phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. Often, a UE connects to a base station or node, which potentially corresponds in nature to a mobile station (MS) in a GSM network. Here, the SOC 1600 includes two cores, 1606 and 1607. Similar to the discussion above, the cores 1606 and 1607 may conform to an instruction set architecture, such as an Architecture Core™-based processor, an AMD processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. The cores 1606 and 1607 are coupled to a cache controller 1608 that is associated with a bus interface unit 1609 and an L2 cache 1611 to communicate with other parts of the system 1600.
An interconnect 1610 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which potentially implements one or more aspects described herein. The interconnect 1610 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1630 to interface with a SIM card, a boot ROM 1635 to hold boot code for execution by the cores 1606 and 1607 to initialize and boot the SOC 1600, an SDRAM controller 1640 to interface with external memory and non-volatile memory (e.g., flash memory 1665), a peripheral controller 1650 to interface with peripherals (e.g., Serial Peripheral Interface), a video codec 1620 and video interface 1625 to display and receive input (e.g., touch enabled input), a GPU 1615 to perform graphics related computations, and so forth. Any of these interfaces may incorporate aspects of the invention described herein. In addition, the system illustrates peripherals for communication, such as a Bluetooth module 1670, a 3G modem 1675, GPS 1685, and WiFi 1685. Note that, as stated above, a UE includes a radio for communication. As a result, not all of these peripheral communication modules are required. However, in a UE some form of a radio for external communication is to be included.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language.
Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations.
And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors or registers, or other hardware, such as programmable logic devices.

Use of the phrase 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock.
Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating. Furthermore, use of the phrases 'to,' 'capable of/to,' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way as to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that such use, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represent binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system. Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state.
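The equivalence of these representations, and the encoding of states as values, can be checked directly (an illustrative sketch; the state labels are hypothetical):

```python
# The same quantity admits several machine representations:
# decimal ten, binary 1010, and hexadecimal A all denote one value.
assert 10 == 0b1010 == 0xA

# States encoded as values: a logical one as the default/initial state,
# a logical zero as a non-default state.
DEFAULT, NON_DEFAULT = 1, 0
state = DEFAULT
state = NON_DEFAULT   # an update moves the cell out of its default state
assert state == 0
```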
In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media that may receive information therefrom. Instructions used to program logic to perform embodiments of the invention may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
Disclosed is a method for efficient behavioral analysis on a mobile station. In the method, one or more first behavioral characteristics associated with a first state of a finite state machine are observed. The one or more first behavioral characteristics may comprise a first subset of observable behavioral characteristics. The mobile station transitions from the first state to a second state. One or more second behavioral characteristics associated with the second state of the finite state machine are observed. The one or more second behavioral characteristics may comprise a second subset of the observable behavioral characteristics.
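The state-dependent observation described in the abstract can be sketched as follows (a minimal illustration with hypothetical state names and API names; the actual observable characteristics and per-state subsets are implementation-specific): each state of the finite state machine selects a subset of the observable behavioral characteristics to monitor, so only that subset is observed while the machine is in that state.

```python
# Full set of observable behavioral characteristics (e.g., APIs).
OBSERVABLE = {"open", "read", "write", "send", "close"}

# Per-state subsets: observing fewer characteristics per state is the
# efficiency gain over monitoring every characteristic at all times.
SUBSETS = {
    "S1": {"open", "read"},    # first state: first subset
    "S2": {"send", "close"},   # second state: second subset
}

class BehavioralObserver:
    def __init__(self, initial_state: str):
        self.state = initial_state
        self.log = []

    def event(self, characteristic: str):
        """Record the characteristic only if the current state observes it."""
        if characteristic in SUBSETS[self.state]:
            self.log.append((self.state, characteristic))

    def transition(self, new_state: str):
        self.state = new_state

obs = BehavioralObserver("S1")
obs.event("read")      # observed: "read" is in S1's subset
obs.event("send")      # ignored while in S1
obs.transition("S2")
obs.event("send")      # observed now that the machine is in S2
```

After these calls, `obs.log` holds only the characteristics observed in their respective states, mirroring the first/second subsets of the claims.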
CLAIMS

WHAT IS CLAIMED IS:

1. A method for behavioral analysis on a mobile station, comprising:
observing one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics;
transitioning from the first state to a second state; and
observing one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

2. A method for behavioral analysis as defined in claim 1, wherein the observable behavioral characteristics comprise APIs.

3. A method for behavioral analysis as defined in claim 1, further comprising:
transitioning from the second state to a third state; and
observing one or more third behavioral characteristics associated with a third state of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

4. A method for behavioral analysis as defined in claim 1, wherein the first state comprises an initial state.

5. A method for behavioral analysis as defined in claim 3, wherein the third state comprises a final state.

6. A method for behavioral analysis as defined in claim 1, wherein the one or more first behavioral characteristics are associated with transitions from the first state, and the one or more second behavioral characteristics are associated with transitions from the second state.

7.
A mobile station, comprising:
means for observing one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics;
means for transitioning from the first state to a second state; and
means for observing one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

8. A mobile station as defined in claim 7, wherein the observable behavioral characteristics comprise APIs.

9. A mobile station as defined in claim 7, further comprising:
means for transitioning from the second state to a third state; and
means for observing one or more third behavioral characteristics associated with a third state of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

10. A mobile station as defined in claim 7, wherein the first state comprises an initial state.

11. A mobile station as defined in claim 9, wherein the third state comprises a final state.

12. A mobile station as defined in claim 7, wherein the one or more first behavioral characteristics are associated with transitions from the first state, and the one or more second behavioral characteristics are associated with transitions from the second state.

13.
A mobile station, comprising:
a processor configured to:
observe one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics;
transition from the first state to a second state; and
observe one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

14. A mobile station as defined in claim 13, wherein the observable behavioral characteristics comprise APIs.

15. A mobile station as defined in claim 13, wherein the processor is further configured to:
transition from the second state to a third state; and
observe one or more third behavioral characteristics associated with a third state of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

16. A mobile station as defined in claim 13, wherein the first state comprises an initial state.

17. A mobile station as defined in claim 15, wherein the third state comprises a final state.

18. A mobile station as defined in claim 13, wherein the one or more first behavioral characteristics are associated with transitions from the first state, and the one or more second behavioral characteristics are associated with transitions from the second state.

19.
A computer program product, comprising:
a computer-readable medium, comprising:
code for causing a computer to observe one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics;
code for causing a computer to transition from the first state to a second state; and
code for causing a computer to observe one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

20. A computer program product as defined in claim 19, wherein the observable behavioral characteristics comprise APIs.

21. A computer program product as defined in claim 19, further comprising:
code for causing a computer to transition from the second state to a third state; and
code for causing a computer to observe one or more third behavioral characteristics associated with a third state of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

22. A computer program product as defined in claim 19, wherein the first state comprises an initial state.

23. A computer program product as defined in claim 21, wherein the third state comprises a final state.

24. A computer program product as defined in claim 19, wherein the one or more first behavioral characteristics are associated with transitions from the first state, and the one or more second behavioral characteristics are associated with transitions from the second state.

25.
A method for efficient behavioral analysis on a mobile station, comprising:
observing one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics;
transitioning from the first set of states to a second set of states; and
observing one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

26. A method for efficient behavioral analysis as defined in claim 25, wherein the observable behavioral characteristics comprise APIs.

27. A method for efficient behavioral analysis as defined in claim 25, further comprising:
transitioning from the second set of states to a third set of states; and
observing one or more third behavioral characteristics associated with the third set of states of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

28. A mobile station, comprising:
means for observing one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics;
means for transitioning from the first set of states to a second set of states; and
means for observing one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

29. A mobile station as defined in claim 28, wherein the observable behavioral characteristics comprise APIs.

30.
A mobile station as defined in claim 28, further comprising: means for transitioning from the second set of states to a third set of states; and means for observing one or more third behavioral characteristics associated with the third set of states of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

31. A mobile station, comprising: a processor configured to: observe one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; transition from the first set of states to a second set of states; and observe one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

32. A mobile station as defined in claim 31, wherein the observable behavioral characteristics comprise APIs.

33. A mobile station as defined in claim 31, wherein the processor is further configured to: transition from the second set of states to a third set of states; and observe one or more third behavioral characteristics associated with the third set of states of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

34.
A computer program product, comprising: computer-readable medium, comprising: code for causing a computer to observe one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; code for causing a computer to transition from the first set of states to a second set of states; and code for causing a computer to observe one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

35. A computer program product as defined in claim 34, wherein the observable behavioral characteristics comprise APIs.

36. A computer program product as defined in claim 34, further comprising: code for causing a computer to transition from the second set of states to a third set of states; and code for causing a computer to observe one or more third behavioral characteristics associated with the third set of states of the finite state machine, wherein the one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.
METHOD FOR EFFICIENT BEHAVIORAL ANALYSIS ON A MOBILE STATION

BACKGROUND

Field

[0001] The present invention relates generally to efficient behavioral analysis on a mobile station.

Background

[0002] Detection of malware on a mobile station, such as a cellular telephone, is constrained by the device's limited resources (power, memory, bandwidth, etc.). Thus, PC-style signature matching on a mobile device is not an effective solution for malware detection and removal. An alternative is for a thin client on a device to generate a signature/hash of installed applications, and to forward the signature(s) to a network-based server for signature matching. Unfortunately, network-based signature matching generally fails to protect against "zero-day" attacks, or against web applications and web-based malware.

[0003] Behavior analysis may be used to detect programs and applications that are actively malicious, or poorly written. However, performing behavioral analysis on a mobile station also may be challenging due to limited resources.

[0004] There is therefore a need for a technique for efficient behavioral analysis on a mobile station.

SUMMARY

[0005] An aspect of the present invention may reside in a method for efficient behavioral analysis on a mobile station. In the method, one or more first behavioral characteristics associated with a first state of a finite state machine are observed. The one or more first behavioral characteristics may comprise a first subset of observable behavioral characteristics. The mobile station transitions from the first state to a second state. One or more second behavioral characteristics associated with the second state of the finite state machine are observed. The one or more second behavioral characteristics may comprise a second subset of the observable behavioral characteristics.

[0006] In more detailed aspects of the invention, the observable behavioral characteristics may comprise application program interfaces (APIs).
The one or more first behavioral characteristics may be associated with transitions from the first state, and the one or more second behavioral characteristics may be associated with transitions from the second state.

[0007] In other more detailed aspects of the invention, the method may further include the mobile station transitioning from the second state to a third state. One or more third behavioral characteristics associated with a third state of the finite state machine may be observed. The one or more third behavioral characteristics may comprise a third subset of the observable behavioral characteristics. Also, the first state may comprise an initial state, and the third state may comprise a final state.

[0008] Another aspect of the invention may reside in a mobile station, comprising: means for observing one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; means for transitioning from the first state to a second state; and means for observing one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0009] Another aspect of the invention may reside in a mobile station comprising a processor configured to: observe one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; transition from the first state to a second state; and observe one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral
characteristics.

[0010] Another aspect of the invention may reside in a computer program product, comprising computer-readable medium, comprising: code for causing a computer to observe one or more first behavioral characteristics associated with a first state of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; code for causing a computer to transition from the first state to a second state; and code for causing a computer to observe one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0011] An aspect of the present invention may reside in a method for efficient behavioral analysis on a mobile station. In the method, one or more first behavioral characteristics associated with a first set of states of a finite state machine are observed. The one or more first behavioral characteristics may comprise a first subset of observable behavioral characteristics. The mobile station transitions from the first set of states to a second set of states. One or more second behavioral characteristics associated with the second set of states of the finite state machine are observed. The one or more second behavioral characteristics may comprise a second subset of the observable behavioral characteristics.

[0012] In more detailed aspects of the invention, the method may further include the mobile station transitioning from the second set of states to a third set of states. One or more third behavioral characteristics associated with the third set of states of the finite state machine may be observed.
The one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

[0013] Another aspect of the invention may reside in a mobile station, comprising: means for observing one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; means for transitioning from the first set of states to a second set of states; and means for observing one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0014] Another aspect of the invention may reside in a mobile station comprising a processor configured to: observe one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; transition from the first set of states to a second set of states; and observe one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0015] Another aspect of the invention may reside in a computer program product, comprising computer-readable medium, comprising: code for causing a computer to observe one or more first behavioral characteristics associated with a first set of states of a finite state machine, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; code for causing a computer to transition from the first set of states to a second set of states; and code for causing a
computer to observe one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIG. 1 is a block diagram of an example of a wireless communication system.

[0017] FIG. 2 is a block diagram of an example of a mobile station for detecting malicious activity in conjunction with generic malicious behavior patterns received from a network-based server.

[0018] FIG. 3 is a block diagram of a finite state machine.

[0019] FIG. 4 is a flow diagram of a method for efficient behavioral analysis on a mobile station, according to the present invention.

[0020] FIG. 5 is another block diagram of a finite state machine.

[0021] FIG. 6 is a block diagram of a computer including a processor and a memory.

[0022] FIG. 7 is a block diagram of a finite state machine having bounding boxes for defining a set of states.

DETAILED DESCRIPTION

[0023] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.

[0024] With reference to FIG. 2, a security system 200 in a mobile station 102 may dynamically decide what to observe, and at what levels of detail, through efficient query mechanisms and through dynamic interaction of an analyzer 230 with an observer 240 having access to hardware, sensors, and drivers to enable efficient observation. Techniques for malicious activity detection in a mobile station are described in more detail in U.S. Patent Application Publication No. — (Application Serial No. 13/741,388, filed January 15, 2013), which application is incorporated herein by reference.
The malicious activity detection may involve observation of behavioral characteristics associated with application programming interfaces (APIs).

[0025] The observer 240 may observe the APIs to generate behavior signatures (e.g., vectors of real numbers or graphs). The analyzer 230 takes a behavior signature as an input and correlates the observations against models to perform behavior analysis.

[0026] With reference to FIG. 3, when using state-based behavior specifications, each behavior is specified in terms of a finite state machine with an initial state, a final state, and a set of intermediate states (states 1 through N). State transitions may correspond to API calls, or conditions based on API calls, and their parameters.

[0027] With further reference to FIGS. 4 and 5, an aspect of the present invention may reside in a method 400 for efficient behavioral analysis on a mobile station 102. In the method, one or more first behavioral characteristics (e.g., API1 and API2) associated with a first state S1 of a finite state machine 500 are observed (step 410). The one or more first behavioral characteristics may comprise a first subset of observable behavioral characteristics. The mobile station transitions from the first state S1 to a second state S2 (step 420). One or more second behavioral characteristics (e.g., API3) associated with the second state of the finite state machine are observed (step 430). The one or more second behavioral characteristics may comprise a second subset of the observable behavioral characteristics.

[0028] In more detailed aspects of the invention, the one or more first behavioral characteristics may be associated with transitions from the first state S1, and the one or more second behavioral characteristics may be associated with transitions from the second state S2.

[0029] In other more detailed aspects of the invention, the method may further include the mobile station 102 transitioning from the second state S2 to a third state S3.
One or more third behavioral characteristics (e.g., API4 and API5) associated with a third state of the finite state machine 500 may be observed. The one or more third behavioral characteristics may comprise a third subset of the observable behavioral characteristics. Also, the first state may comprise an initial state, and the third state may comprise a final state.

[0030] The technique of the present invention uses incremental observation to provide a novel methodology to minimize the resources incurred in performing the behavioral analysis at run-time. In essence, the technique pre-computes the question of what to observe next, bypassing the analyzer and thereby taking it out of the decision of what to observe next. The technique may minimize the observation overhead (number of APIs being observed) based on state-based behavior specifications.

[0031] As an example, in FIG. 5, the total number of observable APIs would be seven. Observing all of these APIs would incur much computation and memory/storage overhead. Using state-based incremental observation, at each stage, only those APIs that correspond to the outgoing transitions of the current state in each behavior would need to be observed/monitored. This may significantly reduce the observation overhead because, without the state-based incremental adaptation, all seven APIs would need to be observed all the time, incurring CPU and memory overhead.

[0032] With further reference to FIG.
6, a mobile station 102 may comprise a computer 600 that includes a processor 610, a storage medium 620 such as memory and/or a disk drive, a display 630, an input such as a keypad 640, and a wireless connection 650.

[0033] Another aspect of the invention may reside in a mobile station 102, comprising: means 610 for observing one or more first behavioral characteristics associated with a first state S1 of a finite state machine 500, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; means 610 for transitioning from the first state to a second state S2; and means 610 for observing one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0034] Another aspect of the invention may reside in a mobile station 102 comprising a processor 610 configured to: observe one or more first behavioral characteristics associated with a first state S1 of a finite state machine 500, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; transition from the first state to a second state S2; and observe one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0035] Another aspect of the invention may reside in a computer program product, comprising computer-readable medium 620, comprising: code for causing a computer 600 to observe one or more first behavioral characteristics associated with a first state S1 of a finite state machine 500, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; code for causing a computer to
transition from the first state to a second state S2; and code for causing a computer to observe one or more second behavioral characteristics associated with the second state of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0036] With further reference to FIG. 7, an aspect of the present invention may reside in a method for efficient behavioral analysis on a mobile station 102. In the method, one or more first behavioral characteristics (e.g., API1, API2 and API3) associated with a first set 710 of states of a finite state machine 700 are observed. The one or more first behavioral characteristics may comprise a first subset of observable behavioral characteristics. The mobile station transitions from the first set of states to a second set 720 of states. One or more second behavioral characteristics (e.g., API4, API5, API6 and API7) associated with the second set of states of the finite state machine are observed. The one or more second behavioral characteristics may comprise a second subset of the observable behavioral characteristics.

[0037] In more detailed aspects of the invention, the method may further include the mobile station 102 transitioning from the second set of states to a third set of states. One or more third behavioral characteristics associated with the third set of states of the finite state machine may be observed. The one or more third behavioral characteristics comprise a third subset of the observable behavioral characteristics.

[0038] This technique of using bounding-box incremental adaptation resolves to the basic incremental adaptation for bounding boxes with just one node in each. The bounding box may further address the observation overhead with the selection of appropriate bounding-box sizes. The incremental observation technique of the invention has several benefits.
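The state-based incremental observation described above can be sketched in a few lines of illustrative Python. This is a minimal sketch under assumptions: the `BehaviorFSM` class, state names, and API names are hypothetical examples, not the patented implementation. The point it demonstrates is that at any moment only the APIs labeling the current state's outgoing transitions need to be monitored, rather than every observable API:

```python
# Illustrative sketch only: the class, state names, and API names are
# assumptions for this example, not the patented implementation.

class BehaviorFSM:
    """A behavior specified as a finite state machine whose
    transitions are labeled with API names."""

    def __init__(self, transitions, initial, final):
        # transitions: {state: {api_name: next_state}}
        self.transitions = transitions
        self.state = initial
        self.final = final

    def apis_to_observe(self):
        # Incremental observation: only the APIs on the outgoing
        # transitions of the *current* state need monitoring.
        return set(self.transitions.get(self.state, {}))

    def on_api_call(self, api):
        # Advance the machine if the observed API labels an outgoing
        # transition; report whether the behavior has completed.
        next_state = self.transitions.get(self.state, {}).get(api)
        if next_state is not None:
            self.state = next_state
        return self.state == self.final


# Hypothetical behavior with states S1 (initial) through S3 (final);
# at most two APIs are observed at any time instead of all of them.
fsm = BehaviorFSM(
    transitions={
        "S1": {"API1": "S2", "API2": "S2"},
        "S2": {"API3": "S3"},
        "S3": {},
    },
    initial="S1",
    final="S3",
)

print(sorted(fsm.apis_to_observe()))  # ['API1', 'API2']
fsm.on_api_call("API1")
print(sorted(fsm.apis_to_observe()))  # ['API3']
print(fsm.on_api_call("API3"))        # True
```

A bounding-box variant in the spirit of FIG. 7 would group states into sets and observe the union of the outgoing APIs of all states in the current set; with one state per bounding box it reduces to the sketch above.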
The observation overhead may be limited to the APIs needed to continue constructing the behaviors of interest. The benefits may be multi-fold if certain APIs that generate significant log traffic can be filtered out once observed.

[0039] Another aspect of the invention may reside in a mobile station 102, comprising: means 610 for observing one or more first behavioral characteristics associated with a first set 710 of states of a finite state machine 700, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; means 610 for transitioning from the first set of states to a second set 720 of states; and means 610 for observing one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0040] Another aspect of the invention may reside in a mobile station 102 comprising a processor 610 configured to: observe one or more first behavioral characteristics associated with a first set 710 of states of a finite state machine 700, wherein the one or more first behavioral characteristics comprise a first subset of observable behavioral characteristics; transition from the first set of states to a second set 720 of states; and observe one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0041] Another aspect of the invention may reside in a computer program product, comprising computer-readable medium 620, comprising: code for causing a computer 600 to observe one or more first behavioral characteristics associated with a first set 710 of states of a finite state machine 700, wherein the one or more first behavioral characteristics comprise a
first subset of observable behavioral characteristics; code for causing a computer to transition from the first set of states to a second set 720 of states; and code for causing a computer to observe one or more second behavioral characteristics associated with the second set of states of the finite state machine, wherein the one or more second behavioral characteristics comprise a second subset of the observable behavioral characteristics.

[0042] With reference to FIG. 1, a wireless remote station (RS) 102 (e.g., a mobile station MS) may communicate with one or more base stations (BS) 104 of a wireless communication system 100. The wireless communication system 100 may further include one or more base station controllers (BSC) 106, and a core network 108. The core network 108 may be connected to an Internet 110 and a Public Switched Telephone Network (PSTN) 112 via suitable backhauls. A typical wireless mobile station may include a handheld phone or a laptop computer. The wireless communication system 100 may employ any one of a number of multiple access techniques such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), space division multiple access (SDMA), polarization division multiple access (PDMA), or other modulation techniques known in the art.

[0043] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0044] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

[0045] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0046] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0047] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both non-transitory computer-readable storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available media that can be accessed by a computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0048] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The invention discloses metal-oxide-semiconductor (MOS) transistors with reduced subthreshold conduction, and methods of fabricating the same. The transistor gate structures in these transistors are fabricated with such a shape and dimensions as to overlap onto the active region from the interface between the isolation dielectric structures and the transistor active areas. Minimum-channel-length conduction is therefore not available at the isolation-to-active interface; rather, the channel length along that interface is substantially lengthened, reducing off-state conduction.
1. A metal-oxide-semiconductor (MOS) transistor structure, comprising: one or more isolation dielectric structures disposed at selected locations of a semiconducting surface of a body, the isolation dielectric structures defining a substantially rectangular active region of the surface adjacent thereto, the active region having first and second parallel edges extending in a first direction, and third and fourth parallel edges extending in a second direction perpendicular to the first direction; a gate dielectric layer disposed on at least a portion of the active region; a gate structure disposed on a portion of the gate dielectric layer at the active region, the gate structure extending onto an isolation dielectric structure adjacent to the active region, and the gate structure comprising: a central portion disposed on the active region and extending in the second direction; and first and second end portions abutting the central portion, each end portion being disposed on the isolation dielectric structure adjacent to the active region, the first and second end portions respectively overlapping the first and second edges of the active region; and source and drain regions of the active region disposed on opposite sides of the central portion, each doped to a conductivity type opposite that of a channel region at a portion of the active region underlying the gate structure; wherein each of the first and second end portions also overlaps the third and fourth edges of the active region.

2. The transistor structure of claim 1, wherein said gate structure comprises a plurality of parallel central portions disposed on said active region; wherein the first end portion abuts the plurality of central portions at one end, and the second end portion abuts the plurality of central portions at the other end; and wherein the plurality of central portions and the first and second end portions are formed by a single abutting structure.

3. The transistor structure of claim 1, wherein said
gate structure comprises polysilicon.

4. The transistor structure of claim 1, wherein said gate structure comprises a material selected from the group consisting of metals and conductive metal compounds.

5. The transistor structure of claim 1, wherein said isolation dielectric structure has an upper surface that is substantially coplanar with said surface at said active region.

6. The transistor structure of claim 1, wherein said central portion of said gate structure has a width in said first direction; and wherein the first and second end portions respectively overlap the first and second edges of the active region so as to extend onto the active region by at least about 50% of the width of the central portion.

7. A method of fabricating an integrated circuit comprising at least one metal-oxide-semiconductor (MOS) transistor, the method comprising the steps of: forming isolation dielectric structures at selected locations of a semiconducting surface of a body, the isolation dielectric structures defining a substantially rectangular active region of a first conductivity type at the surface, the active region having first and second parallel edges extending in a first direction, and third and fourth parallel edges extending in a second direction perpendicular to the first direction; forming a gate dielectric layer at the surface of the active region; depositing a gate material on the gate dielectric layer; removing selected portions of the deposited gate material to define a gate structure overlying a portion of the active region, the gate structure comprising: a central portion that extends in the second direction over the active region; and first and second end portions at opposite ends of the central portion, each end portion being disposed on an isolation dielectric structure adjacent to the active region, the first and second end portions respectively overlapping the first and second edges of the active region; and doping locations of the active region on opposite sides of the
central portion of the gate structure to a second conductivity type to form a source/drain region;And wherein the first and second end portions of the gate structure each also overlap the third and fourth edges of the active area.8.The method of claim 7 wherein said gate structure comprises a plurality of parallel central portions;And wherein the removing step defines the gate structure as a single abutting structure such that the first end portion abuts the plurality of central portions at one end and the second end portion is at the other end Adjacent to the plurality of central portions.9.The method of claim 7 wherein said gate structure comprises one or more materials selected from the group consisting of polysilicon, metals, and conductive metal compounds.10.The method of claim 7 wherein said step of forming said isolated dielectric structure comprises:Etching the recess in the surface at the selected location;Depositing a dielectric material as a whole;The dielectric material is planarized to expose the active region and the isolated dielectric structure is formed as the dielectric material remaining in the recess.11.A metal oxide semiconductor MOS transistor structure comprising:One or more isolated dielectric structures disposed at selected locations on a semi-conducting surface of the body, the isolated dielectric structure defining an active region of the surface adjacent thereto;a gate dielectric layer disposed on at least a portion of the active region;a gate structure disposed between the source/drain regions of the active region on the gate dielectric layer at the active region, the source/drain regions being doped and underlying a portion of the active region of the gate structure opposite in conductivity type, and the gate structure comprises:a central portion disposed on the active region and having a width in a first direction that is parallel to a direction of current flow between the source/drain regions;a first and a second end portion abutting 
the central portion, each of the first and second end portions having a ratio in the first direction and on each side of the central portion Said width of said central portion being greater than a width of said width of said central portion of at least about 50%, and wherein each of said first and second end portions is disposed on said isolating dielectric structure adjacent said effective And overlapping the active area with at least about 50% of the width of the central portion.12.The transistor structure of claim 11 wherein said gate structure comprises polysilicon.13.The transistor structure of claim 11 wherein said gate structure comprises a material selected from the group consisting of metals and conductive metal compounds.14.The transistor structure of claim 11 wherein said isolating dielectric structure has an upper surface that is substantially coplanar with said surface at said active region.15.A method of fabricating an integrated circuit comprising at least one metal oxide semiconductor MOS transistor, the method comprising the steps of:Forming an isolation dielectric structure at a selected location of the semi-conductive surface of the body, the isolation dielectric structure defining an active region of the first conductivity type at a location where the active region is absent;Forming a gate dielectric layer at the surface of the active region;Depositing a gate material on the gate dielectric layer;A selected portion of the deposited gate material is removed to define a gate structure overlying a portion of the active region, the gate structure comprising:a central portion that extends over the active area;First and second end portions at opposite ends of the first portion, each end portion disposed on the isolation dielectric structure adjacent to the active area and overlapping the active area;Doping a position of the active region on an opposite side of the central portion of the gate structure to a second conductivity type to form a 
source/drain region;Wherein the central portion of the gate structure has a width in a first direction that is parallel to a current conducting direction between the source/drain regions;Wherein the first and second end portions each have at least a greater width of the central portion than the width of the central portion in the first direction and on each side of the central portion Approximately 50% of the width;And wherein the first and second end portions each overlap to the active area to at least about 50% of the width of the central portion.16.The method of claim 15 wherein said gate structure comprises one or more materials selected from the group consisting of polysilicon, metal, and conductive metal compounds.17.The method of claim 15 wherein said step of forming said isolated dielectric structure comprises:Etching the recess in the surface at the selected location;Depositing a dielectric material as a whole;The dielectric material is planarized to expose the active region and the isolated dielectric structure is formed as the dielectric material remaining in the recess.
I-shaped gate electrode for improved subthreshold MOSFET performance

Cross-Reference to Related Applications

Statement Regarding Federally Sponsored Research or Development

Technical Field

The invention is in the field of integrated circuits. Embodiments of the invention are more specifically directed to metal oxide semiconductor (MOS) transistors.

Background

Many modern electronic devices and systems now contain considerable computing power to control and manage a wide range of functions and useful applications. As is fundamental in the art, reductions in the physical feature sizes of transistors and other solid-state devices enable greater integration of circuit functions per unit "chip" area or, conversely, allow a given circuit function to consume a smaller chip area. Because of this miniaturization trend, the capability of integrated circuits at a given cost has greatly increased.

As is fundamental in the art, MOS transistors ideally conduct very low drain current at gate-to-source voltages below the threshold voltage of the transistor. The drain current conducted by a MOS transistor under drain-to-source bias but at a gate voltage below the threshold voltage is undesirable in digital circuits, especially in applications that are sensitive to power consumption, such as mobile devices, implantable medical devices, and other battery-powered systems. In recent years, some analog circuits, such as voltage reference circuits, have been implemented with MOS transistors deliberately biased in the subthreshold region, so as to conduct low-level currents at low power-supply voltages while still providing a stable output reference voltage. In each of these circuit applications, minimal subthreshold conduction is desired.

Another non-ideal characteristic of MOS transistors is referred to in the art as "1/f" noise, or "flicker" noise, which relates to frequency-dependent random variations in the device drain current.
Flicker noise is present in MOS transistors under both strong inversion (saturation) and weak inversion (subthreshold). Flicker noise in MOS transistors manifests itself in circuit performance in various ways, depending on the circuit application and design. For example, in the context of signal processing and communications, flicker noise appears as phase noise (i.e., random fluctuations in the phase of a periodic signal), or as "jitter" when expressed in the time domain. It has been observed that analog circuits with subthreshold-biased MOS transistors are particularly prone to flicker noise.

Advances in semiconductor technology in recent years have enabled the reduction of minimum device feature sizes (e.g., the width of the gate electrode) into the deep sub-micron range. Gate widths of modern MOS transistors are now on the order of a quarter-micron and below. Especially in these sub-micron devices, subthreshold behavior is degraded by a mechanism commonly referred to as the inverse narrow width effect ("INWE"), in which the threshold voltage becomes lower with narrower channel width. It has been observed that this effect is concentrated at the edges of the transistor channel, particularly at the active-to-field edges underlying the gate electrode.

Figures 1a and 1b illustrate the construction of a conventional n-channel MOS transistor 2 that is susceptible to INWE. Transistor 2 is formed at an active region of the surface of semiconductor substrate 4, which active region is surrounded by isolation dielectric structures 5. In the plan view of Figure 1a, source/drain regions 6 are the visible portions of this active region, which also includes the surface of substrate 4 underlying gate structure 8. Gate structure 8, typically formed of polysilicon, metal, or a conductive metal compound, overlies gate dielectric 7 (Figure 1b) at the surface of the active region, and extends onto isolation dielectric structures 5.
Gate dielectric 7 is typically formed of silicon dioxide, silicon nitride, a combination of the two, or in some cases a "high-k" material such as hafnium oxide. As is fundamental in the art, the channel region of transistor 2 is defined by those locations of the active region underlying gate structure 8, between the source and drain regions 6. For this n-channel example, source/drain regions 6 are heavily-doped n-type portions at the surface of p-type substrate 4, formed in a self-aligned manner relative to gate structure 8. The channel region underlying gate structure 8 remains p-type. In this example, transistor 2 has a channel region that is wide relative to its channel length, as established by the four segments of gate structure 8 that extend across the active region. These four segments of gate structure 8 are connected in parallel by continuous end regions overlying isolation dielectric structures 5. Accordingly, alternating ones of the source/drain regions 6 serve as the source and the drain of transistor 2, respectively. Source/drain conduction in transistor 2 thus travels in the direction perpendicular to the longer axis of gate structure 8, indicated in this example by arrow CH. Contact locations 9 are shown in Figure 1a, by way of which overlying metal conductors can contact source/drain regions 6 and gate structure 8 in the conventional manner.

Figure 1b illustrates the cause of the INWE mechanism in transistor 2, by way of a cross-sectional view taken at the interface between the active region at the surface of substrate 4 and isolation dielectric structure 5, at the edge of the transistor channel underlying gate structure 8. Source/drain current is conducted in the direction into and out of the page of Figure 1b. In this example, isolation dielectric structures 5 are of the type known in the art as shallow trench isolation (STI).
Conventionally, STI structures are formed by etching recesses into the surface of the substrate at selected locations, depositing a dielectric material such as silicon dioxide into those etched recesses, and then removing the excess deposited dielectric (e.g., by chemical-mechanical polishing) so as to planarize the surface of the STI structures with the surface of the adjacent active regions.

As a result of conventional processing, variations in the uniformity of gate dielectric 7 may exist at the interface IF between the active region and its adjacent isolation dielectric structure 5. For purposes of this description, Figure 1b illustrates this deviation in an exaggerated manner. More specifically, a recess into the underlying structure is formed at interface IF, and is filled by gate dielectric 7 and gate structure 8. Gate dielectric 7 is typically locally thinner in this recess at interface IF than over the remainder of the channel. This deviation often appears in the electrical characteristics of transistor 2 as a lower conduction threshold at interface IF, i.e., a lower threshold voltage and a higher current density at a given gate-to-source voltage, as compared with the remainder of the channel of transistor 2. This lower conduction threshold is believed to be due to the thinner gate dielectric 7 at interface IF, and also to a "gate wrap-around" effect caused by gate structure 8 dipping into the recess at that location. This localized reduction in conduction threshold is also referred to in the art as the "bimodal" effect. This effect has been observed to be more prevalent in integrated circuits constructed with STI isolation than with other isolation technologies (e.g., local oxidation of silicon, or "LOCOS").
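The "bimodal" behavior described above is often approximated as two transistors in parallel: the wide main channel, and narrow edge paths with a locally lowered threshold voltage. The following Python sketch uses a standard textbook weak-inversion current model with illustrative, assumed parameter values (none of these numbers come from this document) to show how the low-threshold edges can dominate off-state leakage despite their much smaller width:

```python
import math

def subthreshold_current(w_over_l, vgs, vt, n=1.4, temp_k=300.0, i0=1e-7):
    """Textbook weak-inversion model: I_D = I0 * (W/L) * exp((Vgs-Vt)/(n*kT/q)).
    I0 and the ideality factor n are illustrative assumptions, not values
    taken from this document."""
    phi_t = 1.380649e-23 * temp_k / 1.602176634e-19  # thermal voltage kT/q (V)
    return i0 * w_over_l * math.exp((vgs - vt) / (n * phi_t))

# "Bimodal" device: a wide main channel in parallel with two narrow edge
# paths whose threshold is lowered by dielectric thinning and gate wrap-around.
wl_main, vt_main = 10.0, 0.50   # main channel W/L and threshold (assumed)
wl_edge, vt_edge = 0.05, 0.30   # each edge: ~200x smaller W/L, Vt 200 mV lower

i_main = subthreshold_current(wl_main, vgs=0.0, vt=vt_main)
i_edge = 2 * subthreshold_current(wl_edge, vgs=0.0, vt=vt_edge)
# Even at a far smaller W/L, the low-Vt edge paths leak more than the
# main channel, so the edges set the off-state current of the device.
```

Because the dependence on threshold voltage is exponential, even a modest local threshold reduction outweighs a large geometric disadvantage, which is why edge conduction dominates the subthreshold characteristic.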
Because this edge effect more strongly affects transistors with narrower channel widths, the resulting degradation in electrical performance is categorized as INWE behavior.

In circuit implementations, premature edge conduction at the interface IF between the active region and isolation dielectric structure 5 is reflected as performance degradation in several ways. The increased current density and lower threshold voltage at the channel edges cause the transistor to exhibit a higher level of subthreshold conduction, especially at elevated temperatures. It has been observed that this edge conduction has a lower body-effect coefficient than subthreshold conduction in the main portion of the transistor channel. As a result, increased reverse bias applied to the body of the transistor (i.e., the well region in which transistor 2 is formed, or the substrate itself, as the case may be) will reduce subthreshold conduction in the main portion of the channel, but will have a much smaller effect on edge conduction, allowing premature edge conduction to dominate the subthreshold conduction of transistor 2 under that bias condition. Analog circuits constructed with transistors having a lower conduction threshold at the channel edges due to this mechanism also exhibit high levels of flicker noise, especially at low gate voltages and under reverse body bias.

Off-state leakage due to the edge effect described above also exhibits relatively high variation among multiple transistors. This larger device-to-device variation is somewhat inherent to the nature of this mechanism, in which a significant portion of the subthreshold channel current is conducted at the poorly controlled channel edges at interface IF.
This edge current dominates at subthreshold gate bias, and is particularly pronounced under reverse bias applied to the body node, because current through the main channel is reduced under those conditions. Processes such as chemical-mechanical planarization (CMP) and wet oxide etches typically exhibit relatively high process variation, which randomizes the INWE mechanism and thus results in significant mismatch among transistors in a given integrated circuit. These device mismatches are particularly problematic in those analog circuits that rely on good matching of device characteristics, such as low-power bandgap voltage reference circuits, as described in Joly et al., "Temperature and Hump Effect Impact on Output Voltage Spread of Low Power Bandgap Designed in the Sub-threshold Area", International Symposium on Circuits and Systems (IEEE, May 2011), pp. 2549-52, incorporated herein by this reference.

Manufacturing techniques that address the edge conduction effects described above are known in the art. One approach involves forming a thicker gate dielectric at the active-to-isolation interface at the edges of the channel region. The gate dielectric over the remainder of the channel, away from these edges, is maintained at the nominal thickness for the particular technology. The thicker gate dielectric "fence" at the interface inhibits source/drain conduction along the channel edges of the transistor, and may also eliminate the "gate wrap-around" effect and the enhanced subthreshold conduction that it causes. However, fabricating such a dual gate dielectric structure is significantly more complicated than fabricating a gate dielectric of a single thickness, involving at least one additional photolithography step and additional etches. Besides increasing manufacturing cost, the additional photolithography and etch processes add process variability, both among transistors in the same integrated circuit and from wafer to wafer.
This approach also consumes significant chip area if the original transistor drive characteristics are to be maintained. In practice, it is difficult to control the extent to which the fence extends into the active area, which becomes particularly costly as the tolerances and controllability of the fence become a significant fraction of the active area. The thicker dielectric fence approach is therefore not useful at deep sub-micron channel widths.

Another known approach to addressing the lower conduction threshold at the active-to-isolation interface is shown in plan view in Figure 1c. This example of transistor 2' is referred to in the art as a "ring FET", because its gate structure 8' has an annular shape in the portion overlying the active region. The channel region of transistor 2' is therefore also annular in its entirety, with one source/drain region 6s defined as the portion of the active region within the interior of annular gate structure 8', and another source/drain region 6d defined as the portion of the active region exterior to gate structure 8'. This produces a channel region that has no edge at the active-to-isolation interface. Rather, because the active-to-isolation interface IF at the edge of the active region could constitute a conduction path only between portions of the same source/drain region 6d (which are necessarily at a uniform potential), no channel conduction occurs along interface IF that would significantly degrade subthreshold conduction performance or 1/f noise performance, or cause the other effects described above relative to Figures 1a and 1b. However, it has been observed that the annular gate structure 8' is very difficult to fabricate, because the dimensions of polysilicon structures of this shape are not as well controlled as those of orthogonal rectangular shapes.
For this reason, in most advanced technologies, the shapes of polysilicon or metal gate structures are constrained to horizontal or vertical orientations (i.e., "north-south" or "east-west" in the layout), ruling out the ring gate shape. Furthermore, it is difficult to derive a compact computer model for current conduction in a ring FET, and such models are not scalable, limiting the flexibility of varying the widths and lengths of the MOSFETs available during circuit design.

By way of further background, the use of "hammerhead" structures in the patterning and etching of polysilicon gate structures is known in the art, as described in Thakar et al., "High Performance 0.3 μm CMOS Using I-Line Lithography and BARC", Digest of Technical Papers, Symposium on VLSI Technology (IEEE, 1995), pp. 75-76, and in Saka et al., "Manufacturable High Performance Using I-Line Lithography and Gate Line Width Reduction Etching Process", Digest of Technical Papers, Symposium on VLSI Technology (IEEE, 1996), pp. 216-17, both incorporated herein by this reference. In those approaches, a "hammer head" is patterned at the tip of the polysilicon gate where it extends onto the field oxide, to avoid narrowing of the polysilicon gate as it passes from the active region onto the adjacent field oxide, and at the "back" end of the gate line over the field oxide.

Summary of the Invention

Embodiments of the present invention provide a transistor structure, and a method of fabricating the same, that avoid degraded subthreshold conduction due to gate dielectric thinning and other mechanisms at the active-to-isolation structure interface at the edges of the transistor channel.

Embodiments of the present invention provide such a structure and method that ensure low variation in subthreshold conduction among a population of transistors.

Embodiments of the present invention provide such structures and methods that are readily compatible with existing manufacturing processes and technologies, and that can be implemented with minimal increase in manufacturing cost.

Embodiments of the present invention provide such a structure that facilitates compact computer modeling, providing improved flexibility in the design process.

Other objects and advantages of embodiments of the present invention will be apparent to those of ordinary skill in the art having reference to this specification together with its drawings.

Embodiments of the invention may be implemented in a metal oxide semiconductor (MOS) integrated circuit, and a method of fabricating the same, by constructing a transistor gate structure having one or more central portions extending across an active region at a semiconductor surface of a body in a first direction, so as to define a transistor channel region in the active region. Each central portion of the gate structure has end portions that are widened relative to the width of the central portion itself, and that overlie the interface between the active region and the isolation dielectric structure adjacent thereto.
The overlapping end portions of the gate structure effectively increase the channel length for conduction along the active-to-isolation interface, thus reducing early turn-on of the transistor at subthreshold gate voltages and reducing the extent to which conduction at the channel edges dominates.

Drawings

Figures 1a and 1c are plan views of a conventional metal oxide semiconductor (MOS) transistor, and Figure 1b is a cross-sectional view of the same.

Figures 2a, 2b, and 2e are plan views of a MOS transistor constructed according to an embodiment of the invention, and Figures 2c and 2d are cross-sectional views of the same.

Figure 3 is a plan view of a MOS transistor of larger channel width constructed according to an embodiment of the invention.

Figure 4 is a flow chart of a manufacturing process flow for fabricating a MOS transistor according to an embodiment of the invention.

Detailed Description

The invention will be described in connection with embodiments implemented as integrated circuits including metal oxide semiconductor (MOS) transistors, as the invention is expected to be particularly advantageous in such applications. However, the invention is also expected to provide important benefits when applied to many other integrated circuit structures and methods. Accordingly, it is to be understood that the following description is provided by way of example only, and is not intended to limit the true scope of the claimed invention.

Figures 2a and 2b illustrate in plan view, and Figures 2c and 2d illustrate in cross-sectional views, the construction of a transistor 20 according to an embodiment of the invention. In this example, transistor 20 is a metal oxide semiconductor (MOS) transistor formed at a selected location of the surface of single-crystal silicon substrate 22.
More specifically, transistor 20 is an n-channel MOS transistor formed at active region 23 of the surface at p-well 24, active region 23 being located between isolation dielectric structures 25 (or surrounded by a single such structure 25, depending on the larger-scale layout of the integrated circuit). In this example, isolation dielectric structures 25 are formed as shallow trench isolation (STI) structures. As known in the art, an STI structure consists of an element of dielectric material formed, by deposition or the like, into a recess etched into the surface of the semiconductor material at which transistors are to be formed; the term "shallow" notwithstanding, the isolation provided by the structure is electrical isolation of surface semiconductor regions on one side of the structure from semiconductor regions on the other side of the structure. Typically, shallow trench isolation structures are formed of a combination of a thermally grown silicon dioxide liner and a deposited (CVD) silicon dioxide fill, but may alternatively be formed of other dielectric materials. Active region 23, and the other active regions in the same integrated circuit at which transistors such as transistor 20 of Figures 2a through 2d are formed, are defined by those surface locations of the semiconductor material (e.g., substrate 22) at which isolation dielectric structure 25 is absent.

Figure 2a illustrates the portion of the integrated circuit at which transistor 20 will be formed, at a stage of manufacture prior to gate formation. As evident in Figure 2a, active region 23 is defined as a generally rectangular area of the surface of substrate 22, in the interior of surrounding isolation dielectric structure 25.
This rectangular arrangement is typical of modern integrated circuits fabricated with sub-micron technologies, in which orthogonal rectangular feature shapes and orthogonal conductor orientations facilitate dimensional control in manufacturing and are also readily scalable. In this rectangular arrangement, the boundaries of active region 23 adjacent to isolation dielectric structure 25 are parallel edges E_H extending in the horizontal direction (in the view of Figure 2a), and parallel edges E_V extending in the vertical direction; the horizontal edges E_H are substantially perpendicular to the vertical edges E_V, as shown.

Referring to Figures 2b through 2e, this example of transistor 20 is an n-channel MOS transistor formed at p-type well 24, which in this example is a doped region formed into substrate 22 by conventional ion implantation and diffusion anneal. Alternatively, transistor 20 may be formed directly into substrate 22 in the absence of a well region, as in the example of Figures 1a and 1b. Further in the alternative, transistor 20 may be formed at the surface of a semiconductor layer disposed over an insulator layer according to conventional silicon-on-insulator (SOI) technology, or at other similar substrate structures known in the art. As will be apparent to those skilled in the art having reference to this specification, embodiments of the invention are applicable to both n-channel and p-channel MOS transistors.

Gate structure 28 of transistor 20 overlies a portion of active region 23, and extends at either end onto isolation dielectric structure 25, as shown in Figures 2b and 2d. In this embodiment of the invention, gate structure 28 may be formed of doped polysilicon (n-type doped, for this n-channel example), or of a metal or conductive metal compound (e.g., titanium, tungsten, tantalum, titanium nitride, tantalum nitride, tungsten nitride, or the like).
Gate structure 28 overlies the surface of p-well 24, with gate dielectric 27 disposed therebetween. Gate dielectric 27 consists of a thin layer of dielectric material, such as silicon dioxide, silicon nitride, or a combination thereof; alternatively, gate dielectric 27 may be a "high-k" material such as HfO2 or the like. For this example of transistor 20 with lightly-doped drain extensions, sidewall dielectric spacers 31 are optionally disposed on the sides of gate structure 28.

In this embodiment of the invention, source/drain regions 26 are heavily-doped n-type portions at the surface of p-well 24. In this example, source/drain regions 26 are formed in a self-aligned manner relative to gate structure 28, and in part relative to sidewall spacers 31. As shown in Figure 2b, contact openings 29 are located at source/drain regions 26 and at gate structure 28 (specifically, at a location overlying isolation dielectric structure 25); by way of these contact openings through an overlying interlevel dielectric material (not shown), overlying conductors (not shown) can contact the terminals of transistor 20.

The cross-sectional view of Figure 2c illustrates the construction of transistor 20 in a plane transverse to a portion of gate structure 28. As evident in Figure 2c, source/drain regions 26 are n-type doped regions extending into p-well 24 from the surface of the structure. In this example, transistor 20 is of the lightly-doped-drain type, in that the junctions of source/drain regions 26 adjacent to the edges of gate structure 28 are defined by sidewall spacers 31. As well known in the art, source/drain regions 26 are formed by a first ion implantation performed after the definition of gate structure 28, and a second implantation performed after the subsequent formation of sidewall spacers 31.
The first implant is generally at a lower dose than the second, such that a junction with a graded doping profile is formed between source/drain regions 26 and p-well 24 at the edges of gate structure 28.

Under the appropriate bias conditions, transistor 20 conducts current between its opposing source/drain regions 26, in the direction indicated by arrow CH of Figure 2c, in response to a gate-to-source voltage applied to gate structure 28 that exceeds the threshold voltage of transistor 20. The width of gate structure 28 between source/drain regions 26 thus defines the transistor channel length, and the length of active region 23 underlying gate structure 28, in the direction perpendicular to the conduction direction (arrow CH), defines the transistor channel width. As fundamental in the art, the current drive of transistor 20 in its on state is proportional to the ratio of channel width to channel length.

In the embodiment of the invention illustrated in Figure 2b, gate structure 28 has a shape that reduces undesired subthreshold conduction along the interface between isolation dielectric structure 25 and the channel region underlying gate structure 28. In this embodiment, gate structure 28 has a central portion 28C overlying active region 23, abutted by end portions 28E disposed at opposite ends of central portion 28C. Central portion 28C has a width GW in the direction parallel to the source/drain conduction of transistor 20 (arrow CH), and a length GL in the direction perpendicular to that conduction. End portions 28E each have a width that is significantly larger than the width GW of central portion 28C.
In the example shown in Figure 2b, the width of each end portion 28E extends completely to overlap the vertical edges E_V of active region 23 (i.e., those edges extending substantially parallel to the length of central portion 28C), on either side of central portion 28C beyond source/drain regions 26. Alternatively, end portions 28E need not be so wide as to reach vertical edges E_V; however, end portions 28E should be significantly wider than gate width GW, for example at least about 50% wider than gate width GW on each side of central portion 28C, so as to significantly lengthen the current path along interface IF, as described below. Transistor 20' according to an example of this alternative configuration is illustrated in Figure 2e, and includes end portions 28E having widths greater than gate width GW by at least 50% on each side of central portion 28C, but not extending to the far edges of active region 23 as in Figure 2b.

According to embodiments of the invention, as shown in both Figures 2b and 2e, end portions 28E each overlap, by a distance OV, the interface IF between isolation dielectric structure 25 and active region 23 at a corresponding one of the horizontal edges E_H (i.e., those edges extending substantially perpendicular to the length of central portion 28C). Figure 2d illustrates the overlap OV of end portions 28E of gate structure 28, by way of a cross-sectional view taken in a direction perpendicular to the cross-section of Figure 2c. As evident in Figure 2d, the overlap OV of end portions 28E extends over the surface of p-well 24. The surface of p-well 24 underlying end portions 28E remains p-type (because of the self-aligned formation of source/drain regions 26 relative to gate structure 28), with gate dielectric 27 therebetween, as shown in the cross-sectional view of Figure 2d.
The self-aligned source/drain regions 26 begin at the edges of the end portions 28E within the active region 23, as illustrated.

As mentioned above, and as a basic principle in the art, the on-state current drive of an MOS transistor is substantially proportional to the ratio W/L of its channel width to its channel length. Referring to the plan view of Figure 2b, the channel width of transistor 20 is substantially determined by the gate length GL of the central portion 28C, and its channel length is determined by the gate width GW of the central portion 28C. Although some limited amount of on-state current may be conducted between the source/drain regions 26 at the inverted surface of the p-well 24 underlying the end portions 28E, it is contemplated that this conduction path will be much longer (i.e., a longer channel length) and also much narrower (i.e., a smaller channel width) than the channel underlying the central portion 28C of the gate structure 28, such that this conduction will be minimal. According to embodiments of the invention, it is contemplated that the overlap OV of the gate structure 28 onto the active region 23 (i.e., onto the surface of the well 24) will be at least about 50% of the gate width GW, which will significantly lengthen any path for current conducted along the interface IF between the active region and the isolation dielectric structure 25. On-state conduction below the end portions 28E is therefore expected to be quite small, and negligible.

In accordance with embodiments of the invention, in the sub-threshold bias regime (i.e., at gate-to-source voltages below the threshold voltage), the overlap OV of the gate structure 28 onto the active region 23 serves to reduce subthreshold conduction along the interface IF.
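The W/L relationships described above can be sketched numerically. This is an illustrative model only, assuming arbitrary dimensions; the labels GW, GL, and OV follow Figure 2b, but the numeric values and the factor used for the "much narrower" parasitic width are not from the specification:

```python
# Hypothetical geometry sketch; values are illustrative, not from the patent.

def drive_ratio(width, length):
    """On-state current drive is proportional to channel width / channel length."""
    return width / length

GW = 0.10      # width of central portion 28C -> transistor channel LENGTH
GL = 1.00      # length of central portion 28C -> transistor channel WIDTH
OV = 0.5 * GW  # overlap of end portions 28E: at least ~50% of GW per the text

main_wl = drive_ratio(GL, GW)  # channel under central portion 28C

# A parasitic path under an end portion 28E is at least 2*OV longer and
# (assumed here) much narrower, so its W/L, and hence its current, is far smaller:
parasitic_wl = drive_ratio(0.1 * GL, GW + 2 * OV)
assert parasitic_wl < main_wl
```

With OV at 50% of GW, the parasitic path is already twice the main channel length, so its contribution to on-state current is expected to be negligible, as the text states.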
As discussed above in connection with Figures 1a and 1b, subthreshold conduction is promoted at the interface IF between the active region 23 and the isolation dielectric structure 25 by the thinning of the gate dielectric 37, by the wrap-around effect of the gate structure into the recess at the interface IF, and by the increased density of charge trapping sites at the interface IF. According to embodiments of the invention, however, the overlap OV of the end portions 28E moves the interface IF away from the main channel of minimum channel length. The path for sub-threshold conduction along the interface IF is thus much longer than the channel length defined by the gate width GW of the central portion 28C. Charge conducted along the interface IF under sub-threshold bias must travel a distance OV from one source/drain region 26 to reach the interface IF, and a further distance OV from the interface IF to the opposite source/drain region 26. Figure 2c illustrates an example of such a distributed conduction path P via the interface IF. Not only is this conduction path P substantially longer than the minimum channel length of a conventional transistor, but this subthreshold conduction must also pass through the two semiconductor portions away from the interface IF. For both of these reasons, in a transistor constructed according to embodiments of the present invention, sub-threshold conduction and INWE threshold voltage degradation are expected to be reduced to negligible levels as compared with the conventional transistors described above relative to Figures 1a and 1b.

In addition, because subthreshold conduction at the isolation-to-active interface is significantly reduced according to embodiments of the present invention, conduction along that interface no longer dominates the overall subthreshold conduction of the transistor.
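The lengthening of the sub-threshold path P can be expressed as simple arithmetic. A rough sketch, using the labeled dimensions OV and GW with illustrative values (the path along the interface IF is taken as at least the gate width GW, a simplifying assumption):

```python
# Illustrative sketch of the distributed sub-threshold path P of Figure 2c.

def path_length_along_if(GW, OV):
    """Charge must cross a distance OV to reach interface IF, run along IF
    for at least the gate width GW, then cross OV again to reach the opposite
    source/drain region."""
    return OV + GW + OV

GW = 0.10
OV = 0.5 * GW  # overlap chosen to be at least ~50% of GW, per the text

# The path is at least twice the minimum channel length defined by GW:
assert path_length_along_if(GW, OV) >= 2 * GW
```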
The subthreshold characteristics of the transistor as a whole are thus responsive to the application of reverse body bias, enabling reverse bias to minimize the overall level of off-state leakage, and to minimize flicker noise at low gate-to-source voltages.

As discussed above relative to Figures 1a and 1b, conventional MOS transistors susceptible to sub-threshold conduction along the isolation-to-active interface exhibit large variations in that conduction, resulting in poor device matching. This variation is due to the significant randomness in the density and distribution of the charge trapping sites that largely determine the level of that conduction. The reduction in subthreshold conduction levels provided by embodiments of the present invention thus results in much smaller variation in this conduction over a population of transistors, reducing the mismatch in off-state behavior within a given integrated circuit.

These important benefits are obtained by embodiments of the present invention while avoiding the difficulties presented by conventional approaches to the problem of subthreshold conduction at the isolation-to-active interface. As discussed above, one conventional approach uses a thicker gate dielectric "fence" at that interface to reduce this conduction. However, the processing required to form gate dielectric layers of different thicknesses is necessarily complicated and expensive; in contrast, differing gate dielectric thicknesses are not necessary according to embodiments of the present invention, which require only a change in the photomask pattern. Moreover, the subthreshold conduction characteristics of transistors formed according to embodiments of the present invention are more tightly controllable than those of conventional devices having the thicker gate dielectric fence.
This improved controllability stems from the patterning of the edges of the overlapping gate structure, which is inherently tighter than the patterning of the edges of the thicker gate dielectric regions, particularly as active region areas continue to shrink. This improved accuracy at the gate level results from the plasma etch of the gate material, which is more precise than the wet etch involved in defining the edges of the thicker gate dielectric fence, with its attendant process variations.

Transistors constructed according to embodiments of the present invention also avoid the limitations of the conventional "ring FET" structure. More specifically, the chip area required for a transistor according to the present invention is much smaller than that required for a ring FET transistor of equivalent drive capability (W/L). Additionally, the shape and orientation of the gate structure according to the present invention avoids the complex geometry of a ring FET gate structure such as that shown in Figure 1c. Ring FET transistors are also more complex and difficult to model, scale, and implement in parameterized cells ("p-cells"); these complexities and difficulties are avoided by embodiments of the present invention. In contrast, the transistor gate structures according to embodiments of the present invention can be limited to substantially orthogonal (i.e., "north-south" or "east-west" in the layout) and rectangular shapes, allowing their current conduction to be readily modeled by compact computer models that scale, and thus providing a great deal of flexibility to the design process.

Referring back to Figure 2b, the length and width of the on-state conduction channel of transistor 20 are substantially defined by the gate width GW and gate length GL of the central portion 28C of gate structure 28.
This is in contrast with conventional MOS transistors, such as that shown in Figure 1a, in which the channel width is defined by the distance between the opposing edges of the active region (i.e., its interfaces with the isolation dielectric structure 5). As such, for a given size of active region 23, the overlap OV at the opposing edges of the active region 23 effectively reduces the transistor channel width. To maintain the same channel width as a conventional transistor, therefore, the size of the active region 23 will need to be increased, so that the inner edges of the gate structure 28 at the overlap OV substantially correspond to the locations of the interfaces IF of the conventional MOS transistor. This difference in layout can amount to "lost" chip area relative to the conventional MOS transistor, but as mentioned above, this loss will be much less than that involved in the ring FET construction, and the resulting transistors will be more consistent and better matched over a large population than those of the thicker gate dielectric "fence" construction.

As is apparent from Figure 2b, transistor 20 includes a single central portion 28C that defines its channel width and channel length. Embodiments of the present invention can readily be implemented as MOS transistors of large channel width, by providing multiple parallel central portions. Figure 3 illustrates, in plan view, a transistor 20W according to an embodiment of the present invention that includes a gate structure 28' having four such central portions, to define a substantially larger channel width. As in the case of transistor 20 of Figure 2b, gate structure 28' includes a number of end portions, each of which overlaps onto the active region 23 (i.e., onto the surface of the p-well 24 between the source/drain regions 26, underlying the gate structure 28') by the overlap distance OV.
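The channel-width scaling of such a multiple-finger gate structure can be sketched as follows. This is an illustrative calculation only, assuming each parallel central portion contributes the same channel width GL; the values are not from the specification:

```python
# Illustrative sketch: effective channel width of a multiple-finger gate
# such as gate structure 28' of Figure 3.

def effective_channel_width(n_central_portions, GL):
    """Each parallel central portion contributes a channel of width GL,
    and the widths add; transistor 20W uses four central portions."""
    return n_central_portions * GL

GL = 1.0
w_20 = effective_channel_width(1, GL)   # single central portion (transistor 20)
w_20w = effective_channel_width(4, GL)  # four central portions (transistor 20W)

# Four times the channel width, and thus (all else equal) roughly four times
# the on-state drive current:
assert w_20w == 4 * w_20
```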
Viewed in cross-section, the construction of transistor 20W is essentially identical to that shown in Figures 2c and 2d discussed above. Contact locations 29 to the source/drain regions 26 and to the gate structure 28' are shown in Figure 3, to indicate where overlying metal conductors will make physical contact. The source/drain regions 26 will alternate between the source and drain biases, thus defining transistor 20W as having four times the channel width of transistor 20 and, assuming the devices are otherwise of equivalent size, four times the drive current capability of transistor 20. By way of the overlap OV of the gate structure 28' onto the active region 23, transistor 20W attains the same benefits of reduced sub-threshold conduction, responsiveness to reverse bias, and improved device matching as described above relative to transistor 20 of Figure 2b.

Referring now to Figure 4, a process flow for fabricating an integrated circuit including transistors of the type described above relative to Figures 2a through 2d and 3 will now be described, according to an embodiment of the present invention. As will be apparent to those of ordinary skill in the art having reference to this disclosure, alternative and additional processes, or both, may be incorporated into a particular process flow for constructing the transistors of a device according to the present invention. It is therefore to be understood that this description is provided by way of example only.

It will further be understood that transistors constructed according to embodiments of the present invention may be either or both of n-channel MOS and p-channel MOS devices, as desired for the particular circuit application and fabrication technology. N-channel MOS transistors 20, 20W are shown and described herein by way of example only.
The particular structures and layers referred to in this description correspond to those described above in connection with Figures 2a through 2d and 3.

The portion of the fabrication flow shown in Figure 4 begins with process 40, in which either or both of n-wells and p-wells (e.g., p-well 24) are formed at selected locations of the substrate 22 in the conventional manner. As known in the art, the n-wells and p-wells are each formed by photolithographically defining those locations of the surface of the substrate 22 at which the wells are to be placed, followed by masked ion implantation and an activation anneal.

The reduction in subthreshold conduction at the isolation-to-active interface obtained according to embodiments of the present invention enables the isolation dielectric structures 25 to be of the shallow trench isolation (STI) type. Formation of the STI isolation dielectric structures 25 begins, in process 40, with the deposition, patterning, and etch of an isolation stack. This isolation stack protects the substrate 22, and includes, for example, an oxide liner over which silicon nitride is deposited. Process 40 also includes the patterning and etch of the isolation stack to define the locations of the surface of the substrate 22 at which the isolation dielectric structures 25 are to be formed. In recess etch process 42, recesses of the desired depth are etched into the surface of the substrate 22 at those locations not protected by the remaining isolation stack (the protected locations becoming the active regions 23 of the integrated circuit, e.g., as shown in Figure 2a). In process 43, the exposed silicon in the etched recesses is oxidized to form a liner oxide film, followed by chemical vapor deposition of silicon dioxide or another dielectric material into the lined recesses.
Typically, the dielectric deposition overfills the etched recesses; accordingly, chemical mechanical planarization of the structure is performed in process 44 in the conventional manner, to remove the oxide from over the active regions 23 and to planarize the surface of the dielectric deposited in the recesses with the surface of the adjacent active regions 23; a nitride strip may then be performed to remove the remaining nitride portions of the isolation stack. Ion implantation is performed in the conventional manner at this stage of fabrication, in process 45, to form the p-well regions 24 (and n-wells, as the case may be), and to adjust the threshold voltages of the eventual transistors (in either or both of the n-channel and p-channel devices).

In process 46, gate dielectric film 37 is formed overall, by thermal oxidation followed by optional nitridation, or by chemical vapor deposition, depending on the desired material and properties of the transistor gate dielectric. Embodiments of the invention are also suitable for use with high-k dielectric materials such as hafnium oxide. In any case, as described above, embodiments of the present invention enable the gate dielectric film 37 to be formed to a single thickness, without the need to form a thicker "fence" dielectric at the isolation-to-active interfaces IF of the transistors in the integrated circuit.

According to embodiments of the invention, gate structures 28 are formed and defined at the desired locations of transistors 20 in process 48. For the example of polysilicon gate structures, process 48 includes the overall deposition of polysilicon, followed by conventional photolithography and polysilicon etch. Photolithography of the gate structures 28 can be carried out in the conventional manner by dispensing photoresist overall, followed by conventional photolithographic patterning and develop, so as to leave photoresist mask elements at those locations of the polysilicon layer corresponding to the gate structures 28.
According to embodiments of the invention, as described above, this patterning of the gate material is performed using a photomask or reticle that defines gate structures 28 of the desired shape and size. More specifically, the gate structures defined by the patterning of process 48 have one or more central portions defining the transistor channel regions, each abutting end portions that overlap onto the active regions 23 by the overlap OV, in the manner described above relative to Figures 2a through 2d and 3. The particular distances of the overlap OV may depend on the particular transistors being formed, including the circuit function and physical location of those devices within the integrated circuit. Process 48 completes the definition of the gate structures 28 by etching the polysilicon layer as protected by the patterned photoresist. As mentioned above, the etch of process 48 is preferably a plasma etch, for best accuracy. Alternatively, the gate structures 28 may be formed of a metal or metal compound, or of a composite of multiple material layers, as known in the art.

Transistor 20 is typically formed with lightly-doped drain extensions, as shown in Figures 2c and 2d. In process 50, the drain extensions are formed by a shallow ion implant of the conductivity type opposite that of the underlying active region. These drain extensions are self-aligned with the gate structure 28; LDD spacers may be formed along the gate sidewalls, as desired, to displace the subsequent implant from the sides of the gate. Also in process 50, a "halo" implant may be performed, typically as an angled implant of a dopant of the same conductivity type as the channel region, to reach beneath the edges of the gate structure 28 and establish the desired dopant profile.
Sidewall dielectric spacers 31 are then formed in process 51, in the conventional manner, by the overall deposition of the desired dielectric material (e.g., silicon nitride) followed by an anisotropic etch that removes the dielectric material from the planar surfaces, leaving the sidewall spacers 31 on the sidewalls of the gate structure 28. Of course, transistor 20 can instead be formed without such lightly-doped drain extensions, in which case processes 50, 51 would be omitted.

In either case (i.e., with or without the spacers 31 and drain extension implant), the source/drain ion implantation is performed in process 52, at the desired dose and energy to define the dopant concentration of the source/drain regions 26 of transistor 20. If the gate structure 28 is formed of polysilicon, it may also be doped by this source/drain implant, to ensure proper transistor operation and good conductivity. Process 52 also typically includes the desired activation anneal of the implanted dopant, to attain the desired junction depth and concentration profile.

If the integrated circuit is a CMOS integrated circuit, the source/drain implant and anneal of process 52 (and the optional process 50, if performed) will be carried out for transistors 20 of one channel conductivity type, with the locations of transistors 20 of the other channel conductivity type masked from those processes. In that case, processes 50, 52 are then repeated to form the transistors of the other channel conductivity type, with those transistors 20 formed in the first pass of these processes appropriately masked.

As known in the art, an optional silicidation process 54 can now be performed to clad the source/drain regions 26 and gate structures 28 with a metal silicide, for improved conductivity. Optional process 54 includes the deposition of a metal from which the silicide is to be formed, such as titanium, tungsten, tantalum, cobalt, nickel, platinum, and the like.
After deposition of the metal layer, the structure is subjected to a high-temperature anneal, also part of process 54, causing the deposited metal to react with the silicon with which it is in contact, forming a metal silicide compound cladding the underlying structures.

An interlevel dielectric layer is then deposited overall in process 56, in the conventional manner. The integrated circuit is then completed, beginning with process 58, which includes the definition and etch of contacts and vias through to the underlying structures, followed by the deposition and patterning of the appropriate overlying metal conductors. Processes 56, 58 are repeated according to the number of conductor levels to be formed in the integrated circuit.

As evident from this description, the manufacturing process flow required to realize an integrated circuit according to embodiments of the present invention is fully compatible with conventional and existing integrated circuit fabrication process flows. No additional processing cost need be incurred to implement embodiments of the present invention, because no additional process steps are required to attain the reduction in MOS subthreshold conduction.

While the present invention has been described according to its embodiments, it is of course contemplated that modifications of, and alternatives to, these embodiments will be apparent to those skilled in the art having reference to this specification and its drawings. Such modifications and alternatives are contemplated to be within the scope of the invention as claimed herein.
Methods and apparatuses for maintaining continuity of augmentations are disclosed. In one embodiment, a method for use with an augmented reality enabled device (ARD) comprises tracking a plurality of objects and a background based at least in part on visual information derived from an image, maintaining states of the plurality of objects based at least in part on information other than the visual information, and providing data for rendering augmentation in response to the states of the plurality of objects.
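A minimal sketch of the method just summarized, under assumed names and with poses simplified to one dimension; the visual-tracking and dead-reckoning internals are placeholders, not the actual ARD implementation:

```python
def update_states(last_poses, in_view, visual_poses, ard_motion):
    """Track objects from visual information while they are in the field of
    view; maintain states of out-of-view objects from information other than
    the visual information (here, dead reckoning against the ARD's own
    motion). Poses are simplified to 1-D floats for illustration."""
    states = {}
    for obj, last in last_poses.items():
        if obj in in_view:
            states[obj] = {"pose": visual_poses[obj], "source": "visual"}
        else:
            # Out of view: offset the last known pose by the ARD's movement.
            states[obj] = {"pose": last - ard_motion, "source": "inferred"}
    return states

def render_augmentation(states):
    """Provide data for rendering augmentation in response to the states."""
    return [(obj, s["pose"]) for obj, s in sorted(states.items())]

states = update_states(
    last_poses={"piece_a": 1.0, "piece_b": 5.0},
    in_view={"piece_a"},
    visual_poses={"piece_a": 1.2},
    ard_motion=0.5,
)
```

The point of the sketch is only the split the claims describe: objects in view are updated from the image, while objects out of view keep a maintained state so their augmentation persists.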
We claim: 1. A method for use with an augmented reality enabled device (ARD), comprising: tracking a plurality of objects and a background based at least in part on visual information derived from an image; maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information; and providing data for rendering augmentation in response to the states of the plurality of objects. 2. The method of claim 1, wherein the tracking comprises performing 3-dimensional tracking comprising: determining relative poses of the plurality of objects with respect to the ARD; and updating states of the plurality of objects using the relative poses, wherein the states of the plurality of objects include relational information of the plurality of objects. 3. The method of claim 2, wherein determining relative poses comprises: detecting a new object in the image; and updating the plurality of objects to include the new object. 4. The method of claim 1, wherein the tracking comprises tracking at least one object of the plurality of objects using the visual information when the at least one object is within a field of view of the ARD, and wherein the maintaining comprises maintaining the state of the at least one object using the information other than the visual information when the at least one object is out of the field of view. 5. The method of claim 1, wherein maintaining states of the plurality of objects comprises: maintaining states of a first set of the plurality of objects in view of the ARD; and maintaining states of a second set of the plurality of objects out of view of the ARD. 6.
The method of claim 5, wherein maintaining states of a second set of the plurality of objects out of view of the ARD comprises: tracking offsets of the second set of the plurality of objects with respect to the first set of the plurality of objects in view of the ARD; and determining positions of the second set of the plurality of objects using the offsets. 7. The method of claim 5, wherein maintaining states of a second set of the plurality of objects out of view of the ARD further comprises: tracking relative movement of the ARD with respect to the second set of the plurality of objects out of view of the ARD; and determining positions of the second set of the plurality of objects using position and relative movement of the ARD. 8. The method of claim 7, wherein tracking relative movement of the ARD is based at least in part on at least one of: visual odometry; dead reckoning with accelerometer; and dead reckoning with gyroscope. 9. The method of claim 5, wherein maintaining states of a second set of the plurality of objects out of view of the ARD further comprises: receiving information related to wireless signals for determining relative positions of the plurality of objects; and updating positions of the second set of the plurality of objects using the information received. 10. The method of claim 9, wherein the wireless signals are received by the ARD from an RFID tag attached to at least one object in the second set of the plurality of objects. 11. The method of claim 9, wherein the wireless signals comprise at least one of near field communication signals and Bluetooth signals. 12. The method of claim 9, wherein the background comprises a mat including one or more sensors configured to detect the relative positions of the plurality of objects, and wherein the information is indicative of the relative positions of the plurality of objects detected by the one or more sensors. 13. 
The method of claim 9, wherein the information is received at a processor or chip integrated into the ARD based on the wireless signals being received at the ARD. 14. The method of claim 5, further comprising: tracking at least one object in the second set of the plurality of objects out of view of the ARD; determining the at least one object in the second set of the plurality of objects still exists; and rendering at least one of sound and graphics in a position of the at least one object in the second set of the plurality of objects. 15. The method of claim 5, further comprising: tracking at least one object in the second set of the plurality of objects out of view of the ARD; determining the at least one object in the second set of the plurality of objects no longer exists; and rendering at least one of a fading out transition and an ambient sound in a position of the at least one object in the second set of the plurality of objects. 16. The method of claim 5, further comprising: ceasing to track a first object in the second set when the ARD is panned to a location where the first object is expected to be located and it is determined that the first object is not present at the location; and ceasing an audio augmentation associated with the first object. 17. The method of claim 5, further comprising: ceasing to track a first object in the second set when a new scene is detected; and ceasing an audio augmentation associated with the first object. 18. 
The method of claim 1, wherein rendering augmentation comprises at least one of: rendering sound and graphics in a position when an indication of confidence of the states of the plurality of objects meets a first predetermined value; rendering sound in the position when the indication of confidence of the states of the plurality of objects meets a second predetermined value; rendering an ambient sound in the position when the indication of confidence of the states of the plurality of objects meets a third predetermined value; and rendering a fading out transition in the position when the indication of confidence of the states of the plurality of objects meets a fourth predetermined value. 19. The method of claim 1, wherein the plurality of objects are game pieces and the background is a game board. 20. The method of claim 1, wherein the states of the plurality of objects comprise at least one of: relational information of the plurality of objects with respect to each other; relational information of the plurality of objects with respect to the background; geometrical relationships of the plurality of objects with respect to each other; and geometrical relationships of the plurality of objects with respect to the background. 21. The method of claim 1, further comprising: tracking the plurality of objects and the background with multiple augmented reality enabled devices (ARDs); maintaining states of the plurality of objects across the multiple ARDs; and providing data for rendering augmentations in the multiple ARDs in response to the states of the plurality of objects. 22. The method of claim 1, wherein the background comprises at least one of: a mat; and a wall. 23. 
An augmented reality enabled device (ARD), comprising: a control unit including processing logic, the processing logic comprising: logic configured to track a plurality of objects and a background based at least in part on visual information derived from an image; logic configured to maintain states of at least one object of the plurality of objects based at least in part on information other than the visual information; and logic configured to provide data for rendering augmentation in response to the states of the plurality of objects. 24. The augmented reality enabled device of claim 23, wherein the logic configured to track comprises performing 3-dimensional tracking comprising: logic configured to determine relative poses of the plurality of objects with respect to the ARD; and logic configured to update states of the plurality of objects using the relative poses, wherein the states of the plurality of objects include relational information of the plurality of objects. 25. The augmented reality enabled device of claim 24, wherein logic configured to determine relative poses comprises: logic configured to detect poses of the plurality of objects with respect to a previously captured image of the plurality of objects. 26. The augmented reality enabled device of claim 24, wherein logic configured to determine relative poses comprises: logic configured to detect a new object in the image; and logic configured to update the plurality of objects to include the new object. 27. The augmented reality enabled device of claim 23, wherein logic configured to maintain states of the plurality of objects comprises: logic configured to maintain states of a first set of the plurality of objects in view of the ARD; and logic configured to maintain states of a second set of the plurality of objects out of view of the ARD. 28.
The augmented reality enabled device of claim 27, wherein logic configured to maintain states of a second set of the plurality of objects out of view of the ARD comprises: logic configured to track offsets of the second set of the plurality of objects with respect to the first set of the plurality of objects in view of the ARD; and logic configured to determine positions of the second set of the plurality of objects using the offsets. 29. The augmented reality enabled device of claim 27, wherein logic configured to maintain states of a second set of the plurality of objects out of view of the ARD further comprises: logic configured to track relative movement of the ARD with respect to the second set of the plurality of objects out of view of the ARD; and logic configured to determine positions of the second set of the plurality of objects using position and relative movement of the ARD. 30. The augmented reality enabled device of claim 29, wherein logic configured to track relative movement of the ARD is based at least in part on at least one of: visual odometry; dead reckoning with accelerometer; and dead reckoning with gyroscope. 31. The augmented reality enabled device of claim 27, wherein logic configured to maintain states of a second set of the plurality of objects out of view of the ARD further comprises: logic configured to receive information related to wireless signals for determining relative positions of the plurality of objects; and logic configured to update positions of the second set of the plurality of objects using the information received. 32. The augmented reality enabled device of claim 31 , wherein the wireless signals are received by the ARD from an RFID tag attached to at least one object in the second set of the plurality of objects. 33. The augmented reality enabled device of claim 31 , wherein the wireless signals comprise at least one of near field communication signals and Bluetooth signals. 34. 
The augmented reality enabled device of claim 31 , wherein the background comprises a mat including one or more sensors configured to detect the relative positions of the plurality of objects, and wherein the information is indicative of the relative positions of the plurality of objects detected by the one or more sensors. 35. The augmented reality enabled device of claim 31 , wherein the information is received at a processor or chip integrated into the ARD based on the wireless signals being received at the ARD. 36. The augmented reality enabled device of claim 27, further comprising: logic configured to track at least one object in the second set of the plurality of objects out of view of the ARD; logic configured to determine the at least one object in the second set of the plurality of objects still exists; and logic configured to render at least one of sound and graphics in a position of the at least one object in the second set of the plurality of objects. 37. The augmented reality enabled device of claim 27, further comprising: logic configured to track at least one object in the second set of the plurality of objects out of view of the ARD; logic configured to determine the at least one object in the second set of the plurality of objects no longer exists; and logic configured to render at least one of a fading out transition and an ambient sound in a position of the at least one object in the second set of the plurality of objects. 38. The augmented reality enabled device of claim 27, further comprising: logic configured to cease to track a first object in the second set when the ARD is panned to a location where the first object is expected to be located and it is determined that the first object is not present at the location; and logic configured to cease an audio augmentation associated with the first object. 39. 
The augmented reality enabled device of claim 27, further comprising: logic configured to cease to track a first object in the second set when a new scene is detected; and logic configured to cease an audio augmentation associated with the first object.

40. The augmented reality enabled device of claim 23, wherein logic configured to render augmentation comprises at least one of: logic configured to render sound and graphics in a position when an indication of confidence of the states of the plurality of objects meets a first predetermined value; logic configured to render sound in the position when the indication of confidence of the states of the plurality of objects meets a second predetermined value; logic configured to render an ambient sound in the position when the indication of confidence of the states of the plurality of objects meets a third predetermined value; and logic configured to render a fading out transition in the position when the indication of confidence of the states of the plurality of objects meets a fourth predetermined value.

41. The augmented reality enabled device of claim 23, wherein the plurality of objects are game pieces and the background is a game board.

42. The augmented reality enabled device of claim 23, wherein the states of the plurality of objects comprise at least one of: relational information of the plurality of objects with respect to each other; relational information of the plurality of objects with respect to the background; geometrical relationships of the plurality of objects with respect to each other; and geometrical relationships of the plurality of objects with respect to the background.

43.
The augmented reality enabled device of claim 23, further comprising: logic configured to track the plurality of objects and the background with multiple augmented reality enabled devices (ARDs); logic configured to maintain states of the plurality of objects across the multiple ARDs; and logic configured to provide data for rendering augmentations in the multiple ARDs in response to the states of the plurality of objects.

44. The augmented reality enabled device of claim 23, wherein the background comprises at least one of: a mat; and a wall.

45. A non-transitory medium storing instructions for execution by one or more computer systems, the instructions comprising: instructions for tracking a plurality of objects and a background based at least in part on visual information derived from an image; instructions for maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information; and instructions for providing data for rendering augmentation in response to the states of the plurality of objects.

46. A system, comprising: means for tracking a plurality of objects and a background based at least in part on visual information derived from an image; means for maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information; and means for providing data for rendering augmentation in response to the states of the plurality of objects.
Maintaining Continuity of Augmentations CROSS REFERENCE TO RELATED APPLICATIONS [0001] This application claims the benefit of U.S. Application No. 13/844,756, filed March 15, 2013, and entitled "Maintaining Continuity of Augmentations"; U.S. Provisional Application No. 61/676,246, filed July 26, 2012, and entitled "Interactions of Tangible and Augmented Reality Objects"; U.S. Provisional Application No. 61/676,249, filed July 26, 2012, and entitled "Maintaining Continuity of Augmentations"; U.S. Provisional Application No. 61/676,278, filed July 26, 2012, and entitled "Method and Apparatus for Controlling Augmented Reality"; U.S. Provisional Application No. 61/676,255, filed July 26, 2012, and entitled "Interactions of Tangible and Augmented Reality Objects"; and U.S. Provisional Application No. 61/676,274, filed July 26, 2012, and entitled "Tangible Items' Effect on Particle System Augmentation in Virtual Spaces". The aforementioned United States applications are hereby incorporated by reference in their entirety. FIELD [0002] The present disclosure relates to the field of augmented reality. In particular, the present disclosure relates to maintaining continuity of augmentations. BACKGROUND [0003] Conventional augmented reality applications provide a live view of a real-world environment whose elements may be augmented by computer-generated sensory input such as video, sound, graphics or GPS data. With such applications, a view of reality may be modified by a computing device, and they can enhance a user's perception of reality and provide more information about the user's environment. For example, augmented contents may be applied in real-time and in semantic context with environmental elements, such as game statistics and summaries during a match. 
With the proliferation of mobile devices, such as smart phones, information about the surrounding real world of a user may be displayed on a mobile device with additional augmented contents, such as artificial information about the environment with virtual objects being overlaid on the real-world objects. For example, the mobile device can be configured to play augmented reality games; such games may include play sets and game pieces. [0004] One problem with conventional augmented reality applications is that when an object being tracked is no longer in view of the camera of the mobile device, the conventional augmented reality applications would stop tracking the object. This approach may lead to an inadequate user experience, especially in situations where the mobile devices may be moved around when the users interact with their environment, or when one or more game pieces may no longer be in view of the mobile devices. Therefore, there is a need for a method, computer program product, and augmented reality enabled device that can improve upon the conventional augmented reality applications. SUMMARY [0005] The present disclosure relates to maintaining continuity of augmentations. According to embodiments of the present disclosure, a method for use with an augmented reality enabled device (ARD) comprises tracking a plurality of objects and a background based at least in part on visual information derived from an image, maintaining states of the plurality of objects based at least in part on information other than the visual information, and providing data for rendering augmentation in response to the states of the plurality of objects.
[0006] According to another embodiment of the present disclosure, an augmented reality enabled device comprises a control unit including processing logic; the processing logic comprises logic configured to track a plurality of objects and a background based at least in part on visual information derived from an image, logic configured to maintain states of at least one object of the plurality of objects based at least in part on information other than the visual information, and logic configured to provide data for rendering augmentation in response to the states of the plurality of objects. [0007] According to yet another embodiment of the present disclosure, a computer program product for use with an augmented reality enabled device comprises a non-transitory medium storing instructions for execution by one or more computer systems; the instructions comprise instructions for tracking a plurality of objects and a background based at least in part on visual information derived from an image, instructions for maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information, and instructions for providing data for rendering augmentation in response to the states of the plurality of objects. [0008] According to yet another embodiment of the present disclosure, a system comprises means for tracking a plurality of objects and a background based at least in part on visual information derived from an image, means for maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information, and means for providing data for rendering augmentation in response to the states of the plurality of objects.
BRIEF DESCRIPTION OF THE DRAWINGS [0009] The aforementioned features and advantages of the disclosure, as well as additional features and advantages thereof, will be more clearly understandable after reading detailed descriptions of embodiments of the disclosure in conjunction with the following drawings. [0010] Figure 1 illustrates an augmented reality enabled device according to some aspects of the present disclosure. [0011] Figure 2 illustrates a block diagram of an exemplary augmented reality enabled device according to some aspects of the present disclosure. [0012] Figure 3 illustrates a method of providing interactions based at least in part on tracking markings in a background according to some aspects of the present disclosure. [0013] Figure 4 illustrates another method of providing interactions based at least in part on tracking multiple objects in a background according to some aspects of the present disclosure. [0014] Figure 5 illustrates yet another method of providing interactions based at least in part on tracking items in a real environment according to some aspects of the present disclosure. [0015] Figure 6 illustrates yet another method of providing interactions based at least in part on tracking items in both virtual and real environments according to some aspects of the present disclosure. [0016] Figure 7 illustrates a method of maintaining continuity of augmentations when a target is out of view according to some aspects of the present disclosure. [0017] Figure 8 illustrates another method of maintaining continuity of augmentations by providing correction for lost tracking according to some aspects of the present disclosure. [0018] Figure 9 illustrates yet another method of providing interactions based at least in part on tracking with RFID according to some aspects of the present disclosure.
[0019] Figure 10 illustrates a method of providing interactions across multiple augmented reality enabled devices according to some aspects of the present disclosure. [0020] Figure 11 illustrates a flow diagram of maintaining continuity of augmentations according to some aspects of the present disclosure. [0021] Like numbers are used throughout the figures. DESCRIPTION OF EMBODIMENTS [0022] Embodiments of maintaining continuity of augmentations are disclosed. The following descriptions are presented to enable any person skilled in the art to make and use the disclosure. Descriptions of specific embodiments and applications are provided only as examples. Various modifications and combinations of the examples described herein will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples described and shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. The word "exemplary" or "example" is used herein to mean "serving as an example, instance, or illustration." Any aspect or embodiment described herein as "exemplary" or as an "example" is not necessarily to be construed as preferred or advantageous over other aspects or embodiments. [0023] Figure 1 illustrates an augmented reality enabled device according to some aspects of the present disclosure. As shown in Figure 1, the augmented reality enabled device (ARD) 14 includes housing 101, display 112, one or more speakers 118, and microphone 116. The display 112, which may be a touch screen display, may illustrate images captured by the camera 108, or any other desired user interface information. Of course, the ARD 14 may include additional components that are not necessarily related to the present disclosure.
[0024] As used herein, an ARD device refers to any portable electronic device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop or other suitable mobile platform. The mobile platform may be capable of receiving wireless communication and/or navigation signals, such as navigation positioning signals. The term ARD is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND. Also, ARD is intended to include all electronic devices, including wireless communication devices, computers, laptops, tablet computers, smart phones, digital cameras, etc., which are capable of capturing images used in pose tracking, as well as capable of performing augmented reality user interface functions. [0025] Figure 2 illustrates a block diagram of an exemplary augmented reality enabled device according to some aspects of the present disclosure. The mobile platform of the ARD 14 includes a camera 108 for capturing images of the environment, which may be either individual photos or frames of video. The mobile platform of the ARD 14 may also include sensors 109, which may be used to provide data with which the mobile platform of the ARD 14 can determine its position and orientation, i.e., pose. Examples of sensors that may be used with the mobile platform of the ARD 14 include accelerometers, quartz sensors, gyros, micro-electromechanical system (MEMS) sensors used as linear accelerometers, as well as magnetometers. In some implementations, galvanic skin response (GSR) sensors or other biometric sensors may be placed on the sides or surfaces of the ARD 14.
[0026] The mobile platform of the ARD 14 may also include a user interface 110 that includes display 112 capable of displaying images. The user interface 110 may also include a keypad 114 or other input device through which the user can input information into the mobile platform of the ARD 14. If desired, the keypad 114 may be obviated by integrating a virtual keypad into the display 112 with a touch sensor. The user interface 110 may also include a microphone 116 and one or more speakers 118, for example, if the mobile platform is a cellular telephone. Of course, the mobile platform of the ARD 14 may include other components unrelated to the present disclosure. [0027] The mobile platform of the ARD 14 further includes a control unit 120 that can be connected to and communicates with the camera 108 and sensors 109, as well as the user interface 110, along with any other desired features. The control unit 120 may be provided by one or more processors 122 and associated memory/storage 124. The control unit 120 may also include software 126, as well as hardware 128, and firmware 130. The control unit 120 includes a tracking unit 132 configured to track the position of the ARD 14 as well as to track positions of one or more objects monitored by the ARD 14. The control unit 120 may further include augmented reality user interface unit 134 configured to present augmented reality interactions on the display 112 of the ARD 14. The control unit 120 may further include RFID controller 136 configured to communicate with one or more RFID sensors or signatures. The tracking unit 132, augmented reality user interface unit 134 and RFID controller 136 are illustrated separately from processor 122 and/or hardware 128 for clarity, but may be combined and/or implemented in the processor 122 and/or hardware 128 based on instructions in the software 126 and the firmware 130.
[0028] According to aspects of the present disclosure, the ARD 14 may be used in conjunction with one or more tangible interface items. In many of the examples described herein, the tangible interface items are referred to as "objects" or "toys." However, other types of tangible objects may also be used and the techniques disclosed herein are not limited to toys. For example, the tangible interface items may include one or more items in the user's environment, such as a cola can, a coffee cup, a magazine, or other tangible item that may be within the field of view of the camera of the ARD 14. [0029] The augmentation provided by the ARD 14 can form a continuous story path. Such a continuous story path may be referred to herein as a "scene." The augmentation logic of the ARD 14 can be configured to monitor the attentiveness of a user and to change scenes if it appears that the user has lost interest in a particular scene. Techniques for interacting with the user and for tailoring the augmentation content provided by the ARD 14 are described in greater detail below. [0030] According to embodiments of the present disclosure, the ARD 14 is configured to provide a coherent user experience, to preserve the suspension of disbelief, and to encourage exploration. The disclosed methods maintain continuity of a scene while the user explores the environment, even if certain objects may be out of the camera view of the ARD 14. In other words, the ARD 14 can be configured to track the environment independent of the object being tracked. In addition, the ARD 14 may be configured to further augment the environment with additional information, such as a floor and/or one or more virtual windows 36, virtual doors 37, and virtual walls 38 in the augmented environment 16 as illustrated in Figure 3. 
[0031] In some implementations, the method of tracking a reference background 12 (such as a mat) may include, but is not limited to: 1) tracking sub areas of the mat; 2) tracking markings or sub-features on the mat as illustrated in Figure 3; 3) tracking multiple small mats that may be combined, temporarily or permanently, to form a larger mat (for example tiles on a bathroom floor, such as 12a-12e) as illustrated in Figure 4; and 4) tracking relationships of these sub-areas/markings/small mats to the overall mat such that having one sub-area/marking/small mat in the camera view of the ARD 14 can enable the ARD 14 to determine where on the larger mat the user may be looking. In some other implementations, the environment may include one or more tangible walls 18, which may be attached to the mat, to create a playroom as illustrated in Figure 5. The playroom may be augmented with augmented window(s) 36 and augmented door(s) 37. In other implementations, the environment of the actual playroom may be used, that is, the tangible playroom may not be augmented. The wall(s) 18 and subsections of the wall may be tracked as described below. [0032] As shown in Figure 6, the method includes identifying and tracking details in the environment to create a map of the environment on-the-fly (using reference free AR) and then identifying which subsection the user is currently focused on, and the relationship between the subsection and the overall map. The method may further include the ability to expand the virtual environment 16 beyond the reference background 12, such as a table and objects on the table 19, via on-the-fly mapping of the real world environment (using reference free AR). [0033] According to aspects of the present disclosure, a simultaneous localization and mapping (SLAM) framework may be employed by the ARD 14 to track objects in its environment. For example, the ARD 14 can be configured to build up a SLAM environment.
The SLAM environment can be configured as a dense mesh or a dense/sparse point cloud, for example, with three-dimensional (3D) positions relative to the SLAM environment coordinate frame origin. Each feature point in the environment may include one or more descriptors that describe the visual appearance of the feature point, and/or 3D structural information about the feature. For example, surfaces and their corresponding surface normals of the 3D environment may be used to describe various feature points in the SLAM environment. Note that the feature points may be captured by the mobile device over a series of image frames. [0034] In some implementations, augmentation may continue when a target is out of view. Upon initiating a play, when a background 12, such as a bathroom or a floor, comes into view, augmentation 16 for that scene can be displayed. When an object comes into view, its corresponding augmentation can be shown in the scene; for example, a bathtub 22 causes an augmentation of an animated bathtub 32 with bubbles 33 and a Rubber Duck 39, and audio of bubbles may be played. When the augmented bathtub 32 goes out of frame, for example when the physical object is no longer in view due to movement of the ARD 14, the position of the bathtub 22 relative to the reference background 12, for example the floor, can be recorded in memory and the bathtub 22 may continue to affect the scene as long as tracking of the environment is maintained, as illustrated in Figure 7. In this example, when the target is out of view of the ARD 14, augmentation may continue in that 1) audio of bubbles continues to play; 2) video of bubbles 33 floats in the air; and 3) when the user pans to Bernie in the bathroom (not shown), he says, "Oh, Rubber Duck, there you are!" [0035] In one approach, the augmentation of the bathtub may appear to emanate from its location on the bathroom floor.
The sound of the bubbles can be made louder when the user is near the location and made quieter when the user moves away. The augmentation may continue, emanating from the same spot on the bathroom floor as long as the view is within a predetermined distance from the bathtub, for example it can be heard in the bathroom, but not in the living room, up to the extent of tracking the environment. The augmentation may resume when the view returns to within the predetermined distance. [0036] In another approach, the augmentation may continue as follows, including but not limited to: 1) as long as two hands of the user remain on the ARD 14 as detected by galvanic skin response (GSR) or other biometric sensors on the sides or surfaces of the ARD 14; 2) as long as at least one hand remains on the camera as detected by GSR or other biometric sensors on the sides or surfaces of the device; 3) as long as the same user is detected holding the camera, as determined by comparing biometric sensor data from the device over time, for example a heart rhythm signature or a fingerprint from a sensor on any surface of the device; 4) until the bathtub is seen moving, for example in a new floor position, or until the area of the bathroom floor previously associated with the object is seen empty; 5) until a predetermined period of time has passed without returning to the bathtub or bathtub area; 6) after the camera has been stationary for a time t; or 7) as long as the camera is moving. Note that in some implementations, the control unit 120 may assume an object may be static if not perceived in view; and the control unit 120 may assume objects do not move when the camera is being moved.
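The distance-based audio behavior described in paragraph [0035] can be illustrated with a short sketch. The function name, the inverse-linear falloff, and the particular radii are illustrative assumptions; the disclosure specifies only that the sound is louder near the recorded location, quieter farther away, and inaudible beyond a predetermined distance.

```python
def bubble_volume(distance_m, cutoff_m=5.0, full_volume_radius_m=0.5):
    """Return a volume in [0.0, 1.0] for an out-of-view sound source.

    Illustrative inverse-linear falloff: full volume within
    full_volume_radius_m of the recorded location, silent beyond
    cutoff_m (e.g. audible in the bathroom but not in the living room).
    """
    if distance_m >= cutoff_m:
        return 0.0  # beyond the predetermined distance: augmentation pauses
    if distance_m <= full_volume_radius_m:
        return 1.0
    # linear ramp from 1.0 at the full-volume radius down to 0.0 at the cutoff
    return (cutoff_m - distance_m) / (cutoff_m - full_volume_radius_m)
```

As the view moves away from the bathtub, the returned volume ramps down and reaches zero at the cutoff; when the view returns to within the predetermined distance, a nonzero volume is returned again and the augmentation may resume.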
[0037] According to embodiments of the present disclosure, after a scene starts, it continues to play and does not start over under the following conditions, including but not limited to: 1) as long as the camera has been touched in a predetermined time interval; 2) as long as the camera is moving; 3) as long as the camera has moved in a predetermined time interval; 4) as long as the camera is in a hand as determined by biometric sensors; or 5) as long as the same user is holding the camera as determined by no substantial change in biometric sensor data. [0038] According to embodiments of the present disclosure, the ARD 14 is configured to correct for lost tracking, including but not limited to the following situations. First, if the ARD 14 is within a close proximity to an object, for example Birdie, and then loses the object, the control unit 120 of the ARD 14 can be configured to assume the object may still be there for a predetermined amount of time as shown in Figure 8. For example, the control unit 120 may assume the object may have gotten too close to be effectively identified or tracked, thus the scene may continue to be displayed. Second, if the ARD 14 is moving towards the object (for example Birdie's relative size is increasing), and the object is then lost from view, the control unit 120 may assume that the object is still there for a predetermined period of time. The control unit 120 may further assume the user may intend to zoom in on the object but has misaligned the ARD 14 with the object, occluded the object with the user's hand, etc. Third, if the object goes out of view in one location (e.g. the bathroom floor) and is later detected in another location (e.g. in another area of the bathroom), the scene continues. The object may be augmented in the new location. In this case, the control unit 120 would not start over or lose its history.
Last but not least, if a user has one scene in play, for example Birdie watching TV, then the ARD 14 may zoom in onto Birdie to cause a scene change; when the ARD 14 zooms back out, the scene may have Birdie resume watching TV. The scene may be augmented with interactions during the zooming operation, but the control unit 120 would not start over or lose the history of the scene. [0039] According to embodiments of the present disclosure, the ARD 14 can be configured to combine different methods of establishing continuity of scene augmentation with off-camera tangible interface items. Along with visual object recognition and tracking, additional methods may be used to maintain a location map of objects with respect to a background, such as a floor or a mat. In some implementations as illustrated in Figure 9, near field tracking using RFIDs can be implemented in the ARD 14 such that even the relative location of an object (10) to a background (12) can be established if the item is still in the room. [0040] In one exemplary approach, the field of view of the ARD 14 has been moved so the bathtub on the bathroom floor may be out of view. An RFID controller associated with the reference background 12, such as a mat, can be configured to detect the RFID signature (represented by wave 200) of the bathtub 22. The RFID controller may send the information (represented by wave 210) to the ARD 14 as shown in Figure 9. The ARD 14 may be configured to assume the bathtub remains still at the last location where it was observed. Thus, the ARD 14 may continue to provide augmentation based at least in part on the location information of the bathtub received from the RFID controller. In the event that the RFID controller does not detect the RFID signature of the bathtub, it may pass this information to the ARD 14. The ARD 14 may assume the bathtub has moved, and thus stops the augmentation by having the bubble sound gracefully fade out, or by having the bubbles in the air pop.
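The lost-tracking heuristics of paragraph [0038] amount to keeping an object's last observed position alive for a grace period after visual tracking drops out. The following is a minimal sketch; the class name, the grace-period length, and the injected clock are illustrative assumptions rather than anything specified in the disclosure.

```python
import time


class GracePeriodTracker:
    """Sketch of the lost-tracking heuristics: when a previously tracked
    object leaves the view, assume it is still at its last known position
    for a grace period before dropping its augmentation."""

    def __init__(self, grace_seconds=3.0, clock=time.monotonic):
        self.grace_seconds = grace_seconds
        self.clock = clock
        self._last_seen = {}  # object id -> (position, timestamp)

    def observe(self, obj_id, position):
        """Called each frame the object is visually detected."""
        self._last_seen[obj_id] = (position, self.clock())

    def assumed_position(self, obj_id):
        """Return the last known position while the grace period holds,
        otherwise None (the scene stops augmenting the object)."""
        entry = self._last_seen.get(obj_id)
        if entry is None:
            return None
        position, seen_at = entry
        if self.clock() - seen_at <= self.grace_seconds:
            return position
        return None
```

If the object is re-detected elsewhere before the grace period ends, a new `observe` call simply updates the entry, which matches the "scene continues, augmented in the new location" behavior without restarting or losing history.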
[0041] In another approach, the near field tracking on the mat includes a method for determining sub-position of objects on the mat, for example by using a series or a grid of RFID coils in the mat. In this way, the RFID controller associated with the mat maintains both an inventory of which objects are on the mat and their positions or approximate positions. Then, the RFID controller may send the location information to the ARD 14. In addition, the RFID controller may send any change of location, such as the addition or removal of an object, to the ARD 14. The ARD 14 can be configured to track both in view objects and out of view objects from the perspective of the ARD 14, and to use such location information to provide augmentations. Note that audio augmentation may continue even if no identifiable object or environment may be in the camera view of the ARD 14. [0042] In yet another approach, one or more mats equipped with RFID capabilities may be configured to maintain an inventory and placements of objects, and optionally maintain relative locations of objects with respect to the mat. In one approach, the information from different mats can be used in conjunction to make inferences about the scene and provide appropriate augmentation regardless of the camera view of the ARD 14. For example, if a character (e.g. Bernie) moves from one room to another room so that it is now in the same room with another character (e.g. Brett), and the camera view of the ARD 14 is in the second room, the characters can begin to interact regardless of whether the characters are in the camera view of the ARD 14. An exemplary augmentation may show Brett turning to address Bernie, who has entered the room, even though Bernie may not be in the camera view. [0043] In some other implementations, the ARD 14 may be configured to use sensor data received from at least one of an accelerometer, a gyro, and a magnetometer to augment visual tracking (for example, using dead reckoning in some embodiments).
In one approach, the ARD 14 may be configured to track the relative distance and direction to an object (e.g. bathtub) using sensor data to supplement visual tracking, when a visual reference is out of view. The ARD 14 may use the sensor data to provide continuation of the augmentation by using the technique of dead reckoning in determining position relative to a target. [0044] In another approach, the ARD 14 may be configured to use sensor data together with visual tracking to determine movement of the object (e.g. bathtub) relative to the ARD 14. If sensor data indicates the ARD 14 may be relatively still, the control unit 120 of the ARD 14 may assume the bathtub is moving (e.g. out of the scene) and adjusts the augmentation accordingly. If sensor data indicates the ARD 14 is moving, and the movement is determined to be sufficient to justify the movement seen on the screen, then the control unit 120 assumes the bathtub is still in place and the ARD 14 is moving, and keeps the augmentation accordingly. Alternatively, if the movement is determined to be insufficient to justify the movement seen on the screen, then the control unit 120 may assume both the bathtub 22 and the ARD 14 may be moving, and adjusts the augmentation accordingly. [0045] According to embodiments of the present disclosure, multiple ARDs may be configured to maintain augmentation across the multiple ARDs. As illustrated in Figure 10, if multiple users with corresponding augmented reality enabled devices are playing with the same play set at or near the same time, certain augmentation elements can remain substantially the same across the multiple ARDs, while other augmentation elements may differ. [0046] In one exemplary implementation, if a door from a bathroom to a living room is seen open at the same time across multiple ARDs pointing at the door from different rooms or different directions, the door remains open across the multiple ARDs until a user closes it.
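The disambiguation logic of paragraph [0044] compares device motion reported by the inertial sensors against apparent motion seen on screen. The sketch below is simplified: treating both motions as scalar magnitudes in common units, the function name, and the tolerance value are assumptions made for illustration only.

```python
def classify_motion(onscreen_motion, ard_motion, tolerance=0.2):
    """Sketch of the paragraph [0044] disambiguation.

    onscreen_motion: apparent motion of the object in the image
        (arbitrary units), derived from visual tracking.
    ard_motion: motion of the device itself reported by inertial
        sensors, expressed in the same units (a simplifying assumption).
    Returns which party is assumed to be moving.
    """
    if ard_motion <= tolerance:
        # device is still, so any on-screen motion is the object's
        return "object_moving" if onscreen_motion > tolerance else "static"
    if abs(onscreen_motion - ard_motion) <= tolerance:
        # device motion fully accounts for what is seen on screen
        return "ard_moving"
    # device is moving, but not by enough to explain the on-screen motion
    return "both_moving"
```

Each outcome maps to a different augmentation adjustment in the text: "object_moving" adjusts the augmentation, "ard_moving" keeps it in place, and "both_moving" adjusts for combined motion.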
In another exemplary implementation, if a user 30 turns Dog 25 into Super Dog 35, another user 32 on another ARD 15 may see Dog 25 as Super Dog 35 as well. Note that the sound augmentation from each ARD may be related to the play the particular ARD may be pointing at. [0047] In addition, sound in another room (e.g. in the bathroom when a user is playing in the living room) may not be heard at all as long as no virtual window or door is open; the sound may be heard quietly or not at all if a virtual window or door is open; or the sound may be heard when a virtual window or door is being opened, and then it may fade. For example, if a bathroom window is opened, birds may be heard at first and then fade out after a certain period of time. [0048] According to embodiments of the present disclosure, the ARD 14 can be configured to provide environmental sound augmentation. In some implementations, the sound for objects in view can be the only sound heard, louder than other sound, or balanced according to recent events. The sound for objects out of view may differ in loudness, which can be determined by the duration the objects have been out of view. [0049] According to embodiments of the present disclosure, the ARD 14 can be configured to maintain sound continuity within a scene. In some implementations, the scene may be preserved in situations where the ARD 14 is set down, objects may be occluded by the hand, or the ARD 14 device momentarily points away. [0050] In one approach, if scene progression audio is being played (e.g. a character is speaking or a video is playing), then the audio continues (e.g. video sound plays through) in the following scenarios, including but not limited to: 1) when the ARD 14 is facing the play, for example, some part of an object or an area of floor at or near the action ("the play area") is still in view, and the view may not be moving or the sensors do not sense movement; 2) the device is not set down but no characters are in sight (e.g.
a hand is occluding the camera, the user's hand has drooped, or the device has lost tracking); 3) the device briefly points to another character and then returns to the original play area within a predetermined period of time (e.g. 0 to 3 seconds); 4) the ARD 14 moves towards the objects in the same scene flow, in which case off-screen sound may reduce in volume, or an off-screen character may continue to talk and incorporate the new item into the scene. For example, if Bernie is talking to his Rubber Duck when a user pans to a car, an augmented Bernie may say, "I know what, Ducky, let's take a ride in the car!" [0051] In another approach, the audio may conclude and then stop when the ARD 14 is set down not facing the play area. For example, the play is not in view, the view is not moving, or the sensors do not sense movement. Alternatively, the ARD 14 may move to a new object in a similar scene flow. For example, the ARD 14 is on Bernie and Brett and then moves to Birdie for the first time in this play session. In yet another approach, the audio may stop, for example video sound stops or fades out, if the view of the ARD 14 has moved to a different set of objects for more than a predetermined period of time. [0052] According to some aspects of the present disclosure, the functions described in Figure 11 may be implemented by the control unit 120 of Figure 2. In some implementations, the functions may be performed by processor 122, software 126, hardware 128, firmware 130, or a combination of these blocks to perform various functions of the ARD described above, including the functions performed by the tracking unit 132 and the augmented reality user interface unit 134. [0053] Figure 11 illustrates a flow diagram of maintaining continuity of augmentations according to some aspects of the present disclosure. In block 1102, the control unit 120 can be configured to track a plurality of objects and a background based at least in part on visual information derived from an image. 
In block 1104, the control unit 120 can be configured to maintain states of the plurality of objects based at least in part on information other than the visual information. In block 1106, the control unit 120 can be configured to provide data for rendering augmentation in response to the states of the plurality of objects. [0054] According to embodiments of the present disclosure, the methods performed in block 1102 may further include methods performed in block 1110. For example, in block 1110, the control unit 120 can be configured to determine relative poses of the plurality of objects with respect to the ARD, and update states of the plurality of objects using the relative poses, where the states of the plurality of objects include relational information of the plurality of objects. The methods performed in block 1110 may further include methods performed in blocks 1120-1122. In block 1120, the control unit 120 detects poses of the plurality of objects with respect to a previously captured image of the plurality of objects. In block 1122, the control unit 120 detects a new object in the image, and updates the plurality of objects to include the new object. [0055] The methods performed in block 1104 may further include methods performed in block 1112. In block 1112, the control unit 120 maintains states of a first set of the plurality of objects in view of the ARD, and maintains states of a second set of the plurality of objects out of view of the ARD. The methods performed in block 1112 may further include methods performed in blocks 1124-1128. In block 1124, the control unit 120 tracks offsets of the second set of the plurality of objects with respect to the first set of the plurality of objects in view of the ARD 14, and determines positions of the second set of the plurality of objects using the offsets. 
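The offset bookkeeping of block 1124 can be sketched in a few lines: positions of out-of-view objects are recovered from offsets that were recorded, relative to a still-visible anchor object, while both were in view. The dictionary layout and names below are assumptions introduced for illustration only.

```python
def positions_from_offsets(in_view, offsets):
    """Estimate positions of out-of-view objects from stored offsets
    relative to an in-view anchor object (a sketch of block 1124).

    in_view: dict of name -> (x, y) for currently tracked objects
    offsets: dict of name -> (anchor_name, (dx, dy)) recorded while visible
    """
    estimated = {}
    for name, (anchor, (dx, dy)) in offsets.items():
        if anchor in in_view:  # can only anchor to an object still tracked
            ax, ay = in_view[anchor]
            estimated[name] = (ax + dx, ay + dy)
    return estimated
```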
In block 1126, the control unit 120 tracks relative movement of the ARD 14 with respect to the second set of the plurality of objects out of view of the ARD 14, and determines positions of the second set of the plurality of objects using the position and relative movement of the ARD 14. The method of tracking relative movement of the ARD 14 is based at least in part on at least one of: visual odometry, dead reckoning with an accelerometer, and dead reckoning with a gyroscope. [0056] In block 1128, the control unit 120 receives wireless signals comprising information for determining relative positions of the plurality of objects, and updates positions of the second set of the plurality of objects using the information. In some implementations, the wireless signals are received by the ARD 14 from an RFID tag attached to at least one object in the second set of the plurality of objects. The wireless signals comprise at least one of near field communication signals and Bluetooth signals. The background comprises one or more sensors configured to detect a position of at least one object in the plurality of objects, and the information is indicative of a position detected by the one or more sensors. [0057] The methods performed in block 1106 may further include methods performed in block 1114. In block 1114, the control unit 120 is configured to render sound and graphics in a position when an indication of confidence of the states of the plurality of objects meets a first predetermined value, render sound in the position when the indication of confidence of the states of the plurality of objects meets a second predetermined value, render an ambient sound in the position when the indication of confidence of the states of the plurality of objects meets a third predetermined value, and render a fading out transition in the position when the indication of confidence of the states of the plurality of objects meets a fourth predetermined value. 
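The four confidence-dependent rendering behaviours of block 1114 can be sketched as a threshold cascade. The numeric thresholds and mode names below are illustrative assumptions; the disclosure only requires four predetermined values.

```python
def select_rendering(confidence):
    """Map a tracking-confidence score in [0, 1] to a rendering mode,
    mirroring the four predetermined values of block 1114. The concrete
    thresholds here are assumptions for illustration."""
    if confidence >= 0.9:
        return "sound_and_graphics"   # full augmentation at the position
    if confidence >= 0.6:
        return "sound_only"           # keep audio, suppress graphics
    if confidence >= 0.3:
        return "ambient_sound"        # only ambient audio remains
    return "fade_out"                 # transition the augmentation away
```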
[0058] In some implementations, the plurality of objects in block 1102 may be game pieces and the background may be a game board. The states of the plurality of objects may comprise relational information of the plurality of objects with respect to each other, relational information of the plurality of objects with respect to the background, geometrical relationships of the plurality of objects with respect to each other, and geometrical relationships of the plurality of objects with respect to the background. [0059] In block 1112, the control unit 120 may be further configured to track at least one object in the second set of the plurality of objects out of view of the ARD 14, determine that the at least one object still exists, and render at least one of sound and graphics in a position of the at least one object. In addition, the control unit 120 may be further configured to track at least one object in the second set of the plurality of objects out of view of the ARD 14, determine that the at least one object no longer exists, and render at least one of a fading out transition and an ambient sound in a position of the at least one object. [0060] In some other implementations, the control unit 120 may be further configured to track the plurality of objects and the background with multiple augmented reality enabled devices (ARDs), maintain states of the plurality of objects across the multiple ARDs, and provide data for rendering augmentations in the multiple ARDs in response to the states of the plurality of objects. 
[0061] According to aspects of the present disclosure, a computer program product for use with an augmented reality enabled device comprises a non-transitory medium storing instructions for execution by one or more computer systems; the instructions comprise instructions for tracking a plurality of objects and a background based at least in part on visual information derived from an image, instructions for maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information, and instructions for providing data for rendering augmentation in response to the states of the plurality of objects. [0062] The instructions for tracking comprise instructions for performing 3-dimensional tracking, which comprise instructions for determining relative poses of the plurality of objects with respect to the ARD, and instructions for updating states of the plurality of objects using the relative poses, where the states of the plurality of objects include relational information of the plurality of objects. The instructions for determining relative poses comprise instructions for detecting poses of the plurality of objects with respect to a previously captured image of the plurality of objects. The instructions for determining relative poses further comprise instructions for detecting a new object in the image, and instructions for updating the plurality of objects to include the new object. [0063] The instructions for maintaining states of the plurality of objects comprise instructions for maintaining states of a first set of the plurality of objects in view of the ARD, and instructions for maintaining states of a second set of the plurality of objects out of view of the ARD. 
The instructions for maintaining states of a second set of the plurality of objects out of view of the ARD comprise instructions for tracking offsets of the second set of the plurality of objects with respect to the first set of the plurality of objects in view of the ARD, and instructions for determining positions of the second set of the plurality of objects using the offsets. The instructions for maintaining states of a second set of the plurality of objects out of view of the ARD further comprise instructions for tracking relative movement of the ARD with respect to the second set of the plurality of objects out of view of the ARD, and instructions for determining positions of the second set of the plurality of objects using the position and relative movement of the ARD. The instructions for tracking relative movement of the ARD are based at least in part on at least one of: visual odometry, dead reckoning with an accelerometer, and dead reckoning with a gyroscope. [0064] The instructions for maintaining states of a second set of the plurality of objects out of view of the ARD further comprise instructions for receiving information related to wireless signals for determining relative positions of the plurality of objects, and instructions for updating positions of the second set of the plurality of objects using the information received. The wireless signals are received by the ARD from an RFID tag attached to at least one object in the second set of the plurality of objects. The wireless signals comprise at least one of near field communication signals and Bluetooth signals. The background comprises a mat including one or more sensors configured to detect the relative positions of the plurality of objects, and the information is indicative of the relative positions of the plurality of objects detected by the one or more sensors. The information is received at a processor or chip integrated into the ARD based on the wireless signals being received at the ARD. 
[0065] According to aspects of the present disclosure, the computer program product further comprises instructions for tracking at least one object in the second set of the plurality of objects out of view of the ARD, instructions for determining the at least one object in the second set of the plurality of objects still exists, and instructions for rendering at least one of sound and graphics in a position of the at least one object in the second set of the plurality of objects. The computer program product further comprises instructions for tracking at least one object in the second set of the plurality of objects out of view of the ARD, instructions for determining the at least one object in the second set of the plurality of objects no longer exists, and instructions for rendering at least one of a fading out transition and an ambient sound in a position of the at least one object in the second set of the plurality of objects. The computer program product further comprises instructions for ceasing to track a first object in the second set when the ARD is panned to a location where the first object is expected to be located and it is determined that the first object is not present at the location, and instructions for ceasing an audio augmentation associated with the first object. The computer program product further comprises instructions for ceasing to track a first object in the second set when a new scene is detected, and instructions for ceasing an audio augmentation associated with the first object. 
[0066] The instructions for rendering augmentation comprise at least one of: instructions for rendering sound and graphics in a position when an indication of confidence of the states of the plurality of objects meets a first predetermined value, instructions for rendering sound in the position when the indication of confidence of the states of the plurality of objects meets a second predetermined value, instructions for rendering an ambient sound in the position when the indication of confidence of the states of the plurality of objects meets a third predetermined value, and instructions for rendering a fading out transition in the position when the indication of confidence of the states of the plurality of objects meets a fourth predetermined value. The plurality of objects is game pieces and the background is a game board. The states of the plurality of objects comprise at least one of: relational information of the plurality of objects with respect to each other, relational information of the plurality of objects with respect to the background, geometrical relationships of the plurality of objects with respect to each other, and geometrical relationships of the plurality of objects with respect to the background. [0067] The computer program product further comprises instructions for tracking the plurality of objects and the background with multiple augmented reality enabled devices (ARDs), instructions for maintaining states of the plurality of objects across the multiple ARDs, and instructions for providing data for rendering augmentations in the multiple ARDs in response to the states of the plurality of objects. The background comprises at least one of: a mat, and a wall. [0068] According to aspects of the present disclosure, identifying and tracking features in image frames may be performed using a number of techniques. In one approach, a method of identifying features may be performed by examining the minimum eigenvalue of each 2 by 2 gradient matrix. 
Then the features are tracked using a Newton-Raphson method of minimizing the difference between the two windows. The method of multi-resolution tracking allows for relatively large displacements between images. Note that during tracking of features from one frame to the next frame, errors may accumulate. To detect potentially bad features, the mobile device may be configured to monitor whether the image signal in the window around the feature in the current frame is still similar to the image signal around the feature in the previous frame. Since features may be tracked over many frames, the image content may be deformed. To address this issue, a consistency check may be performed with a similarity or an affine mapping. [0069] According to aspects of the present disclosure, to identify an object in an image, points on the object may be extracted to provide feature descriptions (also referred to as keypoints, feature points or features for short) of the object. This description, extracted from a training image, may then be used to identify the object when attempting to locate the object in a test image containing many other objects. To perform reliable recognition, the features extracted from the training image may be detectable even under changes in image scale, noise and illumination. Such points usually lie on high-contrast regions of the image, such as object edges. [0070] Another characteristic of these features is that the relative positions between them in the original scene may not change from one image to another. For example, if only the four corners of a door are used as features, they may work regardless of the door's position; but if points in the frame are used, the recognition may fail if the door is opened or closed. Similarly, features located in articulated or flexible objects may typically not work if any change in their internal geometry happens between two images in the set being processed. 
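The minimum-eigenvalue feature test described in paragraph [0068] can be sketched as follows: for a candidate window, sum the outer products of the image gradients and score the window by the smaller eigenvalue of the resulting 2x2 gradient matrix. This is a simplified illustration (central differences, no smoothing or pyramid), not the disclosed implementation.

```python
import math

def min_eigenvalue_score(patch):
    """Corner score of a grayscale patch: the minimum eigenvalue of the
    summed 2x2 gradient matrix [[Ixx, Ixy], [Ixy, Iyy]]. A large score
    indicates texture in both directions, i.e. a trackable feature."""
    h, w = len(patch), len(patch[0])
    ixx = ixy = iyy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (patch[y][x + 1] - patch[y][x - 1]) / 2.0  # central difference
            iy = (patch[y + 1][x] - patch[y - 1][x]) / 2.0
            ixx += ix * ix
            ixy += ix * iy
            iyy += iy * iy
    # Closed-form smaller eigenvalue of a symmetric 2x2 matrix
    half_trace = (ixx + iyy) / 2.0
    root = math.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy * ixy)
    return half_trace - root
```

A flat patch scores zero (no gradients in any direction), while a corner-like patch scores strictly positive.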
In some implementations, SIFT detects and uses a larger number of features from the images, which can reduce the contribution of errors caused by local variations to the average error of all feature matching errors. Thus, the disclosed method may identify objects even among clutter and under partial occlusion, because the SIFT feature descriptor can be invariant to uniform scaling and orientation, and partially invariant to affine distortion and illumination changes. [0071] For example, keypoints of an object may first be extracted from a set of reference images and stored in a database. An object is recognized in a new image by comparing each feature from the new image to this database and finding candidate matching features based on the Euclidean distance of their feature vectors. From the full set of matches, subsets of keypoints that agree on the object and its location, scale, and orientation in the new image may be identified to filter out good matches. The determination of consistent clusters may be performed by using a hash table implementation of a generalized Hough transform. Each cluster of 3 or more features that agree on an object and its pose may then be subject to further detailed model verification, and subsequently outliers may be discarded. The probability that a particular set of features indicates the presence of an object may then be computed based on the accuracy of fit and the number of probable false matches. Object matches that pass the tests can be identified as correct with high confidence. [0072] According to aspects of the present disclosure, image feature generation transforms an image into a large collection of feature vectors, each of which may be invariant to image translation, scaling, and rotation, as well as invariant to illumination changes and robust to local geometric distortion. These features share similar properties with neurons in the inferior temporal cortex that are used for object recognition in primate vision. 
Key locations may be defined as maxima and minima of the result of a difference-of-Gaussians function applied in scale space to a series of smoothed and resampled images. Low-contrast candidate points and edge response points along an edge may be discarded. Dominant orientations are assigned to localized keypoints. This approach ensures that the keypoints are more stable for matching and recognition. SIFT descriptors robust to local affine distortion may then be obtained by considering pixels around a radius of the key location, and by blurring and resampling local image orientation planes. [0073] Feature matching and indexing may include storing SIFT keys and identifying matching keys from the new image. In one approach, a modification of the k-d tree algorithm, also referred to as the best-bin-first search method, may be used to identify the nearest neighbors with high probability using a limited amount of computation. The best-bin-first algorithm uses a modified search ordering for the k-d tree algorithm so that bins in feature space may be searched in the order of their closest distance from the query location. This search order requires the use of a heap-based priority queue for efficient determination of the search order. The best candidate match for each keypoint may be found by identifying its nearest neighbor in the database of keypoints from training images. The nearest neighbors can be defined as the keypoints with minimum Euclidean distance from the given descriptor vector. The probability that a match is correct can be determined by taking the ratio of the distance from the closest neighbor to the distance of the second closest. [0074] In one exemplary implementation, matches in which the distance ratio is greater than 0.8 may be rejected, which eliminates 90% of the false matches while discarding less than 5% of the correct matches. 
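The nearest-neighbor distance-ratio test of paragraphs [0073]-[0074] can be sketched as follows. For clarity a brute-force search stands in for the best-bin-first k-d tree search, and the function and entry names are assumptions for illustration.

```python
def match_ratio_test(query, database, max_ratio=0.8):
    """Match a descriptor against a database of (name, vector) entries,
    accepting a match only if the closest neighbor is sufficiently nearer
    than the second closest (the 0.8 distance-ratio test in the text)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(database, key=lambda e: dist(query, e[1]))
    best, second = ranked[0], ranked[1]
    if dist(query, best[1]) < max_ratio * dist(query, second[1]):
        return best[0]
    return None  # ambiguous match rejected by the ratio test
```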
To further improve the efficiency of the best-bin-first algorithm, the search may be cut off after checking a predetermined number (for example 100) of nearest neighbor candidates. For a database of 100,000 keypoints, this may provide a speedup over exact nearest neighbor search by about 2 orders of magnitude, yet result in less than a 5% loss in the number of correct matches. [0075] Note that with the exemplary implementation, the Hough transform may be used to cluster reliable model hypotheses to search for keys that agree upon a particular model pose. The Hough transform may be used to identify clusters of features with a consistent interpretation by using each feature to vote for object poses that may be consistent with the feature. When clusters of features are found to vote for the same pose of an object, the probability of the interpretation being correct may be higher than for any single feature. An entry in a hash table may be created to predict the model location, orientation, and scale from the match hypothesis. The hash table can be searched to identify clusters of at least 3 entries in a bin, and the bins may be sorted into decreasing order of size. [0076] According to aspects of the present disclosure, each of the SIFT keypoints may specify a 2D location, scale, and orientation. In addition, each matched keypoint in the database may have a record of its parameters relative to the training image in which it is found. The similarity transform implied by these 4 parameters may be an approximation to the 6 degree-of-freedom pose space for a 3D object and also does not account for any non-rigid deformations. Therefore, an exemplary implementation may use broad bin sizes of 30 degrees for orientation, a factor of 2 for scale, and 0.25 times the maximum projected training image dimension (using the predicted scale) for location. The SIFT key samples generated at the larger scale may be given twice the weight of those at the smaller scale. 
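The hash-table pose voting of paragraphs [0075]-[0076] can be sketched as below. For simplicity each match votes for a single bin rather than the 2 closest bins per dimension, and the match representation is an assumption for illustration; the bin widths follow the text (30 degrees for orientation, a factor of 2 for scale, 0.25 of the projected training image dimension for location).

```python
import math
from collections import defaultdict

def hough_pose_clusters(matches, min_votes=3):
    """Vote each feature match into a quantized pose bin (orientation,
    log2 scale, x, y) via a hash table, and return the bins holding at
    least min_votes entries, sorted into decreasing order of size."""
    table = defaultdict(list)
    for m in matches:  # m: dict with keys orientation, scale, x, y, dim
        key = (int(m["orientation"] // 30) % 12,            # 30-degree bins
               int(math.floor(math.log2(m["scale"]))),      # factor-of-2 bins
               int(m["x"] // (0.25 * m["dim"])),            # location bins
               int(m["y"] // (0.25 * m["dim"])))
        table[key].append(m)
    clusters = [v for v in table.values() if len(v) >= min_votes]
    return sorted(clusters, key=len, reverse=True)
```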
With this approach, the larger scale may in effect be able to filter the most likely neighbors for checking at the smaller scale. This approach also improves recognition performance by giving more weight to the least-noisy scale. According to aspects of the present disclosure, to avoid the issue of boundary effects in bin assignment, each keypoint match may vote for the 2 closest bins in each dimension, giving a total of 16 entries for each hypothesis and further broadening the pose range. [0077] According to aspects of the present disclosure, outliers may be removed by checking for agreement between each image feature and the model, for a given parameter solution. For example, given a linear least squares solution, each match may be required to agree within half the error range that is used for the parameters in the Hough transform bins. As outliers are discarded, the linear least squares solution may be re-solved with the remaining points, and the process may be iterated. In some implementations, if fewer than a predetermined number of points (e.g. 3 points) remain after discarding outliers, the match may be rejected. In addition, a top-down matching phase may be used to add any further matches that agree with the projected model position, which may have been missed from the Hough transform bin due to the similarity transform approximation or other errors. [0078] The decision to accept or reject a model hypothesis can be based on a detailed probabilistic model. The method first computes an expected number of false matches to the model pose, given the projected size of the model, the number of features within the region, and the accuracy of the fit. A Bayesian probability analysis can then give the probability that the object may be present based on the actual number of matching features found. A model may be accepted if the final probability for a correct interpretation is greater than a predetermined percentage (for example 95%). 
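The iterative least-squares pruning of paragraph [0077] can be sketched for the simple case of a 2D line fit. The residual tolerance here stands in for "half the error range" of the Hough bins, and all names are illustrative assumptions.

```python
def fit_with_outlier_rejection(points, tol=1.5, min_points=3):
    """Iteratively fit y = a*x + b by linear least squares, discard points
    whose residual exceeds tol, and refit; reject the match (return None)
    if fewer than min_points inliers remain."""
    pts = list(points)
    while True:
        if len(pts) < min_points:
            return None                      # too few points: reject match
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        denom = n * sxx - sx * sx
        if denom == 0:
            return None                      # degenerate (all x equal)
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
        inliers = [(x, y) for x, y in pts if abs(y - (a * x + b)) <= tol]
        if len(inliers) == len(pts):
            return a, b, inliers             # converged: no point discarded
        pts = inliers                        # re-solve with remaining points
```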
[0079] According to aspects of the present disclosure, in one approach, the rotation invariant feature transform (RIFT) method may be employed as a rotation-invariant generalization of SIFT to address clutter or partial occlusion situations. The RIFT descriptor may be constructed using circular normalized patches divided into concentric rings of equal width, and within each ring a gradient orientation histogram may be computed. To maintain rotation invariance, the orientation may be measured at each point relative to the direction pointing outward from the center. [0080] In another approach, a generalized robust invariant feature (G-RIF) method may be used. The G-RIF encodes edge orientation, edge density and hue information in a unified form combining perceptual information with spatial encoding. The object recognition scheme uses neighboring context based voting to estimate object models. [0081] In yet another approach, a speeded up robust feature (SURF) method may be used, which uses a scale- and rotation-invariant interest point detector/descriptor that can outperform previously proposed schemes with respect to repeatability, distinctiveness, and robustness. SURF relies on integral images for image convolutions to reduce computation time, and builds on the strengths of the leading existing detectors and descriptors (using a fast Hessian matrix-based measure for the detector and a distribution-based descriptor). The SURF method describes a distribution of Haar wavelet responses within the interest point neighborhood. Integral images may be used for speed, and 64 dimensions may be used to reduce the time for feature computation and matching. The indexing step may be based on the sign of the Laplacian, which increases the matching speed and the robustness of the descriptor. [0082] In yet another approach, the principal component analysis SIFT (PCA-SIFT) method may be used. 
In some implementations, the PCA-SIFT descriptor is a vector of image gradients in the x and y directions computed within the support region. The gradient region can be sampled at 39x39 locations. Thus, the vector can be of dimension 3042. The dimension can be reduced to 36 with PCA. In yet another approach, the gradient location-orientation histogram (GLOH) method can be employed, which is an extension of the SIFT descriptor designed to increase its robustness and distinctiveness. In some implementations, the SIFT descriptor can be computed for a log-polar location grid with three bins in the radial direction (the radii set to 6, 11, and 15) and 8 bins in the angular direction, which results in 17 location bins. The central bin may not be divided in angular directions. The gradient orientations may be quantized into 16 bins, resulting in a 272-bin histogram. The size of this descriptor can be reduced with PCA. The covariance matrix for PCA can be estimated on image patches collected from various images. The 128 largest eigenvectors may then be used for description. [0083] In yet another approach, a two-object recognition algorithm may be employed for use within the limitations of current mobile devices. In contrast to the classic SIFT approach, the Features from Accelerated Segment Test (FAST) corner detector can be used for feature detection. This approach distinguishes between the off-line preparation phase, where features may be created at different scale levels, and the on-line phase, where features may be created at the current fixed scale level of the mobile device's camera image. In one exemplary implementation, features may be created from a predetermined fixed patch size (for example 15x15 pixels) and form a SIFT descriptor with 36 dimensions. The approach can be further extended by integrating a scalable vocabulary tree in the recognition pipeline. This allows an efficient recognition of a larger number of objects on mobile devices. 
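The FAST segment test named in paragraph [0083] can be sketched as follows: a pixel is a corner when a long enough contiguous arc of the 16-pixel Bresenham circle around it is uniformly brighter or darker than the centre by a threshold. This is a simplified, unoptimized illustration (no high-speed pre-test, no non-maximum suppression).

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, threshold=20, arc=9):
    """FAST segment test: True if at least `arc` contiguous circle pixels
    are all brighter or all darker than the centre pixel by `threshold`."""
    c = img[y][x]
    marks = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        marks.append(1 if p >= c + threshold else (-1 if p <= c - threshold else 0))
    doubled = marks + marks        # duplicate to handle wrap-around runs
    run = best = 0
    prev = 0
    for m in doubled:
        if m != 0 and m == prev:
            run += 1
        else:
            run = 1 if m != 0 else 0
        prev = m
        best = max(best, run)
    return best >= arc
```

A vertical step edge yields only a 7-pixel bright arc here and is correctly rejected with arc=9, illustrating why the segment test favors corners over edges.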
[0084] According to aspects of the present disclosure, the detection and description of local image features can help in object recognition. The SIFT features can be local and based on the appearance of the object at particular interest points, and may be invariant to image scale and rotation. They may also be robust to changes in illumination, noise, and minor changes in viewpoint. In addition to these properties, the features may be highly distinctive, relatively easy to extract, and allow for correct object identification with low probability of mismatch. The features can be relatively easy to match against a (large) database of local features, and generally probabilistic algorithms such as k-dimensional (k-d) trees with best-bin-first search may be used. Object descriptions by a set of SIFT features may also be robust to partial occlusion. For example, as few as 3 SIFT features from an object may be sufficient to compute its location and pose. In some implementations, recognition may be performed in quasi real time, for small databases and on modern computer hardware. [0085] According to aspects of the present disclosure, the random sample consensus (RANSAC) technique may be employed to remove outliers caused by moving objects in view of the camera. Note that RANSAC uses an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers. This method can be non-deterministic, as it produces a reasonable result only with an associated probability, where the probability may increase as more iterations are performed. [0086] In one exemplary implementation, the inputs are a set of observed data values and a parameterized model which can be fitted to the observations, with corresponding confidence parameters. In this exemplary implementation, the method iteratively selects a random subset of the original data. These data can be hypothetical inliers, and the hypothesis may then be tested as follows: 1. 
A model can be fitted to the hypothetical inliers, i.e. all free parameters of the model are reconstructed from the inliers. 2. All other data can then be tested against the fitted model and, if a point fits well to the estimated model, it can be considered as a hypothetical inlier. 3. The estimated model can be considered acceptable if a sufficient number of points have been classified as hypothetical inliers. 4. The model can be re-estimated from all hypothetical inliers, because it has only been estimated from the initial set of hypothetical inliers. 5. Finally, the model can be evaluated by estimating the error of the inliers relative to the model. [0087] The above procedure can be repeated for a predetermined number of times, each time producing either a model which may be rejected because too few points are classified as inliers, or a refined model together with a corresponding error measure. In the latter case, the refined model can be kept if its error is lower than that of the previously saved model. [0088] In another exemplary implementation, moving objects in view of the camera can be actively identified and removed using a model-based motion tracking method. In one approach, the objective of tracking can be treated as a problem of model recognition. A binary representation of the target can be tracked, and a Hausdorff distance based search can be used to search regions of the image for the object. For the binary representation of the target (a model), output from the standard Canny edge detector applied to the Gaussian smoothed image can be augmented with the notion of a model history. At each frame, a Hausdorff search can be performed on each target, using the Canny edges from the current image and the current model. In addition, an affine estimation may be performed to approximate the net background motion. 
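The five-step RANSAC loop enumerated in paragraphs [0086]-[0087] can be sketched for a simple 2D line model. The minimal sample size, residual tolerance, and function names below are illustrative assumptions, not the disclosed implementation.

```python
import random

def ransac_line(points, iters=200, tol=0.5, min_inliers=5, seed=0):
    """RANSAC sketch for a line y = a*x + b: repeatedly fit a model to a
    random minimal sample (step 1), collect points that agree within tol
    (steps 2-3), keep the hypothesis with the most inliers, and finally
    re-estimate the model from all its inliers (steps 4-5)."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                         # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) >= min_inliers and (best is None or len(inliers) > len(best[2])):
            best = (a, b, inliers)
    if best is None:
        return None                          # too few inliers every time
    # Step 4: re-estimate from all hypothetical inliers by least squares
    xs = [x for x, _ in best[2]]; ys = [y for _, y in best[2]]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs); sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b, best[2]
```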
From the results of these two searches, information can be gathered about the target, and be used to approximate the motion of the target, as well as to separate the background from motion in the region of the target. To be able to handle hazard/unusual conditions (such as the object becoming occluded by going into a shadow, the object leaving the frame, or camera image distortion providing bad image quality), history data about the target may be retained, such as the target's past motion and size change, characteristic views of the target (snapshots throughout time that provide an accurate representation of the different ways the target has been tracked), and match qualities in the past. [0089] The history of tracking the target can be useful in more than just aiding hazard/unusual conditions; a solid motion tracking method can involve history data, and not just a frame-by-frame method of motion comparison. This history state can provide information regarding how to decide what should be considered part of the target (e.g. things moving close to the object at the same speed should be incorporated into the object), and with information about motion and size, the method can predictively estimate where a lost object may have gone, or where it might reappear (which has been useful in recovering targets that leave the frame and reappear later in time). [0090] An inherent challenge in the motion tracking method may be caused by the fact that the camera can have an arbitrary movement (as opposed to a stationary camera), which makes developing a tracking system that can handle unpredictable changes in camera motion difficult. A computationally efficient affine background estimation scheme may be used to provide information as to the motion of the camera and scene. 
[0091] According to aspects of the present disclosure, an affine transformation can be performed from the image at time t to the image at time t+dt, which allows correlating the motion in the two images. This background information allows the method to synthesize an image at time t+dt from the image at time t and the affine transform that can be an approximation of the net scene motion. This synthesized image can be useful in generating new model information and removing background clutter from the model space, because a difference of the actual image at t+dt and the generated image at t+dt can be taken to remove image features from the space surrounding targets. [0092] In addition to the use of the affine transform as a tool to clean up the search space, it can also be used to normalize the coordinate movement of the targets: by having a vector to track how the background may be moving, and a vector to track how the target may be moving, a difference of the two vectors may be taken to generate a vector that describes the motion of the target with respect to the background. This vector allows the method to predictively match where the target should be, and to anticipate hazard conditions (for example, looking ahead in the direction of the motion can provide clues about upcoming obstacles, as well as keeping track of where the object may be in case of a hazard condition). When an object enters a hazard condition, the method may still be able to estimate the background motion, and use that coupled with the knowledge of the model's previous movements to guess where the model may reappear, or re-enter the frame. [0093] The background estimation can be a key factor in the prolonged tracking of objects. Note that short-term tracking may be performed without background estimation, but after a period of time, object distortion and hazards may be difficult to cope with effectively without a good estimation of the background.
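The vector arithmetic in paragraph [0092] can be made concrete with a small sketch. The 2x3 affine matrix layout and the function names below are illustrative assumptions, not taken from the disclosure:

```python
def apply_affine(point, A):
    # A is an assumed 2x3 affine matrix [[a, b, tx], [c, d, ty]] that maps
    # coordinates in the frame at time t to the frame at time t+dt.
    x, y = point
    return (A[0][0] * x + A[0][1] * y + A[0][2],
            A[1][0] * x + A[1][1] * y + A[1][2])

def target_motion(target_t, target_tdt, A):
    # Background-relative motion vector: where the affine background model
    # predicts a static target would land, subtracted from where the target
    # was actually observed at t+dt.
    px, py = apply_affine(target_t, A)  # predicted (background-only) position
    ax, ay = target_tdt                 # observed position at t+dt
    return (ax - px, ay - py)
```

For a pure camera translation of (2, 3), a target observed at (10, 10) and then at (15, 13) yields a background-relative motion of (3, 0): the target moved right, independently of the camera.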
[0094] According to aspects of the present disclosure, one of the advantages of using the Hausdorff distance as a matching operator is that it can be quite tolerant of changes in shape during matching, but using the Hausdorff distance as a matching operator may require that the objects being tracked be more accurately defined. [0095] In one approach, straight dilation-based methods of grabbing a new model from the time t+1 image can be used. Note that in some situations where there can be non-object features close to the object (which occurs quite often), the dilation method may not be effective because it may slowly incorporate the entire scene into the model. Thus, a method of updating the model from frame to frame that is tolerant to changes in the model shape, but not so relaxed as to incorporate non-model pixels into the model, may be adopted. One exemplary implementation is to use a combination of background removal and adding the previous models to the current model match window, taking what seem to be stable pixels, as well as the new ones surrounding them, which over time may either get eliminated from the model because they are not stable, or get incorporated into the model. This approach can be effective in keeping the models relatively clean from clutter in the image. For example, with this approach, a road close to a truck no longer gets pulled into the model pixel by pixel. Note that the models may appear to be dilated, but this may be a result of the history effect of how the models are constructed; it may also have the feature of making the search results more definite, because this method can have more model pixels to possibly match in the next frame. [0096] Note that at each frame, there may be a significant amount of computation to be performed.
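The Hausdorff matching operator referred to in paragraph [0094] can be sketched directly from its definition: the directed distance h(A, B) is the largest distance from any point of A to its nearest point of B. The brute-force search below over candidate offsets is purely illustrative (a real edge-based matcher would be far more optimized):

```python
import math

def directed_hausdorff(a, b):
    # h(A, B): max over points of A of the distance to the nearest point of B.
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two point sets (e.g. edge pixels).
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def best_match(model, image_edges, offsets):
    # Slide the binary model over candidate offsets and keep the placement
    # whose directed Hausdorff distance to the image edges is smallest.
    best, best_d = None, float("inf")
    for dx, dy in offsets:
        shifted = [(x + dx, y + dy) for x, y in model]
        d = directed_hausdorff(shifted, image_edges)
        if d < best_d:
            best, best_d = (dx, dy), d
    return best, best_d
```

The tolerance to shape change noted in the text follows from the max-of-min structure: a few displaced model points only raise the distance by their own mismatch, rather than invalidating the whole match.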
According to some implementations, the mobile device can be configured to perform smoothing/feature extraction, Hausdorff matching each target (for example one match per model), as well as affine background estimation. Each of these operations can be quite computationally expensive individually. In order to achieve real-time performance on a mobile device, the design can be configured to use as much parallelism as possible. [0097] Note that at least paragraphs [0098]-[0010], Figures 1-2, Figure 11 and their corresponding descriptions provide means for tracking a plurality of objects and a background based at least in part on visual information derived from an image, means for maintaining states of at least one object of the plurality of objects based at least in part on information other than the visual information, and means for providing data for rendering augmentation in response to the states of the plurality of objects. [0098] The methodologies and mobile device described herein can be implemented by various means depending upon the application. For example, these methodologies can be implemented in hardware, firmware, software, or a combination thereof. For a hardware implementation, the processing units can be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof. Herein, the term "control logic" encompasses logic implemented by software, hardware, firmware, or a combination. [0099] For a firmware and/or software implementation, the methodologies can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. 
Any machine readable medium tangibly embodying instructions can be used in implementing the methodologies described herein. For example, software codes can be stored in a memory and executed by a processing unit. Memory can be implemented within the processing unit or external to the processing unit. As used herein the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other storage devices and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored. [00100] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media may take the form of an article of manufacture. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [00101] In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus.
For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause at least one processor to implement the functions outlined in the claims. That is, the communication apparatus includes transmission media with signals indicative of information to perform disclosed functions. At a first time, the transmission media included in the communication apparatus may include a first portion of the information to perform the disclosed functions, while at a second time the transmission media included in the communication apparatus may include a second portion of the information to perform the disclosed functions. [00102] The disclosure may be implemented in conjunction with various wireless communication networks such as a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms "network" and "system" are often used interchangeably. The terms "position" and "location" are often used interchangeably. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, a Long Term Evolution (LTE) network, a WiMAX (IEEE 802.16) network and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on. Cdma2000 includes IS-95, IS-2000, and IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" (3GPP).
Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may be an IEEE 802.11x network, and a WPAN may be a Bluetooth network, an IEEE 802.15x network, or some other type of network. The techniques may also be implemented in conjunction with any combination of WWAN, WLAN and/or WPAN. [00103] A mobile station refers to a device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop or other suitable mobile device which is capable of receiving wireless communication and/or navigation signals. The term "mobile station" is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wire line connection, or other connection - regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND. Also, "mobile station" is intended to include all devices, including wireless communication devices, computers, laptops, etc. which are capable of communication with a server, such as via the Internet, Wi-Fi, or other network, and regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device, at a server, or at another device associated with the network. Any operable combination of the above is also considered a "mobile station." [00104] Designation that something is "optimized," "required" or other designation does not indicate that the current disclosure applies only to systems that are optimized, or systems in which the "required" elements are present (or other limitation due to other designations). These designations refer only to the particular described implementation.
Of course, many implementations are possible. The techniques can be used with protocols other than those discussed herein, including protocols that are in development or to be developed. [00105] One skilled in the relevant art will recognize that many possible modifications and combinations of the disclosed embodiments may be used, while still employing the same basic underlying mechanisms and methodologies. The foregoing description, for purposes of explanation, has been written with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described to explain the principles of the disclosure and their practical applications, and to enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as suited to the particular use contemplated.
A variable sector size for a flash memory device is disclosed. The total available memory of the flash memory device is divided into sub-units. Each sub-unit has a pre-decoder coupled with it to enable operations on the memory within that sub-unit. A sector size control register is coupled with pre-decoder enabling logic, which is in turn coupled with the pre-decoders. The sector size control register and pre-decoder enabling logic determine how many pre-decoders, and therefore how many sub-units, are activated at a given time for a given memory operation.
We claim:

1. A high density flash memory device comprising: an array comprising a plurality of sub-units, each of said plurality of sub-units comprising one or more single level flash memory cells, each of said plurality of sub-units being coupled with a sub-unit pre-decoder, said sub-unit pre-decoder being operative to enable an operation on said one or more single level flash memory cells of said corresponding sub-unit; a sector selector coupled with said sub-unit pre-decoders and operative to decode an input memory address and activate one or more of said sub-unit pre-decoders corresponding to said input memory address for said operation; and a sector size control register coupled with said sector selector and operative to control the number of sub-unit pre-decoders activated for said input memory address and said operation.

2. The high density flash memory device of claim 1, wherein the number of sub-unit pre-decoders activated for a given input address is a function of data stored in said sector size control register.

3. The high density flash memory device of claim 2, wherein said data is characterized by first and second values and further wherein: said sector decoder activates two of said sub-unit pre-decoders corresponding to said input memory address when said data is equal to said first value; and said sector decoder activates one of said sub-unit pre-decoders corresponding to said input memory address when said data is equal to said second value.

4. The high density flash memory device of claim 1, wherein each of said plurality of sub-units comprises 512 Kilobits of single level flash memory cells.

5. The high density flash memory device of claim 1, wherein said operation is an erase operation.

6. The high density flash memory device of claim 1, wherein said operation is a program operation.

7.
A method of varying the sector size of a high density flash memory device comprising an array of single level flash memory cells and a sector size control register, said method comprising: (a) subdividing said array of single level flash memory cells into a plurality of sub-units, each of said sub-units further comprising a sub-unit pre-decoder; (b) storing data in said sector size control register, said stored data representing the number of sub-units to be enabled for a memory address of said array; (c) decoding an input memory address; and (d) enabling one or more of said sub-unit pre-decoders based on said decoded input memory address and said stored data.

8. The method of claim 7, wherein each of said plurality of sub-units comprises 512 Kilobits of single level flash memory cells.

9. The method of claim 7, wherein said data is characterized by first and second values and further wherein (d) further comprises enabling two sub-unit pre-decoders when said data equals said first value and enabling one sub-unit pre-decoder when said data equals said second value.

10. The method of claim 7, further comprising: (e) performing a memory operation on said enabled one or more sub-units.

11. The method of claim 10, wherein said operation further comprises erasing said enabled one or more sub-units.

12. The method of claim 10, wherein said operation further comprises programming said enabled one or more sub-units.

13.
A sector decoder for a flash memory array, said flash memory array being sub-divided into a plurality of sub-units, said sector decoder comprising: a sector size register operative to store data representing the number of said plurality of sub-units to be enabled for a memory operation; an address decoder operative to decode an input memory address; selection logic coupled with said address decoder, said sector size register and said flash memory array and operative to enable one or more of said plurality of sub-units corresponding to said decoded input memory address and said stored data for said operation.

14. The sector decoder of claim 13, wherein said data is characterized by first and second values and further wherein said selection logic is further operative to enable two of said one or more sub-units when said data is equal to said first value and enable one of said one or more sub-units when said data is equal to said second value.

15. The sector decoder of claim 13, wherein said operation is an erase operation.

16. The sector decoder of claim 13, wherein said operation is a program operation.
REFERENCE TO EARLIER FILED APPLICATION

This application claims the benefit of the filing date pursuant to 35 U.S.C. § 119(e) of Provisional Application Serial No. 60/199,671, filed Apr. 25, 2000, the disclosure of which is hereby incorporated by reference.

COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

BACKGROUND

Computers, personal digital assistants, cellular telephones and other electronic systems and devices typically include processors and memory. The memory is used to store instructions (typically in the form of computer programs) to be executed and/or data to be operated on by the processors to achieve the functionality of the device. In some applications, the systems and devices may require that the instructions and/or data be retained in some form of a permanent/non-volatile storage medium so that the information is not lost when the device is turned off or power is removed. Exemplary applications include computer BIOS storage and diskless handheld computing devices such as personal digital assistants.

One way to provide such non-volatile storage capability is to include a mass-storage device such as a hard disk drive. Hard disk drives are mechanical devices which store data on rotating magnetic platters. However, such devices may be difficult to fit in small systems and may have significant reliability, cost and manufacturing constraints. An alternative to such devices are integrated-circuit based non-volatile memories. One type of non-volatile memory that can be used is Erasable Programmable Read Only Memory ("EPROM").
While conventional EPROM's provide reliable non-volatile storage, they may not be able to be reprogrammed in the field in a practical manner. For example, EPROM's typically require exposure to ultraviolet light to erase them, which may require that the EPROM memory chips be removed from the device. Once erased and reprogrammed, they are placed back in the device. In many applications, removing the memory to reprogram the device is not practical. In addition, besides not being easily reprogrammed, EPROM's may not have satisfactory data storage densities.

To avoid the complexity of EPROM's and to provide a device that can be reprogrammed in the field, many electronic designs use Electrically Erasable Programmable Read Only Memory ("EEPROM"), Static Random Access Memory ("SRAM") or flash memory, which can be reprogrammed electrically and without special hardware. SRAM is not technically a form of non-volatile memory but can be used in some applications requiring non-volatile capability.

EEPROM has the disadvantages of being expensive and having a very limited life cycle, i.e. an EEPROM can only be erased and rewritten a limited number of times before the device becomes non-functional. SRAM offers high operating speeds but only maintains its contents as long as power is supplied, therefore requiring a battery or other power source. This necessitates additional hardware to maintain power to the SRAM to preserve the stored contents, which increases manufacturing cost and complexity. Further, the additional hardware may put undesirable constraints on the physical size of the design. In addition, EEPROM's and SRAM's may not have as high a data storage density as compared to other forms of storage.
Therefore, where cost, size or density is a factor, flash memories are preferred because they may be simpler to reprogram in the field than EPROM's, less expensive than EEPROM's, easier to implement than battery-backed SRAM's and available in higher data storage densities.

Flash memory (or flash RAM) is a form of non-volatile storage which uses a memory cell design with a floating gate. High voltages are applied to the memory cell inputs to program/store charge on the floating gate or to erase/remove charge from the floating gate. Programming occurs by hot electron transfer to place charge on the floating gate, while erasure makes use of Fowler-Nordheim tunneling in which electrons pierce through a thin dielectric material, reducing the amount of electronic charge on the floating gate. Erasing a cell sets the logical value of the cell to "1" while programming the cell sets the logical value to "0". Aside from programming or erasing operations, a flash memory operates similarly to a randomly accessible read only memory (ROM). Conventionally, a flash memory chip, including the flash memory storage cells and support logic/circuitry, is made by fabricating layers of semiconductor material and interconnect layers of polysilicon and first and second metal layers onto a substrate. It will be appreciated that there are numerous integrated circuit fabrication techniques, involving more or fewer layers, which are applicable herein.

Prior flash memories could only be erased by erasing the entire memory chip, also known as bulk erasure. Byte by byte erasure was not possible. To somewhat alleviate this problem, modern flash memory is typically divided logically into blocks called "sectors" where each sector contains a portion of the total bytes of data storage available. For example, a typical flash memory may have 32 megabits of total storage and be logically broken down into 64 sectors, each sector containing 64 Kilobytes of data (one byte being equal to eight bits).
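The sector arithmetic in the example above (32 megabits split evenly into 64 sectors) can be checked with a short helper; the function names are purely illustrative:

```python
def sector_bytes(total_megabits, n_sectors):
    # Convert total storage from megabits to bytes, then divide evenly
    # among the sectors: 32 Mb -> 4 MB -> 64 KB per sector for 64 sectors.
    total_bytes = total_megabits * (2 ** 20) // 8
    return total_bytes // n_sectors

def sector_of(byte_address, sector_size):
    # Which sector a byte address falls in, for sector-at-a-time erasure.
    return byte_address // sector_size
```

So a byte address anywhere in the fourth 64 KB span of the example device maps to sector 3, which is the granularity at which that region could be erased.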
This arrangement allows for the option of erasure of one sector at a time in addition to bulk erasure of the entire memory. While typical flash memories are still incapable of byte by byte erasure, data in the flash memory may still be programmed byte by byte (or sometimes word by word, where a word equals two or four bytes) depending on the implementation. It will be appreciated that the granularity by which a flash memory device can be programmed or erased may vary and that granularities down to bit level programming/erasure are contemplated.

In order to program and/or erase a flash memory, typically a complex process must be followed. For example, before erasing a particular sector, that sector must be programmed (known as "pre-programming"). These steps of erasing and programming involve complex application of high voltages to the memory cells for specified periods of time and in particular sequences. Many flash memories provide embedded state machines which perform the complex programming and erasing operations automatically. These processes of programming and erasing a flash memory may take a long time to complete. A typical erase sequence can take anywhere from 0.7 seconds up to 15 seconds per sector. To erase an entire chip can take up to 49 seconds depending on the number of sectors. While programming is much faster, on the order of 7 to 300 microseconds per byte, it is still slow compared to other memory devices. Programming an entire chip can still take up to 120 seconds (including the time to verify the data) depending on the capacity of the chip. Typically, standard Dynamic Random Access Memory ("DRAM") offers write access times on the order of nanoseconds, many orders of magnitude faster than flash memory.

Another problem with existing flash memory devices has been the low density of storage offered as compared with traditional DRAM.
With the ever increasing need for storage space in modern electronic devices, combined with the need to reduce the number of discrete components, there has been a corresponding pressure to increase the amount of storage available on a single flash memory device. This increase in storage density must not come at the expense of reliability.

One way to increase the storage capacity of a flash memory device is to use a core cell with a dual-level floating gate structure. Such a structure allows one core cell to represent more than one bit of information without increasing the size/area of the device. However, such dual-level core cells are difficult to design and implement because they require complex programming, erase and read logic. This is because the multiple voltage levels that can be stored in the cell now represent more than one logical value, and the programming, erase and read logic must now be able to discriminate among these voltage levels. This raises concerns with the ability of the flash memory device to reliably store and retrieve data.

SUMMARY

The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. By way of introduction, the preferred embodiments described below relate to a high density flash memory device with a variable sector size. The device includes an array comprising a plurality of sub-units, with each of the sub-units comprising one or more single level flash memory cells. Each of the plurality of sub-units is coupled with a sub-unit pre-decoder which is operative to enable an operation on the one or more single level flash memory cells of the corresponding sub-unit. The device further includes a sector selector coupled with the sub-unit pre-decoders and operative to decode an input memory address and activate one or more of the sub-unit pre-decoders corresponding to the input memory address for the operation.
In addition, the device includes a sector size control register coupled with the sector selector and operative to control the number of sub-unit pre-decoders activated for the input memory address and the operation.

The preferred embodiments further relate to a method of varying the sector size of a high density flash memory device comprising an array of single level flash memory cells and a sector size control register. The method comprises: subdividing the array of single level flash memory cells into a plurality of sub-units, each of the sub-units further comprising a sub-unit pre-decoder; storing data in the sector size control register, where the stored data represents the number of sub-units to be enabled for a memory address of the array; decoding an input memory address; and enabling one or more of the sub-unit pre-decoders based on the decoded input memory address and the stored data.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a block diagram of a 64 Mb flash memory chip according to the present invention.

FIG. 2 depicts a block diagram of a first embodiment of a flash memory device having a variable sector size.

FIG. 3 depicts a schematic diagram of a preferred 128K sector activation logic for use with the embodiment of FIG. 2.

FIG. 4 depicts a schematic diagram of a preferred sector pre-decoder selector for use with the embodiment of FIG. 2.

FIG. 5 depicts a schematic diagram of a preferred sector pre-decoder for use with the embodiment of FIG. 2.

DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS

Herein, the phrase "coupled with" is defined to mean directly connected to or indirectly connected with through one or more intermediate components. Further, as used herein, the phrase "high logic level" is used to indicate a logic level of 1 and the phrase "low logic level" is used to indicate a logic level of 0. It will be understood that the signals underlying these representations are actually represented by voltage values.
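As a behavioral sketch of the variable-sector mechanism summarized above, the sector selector can be modeled as picking a sub-unit from the upper address bits and letting the sector size control register decide whether one sub-unit or a pair of adjacent sub-units is enabled. The sub-unit count, address split, and pairing scheme below are assumptions for illustration, not taken from the figures:

```python
def enabled_subunits(address, large_sector, n_subunits=64, subunit_bits=16):
    # Assumed address split: the bits above `subunit_bits` select which
    # sub-unit pre-decoder corresponds to the input memory address.
    subunit = (address >> subunit_bits) % n_subunits
    if large_sector:
        # Register holds the "first value": enable two adjacent sub-unit
        # pre-decoders (pairing sub-units 2k and 2k+1 is an assumption).
        base = subunit & ~1
        return [base, base + 1]
    # Register holds the "second value": enable a single sub-unit pre-decoder.
    return [subunit]
```

An erase addressed to sub-unit 5 then touches only sub-unit 5 in small-sector mode, but the pair (4, 5) in large-sector mode, doubling the effective sector size without changing the array itself.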
A signal is said to be "asserted" when it has a value which is significant to the logic it is driving. Some signals are asserted when they are at a low logic level (also referred to as "active low") and some signals are asserted when they are at a high logic level (also referred to as "active high"). It will be appreciated that all forms of digital logic representation are contemplated, including mixed logic. It will further be appreciated that the underlying voltages of the logic signals may also vary, with typical values being 2 or 3 Volts representing a logic 1 and 0 Volts representing a logic 0.

Referring now to the Figures and in particular FIG. 1, there is schematically shown a flash memory device 100 according to the present invention that provides 64 megabits (Mb) of storage using a single level NOR type flash memory cell. An exemplary flash memory device 100 is the Am29LV640DU and Am29LV641DU 64 Mb flash memory chips manufactured by Advanced Micro Devices, Inc., located in Sunnyvale, Calif. These devices are discussed in more detail in "Advance Information: Am29LV640DU/Am29LV641DU 64 Megabit (4 M*16-Bit) CMOS 3.0 Volt-only Uniform Sector Flash Memory with Versatile I/O(TM) Control," published by Advanced Micro Devices, Inc., located in Sunnyvale, Calif., herein incorporated by reference.

The exemplary flash memory device 100 utilizes a single level NOR flash memory cell which is fabricated using a 0.25 µm technology. This allows higher densities and smaller die sizes. In addition, single level NOR flash memory cells require less complex programming, erase and read logic versus dual level memory cells. Further, it is easier to ensure uniform cell performance across a large array of single level NOR cells.
For example, each cell only needs to be characterized by one threshold voltage.

The device 100 includes a state control and command register 102, a program voltage generator 104, a Vcc detector 106, a timer 108, sector switches 110, an erase voltage generator 112, chip and output enable logic 114, an address latch 116, a Y-decoder 118, an X-decoder 120, input/output buffers 122, a data latch 124, Y-gating 126 and the cell matrix/array 128. The device 100 further includes inputs and outputs for ready/busy 130, labeled "RY/BY#", operating power 132, labeled "Vcc", ground 134, labeled "Vss", reset 136, labeled "RESET#", write enable 138, labeled "WE#", write protect 140, labeled "WP#", accelerate 142, labeled "ACC", chip enable 144, labeled "CE#", output enable 146, labeled "OE#", a 22 bit address input bus 148, labeled "A0-A21", output buffer power 150, labeled "Vio", and a 16 bit data input/output bus 152, labeled "DQ0-DQ15". The # following a signal name indicates that this signal is asserted when it has a low logic value (active low). In one embodiment, all of the components of FIG. 1 are contained on a single integrated circuit chip. The operation and use of these input and output signals is further explained in the above mentioned reference.

Note that the exemplary flash memory device 100, having 64 megabits (or 8 megabytes), is word addressable and therefore accommodates a 22 bit address input 148 and a 16 bit data input/output 152. It will be appreciated that the data size granularity with which the device 100 can be accessed can vary with the implementation and amount of total storage, with a smaller granularity requiring more input address bits and fewer data input/output bits and vice versa, and all such implementations are contemplated. For example, a device 100, having 64 megabits of storage, which is byte addressable requires 23 address bit inputs 148 and 8 data input/outputs 152.
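The address-width arithmetic above follows directly from the storage size and access width: 64 Mb is 2^26 bits, which is 2^22 sixteen-bit words (22 address lines, A0-A21) or 2^23 bytes (23 address lines). A small illustrative helper makes the relationship explicit:

```python
def address_bits(total_storage_bits, access_width_bits):
    # Number of address lines needed to select one word of the given width
    # out of the total storage; assumes the word count is a power of two.
    n_words = total_storage_bits // access_width_bits
    return n_words.bit_length() - 1

MBIT = 2 ** 20  # one megabit
```

Halving the access width (word to byte) doubles the word count and therefore adds exactly one address bit, as the text states.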
In another alternative, the device 100 supports both word and byte addressing on the same integrated circuit.

The state control and command register 102 includes the state machine and control logic which controls the operation of the device 100. This includes controlling the embedded programming and erase operations as well as other general operations of the device 100, which are discussed in more detail below. The state control and command register is responsive to the reset input 136, the write enable input 138, the write protect input 140, the accelerate input 142 and the chip enable input 144. The reset input is used to perform a hardware reset of the device 100. The write enable input 138 is used to signal the device 100 that data is to be stored in the array 128. The write protect input 140 is used to control the write protect functions of the device 100 which prevent accidental erasure of the contents stored in the array 128. The accelerate input 142 is used to speed up programming and erase functions. The chip enable input 144 is used to enable access to the device 100. The state control and command register further includes a ready/busy output 130 which indicates when the device is busy undergoing an embedded operation.

The PGM voltage generator 104 generates the necessary voltages for programming the flash memory cells of the cell matrix/array 128. The erase voltage generator 112 generates the necessary voltages for erasing the flash memory cells of the array 128. The voltage generators 104 and 112 contain voltage pumps (not shown) and switching multiplexors (not shown) which generate and route the necessary high voltages for erasing and programming flash memory cells, as well as generating the necessary voltages for read operations under the direction of the state control and command register 102.
These voltage pumps include a VPXGG pump, a voltage booster circuit, a VPPIG pump, a drain pump and a negative pump.

The VPXGG pump is a positive power supply for generating and supplying a regulated positive potential to the control gate of selected flash memory cells via the word lines. Many different voltage pumps known in the art are suitable for use in the present invention. A more detailed explanation of one technology which can be included in the VPXGG pump can be found in U.S. Pat. No. 5,291,446, "VPP POWER SUPPLY HAVING A REGULATOR CIRCUIT FOR CONTROLLING A REGULATED POSITIVE POTENTIAL" to Van Buskirk et al., the entire contents of which are incorporated herein by reference.

During read operations, the voltage booster is used to boost the word line voltage while the drain pump is used to boost the bit line voltage prior to sensing the output voltage levels. A more detailed description of one exemplary implementation of a voltage booster circuit can be found in U.S. Pat. No. 5,708,387, "FAST 3-STATE BOOSTER CIRCUIT", to Cleveland et al., the entire contents of which are incorporated herein by reference. Many booster circuits and selection circuits known in the art are suitable for use in the present invention.

The VPPIG pump is a high voltage pump used to pass high voltage to the drain of the memory cells. Various drain power supplies, known in the art, can be used for the present invention. One exemplary drain pump is disclosed in U.S. Pat. No. 5,263,000, "DRAIN POWER SUPPLY", to Van Buskirk, et al., the entire contents of which are incorporated herein by reference.

The negative pump is used to generate a relatively high negative voltage to the control gates of selected memory cells via the word lines. One example of a negative pump can be found in U.S. Pat. No. 5,612,921, "LOW SUPPLY VOLTAGE NEGATIVE CHARGE PUMP", to Chang et al., the entire contents of which are incorporated herein by reference.

Referring back to FIG.
1, the flash memory device 100 further includes a Vcc detector 106 which detects when normal operating power is applied to the device 100. The Vcc detector 106 signals the state control and command register 102 when proper Vcc is detected. The timer 108 is used by the state control and command register 102 to properly control and synchronize the embedded program and erase operations. The sector switches 110 are used to route the voltages used during the erase operation to the proper sectors which are undergoing erase. The chip and output enable logic 114 is responsive to the chip enable 144 and output enable 146 inputs. This logic is used to enable the device 100 to receive and pass data via the input/output buffers 122. The address latch 116 receives the address for a read or write operation from the address inputs 148. The address latch 116 latches the address for subsequent decoding. The Y-decoder 118 decodes the column address in the memory array 128 from the address latched in the address latch 116. The X-decoder 120 decodes the row address in the memory array 128 from the address latched in the address latch 116. The input/output buffers 122 buffer read data that is being output and write data that is being input to/from the external data bus 152 of the device 100. The input/output buffers receive power from an external voltage source, Vio 150. The data latch 124 latches and holds data being written to the array 128 coming from the input/output buffers 122 or data being read from the array 128 going to the buffers 122. The data latch 124 holds the data steady so it can be written or output depending on the operation underway. The Y-gating 126 gates the data being read from or written to the array 128. The cell matrix/array 128 includes an array of flash memory cells arranged in a row and column addressable format.
Alternatively, the cell matrix/array 128 may include one or more banks to subdivide the accessible memory, along with the additional hardware necessary to support multiple banks. The individual memory cells in the array 128 are further sub-grouped into sectors such that one or more sectors may be erased at any given time. In the exemplary flash memory device 100, the array 128 is arranged as 128 sectors of 64 kilobytes each. It will be appreciated that there are many ways to implement the basic structure of the flash memory device 100, including alternate input/output interfaces and alternate memory array structures along with accompanying supporting logic, and all such alternatives are contemplated.

The memory device 100 is programmed using an embedded programming sequence and is erased using an embedded erase sequence. The embedded sequences allow a processor to initiate a program or erase sequence and perform other tasks while the program and erase sequences are being carried out. The embedded program and erase sequences are controlled by the state control and command register 102, which uses a command register to manage the commencement of either sequence. The erase and programming operations are only accessed via the command register, which controls an internal state machine that manages device operations. Commands are written to the command register via the data inputs 152 to the memory device 100.

In the memory device 100, each memory cell within the cell array 128 includes a single level NOR-type floating gate transistor (not shown). It will be appreciated by those skilled in the art, however, that there are many ways to implement a single level flash memory cell and that the configurations and operating characteristics may vary.
It will further be appreciated that the embodiments disclosed herein are generally applicable and not limited to one particular implementation of a single level flash memory cell.

The exemplary transistor has three connections called the source, drain and control gate. In a typical flash memory array, the control gates of the memory cells are connected to the word lines of the array, which are used to address the data stored in the array. The sources are selectively connected to ground (for a read operation) depending on which bits are to be read. The drains are connected to the bit lines which are used to sense/read the stored data out of the array.

During an erase operation, the source input of the memory cell transistor is connected to a high positive voltage, the drain/bit line is left to float and the control gate/word line is connected to a relatively high negative voltage supplied by the negative pump. An exemplary high positive voltage applied to the source during an erase is approximately 5 volts and an exemplary high negative voltage applied to the control gate/word line by the negative pump is approximately minus 9 volts, although other voltages and input combinations can be used. Based on this input configuration, any charge stored on the floating gate of the memory cell transistor will discharge by flowing out to the source due to Fowler-Nordheim tunneling.

During a program operation, the source input of the memory cell transistor is connected to ground, the drain/bit line is connected to a high positive voltage provided by the VPPIG pump drain power supply and the control gate/word line is connected to a high voltage provided by the VPXGG pump positive power supply. An exemplary high voltage applied to the drain by the VPPIG pump is approximately 5 Volts while an exemplary high voltage applied to the control gate by the VPXGG pump is approximately 9 Volts.
It will be appreciated by those skilled in the art that other voltage and input combinations can also be used. Based on this input configuration, charge will flow by hot electron transfer to the floating gate of the memory cell transistor and accumulate there.

While programming and erasing the memory cell requires higher than normal voltages, reading from the cell only requires the availability of the normal supply voltage. To read from the memory cell, the source is connected to ground (also referred to as Vss) and the control gate/word line is connected to the booster power supply. Prior to selecting the transistors for a read, the bit lines are charged up via the drain pump. When the cells turn on (if erased), they will connect their respective bit lines to ground, grounding out the bit lines. The current value of the memory cell is then sensed from the drain/bit line connection. The booster power supply is used to boost the word lines during a read operation. An exemplary Vcc supply voltage is 3.0 Volts, although other supply voltages are known in the art. An exemplary booster voltage is 5.0 Volts, although the use of other voltages on the control gate for read operations is possible. If there is charge stored on the floating gate, i.e. the memory cell has been programmed, the flow of current from the drain to the source (ground) will be inhibited and the memory cell will read as a logical "0". If the memory cell has been erased, there will be no charge stored on the floating gate and, with a voltage applied to the control gate greater than the threshold voltage of the transistor, current will flow from the drain to the source and the memory cell will read as a logical "1". Note that a transistor that is on grounds its respective bit line.
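The program, erase and read behavior described above can be modeled behaviorally; this sketch abstracts the voltages away and keeps only the floating-gate charge state and the resulting read value (the class and method names are illustrative, not from the device's documentation):

```python
class FloatingGateCell:
    """Behavioral model of a single level NOR flash cell.

    A programmed cell holds charge on its floating gate, which raises the
    effective threshold voltage and inhibits drain-to-source current during
    a read, so the cell reads as logical 0. An erased cell holds no charge,
    conducts when the word line is boosted above threshold, grounds its
    bit line, and reads as logical 1.
    """

    def __init__(self) -> None:
        self.charged = False  # cells start erased in this sketch

    def program(self) -> None:
        # Hot electron transfer accumulates charge on the floating gate.
        self.charged = True

    def erase(self) -> None:
        # Fowler-Nordheim tunneling discharges the floating gate to the source.
        self.charged = False

    def read(self) -> int:
        # Charge present -> current inhibited -> logical 0; else logical 1.
        return 0 if self.charged else 1

cell = FloatingGateCell()
assert cell.read() == 1   # erased cell conducts and reads 1
cell.program()
assert cell.read() == 0   # programmed cell reads 0
cell.erase()
assert cell.read() == 1
```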
Data read out of the array is considered in its complementary form; therefore the grounded bit lines are interpreted as logical 1's and the non-grounded bit lines are considered logical 0's.

Application of the particular voltages necessary for each operation is handled by the state control and command register 102. This logic 102 controls the multiplexors that place the proper voltages from the various power supplies and Vcc on the memory cell inputs depending on the desired function.

A number of different vendors produce flash memory devices, many of these devices having various differing capabilities and tradeoffs. Often, one or more vendors may offer a capability in their devices which becomes adopted by a significant portion of the flash memory customer base. This customer base will then demand that capability (i.e. compatibility) from any devices that they are going to buy from other vendors. For example, a particular chip interface offered by one vendor may provide for easier routing of printed circuit board signal paths and therefore be preferred by a manufacturing company over other vendors' flash memory devices. Electronic devices incorporating flash memory devices are complicated and difficult to design. Therefore, once this interface is incorporated into a current design, most likely future generations of that design will also incorporate it. If a competitive vendor wishes to sell their devices to this company, they must offer a compatible interface because the company is unlikely to go back and change their design, incurring significant re-design and verification costs, just to use a competitive product.
However, the vendor, typically, must still support the remaining customer base which has not adopted the capability.

Effectively, this means that a vendor must design and maintain an inventory of flash memory devices which are compatible with the different devices of other vendors so that they can compete with these other vendors for customers who demand those capabilities. Another example of a compatibility issue has to do with sector size. In the exemplary flash memory device 100, each sector is 64 kilobytes in size. As was noted above, erase operations are performed sector by sector. If a user wishes to erase a portion of the flash memory device 100, they must supply the erase command along with an address of the sector which encompasses the portion they want to erase. Often, the portion to be erased is larger than one sector. In this case, the exemplary flash memory device 100 allows the user to specify the addresses for each sector to be erased, one after the other. Once these addresses are specified, the device 100 performs an embedded operation to erase each specified sector. Therefore, to erase two successive 64 kilobyte sectors, the user must give the erase command followed by the address of the first sector and then the address of the second sector. The device 100 will then erase the two sectors.

Other vendors of flash memory devices utilize a sector with a larger size. For example, the devices of some vendors utilize a sector which stores 128 kilobytes. This larger sector size has the advantage that, to erase the equivalent amount of memory, the 128 kilobyte sector requires only one address be given to the device, while in a device 100 with a 64 kilobyte sector size, two addresses must be given.
Products designed to utilize a 128 kilobyte sector cannot work with devices 100 which provide a 64 kilobyte sector, because the product is not designed to give two addresses to erase 128 kilobytes of the flash memory.

Therefore, in order to maintain sector size compatibility, the exemplary flash memory device 100 offers a variable sector size which can be set so that the device has either a 64 kilobyte sector size or a 128 kilobyte sector size. It is preferable that only two sector sizes be offered to reduce logical complexity: a minimum sector size and a maximum sector size equal to twice the minimum sector size. However, it will be appreciated that the disclosed embodiments are scaleable and can be used to offer sector sizes larger than 128 kilobytes, such as 192, 256 or 512 kilobytes. Further, where the minimum individual sector size is reduced, a more granular range of sector sizes can be offered. For example, where the minimum sector size is 32 kilobytes, sector sizes of 32, 64, 96, 128 kilobytes, etc. can be offered.

Referring to FIG. 2, there is shown a block diagram of the logic of the state control and command register 102 and address decoding logic 116, 118, 120 which implements the variable sector size in the exemplary device 100. For the sake of clarity, a number of the components of the state control and command register 102 and the address decoding logic 116, 118, 120 have been omitted from FIG. 2. This logic includes a 128K Content Addressable Memory 202 ("128K CAM"), 128K sector activation logic 204, labeled "EN-2S", sector pre-decoder selectors 206, labeled "SPDEC_LOGIC", and sector pre-decoders 208, labeled "SPDEC". As was described above, the memory array 128 is divided into sub-units 210 called sectors. The exemplary memory array 128 comprises 128 sectors 210, each storing 64 kilobytes of data. Alternatively, the sector 210 size can be larger or smaller.
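The sub-unit scheme just described reduces to simple address arithmetic: a logical sector of the configured size maps onto one or more consecutive 64 kilobyte sub-units. A sketch under the assumption that sector indices count logical sectors of the configured size (the function name is illustrative):

```python
MIN_SECTOR_KB = 64  # sub-unit size of the exemplary array 128

def subunits_for_erase(sector_index: int, sector_size_kb: int) -> list[int]:
    """Return the consecutive 64 KB sub-units covered by one logical sector.

    sector_index addresses logical sectors of the configured size;
    sector_size_kb must be a multiple of the 64 KB minimum.
    """
    per_sector = sector_size_kb // MIN_SECTOR_KB
    first = sector_index * per_sector
    return list(range(first, first + per_sector))

# 64 KB mode: one logical sector is one sub-unit.
assert subunits_for_erase(5, 64) == [5]
# 128 KB mode: one logical sector spans a pair of consecutive sub-units.
assert subunits_for_erase(5, 128) == [10, 11]
# The scheme scales to larger sector sizes such as 256 KB.
assert subunits_for_erase(1, 256) == [4, 5, 6, 7]
```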
Each sector 210 has a corresponding sector pre-decoder 208 and there is one sector pre-decoder selector 206 for each pair of sector pre-decoders 208 corresponding to two consecutive sectors 210.

The 128K CAM 202 is a control register which stores a data value representing the desired sector size for the device 100. This CAM 202 is actually a flash memory cell which can be programmed or erased depending on the desired sector size. If the CAM 202 is programmed, i.e. has a logical value of "0", the sector size will be 128 kilobytes. If the CAM 202 is erased, i.e. has a logical value of "1", the sector size will be 64 kilobytes. In one alternative embodiment, the values of the 128K CAM 202 represent other sector sizes. Alternatively, the 128K CAM 202 can store more than one bit of data to represent a range of potential sector sizes for the device 100.

The output of the 128K CAM 202, labeled "CONSEC2", is coupled with the 128K sector activation logic 204. Referring to FIG. 3, this logic 204 interprets the value stored in the CAM 202 and enables the sector pre-decoder selectors 206 to select one or two sector pre-decoders 208 depending on the desired sector 210 size. The 128K sector activation logic 204 is coupled with each sector pre-decoder selector 206 by the output signal labeled "EN-2S". The 128K sector activation logic 204 includes a NAND gate 302 and an inverter 304. The CONSEC2 signal is coupled with the NAND gate 302. In alternative embodiments, other control signals which enable a 128 kilobyte sector size are also coupled with the NAND gate 302. The output of the NAND gate 302 is coupled with the inverter 304. The output of the inverter 304 is the EN-2S signal. When the CONSEC2 signal is a logical 1, the NAND gate 302 and inverter 304 will assert the EN-2S signal as a logical 1 as well.

Referring back to FIG. 2, each sector pre-decoder selector 206 is coupled with the address decoding logic (shown in FIG. 1, reference numerals 116, 118, 120).
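The FIG. 3 activation logic described above is a NAND followed by an inverter, i.e. an AND of CONSEC2 with any other 128 kilobyte enables. A gate-level sketch (the assumption that unused enable inputs are tied to logic 1 is mine, not from the patent):

```python
def nand(*inputs: int) -> int:
    # NAND gate: output is 0 only when every input is 1.
    return 0 if all(inputs) else 1

def inverter(x: int) -> int:
    return 1 - x

def en_2s(consec2: int, other_enables: int = 1) -> int:
    """128K sector activation logic 204: NAND gate 302 into inverter 304."""
    return inverter(nand(consec2, other_enables))

# When CONSEC2 is a logical 1, EN-2S is asserted as a logical 1 as well.
assert en_2s(1) == 1
# With CONSEC2 low, EN-2S stays unasserted and single sectors are selected.
assert en_2s(0) == 0
```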
For a given address decoded by the address decoding logic, the sector pre-decoder selector 206 corresponding to the appropriate sector 210 is activated. Depending on the value of the 128K CAM 202, the activated sector pre-decoder selector 206 will, in turn, activate one or two sector pre-decoders 208 for the sectors 210 to be erased. Each sector pre-decoder selector 206 is coupled to two sector pre-decoders by signal paths labeled "Z2SP" and "Z2SP+1".

In actual operation, a user who wishes to erase one or more sectors 210 first sends an erase command to the device 100. Following the erase command, the user can then transmit an address corresponding to the first sector 210 to be erased. After transmitting the first address, the user can then send another address for a second sector 210 to be erased, etc. After a pre-determined time elapses without the user having sent an address, the device 100 begins the process of erasing the particular sectors 210 indicated by the user. More detail on the operation of the erase function of the exemplary flash memory device 100 can be found in "Advance Information: Am29LV640DU/Am29LV641DU 64 Megabit (4 M*16-Bit) CMOS 3.0 Volt-only Uniform Sector Flash Memory with Versatile I/O(TM) Control," published by Advanced Micro Devices, Inc., located in Sunnyvale, Calif., herein incorporated by reference.

Internally, the state control and command register 102 recognizes the erase command sent by the user. Each address sent subsequent to the erase command is decoded by the address decoding logic 116, 118, 120 into the appropriate sector pre-decoder 208 for the sector 210 to be erased. Each sector pre-decoder 208 is coupled to the address decoding logic 116, 118, 120 through a sector pre-decoder selector 206. As will be discussed below and shown in FIG. 5, each sector pre-decoder 208 includes a latch 502 which, when set, indicates that the corresponding sector is to be erased.
The decoding of the address sent by the user sets the latch 502 in the sector pre-decoder 208. Once the last address has been sent by the user and the state control and command register 102 begins the erase process, each sector 210 whose corresponding sector pre-decoder 208 has its latch 502 set will be erased.

Referring now to FIG. 4, there is shown a preferred sector pre-decoder selector 206 for use with the present embodiments. The selector 206 includes inputs 402, 404 for the sector pre-decoder 208 activation signals from the address decoding logic 116, 118, 120, labeled "Z2(v*2)" and "Z2(v*2+1)", an input 406 for the EN-2S signal from the 128K sector activation logic 204 and outputs 408, 410 for the selected sector pre-decoder activation signals, labeled "Z2SP(v*2)" and "Z2SP(v*2+1)". Each sector pre-decoder selector 206 is used to enable the sector pre-decoders 208 of two consecutive sectors 210.

The Z2(v*2) input 402 is coupled to the input of an inverter 412 whose output is coupled to one input of a NAND gate 414. The output of the NAND gate 414 is the Z2SP(v*2) output 408. The Z2(v*2+1) input 404 is coupled to the input of an inverter 416 whose output is coupled to one input of a NAND gate 418. The output of the NAND gate 418 is the Z2SP(v*2+1) output 410. In addition, the Z2(v*2) and Z2(v*2+1) inputs 402, 404 are also coupled to the inputs of a NOR gate 420. The output of the NOR gate 420 is connected to the input of a NAND gate 422. The other input of the NAND gate 422 is connected with the EN-2S input 406. The output of the NAND gate 422 is coupled to a second input of NAND gate 414 and a second input of NAND gate 418.

In this configuration, when the EN-2S input 406 is unasserted, assertion of one of the sector pre-decoder 208 activation signal inputs 402, 404 will result in the assertion of only the corresponding selected sector pre-decoder activation signal output 408 or 410.
If the EN-2S input 406 is asserted, assertion of either of the sector pre-decoder 208 activation signal inputs 402, 404 will result in the assertion of both of the corresponding selected sector pre-decoder 208 activation signal outputs 408, 410.

Referring now to FIG. 5, there is shown a preferred sector pre-decoder 208 for use with the present embodiments. The sector pre-decoder 208 includes an input 504 for the selected sector pre-decoder 208 activation signal output 408 from the sector pre-decoder selector 206. The sector pre-decoder 208 also includes a latch 502. The input 504 is coupled with the latch 502 through the intermediary logic 506. When the input 504 is asserted, the latch 502 can be set to indicate that the corresponding sector 210 should be erased.

In this way, a variable sector size is implemented in the exemplary flash memory device 100. The variable sector size allows the selection of either a 64 kilobyte or 128 kilobyte sector size. The implementation is based on dividing the memory array into sub-units, each sub-unit representing the minimum sector size available. The control logic and sector size control registers then control how many sub-units are activated or operated upon for a given operation. By activating more than one sub-unit, a larger sector size is achieved. The disclosed implementation utilizes simple logic to achieve a flexible and scaleable sector size. It will be appreciated that the size of the sectors and the number of available sizes offered by the device 100 may vary and that all such combinations are contemplated.

It is to be noted that suitable transistor sizes specifying channel width to length ratios (measured in micrometers or microns) for the transistors which make up the depicted circuits have been omitted from the figures.
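The selector behavior summarized above (one pre-decoder enabled in 64 kilobyte mode, the consecutive pair in 128 kilobyte mode) can be sketched behaviorally. This models the function of selector 206, not the exact gate-level polarities, which depend on details of FIG. 4 not reproduced here:

```python
def selector_outputs(z2_even: bool, z2_odd: bool, en_2s: bool) -> tuple[bool, bool]:
    """Behavioral model of sector pre-decoder selector 206.

    Returns (Z2SP(v*2), Z2SP(v*2+1)) as asserted/unasserted flags.
    With EN-2S unasserted, each output follows its own input; with EN-2S
    asserted, asserting either input asserts both outputs, enabling the
    pre-decoders of two consecutive sectors.
    """
    if en_2s and (z2_even or z2_odd):
        return (True, True)
    return (z2_even, z2_odd)

# 64 KB mode: only the addressed sector's pre-decoder is activated.
assert selector_outputs(True, False, en_2s=False) == (True, False)
# 128 KB mode: the consecutive pair is activated together.
assert selector_outputs(True, False, en_2s=True) == (True, True)
assert selector_outputs(False, True, en_2s=True) == (True, True)
```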
It will be appreciated that suitable ratios may be chosen depending on the design requirements and the capabilities and limitations of the particular integrated circuit fabrication process used for implementation of the circuit, as well as the performance requirements of the specific embodiment.

It will be appreciated that there are many ways to implement the disclosed logic. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.
Methods and devices for detecting suspicious or performance-degrading mobile device behaviors intelligently, dynamically, and/or adaptively determine the computing device behaviors that are to be observed, the number of behaviors that are to be observed, and the level of detail or granularity at which the mobile device behaviors are to be observed. The various aspects efficiently identify suspicious or performance-degrading mobile device behaviors without requiring an excessive amount of processing, memory, or energy resources. In an embodiment, a method for observing mobile device behaviors over a period of time to recognize mobile device behaviors inconsistent with normal operation patterns is disclosed. The method comprises determining in a processor of a mobile device a feature that is to be observed in the mobile device in order to identify a suspicious behavior of the mobile device, and adaptively observing the determined feature by collecting behavior information from a hardware component associated with the determined feature.
CLAIMS

What is claimed is:

1. A method for observing mobile device behaviors over a period of time to recognize mobile device behaviors inconsistent with normal operation patterns, the method comprising: determining in a processor of a mobile device a feature that is to be observed in the mobile device in order to identify a suspicious behavior of the mobile device; and adaptively observing the determined feature by collecting behavior information from a hardware component associated with the determined feature.

2. The method of claim 1, wherein adaptively observing the determined feature by collecting behavior information from the hardware component comprises collecting behavior information from one or more of: an inertia sensor component; a battery hardware component; a browser supporting hardware component; a camera hardware component; a subscriber identity module (SIM) hardware component; a location hardware component; a microphone hardware component; a radio interface hardware component; a speaker hardware component; a screen hardware component; a synchronization hardware component; a storage component; a universal serial bus hardware component; a user interaction hardware component; an inertia sensor driver component; a battery hardware driver component; a browser supporting hardware driver component; a camera hardware driver component; a SIM hardware driver component; a location hardware driver component; a microphone hardware driver component; a radio interface hardware driver component; a speaker hardware driver component; a screen hardware driver component; a synchronization hardware driver component; a storage driver component; a universal serial bus hardware driver component; hardware component connected through a universal serial bus; and a user interaction hardware driver component.

3.
The method of claim 2, wherein collecting behavior information from the hardware component associated with the feature comprises collecting information from a log of application programming interface (API) calls that temporarily or permanently stores API call information for access or use of the hardware component by software applications of the mobile device.

4. The method of claim 1, wherein determining the feature that is to be observed in the mobile device to identify the suspicious behavior of the mobile device comprises: applying machine learning techniques to generate a first family of classifier models that describe a cloud corpus of behavior vectors; determining which factors in the first family of classifier models have a highest probability of enabling a mobile device to conclusively determine whether a mobile device behavior is malicious or benign; generating a second family of classifier models that identify significantly fewer factors and data points as being relevant for enabling the mobile device to conclusively determine whether the mobile device behavior is malicious or benign based on the determined factors; generating a mobile device classifier model based on the second family of classifier models; and using the generated classifier model to identify the feature that is to be observed.

5. The method of claim 4, further comprising using the generated classifier model to analyze the collected behavior information.

6. A mobile computing device, comprising: a processor configured with processor-executable instructions to perform operations comprising: determining a feature that is to be observed to identify a suspicious behavior of the mobile device; and adaptively observing the determined feature by collecting behavior information from a hardware component associated with the determined feature.

7.
The mobile computing device of claim 6, wherein the processor is configured with processor-executable instructions to perform operations such that adaptively observing the determined feature by collecting behavior information from the hardware component comprises collecting behavior information from one or more of: an inertia sensor component; a battery hardware component; a browser supporting hardware component; a camera hardware component; a subscriber identity module (SIM) hardware component; a location hardware component; a microphone hardware component; a radio interface hardware component; a speaker hardware component; a screen hardware component; a synchronization hardware component; a storage component; a universal serial bus hardware component; a user interaction hardware component; an inertia sensor driver component; a battery hardware driver component; a browser supporting hardware driver component; a camera hardware driver component; a SIM hardware driver component; a location hardware driver component; a microphone hardware driver component; a radio interface hardware driver component; a speaker hardware driver component; a screen hardware driver component; a synchronization hardware driver component; a storage driver component; a universal serial bus hardware driver component; and a user interaction hardware driver component.

8. The mobile computing device of claim 7, wherein the processor is configured with processor-executable instructions to perform operations such that collecting behavior information from the hardware component associated with the feature comprises collecting information from a log of application programming interface (API) calls that stores API call information for access or use of the hardware component by software applications of the mobile device.

9.
The mobile computing device of claim 6, wherein the processor is configured with processor-executable instructions to perform operations such that determining the feature that is to be observed in the mobile device to identify the suspicious behavior of the mobile device comprises: applying machine learning techniques to generate a first family of classifier models that describe a cloud corpus of behavior vectors; determining which factors in the first family of classifier models have a highest probability of enabling a mobile device to conclusively determine whether a mobile device behavior is malicious or benign; generating a second family of classifier models that identify significantly fewer factors and data points as being relevant for enabling the mobile device to conclusively determine whether the mobile device behavior is malicious or benign based on the determined factors; generating a mobile device classifier model based on the second family of classifier models; and using the generated classifier model to identify the feature that is to be observed.

10. The mobile computing device of claim 9, wherein the processor is configured with processor-executable instructions to perform operations further comprising using the generated classifier model to analyze the collected behavior information.

11. A mobile computing device, comprising: means for determining a feature that is to be observed to identify a suspicious behavior of the mobile device; and means for adaptively observing the determined feature by collecting behavior information from a hardware component associated with the determined feature.

12.
The mobile computing device of claim 11, wherein means for adaptively observing the determined feature by collecting behavior information from the hardware component comprises means for collecting behavior information from one or more of:
an inertia sensor component;
a battery hardware component;
a browser supporting hardware component;
a camera hardware component;
a subscriber identity module (SIM) hardware component;
a location hardware component;
a microphone hardware component;
a radio interface hardware component;
a speaker hardware component;
a screen hardware component;
a synchronization hardware component;
a storage component;
a universal serial bus hardware component;
a user interaction hardware component;
an inertia sensor driver component;
a battery hardware driver component;
a browser supporting hardware driver component;
a camera hardware driver component;
a single or dual SIM hardware driver component;
a location hardware driver component;
a microphone hardware driver component;
a radio interface hardware driver component;
a speaker hardware driver component;
a screen hardware driver component;
a synchronization hardware driver component;
a storage driver component;
a universal serial bus hardware driver component; and
a user interaction hardware driver component.

13. The mobile computing device of claim 12, wherein means for collecting behavior information from the hardware component associated with the feature comprises means for collecting information from a log of application programming interface (API) calls that stores API call information for access or use of the hardware component by software applications of the mobile device.

14.
The mobile computing device of claim 11, wherein means for determining the feature that is to be observed in the mobile device to identify the suspicious behavior of the mobile device comprises:
means for applying machine learning techniques to generate a first family of classifier models that describe a cloud corpus of behavior vectors;
means for determining which factors in the first family of classifier models have a highest probability of enabling a mobile device to conclusively determine whether a mobile device behavior is malicious or benign;
means for generating a second family of classifier models that identify significantly fewer factors and data points as being relevant for enabling the mobile device to conclusively determine whether the mobile device behavior is malicious or benign based on the determined factors;
means for generating a mobile device classifier model based on the second family of classifier models; and
means for using the generated classifier model to identify the feature that is to be observed.

15. The mobile computing device of claim 14, further comprising means for using the generated classifier model to analyze the collected behavior information.

16. A non-transitory processor readable storage medium having stored thereon processor-executable software instructions configured to cause a mobile device processor to perform operations for observing mobile device behaviors over a period of time to recognize mobile device behaviors inconsistent with normal operation patterns, the operations comprising:
determining a feature that is to be observed to identify a suspicious behavior of the mobile device; and
adaptively observing the determined feature by collecting behavior information from a hardware component associated with the determined feature.

17.
The non-transitory processor readable storage medium of claim 16, wherein adaptively observing the determined feature by collecting behavior information from the hardware component comprises collecting behavior information from one or more of:
an inertia sensor component;
a battery hardware component;
a browser supporting hardware component;
a camera hardware component;
a single or dual subscriber identity module (SIM) hardware component;
a location hardware component;
a microphone hardware component;
a radio interface hardware component;
a speaker hardware component;
a screen hardware component;
a synchronization hardware component;
a storage component;
a universal serial bus hardware component;
a user interaction hardware component;
an inertia sensor driver component;
a battery hardware driver component;
a browser supporting hardware driver component;
a camera hardware driver component;
a single or dual SIM hardware driver component;
a location hardware driver component;
a microphone hardware driver component;
a radio interface hardware driver component;
a speaker hardware driver component;
a screen hardware driver component;
a synchronization hardware driver component;
a storage driver component;
a universal serial bus hardware driver component; and
a user interaction hardware driver component.

18. The non-transitory processor readable storage medium of claim 17, wherein collecting behavior information from the hardware component associated with the feature comprises collecting information from a log of application programming interface (API) calls that stores API call information for access or use of the hardware component by software applications of the mobile device.

19.
The non-transitory processor readable storage medium of claim 18, wherein determining the feature that is to be observed in the mobile device to identify the suspicious behavior of the mobile device comprises:
applying machine learning techniques to generate a first family of classifier models that describe a cloud corpus of behavior vectors;
determining which factors in the first family of classifier models have a highest probability of enabling a mobile device to conclusively determine whether a mobile device behavior is malicious or benign;
generating a second family of classifier models that identify significantly fewer factors and data points as being relevant for enabling the mobile device to conclusively determine whether the mobile device behavior is malicious or benign based on the determined factors;
generating a mobile device classifier model based on the second family of classifier models; and
using the generated classifier model to identify the feature that is to be observed.

20. The non-transitory processor readable storage medium of claim 19, further comprising using the generated classifier model to analyze the collected behavior information.
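The two-stage classifier-model generation recited in claims 9, 14, and 19 can be sketched as follows. This is an illustrative, non-limiting sketch: the per-factor scoring rule, the toy corpus, and all names below are assumptions for explanation, not part of the claimed method.

```python
# Sketch of generating a lean ("second family") classifier model from a
# full ("first family") model over a labeled corpus of behavior vectors.
# Scoring rule and data are illustrative assumptions.

def score_factors(vectors, labels):
    """Score each factor by the absolute difference between its mean
    value over malicious samples (label 1) and benign samples (label 0)."""
    n_factors = len(vectors[0])
    scores = []
    for i in range(n_factors):
        mal = [v[i] for v, y in zip(vectors, labels) if y == 1]
        ben = [v[i] for v, y in zip(vectors, labels) if y == 0]
        scores.append(abs(sum(mal) / len(mal) - sum(ben) / len(ben)))
    return scores

def lean_model(vectors, labels, keep=2):
    """Keep only the `keep` highest-scoring factor indices, i.e. the
    significantly fewer factors the on-device model will observe."""
    scores = score_factors(vectors, labels)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:keep]

# toy corpus: rows are behavior vectors, columns are factors
X = [[0, 9, 1, 0], [0, 8, 1, 0], [1, 1, 0, 5], [1, 2, 0, 4]]
y = [0, 0, 1, 1]  # 1 = malicious, 0 = benign
print(lean_model(X, y))  # → [1, 3]: the most discriminative factors
```

A real implementation would replace the mean-difference score with the machine learning techniques recited above (e.g., model-derived factor importance over a cloud corpus).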
TITLE

ADAPTIVE OBSERVATION OF DETERMINED BEHAVIORAL FEATURES ON A MOBILE DEVICE

RELATED APPLICATIONS

[0001] This application is a continuation-in-part of U.S. Patent Application No. 13/923,547 entitled "Adaptive Observation of Behavioral Features on a Mobile Device" filed June 21, 2013, which claims the benefit of priority to U.S. Provisional Application No. 61/756,963 entitled "Adaptive Observation of Behavioral Features on a Mobile Device" filed January 25, 2013 and U.S. Provisional Application No. 61/683,274, entitled "System, Apparatus and Method for Adaptive Observation of Mobile Device Behavior" filed August 15, 2012, the entire contents of all of which are hereby incorporated by reference for all purposes.

[0002] This application also claims the benefit of priority to U.S. Provisional Application No. 61/882,833, entitled "Adaptive Observation of Driver and Hardware Level Behavioral Features on a Mobile Device" filed September 26, 2013, the entire contents of which are hereby incorporated by reference for all purposes.

BACKGROUND

[0003] Cellular and wireless communication technologies have seen explosive growth over the past several years. This growth has been fueled by better communications, hardware, larger networks, and more reliable protocols. Wireless service providers are now able to offer their customers an ever-expanding array of features and services, and provide users with unprecedented levels of access to information, resources, and communications. To keep pace with these service enhancements, mobile electronic devices (e.g., cellular phones, tablets, laptops, etc.) have become more powerful and complex than ever. This complexity has created new opportunities for malicious software, software conflicts, hardware faults, and other similar errors or phenomena that can negatively impact a mobile device's long-term and continued performance and power utilization levels.
Therefore, identifying and correcting the conditions and/or mobile device behaviors that may negatively impact the mobile device's long-term and continued performance and power utilization levels is beneficial to consumers.

SUMMARY

[0004] The various aspects include methods, devices and systems for adaptive observations of behavior features of mobile devices in order to efficiently identify, prevent, and/or correct the conditions and/or mobile device behaviors that often degrade a mobile device's performance and/or power utilization levels over time. An aspect includes a method for observing mobile device behaviors over a period of time to recognize mobile device behaviors inconsistent with normal operation patterns. This aspect method may include dynamically selecting for observation one or more mobile device behaviors from the group of mobile device operations, mobile device events, data network activity, system resource usage, mobile device state, inter-process communications, driver statistics, hardware component status, hardware counters, actions or operations of software applications, software downloads, changes to device or component settings, conditions and events at an application level, conditions and events at the radio level, and conditions and events at the sensor level, and adaptively observing the mobile device behaviors to identify a suspicious mobile device behavior from a limited set of observations.

[0005] In an aspect method, the mobile device operations may include one or more of library application programming interface (API) calls in an application framework or run-time library, system call APIs, file-system and networking sub-system operations, file system activity, searches for filenames, categories of file accesses, creating files, deleting files, file read/write/seek operations, and changing file permissions.

[0006] In an aspect method, the mobile device events may include one or more of device state changes and sensor device state changes.
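The dynamic-selection step described in paragraph [0004] — coarse observation across many behavior categories, then narrowing to a limited set of behaviors inconsistent with normal operation — can be sketched as follows. All names, categories, and thresholds here are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: narrow observation to the limited set of
# behaviors whose activity deviates from a per-category "normal"
# baseline. Baselines and the tolerance factor are assumptions.

BASELINES = {  # expected per-interval counts under normal operation
    "network_connections": 5,
    "file_writes": 40,
    "premium_sms_sent": 0,
    "camera_activations": 2,
}

def select_behaviors_for_observation(observed_counts, tolerance=2.0):
    """Return the limited set of behaviors whose observed activity is
    inconsistent enough with the baseline to warrant focused observation."""
    suspicious = []
    for behavior, baseline in BASELINES.items():
        count = observed_counts.get(behavior, 0)
        # a zero baseline means any activity at all is inconsistent
        if baseline == 0:
            if count > 0:
                suspicious.append(behavior)
        elif count > tolerance * baseline:
            suspicious.append(behavior)
    return suspicious

counts = {"network_connections": 4, "file_writes": 300, "premium_sms_sent": 3}
print(select_behaviors_for_observation(counts))
# → ['file_writes', 'premium_sms_sent']
```

The selected subset would then be observed adaptively at finer granularity, consistent with identifying suspicious behavior from a limited set of observations.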
In an aspect, data network activity may include one or more of types of connections, protocols, port numbers, server/client that the device is connected to, the number of connections, volume or frequency of communications, phone network activity, type and number of calls/messages sent, type and number of calls/messages received, type and number of calls/messages intercepted, call information, text messaging information, media messaging, user account information, transmissions, voicemail, and device identifiers.

[0007] In an aspect, the mobile device system resource usage may include one or more of monitoring the number of forks, memory access operations, and the number of files open. In an aspect method, the mobile device state may include one or more of display on/off state, locked/unlocked state, battery charge state, camera state, and microphone state.

[0008] In an aspect, the mobile device inter-process communications may include one or more of monitoring intents to crucial services, monitoring the degree of inter-process communications, and monitoring pop-up windows.
In an aspect, driver statistics may include statistics from drivers for one or more of cameras, sensors, electronic displays, WiFi communication components, data controllers, memory controllers, system controllers, access ports, peripheral devices, wireless communication components, and external memory chips.

[0009] In an aspect, the mobile device hardware component status may include one or more of cameras, sensors, electronic displays, WiFi communication components, data controllers, memory controllers, system controllers, access ports, timers, peripheral devices, wireless communication components, external memory chips, voltage regulators, oscillators, phase-locked loops, peripheral bridges, and other similar components used to support the processors and clients running on the mobile computing device.

[0010] In an aspect, the mobile device hardware counters may include one or more of hardware counters that denote the state or status of the mobile computing device and/or mobile device sub-systems, and special-purpose registers of processors/cores that are configured to store a count or state of hardware-related activities or events.

[0011] In an aspect, actions or operations of software applications may include monitoring of information used by software applications including one or more of location information, camera information, inertia information (i.e., information from sensors that observe or detect movements of the mobile device, such as data from an accelerometer, a gyroscope and/or an electronic compass), browser information, content of browser-based communications, content of voice-based communications, short range radio communications, content of text-based communications, content of recorded audio files, phonebook or contact information, contacts lists, calendar information, recorded audio information, notifications communicated to and from a software application, user verifications, and a user password.

[0012] In an aspect, software downloads may include
one or more of software downloads from an application download server, and a first software application requesting the downloading and/or install of a second software application.

[0013] In an aspect, changes to device or component settings may include changes to one or more of compass information, mobile device settings, battery life, gyroscope information, pressure sensors, and screen activity.

[0014] In an aspect, conditions and events at the application level may include one or more of observing the user via facial recognition software, observing social streams, observing notes entered by the user, observing events pertaining to use of an electronic payment service (such as PassBook, Google Wallet, and PayPal), observing events relating to use of virtual private networks, synchronization, voice searches, voice control, language translators, recognizing user gestures such as through camera images, touchscreen interactions, or sensors that track user hands or fingers in close proximity to the mobile device, offloading of data for computations, video streaming, camera usage without user activity, and microphone usage without user activity.

[0015] In an aspect, conditions and events at the radio level may include determining the presence, existence or amount of any or all of: user interaction with the mobile device before establishing radio communication links or transmitting information, multiple subscriber identity module cards, Internet radio, mobile phone tethering, offloading data for computations, device state communications, the use as a game controller or home controller, vehicle communications, mobile device synchronization, monitoring the use of radios (WiFi, WiMax, Bluetooth, etc.)
for positioning, peer-to-peer (p2p) communications, synchronization, vehicle-to-vehicle communications, and/or machine-to-machine (m2m) communications, and monitoring network traffic usage, statistics, or profiles.

[0016] In an aspect, conditions and events at the sensor level may include one or more of monitoring magnet sensors, detecting near-field communications, collecting information from a credit card scanner, barcode scanner, or mobile tag reader, detecting the presence of a universal serial bus (USB) power charging source, detecting that a keyboard or auxiliary device has been coupled to the mobile device, detecting that the mobile device has been coupled to a computing device (e.g., via USB, etc.), determining if an LED, flash, flashlight, or light source has been modified or disabled (e.g., maliciously disabling an emergency signaling app, etc.), determining if a speaker or microphone has been turned on or powered, detecting a charging or power event, detecting that the mobile device is being used as a game controller, collecting information from medical purpose/healthcare sensors or from scanning the user's body, collecting information from an external sensor plugged into one of a USB port and an audio jack, collecting information from a tactile or haptic sensor, monitoring communications with and/or behaviors of hardware components coupled to the computing device via the USB or a wireless transceiver (e.g., WiFi, Bluetooth, NFC, etc.), and collecting information pertaining to the thermal state of the mobile device.

[0017] In an aspect, dynamically selecting for observation one or more mobile device behaviors may include observing mobile device behaviors over the period of time, and identifying a limited set of behaviors associated with inconsistent operations as the mobile device behaviors to be observed.

[0018] In an aspect, identifying a limited set of behaviors associated with inconsistent operations as the mobile device behaviors to be observed may
include receiving behavior inputs from one or more of a high-level application, a system kernel and a driver API after filtering by an adaptive filter, receiving context information regarding operations of the mobile device, performing spatial correlations of the received behavior inputs and the received context input, and generating a behavior vector.

[0019] In an aspect, generating a behavior vector may include generating a vector data structure that succinctly describes the observed mobile device behaviors. In an aspect, generating a behavior vector may include generating a vector that may include information collected from APIs at various levels/modules of the mobile device. In an aspect, generating a behavior vector may include generating a vector that may include information pertaining to one or more of library API calls, system calls, file-system and networking sub-system operations, sensor device state changes, file system activity, network activity, telephone activity, memory access operations, a state of the mobile device, a power on/off state of an electronic display of the mobile device, a locked/unlocked state of the mobile device, an amount of battery power remaining, inter-process communications, driver statistics, and hardware counters.

[0020] In an aspect, generating a behavior vector may include generating a vector data structure that may include a series of numbers, each of which signifies a feature or a behavior of the mobile device.
In an aspect, at least one of the series of numbers identifies one or more of whether a camera of the mobile device is in use or not in use, how much network traffic has been generated by the mobile device, and how many internet messages have been sent from the mobile device.

[0021] In an aspect, generating a behavior vector may include generating a vector that may include at least one of call information, text messaging information, media messaging information, user account information, location information, camera information, browser information, and inertia information. Inertia information may be information from sensors that observe or detect movements of the mobile device, such as data from an accelerometer, a gyroscope, an electronic compass, a camera in which images are processed to detect movements of the background, pressure sensors, Global Positioning System (GPS) receivers, and modules or services that can detect changes in position or movement from wireless signals from a cellular network (e.g., processing of signals to detect Doppler shift, changes in cell IDs, and device location information provided by the network), to name some non-limiting examples. In an aspect, generating a behavior vector may include generating a vector that may include information collected at an application level of the mobile device. In an aspect, generating a behavior vector may include generating a vector that may include information collected at a radio level of the mobile device. In an aspect, generating a behavior vector may include generating a vector that may include information collected at a sensor level of the mobile device.
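The behavior-vector data structure described above — a fixed-order series of numbers, each signifying one observed feature or behavior — can be sketched as follows. The feature names and values are illustrative assumptions drawn from the examples in paragraphs [0019] and [0020], not a disclosed encoding.

```python
# Illustrative sketch of a behavior vector: a fixed-order series of
# numbers, each of which signifies a feature or behavior of the device.

BEHAVIOR_FEATURES = [
    "camera_in_use",           # 1 if the camera is in use, else 0
    "network_bytes_sent",      # network traffic generated by the device
    "internet_messages_sent",  # how many internet messages were sent
    "display_on",              # power on/off state of the display
    "battery_percent",         # amount of battery power remaining
]

def build_behavior_vector(observations):
    """Succinctly encode observed behaviors as a series of numbers in the
    fixed feature order, defaulting to 0 for unobserved features."""
    return [float(observations.get(name, 0)) for name in BEHAVIOR_FEATURES]

vector = build_behavior_vector(
    {"camera_in_use": 1, "network_bytes_sent": 52400, "battery_percent": 87}
)
print(vector)  # → [1.0, 52400.0, 0.0, 0.0, 87.0]
```

The fixed ordering is what lets a classifier model compare vectors position by position across observation intervals.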
[0022] In an aspect, identifying a limited set of behaviors associated with inconsistent operations as the mobile device behaviors to be observed further may include performing temporal correlations of the received behavior inputs and the received context input, wherein generating a behavior vector may include generating a behavior vector based on a result of the spatial and temporal correlations.

[0023] A further aspect method may include observing mobile device behaviors over a period of time to recognize mobile device behaviors inconsistent with normal operation patterns, including determining in a processor of a mobile device a feature that is to be observed in the mobile device in order to identify a suspicious behavior of the mobile device, and adaptively observing the determined feature by collecting behavior information from a hardware component associated with the determined feature. In an aspect, adaptively observing the determined feature by collecting behavior information from the hardware component may include collecting behavior information from one or more of: an inertia sensor component; a battery hardware component; a browser supporting hardware component; a camera hardware component; a single or dual subscriber identity module (SIM) hardware component; a location hardware component; a microphone hardware component; a radio interface hardware component; a speaker hardware component; a screen hardware component; a synchronization hardware component; a storage component; a universal serial bus hardware component; a user interaction hardware component (e.g., touchscreen, camera, near-surface sensors); an inertia sensor driver component; a battery hardware driver component; a browser supporting hardware driver component; a camera hardware driver component; a single or dual SIM hardware driver component; a location hardware driver component; a microphone hardware driver component; a radio interface hardware driver component; a speaker hardware driver component; a screen hardware driver component; a
synchronization hardware driver component; a storage driver component; a universal serial bus hardware driver component; and a user interaction hardware driver component.

[0024] As used herein, the term inertia sensor component (i.e., a component that can provide inertia sensor information) refers to any one or combination of sensors or modules that may observe or detect movements of the mobile device. Non-limiting examples of inertia sensor components include an accelerometer, a gyroscope, an electronic compass, a camera in which images are processed to detect movements of the background, pressure sensors, a GPS (or other satellite-based location system) receiver, and a module or service that can detect changes in position or movement from wireless signals from a cellular network (e.g., processing of signals to detect Doppler shift, changes in cell IDs, and device location information provided by the network).

[0025] In an aspect, behavior information may be collected from multiple radio interface hardware components when the computing device includes multiple radio components to enable communications via multiple different RF technologies and protocols. For example, behavior information may be collected from multiple radio interface hardware components each supporting one of cellular telephone (e.g., 3G, UMTS, CDMA, etc.), WiFi, WiMax, Near Field Communication (NFC), personal area network, and Bluetooth communications. For ease of reference, the different types of transceivers and modems supporting the different types of RF communications may be referred to collectively as simply radio interface hardware components.

[0026] In an aspect, behavior information may be collected from a single radio interface hardware component supporting multiple different RF technologies and protocols.
For example, a computing device may include a multifunction radio module that is configured to support RF communications over multiple frequencies, networks and protocols, such as a radio interface hardware component that enables communications via WiFi, Bluetooth, NFC, and cellular data networks (e.g., GSM, WCDMA, etc.). In such implementations, the information regarding the RF communication behaviors (e.g., transmissions and receptions) of each of the various types of RF communications supported by the radio interface hardware component may be obtained from that single component. Thus, a single radio interface hardware component may be monitored for behaviors related to personal area networks, NFC links, and wide area networks.

[0027] In an aspect, user interactions may be received by a computing device in the form of gesture inputs, such as hand, arm, and/or finger gestures that are detected by an appropriate sensor (e.g., a camera, wireless position sensors on the user's wrists, touchscreens, and/or sensors that can detect the location of a user's fingers or hand in close proximity to the device).

[0028] In an aspect, collecting behavior information from the hardware component associated with the feature may include collecting information from a log of application programming interface (API) calls that temporarily or permanently stores API call information for the access or use of the hardware component by software applications of the mobile device.

[0029] In an aspect, determining the feature that is to be observed in the mobile device to identify the suspicious behavior of the mobile device may include applying machine learning techniques to generate a first family of classifier models that describe a cloud corpus of behavior vectors, determining which factors in the first family of classifier models have the highest probability of enabling a mobile device to conclusively determine whether a mobile device behavior is malicious or benign, generating a second family of
classifier models that identify significantly fewer factors and data points as being relevant for enabling the mobile device to conclusively determine whether the mobile device behavior is malicious or benign based on the determined factors, generating a mobile device classifier model based on the second family of classifier models, and using the generated classifier model to identify the feature that is to be observed. In an aspect, the method may further include using the generated classifier model to analyze the collected behavior information.

[0030] A further aspect includes a mobile computing device having a multi-core processor including two or more processor cores, one or more of which is configured with processor-executable instructions to perform operations of the methods described above. A further aspect includes a mobile device having means for performing the functions and operations of the methods described above. A further aspect includes a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor to perform operations of the methods described above.

BRIEF DESCRIPTION OF THE DRAWINGS

[0031] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary aspects of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the invention.

[0032] FIG. 1 is an architectural diagram of an example system on chip suitable for implementing the various aspects.

[0033] FIG. 2 is a block diagram illustrating example logical components and information flows in a computing system configured to perform dynamic and adaptive observations in accordance with the various aspects.

[0034] FIG.
3 is a block diagram illustrating example logical components and information flows in an observer module configured to perform dynamic and adaptive observations in accordance with an aspect.

[0035] FIG. 4 is a block diagram illustrating logical components and information flows in a computing system implementing observer modules in accordance with an aspect.

[0036] FIGs. 5A through 8B are block diagrams illustrating logical components and information flows in a computing system implementing observer modules and observer daemons in accordance with the various aspects.

[0037] FIG. 9A is a process flow diagram illustrating an aspect method for performing adaptive observations on mobile devices.

[0038] FIG. 9B is a process flow diagram illustrating another aspect method for performing adaptive observations on mobile devices.

[0039] FIG. 10 is a process flow diagram illustrating another aspect method for performing adaptive observations on mobile devices.

[0040] FIGs. 11A-11C are process flow diagrams illustrating further aspect methods for performing adaptive observations on mobile devices.

[0041] FIG. 12 is a component block diagram of a mobile device suitable for use with the various aspects.

[0042] FIG. 13 is an illustration of an example mobile device suitable for use with the various aspects.

[0043] FIG. 14 is an illustration of an example server computer suitable for use with the various aspects.

DETAILED DESCRIPTION

[0044] The various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.

[0045] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations.

[0046] The terms "mobile computing device" and "mobile device" are used interchangeably herein to refer to any one or all of cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, smartbooks, ultrabooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, wireless gaming controllers, and similar personal electronic devices which include a memory and a programmable processor for which performance is important, and which operate under battery power such that power conservation methods are of benefit. While the various aspects are particularly useful for mobile computing devices, such as smartphones, which have limited resources and run on battery, the aspects are generally useful in any electronic device that includes a processor and executes application programs.

[0047] Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various aspects may be written in a high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages.
Program code or programs stored on a computer readable storage medium are used herein to refer to machine language code (such as object code) whose format is understandable by a processor.

[0048] The term "performance degradation" is used herein to refer to a wide variety of undesirable mobile device operations and characteristics, such as longer processing times, lower battery life, loss of private data, malicious economic activity (e.g., sending unauthorized premium SMS messages), operations relating to commandeering the mobile device or utilizing the phone for spying or botnet activities, etc.

[0049] The term "system on chip" (SOC) is used herein to refer to a single integrated circuit (IC) chip that contains multiple resources and/or processors integrated on a single substrate. A single SOC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions. A single SOC may also include any number of general purpose and/or specialized processors (digital signal processors, modem processors, video processors, etc.), memory blocks (e.g., ROM, RAM, Flash, etc.), and resources (e.g., timers, voltage regulators, oscillators, etc.). SOCs may also include software for controlling the integrated resources and processors, as well as for controlling peripheral devices.

[0050] The term "multicore processor" is used herein to refer to a single integrated circuit (IC) chip or chip package that contains two or more independent processing cores (e.g., CPU cores) configured to read and execute program instructions. An SOC may include multiple multicore processors, and each processor in an SOC may be referred to as a core. The term "multiprocessor" is used herein to refer to a system or device that includes two or more processing units configured to read and execute program instructions.

[0051] Generally, the performance and power efficiency of a mobile device degrades over time. Recently, anti-virus companies (e.g., McAfee, Symantec, etc.)
have begun marketing mobile anti-virus, firewall, and encryption products that aim to slow this degradation. However, many of these solutions rely on the periodic execution, on the mobile device, of a computationally-intensive scanning engine that may consume many of the mobile device's processing and battery resources, slow or render the mobile device useless for extended periods of time, and/or otherwise degrade the user experience. In addition, these solutions are typically limited to detecting known viruses and malware, and do not address the multiple complex factors and/or the interactions that often combine to contribute to a mobile device's degradation over time (e.g., when the performance degradation is not caused by viruses or malware). For these and other reasons, existing anti-virus, firewall, and encryption products do not provide adequate solutions for identifying the numerous factors that may contribute to a mobile device's degradation over time, for preventing mobile device degradation, or for efficiently restoring an aging mobile device to its original condition. [0052] Various other solutions exist for modeling the behavior of processes or application programs executing on a computing device, and such behavior models may be used to differentiate between malicious and benign processes/programs on computing devices. However, these existing modeling solutions are not suitable for use on mobile devices because such solutions generally require the execution of computationally-intensive processes that consume a significant amount of processing, memory, and energy resources, all of which may be scarce on mobile devices. In addition, these solutions are generally limited to evaluating the behavior of individual application programs or processes, and do not provide an accurate or complete model of the performance-degrading mobile device behaviors.
For these and other reasons, existing modeling solutions are not adequate for identifying the numerous factors that may contribute to a mobile device's degradation over time, for preventing mobile device degradation, or for efficiently restoring an aging mobile device to its original condition. [0053] There are a variety of factors that may contribute to the degradation in performance and power utilization levels of a mobile device over time, including poorly designed software applications, malware, viruses, fragmented memory, background processes, etc. However, due to the complexity of modern mobile devices, it is increasingly difficult for users, operating systems, and/or application programs (e.g., anti-virus software, etc.) to accurately and efficiently identify the sources of such problems and/or to provide adequate remedies to identified problems. As a result, mobile device users currently have few remedies for preventing the degradation in performance and power utilization levels of a mobile device over time, or for restoring an aging mobile device to its original performance and power utilization levels. [0054] The various aspects provide devices, systems, and methods for efficiently identifying, preventing, and/or correcting the conditions and/or mobile device behaviors that often degrade a mobile device's performance and/or power utilization levels over time. [0055] As mentioned above, mobile devices are resource constrained systems that have relatively limited processing, memory, and energy resources. As also mentioned above, modern mobile devices are complex systems, and there are a large number (i.e., thousands) of factors that may contribute to the mobile device's degradation over time.
Due to these constraints, it is often not feasible to monitor/observe all the various processes, behaviors, or factors (or combinations thereof) that may degrade performance and/or power utilization levels of the complex yet resource-constrained systems of modern mobile devices. [0056] To overcome the above-mentioned limitations of existing solutions, the various aspects intelligently, dynamically, and/or adaptively determine the mobile device behaviors that are to be observed, the number of behaviors that are to be observed, and the level of detail (i.e., granularity) at which the mobile device behaviors are to be observed. The various aspects efficiently identify suspicious or performance-degrading mobile device behaviors without consuming an excessive amount of processing, memory, or energy resources. Various aspects may correct suspicious or performance-degrading mobile device behaviors. Various aspects may prevent the identified suspicious or performance-degrading mobile device behaviors from degrading the performance and power utilization levels of a mobile device over time. Various aspects may restore an aging mobile device to its original performance and power utilization levels. [0057] In an aspect, a mobile device processor may be configured to observe any or all of library application programming interface (API) calls, system call APIs, file-system operations, networking sub-system operations, driver API calls for the numerous sensors, state changes, and other similar events/operations at a high level, and perform real-time behavior analysis operations based on these high level observations to identify programs/processes that may contribute to the mobile device's degradation over time (e.g., programs that are actively malicious, poorly written, etc.).
The mobile device processor may be configured to intelligently increase the level of detail (i.e., granularity) at which the mobile device behaviors are to be observed until enough information is available to identify and/or correct the cause of a suspicious or performance-degrading mobile device behavior. [0058] In an aspect, the mobile device processor may be configured to dynamically change the set of observed behaviors (e.g., by selecting new behaviors to observe, observing fewer behaviors, etc.) based on the results of the on-line real-time analysis operations and/or the availability of system resources. [0059] In various aspects, the mobile device processor may be configured to dynamically adjust the observation granularity (i.e., the level of detail at which mobile device behaviors are observed) based on the results of the real-time analysis operations and/or based on the availability of system resources. For example, in various aspects, the mobile device processor may be configured to recursively increase the granularity of one or more observations (i.e., make finer or more detailed observations) until a source of a suspicious or performance-degrading mobile device behavior is identified, until a processing threshold is reached, or until the mobile device processor determines that the source of the suspicious or performance-degrading mobile device behavior cannot be identified from further increases in observation granularity. [0060] In an aspect, the mobile device processor may be configured to dynamically adjust the observation granularity based on the availability of system resources. For example, the mobile device processor may be configured to increase the observation granularity in response to determining that mobile device resources are available or underutilized, or that the mobile device is currently connected to a power supply.
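The recursive granularity escalation described in paragraphs [0059] and [0060] can be sketched as a simple loop. This is a hypothetical illustration only; the function names (`observe`, `analyze`, `find_degradation_source`) and the threshold value are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch: recursively increase observation granularity until a
# source is identified or a processing threshold is reached (per [0059]).
# All names and values here are hypothetical, not from the disclosure.

MAX_GRANULARITY = 5          # assumed processing threshold on observation depth

def observe(behavior, granularity):
    """Stand-in for the observer module: returns observation data at the
    requested level of detail."""
    return {"behavior": behavior, "granularity": granularity}

def analyze(observation):
    """Stand-in for the analyzer module: pretends that granularity 3 or
    finer supplies enough information to identify a source."""
    if observation["granularity"] >= 3:
        return "source_identified"
    return "suspicious"

def find_degradation_source(behavior):
    granularity = 1
    while granularity <= MAX_GRANULARITY:
        result = analyze(observe(behavior, granularity))
        if result != "suspicious":
            return result, granularity
        granularity += 1     # make finer, more detailed observations
    return "unidentified", granularity
```

In this sketch a coarse observation (granularity 1) that the analyzer cannot classify triggers progressively finer observations until the stand-in analyzer reports an identified source at granularity 3.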
As another example, the mobile device processor may be configured to reduce the observation granularity in response to determining that the computing device is under heavy load or low on battery. [0061] In an aspect, an observer process, daemon, module, or sub-system (herein collectively referred to as a "module") of the mobile device may instrument or coordinate various application programming interfaces (APIs) at various levels of the mobile device system, and collect behavior information from the instrumented APIs. In an aspect, the mobile device may also include an analyzer module, and the analyzer module may generate one or more classifiers. The observer module may communicate (e.g., via a memory write operation, function call, etc.) the collected behavior information to the classifier module and/or the analyzer module of the mobile device, which may analyze and/or classify the collected behavior information, generate behavior vectors, generate spatial and/or temporal correlations based on the behavior vector and information collected from various other mobile device sub-systems, and/or determine whether a particular mobile device behavior, software application, or process is benign, suspicious, or malicious/performance-degrading. In various aspects, the generated behavior vectors and spatial/temporal correlations may be used by various modules (e.g., by an actuation module, etc.) of the mobile device to identify and/or respond to behaviors that are determined to have a high probability of negatively impacting the mobile device's performance or battery consumption levels. [0062] The analyzer module of the mobile device may be configured to perform real-time analysis operations, which may include applying data, algorithms, and/or behavior models to behavior information collected by the observer module to determine whether a mobile device behavior is benign, suspicious, or malicious/performance-degrading.
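The API instrumentation described in paragraph [0061], in which an observer module collects behavior information from instrumented APIs, might be sketched with a logging wrapper. This is a minimal hypothetical illustration; the decorator name, the logged fields, and the example API are all assumptions.

```python
# Illustrative sketch of API instrumentation per [0061]: a wrapper records
# each call to an instrumented API before forwarding it. Names are
# hypothetical, not the disclosure's implementation.
import functools

collected_behavior = []   # behavior information gathered by the observer

def instrument(api_name):
    """Wrap an API so that each call is logged with its arguments."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            collected_behavior.append((api_name, args))   # collect behavior info
            return func(*args, **kwargs)                  # forward the call
        return wrapper
    return decorator

@instrument("file_open")
def open_file(path):
    """Stand-in for a file-system API."""
    return f"handle:{path}"

open_file("/tmp/a")
open_file("/tmp/b")
```

The collected tuples could then be filtered and passed to an analyzer/classifier module, as the surrounding paragraphs describe.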
In an aspect, the analyzer module may be configured to determine that a mobile device behavior is suspicious when the classifier does not have sufficient information to classify or conclusively determine that the behavior is either benign or malicious. In an aspect, the analyzer module may be configured to communicate the results of its real-time analysis operations to the observer module when it determines that a device behavior is suspicious. The observer module may adjust the granularity of its observations (i.e., the level of detail at which mobile device behaviors are observed) and/or change the behaviors that are observed based on information received from the analyzer module (e.g., results of the real-time analysis operations), generate or collect new or additional behavior information, and send the new/additional information to the classifier module for further analysis/classification. [0063] Such feedback communications between the observer and analyzer modules (e.g., the analyzer module sending the results of its real-time analysis operations to the observer module, and the observer module sending updated behavior information to the analyzer module) may enable a mobile device processor to recursively increase the granularity of the observations (i.e., make finer or more detailed observations) or change the features/behaviors that are observed until a source of a suspicious or performance-degrading mobile device behavior is identified, until a processing or battery consumption threshold is reached, or until the mobile device processor determines that the source of the suspicious or performance-degrading mobile device behavior cannot be identified from further increases in observation granularity.
Such feedback communications also enable the mobile device processor to adjust or modify the data/behavior models locally in the mobile device without consuming an excessive amount of the mobile device's processing, memory, or energy resources. [0064] In various aspects, the observer module and/or analyzer module may generate behavior vectors that include a concise definition of the observed behaviors. That is, a behavior vector may succinctly describe the observed behavior of the mobile device, software application, or process in a value or vector data-structure (e.g., in the form of a string of numbers, etc.). A behavior vector may also function as an identifier that enables the mobile device system to quickly recognize, identify, and/or analyze mobile device behaviors. In an aspect, the observer module and/or analyzer module may generate a behavior vector that includes a series of numbers, each of which signifies a feature or a behavior of the mobile device. For example, numbers included in the behavior vector may signify whether a camera of the mobile device is in use (e.g., as zero or one), how much network traffic has been transmitted from or generated by the mobile device (e.g., 20 KB/sec, etc.), how many internet messages have been communicated (e.g., number of SMS messages, etc.), etc. [0065] The various aspects may be implemented in a number of different mobile devices, including single processor and multiprocessor systems, and a system-on-chip (SOC). FIG. 1 is an architectural diagram illustrating an example system-on-chip (SOC) 100 architecture that may be used in computing devices implementing the various aspects. The SOC 100 may include a number of heterogeneous processors, such as a digital signal processor (DSP) 102, a modem processor 104, a graphics processor 106, and an application processor 108.
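The behavior-vector encoding described in paragraph [0064], a series of numbers each signifying one observed feature, can be sketched as follows. The feature set and field names are illustrative assumptions taken from the examples in that paragraph.

```python
# Illustrative sketch of the behavior vector of [0064]: each position in
# the vector signifies one feature of the mobile device's behavior.
# The chosen features mirror the paragraph's examples and are assumptions.

def build_behavior_vector(camera_in_use, network_kb_per_sec, sms_count):
    """Encode observed features as a concise series of numbers:
    [camera in use (0/1), network traffic (KB/sec), SMS messages sent]."""
    return [1 if camera_in_use else 0, network_kb_per_sec, sms_count]

vector = build_behavior_vector(camera_in_use=True,
                               network_kb_per_sec=20,
                               sms_count=3)
```

Such a compact vector lets the analyzer compare observed behavior against behavior models without retaining the raw event stream.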
The SOC 100 may also include one or more coprocessors 110 (e.g., a vector coprocessor) connected to one or more of the heterogeneous processors 102, 104, 106, 108. Each processor 102, 104, 106, 108, 110 may include one or more cores, and each processor/core may perform operations independent of the other processors/cores. For example, the SOC 100 may include a processor that executes a first type of operating system (e.g., FreeBSD, UNIX, OS X, etc.) and a processor that executes a second type of operating system (e.g., Microsoft Windows 8). [0066] The SOC 100 may also include analog circuitry and custom circuitry 114 for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as processing encoded audio signals for games and movies. The SOC 100 may further include system components and resources 116, such as voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, access ports, timers, and other similar components used to support the processors and clients running on a computing device. [0067] The system components 116 and custom circuitry 114 may include circuitry to interface with peripheral devices, such as cameras, electronic displays, wireless communication devices, external memory chips, etc. The processors 102, 104, 106, 108 may be interconnected to one or more memory elements 112, system components and resources 116, and custom circuitry 114 via an interconnection/bus module 124, which may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high performance networks-on-chip (NoCs). [0068] The SOC 100 may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as a clock 118 and a voltage regulator 120.
Resources external to the SOC (e.g., clock 118, voltage regulator 120) may be shared by two or more of the internal SOC processors/cores (e.g., DSP 102, modem processor 104, graphics processor 106, application processor 108, etc.). [0069] The SOC 100 may also include hardware and/or software components suitable for collecting sensor data from sensors, including speakers, user interface elements (e.g., input buttons, touch screen display, etc.), microphone arrays, sensors for monitoring physical conditions (e.g., location, direction, motion, orientation, vibration, pressure, etc.), cameras, compasses, GPS receivers, inertia sensor components, communications circuitry (e.g., Bluetooth®, WLAN, WiFi, etc.), and other well known components of modern electronic devices. [0070] In addition to the SOC 100 discussed above, the various aspects may be implemented in a wide variety of computing systems, which may include a single processor, multiple processors, multicore processors, or any combination thereof. [0071] FIG. 2 illustrates example logical components and information flows in a computing system 200 configured to perform dynamic and adaptive observations in accordance with the various aspects. In the example illustrated in FIG. 2, the computing system 200 includes a coarse observer module 202, an analyzer module 204, an external context information module 206, and an actuation module 208. [0072] Each of the modules 202-208 may be implemented in software, hardware, or any combination thereof. In various aspects, the modules 202-208 may be implemented within parts of the operating system (e.g., within the kernel, in the kernel space, in the user space, etc.), within separate programs or applications, in specialized hardware buffers or processors, or any combination thereof.
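The information flow among the four modules of FIG. 2 (observer 202, analyzer 204, external context 206, actuation 208) might be sketched as a small pipeline. Everything below is a hypothetical stand-in; the observation values, context fields, and decision rule are assumptions for illustration only.

```python
# Illustrative sketch of the FIG. 2 module pipeline: observer and external
# context feed the analyzer, whose verdict drives the actuation module.
# The module bodies are hypothetical stand-ins, not the real modules.

def observer_module():
    """Stand-in observer 202: reports a coarse observation."""
    return {"event": "premium_sms", "count": 12}

def external_context_module():
    """Stand-in external context 206: reports device context."""
    return {"display": "off", "user_present": False}

def analyzer_module(observation, context):
    """Stand-in analyzer 204: flags heavy activity with no user present."""
    if observation["count"] > 10 and not context["user_present"]:
        return "suspicious"
    return "benign"

def actuation_module(verdict):
    """Stand-in actuation 208: responds to the analyzer's verdict."""
    return "restrict_process" if verdict == "suspicious" else "no_action"

verdict = analyzer_module(observer_module(), external_context_module())
action = actuation_module(verdict)
```

Here twelve premium SMS messages sent while no user is present are flagged as suspicious, and the actuation stand-in restricts the offending process, mirroring the roles the surrounding paragraphs assign to each module.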
In an aspect, one or more of the modules 202-208 may be implemented as software instructions executing on one or more processors of the mobile device 102. [0073] The behavior observer module 202 may be configured to instrument or coordinate APIs at various levels/modules of the mobile device, and monitor/observe mobile device operations and events (e.g., system events, state changes, etc.) at the various levels/modules via the instrumented APIs, collect information pertaining to the observed operations/events, intelligently filter the collected information, generate one or more observations based on the filtered information, store the generated observations in a memory (e.g., in a log file, cache memory, etc.), and/or send (e.g., via memory writes, function calls, etc.) the generated observations to the behavior analyzer module 204. [0074] The behavior observer module 202 may monitor/observe mobile device operations and events by collecting information pertaining to library application programming interface (API) calls in an application framework or run-time libraries, system call APIs, file-system and networking sub-system operations, device (including sensor devices) state changes, and other similar events. The behavior observer module 202 may also monitor file system activity, which may include searching for filenames, categories of file accesses (personal info or normal data files), creating or deleting files (e.g., type exe, zip, etc.), file read/write/seek operations, changing file permissions, etc. [0075] The behavior observer module 202 may also monitor/observe data network activity, which may include types of connections, protocols, port numbers, the server/client that the device is connected to, the number of connections, volume or frequency of communications, etc. The behavior observer module 202 may monitor phone network activity, which may include monitoring the type and number of calls or messages (e.g., SMS, etc.)
sent out, received, or intercepted (e.g., the number of premium calls placed). [0076] The behavior observer module 202 may also monitor/observe the system resource usage, which may include monitoring the number of forks, memory access operations, number of files open, etc. The behavior observer module 202 may monitor the state of the mobile device, which may include monitoring various factors, such as whether the display is on or off, whether the device is locked or unlocked, the amount of battery remaining, the state of the camera, etc. The behavior observer module 202 may also monitor inter-process communications (IPC) by, for example, monitoring intents to crucial services (browser, contacts provider, etc.), the degree of inter-process communications, pop-up windows, etc. [0077] The behavior observer module 202 may also monitor/observe driver statistics and/or the status of one or more hardware components, which may include cameras, sensors, electronic displays, WiFi communication components, data controllers, memory controllers, system controllers, access ports, timers, peripheral devices, wireless communication components, external memory chips, voltage regulators, oscillators, phase-locked loops, peripheral bridges, and other similar components used to support the processors and clients running on the mobile computing device. [0078] The behavior observer module 202 may also monitor/observe one or more hardware counters that denote the state or status of the mobile computing device and/or mobile device sub-systems.
A hardware counter may include a special-purpose register of the processors/cores that is configured to store a count or state of hardware-related activities or events occurring in the mobile computing device. [0079] The behavior observer module 202 may also monitor/observe actions or operations of software applications, software downloads from an application download server (e.g., the Apple® App Store server), mobile device information used by software applications, call information, text messaging information (e.g., SendSMS, BlockSMS, ReadSMS, etc.), media messaging information (e.g., ReceiveMMS), user account information, location information, camera information, inertia information, browser information, content of browser-based communications, content of voice-based communications, short range radio communications (e.g., Bluetooth, WiFi, etc.), content of text-based communications, content of recorded audio files, phonebook or contact information, contacts lists, etc. [0080] The behavior observer module 202 may monitor/observe transmissions or communications of the mobile device, including communications that include voicemail (VoiceMailComm), device identifiers (DeviceIDComm), user account information (UserAccountComm), calendar information (CalendarComm), location information (LocationComm), recorded audio information (RecordAudioComm), inertia information such as accelerometer information (AccelerometerComm), etc. [0081] The behavior observer module 202 may monitor/observe usage of and updates/changes to compass information, mobile device settings, battery life, gyroscope information, pressure sensors, magnet sensors, screen activity, etc. The behavior observer module 202 may monitor/observe notifications communicated to and from a software application (AppNotifications), application updates, etc.
The behavior observer module 202 may monitor/observe conditions or events pertaining to a first software application requesting the download and/or installation of a second software application. The behavior observer module 202 may monitor/observe conditions or events pertaining to user verification, such as the entry of a password, etc. [0082] The mobile device processor may be configured to observe conditions or events at multiple levels of the mobile device, including the application level, radio level, and sensor level. Application level observations may include observing the user via facial recognition software, observing social streams, observing notes entered by the user, observing events pertaining to the use of an electronic payment service, such as PassBook/Google Wallet/PayPal, etc. Application level observations may also include observing events relating to the use of virtual private networks (VPNs) and events pertaining to synchronization, voice searches, voice control (e.g., locking/unlocking a phone by saying one word), language translators, the offloading of data for computations, video streaming, camera usage without user activity, microphone usage without user activity, etc. [0083] Radio level observations may include determining the presence, existence, or amount of any one or more of: user interaction with the mobile device before establishing radio communication links or transmitting information, single, dual, or multiple subscriber identity modules (SIMs) or SIM cards, Internet radio, mobile phone tethering, offloading data for computations, device state communications, use as a game controller or home controller, vehicle communications, mobile device synchronization, etc. Radio level observations may also include monitoring the use of radios (WiFi, WiMax, Bluetooth, etc.) for positioning, peer-to-peer (p2p) communications, synchronization, vehicle-to-vehicle communications, and/or machine-to-machine (m2m) communications.
Radio level observations may further include monitoring network traffic usage, statistics, or profiles. [0084] Sensor level observations may include monitoring a magnet sensor or other sensor to determine the usage and/or external environment of the mobile device. For example, the mobile device processor may be configured to determine whether the phone is in a holster (e.g., via a magnet sensor configured to sense a magnet within the holster) or in the user's pocket (e.g., via the amount of light detected by a camera or light sensor). Detecting that the mobile device is in a holster may be relevant to recognizing suspicious behaviors, for example, because activities and functions related to active usage by a user (e.g., taking photographs or videos, sending messages, conducting a voice call, recording sounds, etc.) occurring while the mobile device is holstered could be signs of nefarious processes executing on the device (e.g., to track or spy on the user). Other examples of sensor level observations related to usage or external environments include detecting near-field communications (NFC), collecting information from a credit card scanner, barcode scanner, or mobile tag reader, detecting the presence of a USB power charging source, detecting that a keyboard or auxiliary device has been coupled to the mobile device, detecting that the mobile device has been coupled to a computing device (e.g., via USB, etc.), determining whether a light emitting diode (LED), flash, flashlight, or light source has been modified or disabled (e.g., maliciously disabling an emergency signaling app, etc.), detecting that a speaker or microphone has been turned on or powered, detecting a charging or power event, detecting that the mobile device is being used as a game controller, monitoring communications with and/or behaviors of hardware components coupled to the computing device via USB or a wireless transceiver (e.g., WiFi, Bluetooth, or NFC), etc.
Sensor level observations may also include collecting information from medical or healthcare sensors or from scanning the user's body, collecting information from an external sensor plugged into the USB/audio jack or coupled via a wireless data link (e.g., WiFi, Bluetooth, or NFC), collecting information from a tactile or haptic sensor (e.g., via a vibrator interface, etc.), collecting information pertaining to the thermal state of the mobile device, etc. [0085] To reduce the number of factors monitored to a manageable level, in an aspect, the behavior observer module 202 may perform coarse observations by monitoring/observing an initial set of behaviors or factors that are a small subset of all the factors that could contribute to the mobile device's degradation. In an aspect, the behavior observer module 202 may receive the initial set of behaviors and/or factors from a network server 116 and/or a component in a cloud service provider network 118. In an aspect, the initial set of behaviors/factors may be specified in data/behavior models received from the network server 116 or cloud service provider network 118. [0086] The analyzer module 204 may include intelligence for utilizing the limited set of information (i.e., coarse observations) to identify behaviors, processes, or programs that are contributing to (or are likely to contribute to) the device's degradation over time, or which may otherwise cause problems on the device.
For example, the analyzer module 204 may be configured to analyze information (e.g., in the form of observations) collected from various modules (e.g., the observer module 202, external context information module 206, etc.), learn the normal operational behaviors of the mobile device, generate behavior models of the mobile device's behaviors, and compare the generated models to information/observations received from the observer module 202 to identify suspicious mobile device behaviors. [0087] As mentioned above, the observer module 202 may monitor/observe mobile device operations and events. In various aspects, observing mobile device operations and events may include collecting information pertaining to any or all of library API calls in an application framework or run-time libraries, system call APIs, file-system and networking sub-system operations, device (including sensor devices) state changes, and other similar events. In an aspect, the observer module 202 may monitor file system activity, which may include searching for filenames, categories of file accesses (personal info or normal data files), creating or deleting files (e.g., type exe, zip, etc.), file read/write/seek operations, changing file permissions, etc. In an aspect, the observer module 202 may monitor data network activity, which may include types of connections, protocols, port numbers, the server/client that the device is connected to, the number of connections, volume or frequency of communications, etc. In an aspect, the observer module 202 may monitor phone network activity, which may include monitoring the type and number of calls or messages (e.g., SMS, etc.) sent out, received, or intercepted (e.g., the number of premium calls placed). In an aspect, the observer module 202 may monitor the system resources that are used, which may include monitoring the number of forks, memory use, number of files open, etc.
In an aspect, the observer module 202 may monitor the device state, which may include monitoring various factors, such as whether the display is on or off, whether the device is locked or unlocked, the amount of battery remaining, the state of the camera, etc. In an aspect, the observer module 202 may also monitor inter-process communications (IPC) by, for example, monitoring intents to crucial services (browser, contacts provider, etc.), the degree of inter-process communications, pop-up windows, etc. [0088] To reduce the number of factors monitored to a manageable level, the observer module 202 may perform coarse observations by monitoring/observing a small subset of the factors that could contribute to the mobile device's degradation, and send the coarse observations to the analyzer module 204. In an embodiment, the initial set of behaviors and/or subset of the factors may be selected by analysis of benign and problematic applications on mobile devices. [0089] The analyzer module 204 may receive the coarse observations from the observer module 202 and identify subsystems, processes, and/or applications associated with the received coarse observations that may potentially contribute to the mobile device's degradation. This may be achieved by, for example, the analyzer module 204 comparing the received information with contextual information received from the external context information module 206. [0090] The analyzer module 204 may instruct the observer module 202 to perform or enable deeper logging/observations or finer logging on the identified subsystems, processes, or applications. The observer module 202 may perform deeper observations on the identified subsystems, processes, or applications. The observer module 202 may send the results of the deeper observations to the analyzer module 204 for further (and deeper) analysis.
These operations may be repeated until the source of a problem is identified or until it is determined that the identified subsystems, processes, or applications are not likely to cause problems or degradation. The analyzer module 204 may then send the results of the analysis to the actuation module 208, which may receive the results and perform operations to heal, cure, isolate, or otherwise fix the identified problem. [0091] In an aspect, the observer module 202 and the analyzer module 204 may provide, either individually or collectively, real-time behavior analysis of the computing system's behaviors to identify suspicious behavior from limited and coarse observations, to dynamically determine behaviors to observe in greater detail, and to dynamically determine the level of detail required for the observations. In this manner, the observer module 202 enables the computing system 200 to efficiently identify and prevent problems from occurring on mobile devices without requiring a large amount of processor, memory, or battery resources on the device. [0092] In an aspect, the observer module 202 may store the observations in a space-efficient and query-service-time-efficient manner to reduce the performance impact on benign applications. The observer module 202 may provide the system with various observer modes to enable multi-level logging (e.g., fine-grained and coarse-grained logging). The observer module 202 may provide the ability to automatically and dynamically switch between the different observer modes. The observer module 202 may monitor and restrict processes/applications that may exhaust system resources.
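The coarse-then-deeper observation cycle described in paragraphs [0089]-[0091], repeated until the source of a problem is identified and then handed to the actuation module, can be sketched as follows. The staged depth names and the rule that only the deepest logging reveals the source are illustrative assumptions.

```python
# Illustrative sketch of the repeated coarse-to-deeper observation cycle
# of [0089]-[0091]. Depth names and the analysis rule are hypothetical.

OBSERVATION_DEPTHS = ["coarse", "deep", "deeper"]

def observe_at(depth, process):
    """Stand-in observer: deeper logging yields more detail."""
    detail = OBSERVATION_DEPTHS.index(depth) + 1
    return {"process": process, "detail": detail}

def analyze(observation):
    """Stand-in analyzer: pretend the source is only identifiable from
    the most detailed observations."""
    return "identified" if observation["detail"] >= 3 else "inconclusive"

def diagnose(process):
    """Repeat observation at increasing depth until the source of the
    problem is identified, then hand a fix request to actuation."""
    for depth in OBSERVATION_DEPTHS:
        if analyze(observe_at(depth, process)) == "identified":
            return f"fix:{process}"     # result for the actuation module
    return "not_a_problem"
```

The loop terminates either with a result for the actuation module or with a determination that the identified process is not likely to cause degradation, mirroring the two exit conditions the text describes.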
The observer module 202 may manage communications (e.g., non-secure to secure world) overhead, such that the overhead is minimal and flow control is maintained/performed efficiently.

[0093] In an aspect, the analyzer module 204 may be configured to receive and analyze information collected by various mobile device sub-systems and/or over various time periods to learn the normal operational behaviors of the mobile device under a variety of contexts and conditions, and generate models of normal mobile device behaviors under the various contexts/conditions. In an aspect, the analyzer module 204 may be configured to correlate the received observations against the generated behavior models, and perform behavior analysis operations based on the correlations to determine whether the received observations conflict with (or do not match) the learned normal operational behaviors.

[0094] In various aspects, the mobile device may be configured to communicate with a network server, which may generate data/behavior models based on information received from a cloud service/network. The network server may send the generated data/behavior models to the mobile device, which may receive and implement, apply, or use the data/behavior models to identify suspicious or performance-degrading mobile device behaviors, software applications, processes, etc. The mobile device may then correct or prevent the identified performance-degrading mobile device behaviors from degrading the performance and power utilization levels of the mobile device.

[0095] In various aspects, the network server may be configured to generate or update the data/behavior models by performing, executing, and/or applying machine learning and/or context modeling techniques to behavior information and/or results of behavior analyses provided by many mobile devices.
Thus, the network server may receive a large number of reports from many mobile devices and analyze, consolidate or otherwise turn such crowd-sourced information into useable information, particularly a data set or behavior model that can be used and/or accessed by many mobile devices. The network server may continuously reevaluate existing data/behavior models as new behavior/analysis reports are received from mobile devices, and/or generate new or updated data/behavior models based on historical information (e.g., collected from prior executions, previous applications of behavior models, etc.), new information, machine learning, context modeling, and detected changes in the available information, mobile device states, environmental conditions, network conditions, mobile device performance, battery consumption levels, etc.

[0096] As mentioned above, mobile devices are resource-constrained systems that have relatively limited processing, memory, and energy resources. As also mentioned above, modern mobile devices are complex systems, and there may be thousands of features/factors and billions of data points that require analysis to properly identify the cause or source of a mobile device's degradation. Due to these constraints, it is often not feasible to monitor/observe all the various processes, behaviors, or factors (or combinations thereof) that may degrade performance and/or power utilization levels of the complex yet resource-constrained systems of modern mobile devices.

[0097] To provide better performance in view of these facts, the various aspects include mobile devices and network servers configured to work in conjunction with a cloud service or network (e.g., anti-virus partner, security partner, etc.)
to intelligently and efficiently identify factors that may contribute to the degradation in performance and power utilization levels of mobile devices over time. Various aspects may identify performance-degrading factors on the mobile device without consuming an excessive amount of processing, memory, or energy resources of the mobile device.

[0098] In an aspect, the analyzer module 204 may be configured to generate one or more classifiers as a function of a training dataset, which may include thousands of features and billions of entries. In an aspect, one or more classifiers may be generated from a reduced training dataset that includes only the features/entries that are most relevant for determining whether a particular mobile device behavior, software application, or process is benign, suspicious, or malicious/performance-degrading.

[0099] In an aspect, the analyzer module 204 of the mobile device may be configured to perform real-time analysis operations, which may include applying data, algorithms, and/or behavior models to behavior information collected by the observer module to determine whether a mobile device behavior is benign, suspicious, or malicious/performance-degrading. The analyzer module 204 may determine that a mobile device behavior is suspicious when it does not have sufficient information to classify or conclusively determine that the behavior is either benign or malicious.

[0100] In an aspect, the analyzer module 204 of the mobile device may be configured to communicate the results of its real-time analysis operations to the observer module when the analyzer module 204 determines that a device behavior is suspicious.
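The three-way outcome in paragraph [0099], where "suspicious" means the model's evidence is too weak for a conclusive call either way, might look like this in code. The score scale and cutoff values below are invented for illustration, not taken from the disclosure:

```python
# Hedged sketch of the benign / suspicious / malicious decision.
# The [0, 1] score scale and the cutoff values are assumptions.

def classify_behavior(score, benign_cutoff=0.2, malicious_cutoff=0.8):
    """score: model output in [0, 1], where higher means more likely
    malicious/performance-degrading."""
    if score <= benign_cutoff:
        return "benign"
    if score >= malicious_cutoff:
        return "malicious"
    return "suspicious"   # insufficient information: observe more closely

print(classify_behavior(0.1))   # benign
print(classify_behavior(0.5))   # suspicious
print(classify_behavior(0.95))  # malicious
```

A "suspicious" result is what triggers the feedback to the observer module described in paragraph [0100]: the behavior is neither cleared nor condemned, so finer observations are requested.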
The observer module 202 may adjust the granularity of its observations (i.e., the level of detail at which mobile device behaviors are observed) and/or change the behaviors that are observed based on information received from the classifier module (e.g., results of the real-time analysis operations), generate or collect new or additional behavior information, and send the new/additional information to the classifier module for further analysis/classification.

[0101] Such feedback communications between the observer and classifier modules (e.g., the classifier module sending the results of its real-time analysis operations to the observer module, and the observer module sending updated behavior information to the classifier module) may enable a mobile device processor to recursively increase the granularity of the observations (i.e., make finer or more detailed observations) or change the features/behaviors that are observed until a source of a suspicious or performance-degrading mobile device behavior is identified, until a processing or battery consumption threshold is reached, or until the mobile device processor determines that the source of the suspicious or performance-degrading mobile device behavior cannot be identified from further increases in observation granularity. Such feedback communications also enable the mobile device processor to adjust or modify the data/behavior models locally in the mobile device without consuming an excessive amount of the mobile device's processing, memory, or energy resources.

[0102] In various aspects, the mobile device may be configured to communicate with a network server that includes an offline classifier and/or a real-time online classifier. The offline classifier may generate robust data/behavior models based on information received from a cloud service/network.
The real-time online classifier may generate lean data/behavior models based on analyzing the larger and more complicated behavior models generated from information received from the cloud service/network. Both the online and offline classifiers may generate data/behavior models that include a reduced subset of information made available by the cloud service/network for a particular mobile device. In an aspect, generating the lean data/behavior models may include generating one or more reduced feature models (RFMs).

[0103] The network server may send the generated lean data/behavior models to the mobile device. The mobile device may receive and implement, apply, or use lean data/behavior models to identify suspicious or performance-degrading mobile device behaviors, software applications, processes, etc. Since the lean data/behavior models include a reduced subset of the relevant information made available by the cloud service/network, the mobile device may use the lean data/behavior models to determine whether a mobile device behavior is malicious/performance-degrading or benign without consuming an excessive amount of processing, memory, or energy resources of the mobile device. The mobile device may then correct or prevent the identified performance-degrading mobile device behaviors from degrading the performance and power utilization levels of the mobile device.

[0104] In various aspects, the network server may be configured to generate or update the lean data/behavior models by performing, executing, and/or applying machine learning and/or context modeling techniques to behavior information and/or results of behavior analyses provided by many mobile devices. Thus, the network server may receive a large number of reports from many mobile devices and analyze, consolidate or otherwise turn such crowd-sourced information into useable information, particularly a lean data set or focused behavior models that can be used or accessed by all mobile devices.
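One way to picture the lean data/behavior model of paragraph [0103] is as a small weighted feature subset that simply ignores everything else the observer logs. The feature names and weights below are invented for illustration; the disclosure does not specify a weighted-sum form:

```python
# Hedged sketch: a "lean" model as a reduced feature model (RFM) holding
# only a few weighted features. All names and weights are assumptions.

LEAN_MODEL = {
    "wakelocks_per_min": 0.6,
    "bg_network_mb": 0.3,
    "camera_while_screen_off": 2.0,
}

def score(observation, model=LEAN_MODEL):
    """Weighted sum over only the features the lean model retains;
    any other factor the observer logged is skipped, which is what
    keeps on-device evaluation cheap."""
    return sum(w * observation.get(f, 0.0) for f, w in model.items())

obs = {"wakelocks_per_min": 1.0, "bg_network_mb": 2.0,
       "cpu_pct": 55.0}          # cpu_pct is not in the lean model: ignored
print(score(obs))                # 0.6*1.0 + 0.3*2.0 = 1.2
```

The full classifier model would carry many more entries; the lean model's reduced dictionary is what bounds the per-evaluation cost on the device.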
The network server may continuously reevaluate existing lean data/behavior models as new behavior/analysis reports are received from mobile devices, and/or generate new or updated lean data/behavior models based on historical information (e.g., collected from prior executions, previous applications of behavior models, etc.), new information, machine learning, context modeling, and detected changes in the available information, mobile device states, environmental conditions, network conditions, mobile device performance, battery consumption levels, etc.

[0105] In an aspect, the network server may be configured to generate lean data/behavior models that include an initial feature set (e.g., an initial reduced feature model) and one or more subsequent feature sets (e.g., subsequent reduced feature models). The initial feature set may include information determined to have the highest probability of enabling the classifier module of the mobile devices to conclusively determine whether a particular mobile device behavior, software application, or process is malicious/performance-degrading or benign. Each subsequent feature set may include information determined to have the next highest probability of conclusively determining that the mobile device behavior, software application, or process is malicious/performance-degrading or benign. Each subsequent feature set may include a larger dataset than its preceding feature set, and thus the performance and power consumption costs associated with applying the data/behavior models may increase progressively for each subsequent feature set.

[0106] In an aspect, the analyzer module 204 may include a classifier module that implements progressive behavior models (or classifiers) that enable the mobile device processor to evaluate the mobile device behaviors in stages.
For example, the classifier module may be configured to first apply a lean data/behavior model that includes the initial feature set, then apply models that include progressively larger feature sets until the classifier module determines that a mobile device behavior is benign or malicious/performance-degrading. The classifier module may then send the results of its operations and/or success rates associated with the application of each model to the network server. The network server may use such results to update the lean data/behavior models (e.g., the feature sets included in each model, etc.), thereby refining the data and/or models based on the results/success rates of all reporting mobile devices. The network server may then make the updated lean data/behavior models available to all mobile devices. In this manner, mobile devices can instantly benefit from the behaviors and conclusions of other mobile devices.

[0107] In an aspect, the network server may be configured to continuously update the online and offline classifiers, model generators, and/or cloud model. The network server may be configured to intelligently determine when the changes are substantial enough to warrant generating new models and when the changes may be ignored.
For example, the network server may receive updates from many different mobile devices, perform machine learning operations to generate a first family of classifiers, determine whether there are enough changes to the generated first family of classifiers to warrant generating new models, determine which features in the generated first family of classifiers are the best features when it is determined that there are enough changes to the first family of classifiers, generate a second family of classifiers based on the best features, determine whether there are enough changes to the generated second family of classifiers, and generate/update mobile device classifier data/behavior models when it is determined that there are enough changes to the second family of classifiers.

[0108] In an aspect, the analyzer module 204 may be configured to perform real-time behavior analysis operations, which may include performing, executing, and/or applying data, algorithms, classifiers or behavior models (collectively "classifier models") to the collected behavior information.

[0109] Each classifier model may be a behavior model that includes information that may be used by a mobile device processor to evaluate a specific aspect of a mobile device behavior. The classifier models may be preinstalled on the mobile device, downloaded, received from a network server, generated in the mobile device, or any combination thereof. A classifier model may be generated by using machine learning and other similar techniques.

[0110] Each classifier model may be categorized as a full classifier model or a lean classifier model. A full classifier model may be a robust data model that is generated as a function of a large training dataset, which may include thousands of features and billions of entries.
A lean classifier model may be a more focused data model that is generated from a reduced dataset that includes only the features/entries that are most relevant for determining whether a particular mobile device behavior is benign or not benign (e.g., malicious or performance-degrading).

[0111] As mentioned above, various aspects may include mobile devices and network servers configured to work in conjunction with one another to intelligently and efficiently identify the features, factors, and data points that are most relevant to determining whether a mobile device behavior is benign or not benign (e.g., malicious or performance-degrading). In various aspects, the network server may be configured to receive a large amount of information regarding mobile device behaviors and states, features, and conditions during or characterizing those behaviors from a cloud service/network. This information may be in the form of a very large cloud corpus of mobile device behavior vectors. The network server may use this information to generate a full classifier model (i.e., a robust data/behavior model) that accurately describes the very large cloud corpus of behavior vectors. The network server may generate the full classifier model to include all or most of the features, data points, and/or factors that could contribute to the degradation over time of any of a number of different mobile devices.

[0112] In an aspect, the network server may generate the full classifier model to include a state machine expression or representation, such as a decision node or family of decision nodes. This state machine expression or representation can be quickly and efficiently culled, modified or converted into lean classifier models that are suitable for use or execution in a mobile device through application of culling algorithms at the mobile device processor.
The state machine expression or representation may be an information structure that includes test conditions, state information, state-transition rules, and other similar information. In an aspect, the state machine expression or representation may be an information structure that includes a large or robust family of decision nodes that each evaluate or test a condition, feature, factor, or aspect of a behavior of the mobile device.

[0113] The mobile device may be configured to receive a full classifier model from the network server, and use the received full classifier model to generate lean classifier models (i.e., data/behavior models) locally in the mobile device. The mobile device may generate these local lean classifier models by culling a set of decision nodes included in the received full classifier model into a subset of decision nodes that identify, test, evaluate and/or depend upon a reduced or limited number of different mobile device states, features, behaviors, or conditions. This culling of the full set of decision nodes may be accomplished by: selecting a decision node; identifying all other decision nodes that depend upon the same mobile device state, feature, behavior, or condition as the selected decision node (and thus can be applied based upon one determination result); including in the lean classifier model the selected decision node and all identified other decision nodes that depend upon the same mobile device state, feature, behavior, or condition; and repeating the process for a reduced/limited number of selected decision nodes not already included in the lean classifier model. By repeating the process using different numbers of mobile device states, features, behaviors, or conditions that are tested, a family of lean classifier models may be generated with varying degrees of leanness determined by the number of states, features, behaviors, or conditions that are evaluated.
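The culling procedure described in paragraph [0113] — pick a decision node, gather every other node that depends on the same state/feature, keep those groups for only a limited number of features, and vary that limit to obtain a family of models — might be sketched as follows. The (feature, test) tuple representation of a decision node is an assumption made for illustration:

```python
# Rough sketch of decision-node culling. Representing each decision
# node as a (feature, test) tuple is an assumption, not the disclosed
# data structure.
from collections import defaultdict

def cull(full_model_nodes, max_features):
    """Group decision nodes by the mobile device state/feature they
    depend on, then keep the groups for only the first max_features
    features; nodes sharing a feature can reuse one observation."""
    by_feature = defaultdict(list)
    for node in full_model_nodes:
        by_feature[node[0]].append(node)
    lean = []
    for feature in list(by_feature)[:max_features]:
        lean.extend(by_feature[feature])
    return lean

full = [("battery", ">20%"), ("net", "tx>1MB"), ("battery", ">5%"),
        ("camera", "on"), ("net", "bg")]
print(cull(full, 2))
# [('battery', '>20%'), ('battery', '>5%'), ('net', 'tx>1MB'), ('net', 'bg')]
```

Calling `cull` with different `max_features` values (1, 2, 3, ...) yields the family of lean classifier models with varying degrees of leanness that the paragraph describes.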
In addition, each of these lean classifier models may test or evaluate some or all of the same features or conditions as another lean classifier model, but using different threshold values and/or different weights assigned to the importance of the test results, features, or conditions evaluated. As such, the process of generating or regenerating the lean classifier models may include re-computing the threshold values and/or weights associated with the decision nodes.

[0114] Since these lean classifier models include a reduced subset of states, features, behaviors, or conditions that must be tested (compared to the full classifier model), the observer and/or analyzer modules may use them to quickly and accurately determine whether a mobile device behavior is benign or contributing to the degradation in the performance of the mobile device without consuming an excessive amount of processing, memory, or energy resources of the mobile device. As noted above, the leanest of the family of lean classifier models (i.e., the lean classifier model based on the fewest number of test conditions) may be applied routinely until a behavior is encountered that the model cannot categorize as either benign or malicious (and therefore is categorized by the model as suspicious), at which time a more robust (i.e., less lean) lean classifier model may be applied in an attempt to categorize the behavior as either benign or malicious. Ever more robust lean classifier models within the family of generated lean classifier models may be applied until a definitive classification of the behavior is achieved.
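The staged application just described (leanest model first, escalating only while the verdict remains "suspicious") can be sketched as follows; representing each model as a callable returning a verdict string is an assumption for illustration:

```python
# Hedged sketch of applying a family of lean classifiers from leanest
# to most robust. Model and behavior representations are assumptions.

def classify_progressively(behavior, models):
    """models: ordered leanest -> most robust; each returns 'benign',
    'malicious', or 'suspicious' (i.e., it cannot categorize)."""
    for model in models:             # escalate only while suspicious
        verdict = model(behavior)
        if verdict != "suspicious":
            return verdict           # definitive classification reached
    return "suspicious"              # even the full family was inconclusive

family = [
    lambda b: "suspicious",                                  # leanest model
    lambda b: "malicious" if b["bg_upload"] else "benign",   # more robust
]
print(classify_progressively({"bg_upload": True}, family))   # malicious
```

Because most behaviors are resolved by the cheap first model, the costlier models in the family run only for the minority of behaviors the lean one flags as suspicious.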
In this manner, the observer and/or analyzer modules can strike a balance between efficiency and accuracy by limiting the use of the most complete, but resource-intensive, lean classifier models to those situations where a robust classifier model is needed to definitively classify a behavior.

[0115] In various aspects, the mobile device may be configured to generate one or more lean classifier models by converting a state machine representation/expression into decision nodes, culling the full set of decision nodes included in the full classifier model to a subset or subsets of decision nodes that depend upon a limited number of different mobile device states, features, behaviors, or conditions, and using the subset or subsets of decision nodes to intelligently monitor, analyze and/or classify a mobile device behavior. The use of decision nodes allows the observer and/or analyzer modules to generate and apply lean data models without communicating with the cloud or a network to retrain the data, which significantly reduces the mobile device's dependence on the network server and the cloud. This eliminates the feedback communications between the mobile device and the network server, which further improves the performance and power consumption characteristics of the mobile device.

[0116] FIG. 3 illustrates example logical components and information flows in an observer module 202 of a computing system configured to perform dynamic and adaptive observations in accordance with an aspect. The observer module 202 may include an adaptive filter module 302, a throttle module 304, an observer mode module 306, a high-level behavior detection module 308, a behavior vector generator 310, and a secure buffer 312.
The high-level behavior detection module 308 may include a spatial correlation module 314 and a temporal correlation module 316.

[0117] The observer mode module 306 may receive control information from various sources, which may include an analyzer unit (e.g., the analyzer module 204 described above with reference to FIG. 2) and/or an application API. The observer mode module 306 may send control information pertaining to various observer modes to the adaptive filter module 302 and the high-level behavior detection module 308.

[0118] The adaptive filter module 302 may receive data/information from multiple sources, and intelligently filter the received information to generate a smaller subset of information selected from the received information. This filter may be adapted based on information or control received from the analyzer module, or a higher-level process communicating through an API. The filtered information may be sent to the throttle module 304, which may be responsible for controlling the amount of information flowing from the filter to ensure that the high-level behavior detection module 308 does not become flooded or overloaded with requests or information.

[0119] The high-level behavior detection module 308 may receive data/information from the throttle module 304, control information from the observer mode module 306, and context information from other components of the mobile device. The high-level behavior detection module 308 may use the received information to perform spatial and temporal correlations to detect or identify high-level behaviors that may cause the device to perform at sub-optimal levels. The results of the spatial and temporal correlations may be sent to the behavior vector generator 310, which may receive the correlation information and generate a behavior vector that describes the behaviors of a particular process, application, or sub-system.
In an aspect, the behavior vector generator 310 may generate the behavior vector such that each high-level behavior of a particular process, application, or sub-system is an element of the behavior vector. In an aspect, the generated behavior vector may be stored in a secure buffer 312. Examples of high-level behavior detection may include detection of the existence of a particular event, the amount or frequency of another event, the relationship between multiple events, the order in which events occur, time differences between the occurrence of certain events, etc.

[0120] In the various aspects, the observer module 202 may perform adaptive observations and control the observation granularity. That is, the observer module 202 may dynamically identify the relevant behaviors that are to be observed, and dynamically determine the level of detail at which the identified behaviors are to be observed. In this manner, the observer module 202 enables the system to monitor the behaviors of the mobile device at various levels (e.g., multiple coarse and fine levels). The observer module 202 may enable the system to adapt to what is being observed. The observer module 202 may enable the system to dynamically change the factors/behaviors being observed based on a focused subset of information, which may be obtained from a wide variety of sources.

[0121] As discussed above, the observer module 202 may perform adaptive observation techniques and control the observation granularity based on information received from a variety of sources. For example, the high-level behavior detection module 308 may receive information from the throttle module 304, the observer mode module 306, and context information received from other components (e.g., sensors) of the mobile device. As an example, a high-level behavior detection module 308 performing temporal correlations might detect that a camera has been used and that the mobile device is attempting to upload the picture to a server.
The high-level behavior detection module 308 may also perform spatial correlations to determine whether an application on the mobile device took the picture while the device was holstered and attached to the user's belt. The high-level behavior detection module 308 may determine whether this detected high-level behavior (e.g., usage of the camera while holstered) is a behavior that is acceptable or common, which may be achieved by comparing the current behavior with past behaviors of the mobile device and/or accessing information collected from a plurality of devices (e.g., information received from a crowd-sourcing server). Since taking pictures and uploading them to a server while holstered is an unusual behavior (as may be determined from observed normal behaviors in the context of being holstered), in this situation the high-level behavior detection module 308 may recognize this as a potentially threatening behavior and initiate an appropriate response (e.g., shutting off the camera, sounding an alarm, etc.).

[0122] In an aspect, the observer module 202 may be implemented in multiple parts.

[0123] FIG. 4 illustrates logical components and information flows in an example computing system 400 implementing an observer module in accordance with an aspect. The illustrated computing system 400 includes an application framework 402, a run time library 404, a user log API 406, and a logger library 408 in the user space. The computing system 400 may include a kernel core 410, kernel drivers 412, a kernel log API 414, an observer logger 424, a filter rules module 416, a throttling rules module 418, a ring buffer 422, and an observer daemon 420 in the kernel space. In an aspect, the ring buffer 422 may be a fixed-size and/or circular buffer. In an aspect, the combination of the user log API 406 and the kernel log API 414 may constitute the observer logger 424.
In an aspect, the combination of the observer daemon 420 and the observer logger 424 may constitute the observer module 202.

[0124] The application framework 402 and the run time library 404 may be preexisting software code/components of the mobile device, each of which may be instrumented with logic to monitor activities and send information to the user log API 406 in the user space. The user log API 406 may provide an API that enables the user space applications to communicate with the kernel via the kernel log API 414.

[0125] In an aspect, the observer logger 424 may be automatically invoked whenever a particular event, action, or API (e.g., an API identified in a list of APIs as being of particular importance) is invoked, and the corresponding information may be stored in the ring buffer 422. The information stored in the ring buffer 422 may include, for example, information for identifying the caller, information for identifying the exact function being called, the parameters that have been passed to the function call, and other similar information. In an aspect, this information may be stored in the ring buffer 422 in a raw format. Alternatively, the ring buffer 422 may be used to store information after the log has been processed.

[0126] The observer logger 424 may be controlled by a set of filter and throttling rules 416, 418. The filter rules 416 may specify whether a particular API is to be logged or not. The throttling rules 418 may specify conditions under which the system is to terminate the logging/monitoring of a specific API to prevent overloads.

[0127] The filter and throttling rules 416, 418 may be created, updated, and/or maintained by the observer daemon 420.
For example, if after observing the mobile device for ten minutes, the observer daemon 420 decides that a particular API is no longer of interest (e.g., it is not providing the system with useful information), the observer daemon 420 may update the filter rules 416 such that events relating to that particular API are no longer monitored/logged.

[0128] FIG. 5A illustrates logical components and information flows in a computing system 500 implementing an observer module 202 in accordance with another aspect. The computing system 500 illustrated in FIG. 5A includes all the components described above with reference to FIG. 4, except that the filter rules 416 are enforced on the user log API 406 in the user space and/or kernel space on the device. Thus, instead of each call coming to the observer logger 424 and the observer logger 424 deciding whether the call should be logged or not (as described with reference to FIG. 4), the filter rules 416 may be implemented within the instrumentations (e.g., user log API, etc.) such that the call itself will not reach the logger based on the filter rules 416. Implementing the configuration illustrated in FIG. 5A may further improve the mobile device efficiency because function calls do not need to be made to a logger inside the kernel.

[0129] FIG. 5B illustrates logical components and information flows in a computing system 550 implementing an observer module in accordance with yet another aspect. The computing system 550 illustrated in FIG. 5B includes all the components described above with reference to FIG. 5A, except that the observer daemon 420 is in the user space. In an aspect, the observer daemon 420, filter rules 416, throttling rules 418, and observer logger 424 may be part of the same component. Implementing the configuration illustrated in FIG.
5B may further improve the mobile device efficiency because the observer daemon 420 may update the filter rules without function calls into the kernel space.

[0130] At any given time, several applications and several kernel threads may be attempting to store/write information in the ring buffer, which may cause contention issues that hinder scalability. In an aspect, the system's scalability may be improved via the inclusion of multiple ring buffers, as illustrated in FIGs. 6A-B. The computing system 600 illustrated in FIG. 6A includes all the components described above with reference to FIG. 5A, but includes multiple ring buffers 430. The computing system 600 may include a ring buffer for each application, throttle, and kernel thread being monitored by the system. For example, the computing system 600 may include a ring buffer for a kernel thread being monitored by the system, and one or more ring buffers for each application and/or throttle being monitored by the system. Alternatively, the computing system 600 may include a ring buffer for groups of applications, groups of throttles, and/or groups of kernel threads being monitored by the system. The inclusion of multiple ring buffers enables the computing system 600 to avoid contention issues and reduces bottlenecks.

[0131] The computing system 650 illustrated in FIG. 6B includes all the components described above with reference to FIG. 6A, except that the observer daemon 420 is in the user space. Implementing the configuration illustrated in FIG. 6B may further improve the mobile device efficiency because the observer daemon 420 may update the filter rules without function calls into the kernel space.

[0132] FIG. 7A illustrates logical components and information flows in a computing system 700 implementing an aspect observer daemon 420. The computing system 700 may include an analyzer component (e.g., the analyzer module 204 illustrated in FIG.
2), a filter rules 416 component, a throttling rules 418 component, multiple ring buffers 430, a database 702, a secure buffer 704, and an observer daemon 420. The observer daemon 420 may include a ring buffer API 706, a system health monitor 708, a behavior detector 712, a database engine 714, a rules manager 710, a secure buffer manager 716, a query processor 720, a query API 718, and a database API 722. A logger (not illustrated) may store information in the ring buffers 430. The observer daemon 420 may extract the information from the ring buffers 430 via the ring buffer API 706. The behavior detector 712 may receive information from the ring buffer API 706, and perform correlation and formatting operations on the received data to generate a behavior vector.
[0133] The generated behavior vector may be sent to the database engine 714 for storing in the database 702. The database engine 714 may manage all of the specificities of the database implementation (e.g., the kind of data structure that is implemented, the types of information included in the data structure, etc.).
[0134] The rules manager 710 may be configured to receive inputs from different components (e.g., the system health monitor, the behavior detector, the analyzer, etc.), and update the filter and throttle rules 416, 418 based on the received inputs. For example, the rules manager 710 may receive log statistics from the behavior detector 712 and update the filter and throttle rules 416, 418 based on the log statistics.
[0135] The system health monitor 708 may be configured to monitor system resources, and inform the rules manager 710 of the system health. For example, the system health monitor 708 may inform the rules manager 710 about the amount of energy that remains stored in the battery, how much memory is available, whether there are enough resources to perform a detailed observation, etc. The rules manager 710 may use the information received from the system health monitor 708 to update the rules.
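As a rough illustration of this update mechanism, the rules manager's logic might look like the following sketch. The `FilterRule` type, the priority field, and the numeric thresholds are hypothetical illustrations, not taken from the patent figures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FilterRule:
    name: str        # API or event the rule matches
    priority: str    # "high" rules survive resource pressure

class RulesManager:
    """Hypothetical rules manager: adjusts filter rules from
    system-health input and per-rule logging statistics."""

    def __init__(self, rules):
        self.filter_rules = set(rules)

    def on_health_update(self, battery_pct, free_mem_mb):
        # Under resource pressure, fall back to coarse observation:
        # keep only high-priority rules to reduce power consumption.
        if battery_pct < 20 or free_mem_mb < 64:
            self.filter_rules = {r for r in self.filter_rules
                                 if r.priority == "high"}

    def on_log_statistics(self, hit_counts):
        # Stop monitoring APIs whose events yielded no information.
        self.filter_rules = {r for r in self.filter_rules
                             if hit_counts.get(r.name, 0) > 0}
```

In this sketch, coarsening the observation granularity simply means shrinking the active rule set; a fuller implementation might instead swap in a different rule set per observation level.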
For example, if the system health monitor 708 indicates that the device battery state is below a certain threshold, the rules manager 710 may update the filter rules 416 such that the system performs more coarse observations in order to reduce power consumption.
[0136] The query processor 720 may be configured to perform conversions between various APIs, such as from a query API 718 to a database-specific API 722.
[0137] The secure buffer 704 may enable kernel space components (e.g., in the un-trusted region) to communicate with the user space components (e.g., in the trusted region).
[0138] The secure buffer manager 716 may be configured to control the communications that occur via the secure buffer 704.
[0139] The database engine 714 may be configured to pass the database response to the secure buffer manager 716, which may perform flow control operations and store the information in the secure buffer 704.
[0140] The information generated by the observer daemon 420 may be utilized by an analyzer 204, which may be implemented in the kernel space, user space, or in a trusted computing base of a system-on-chip (SOC).
[0141] FIG. 7B illustrates logical components and information flows in a computing system 750 implementing another aspect observer daemon 420. The computing system 750 may include an analyzer 204 component, a filter rules 416 component, a throttling rules 418 component, multiple ring buffers 430, a secure buffer 704, a secure buffer manager 716, and an observer daemon 420. The observer daemon 420 may include a ring buffer API 706, a system health monitor 708, a behavior detector 712, a database engine 714, and a rules manager 710. A logger (not illustrated) may store information in the ring buffers 430. The computing system 750 may perform the same operations as the computing system 700 illustrated in FIG. 7A, except that the secure buffer manager 716 is in the kernel space and may control the data that is sent to an analyzer 204 in the user space.
[0142] FIG.
8A illustrates logical components and information flows in a computing system 800 implementing another aspect observer daemon. The computing system 800 illustrated in FIG. 8A includes all of the components described above with reference to FIG. 7A, except for a query processor, because the database in this aspect is included as part of the secure buffer. In this configuration, whenever the analyzer issues a query, the query may go directly to the database engine. Similarly, responses to the query may be sent directly from the secure buffer to the analyzer.
[0143] FIG. 8B illustrates logical components and information flows in a computing system 850 implementing yet another aspect observer daemon. In the example illustrated in FIG. 8B, the observer daemon includes a behavior detector 712 and a database engine 714 in the user space, and a secure buffer manager 716, a rules manager 710, and a system health monitor 708 in the kernel space.
[0144] The various aspects provide cross-layer observations on mobile devices encompassing webkit, SDK, NDK, kernel, drivers, and hardware in order to characterize system behavior. The behavior observations may be made in real time.
[0145] An important feature of the various aspects is that the observer module may perform adaptive observation techniques and control the observation granularity. As discussed above, there are a large number (e.g., thousands) of factors that could contribute to the mobile device's degradation, and it may not be feasible to monitor/observe all of the different factors that may contribute to the degradation of the device's performance. To overcome this, the various aspects dynamically identify the relevant behaviors that are to be observed, and dynamically determine the level of detail at which the identified behaviors are to be observed.
[0146] FIG. 9A illustrates an aspect method 900 for dynamically selecting mobile device behaviors for observation in order to identify suspicious mobile device behaviors.
In block 902, the mobile device processor may select the mobile device behaviors and/or states that will be observed. This selection may cover a subset of a wide range of behaviors, actions, and states. Thus, the selection in block 902 may include one or more of mobile device operations, mobile device events, data network activity, system resource usage, mobile device state, inter-process communications, driver statistics, hardware component status, hardware counters, actions or operations of software applications, software downloads, changes to device or component settings, conditions and events at an application level, conditions and events at the radio level, conditions and events at the sensor level, conditions and events at a hardware level, conditions and events at a driver level, and conditions and events at a high level. In block 904, the mobile device may begin observing the selected device behaviors and/or states and process the observations in order to identify suspicious mobile device behaviors.
Since only the selected subset of device behaviors and/or states is observed, the processor can detect suspicious behaviors based on a limited set of observations.
[0147] Examples of mobile device operations that may be selected in block 902 and observed in block 904 include, for example, one or more of library API calls in an application framework or run-time library, system call APIs, file-system and networking sub-system operations, file system activity, searches for filenames, categories of file accesses, creating files, deleting files, file read/write/seek operations, and changing file permissions.
[0148] Examples of mobile device events that may be selected in block 902 and observed in block 904 include, for example, device state changes and/or sensor device state changes.
[0149] Examples of mobile device data network activities that may be selected in block 902 and observed in block 904 include, for example, one or more of types of connections, protocols, port numbers, server/client that the device is connected to, the number of connections, volume or frequency of communications, phone network activity, type and number of calls/messages sent, type and number of calls/messages received, type and number of calls/messages intercepted, call information, text messaging information, media messaging, user account information, transmissions, voicemail, and device identifiers (e.g., DeviceIDComm).
[0150] Examples of mobile device system resource usage that may be selected in block 902 and observed in block 904 include, for example, monitoring the number of forks, memory access operations, and/or the number of files open.
[0151] Examples of mobile device states that may be selected in block 902 and observed in block 904 include, for example, display on/off state, locked/unlocked state, battery charge state, camera state, and microphone state.
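The selection-and-observation flow of blocks 902 and 904, drawing on behavior categories like those enumerated above, might be sketched as follows. The category names, relevance scores, and the trivial suspicion test are illustrative assumptions standing in for the real analyzer.

```python
# Illustrative sketch of blocks 902/904: select a subset of behavior
# categories, observe only that subset, and flag suspicious events.
CATEGORIES = ["api_calls", "network", "ipc", "driver_stats", "device_state"]

def select_behaviors(relevance):
    # Block 902: keep only categories currently deemed relevant.
    return [c for c in CATEGORIES if relevance.get(c, 0.0) > 0.5]

def observe(selected, events):
    # Block 904: process only events in the selected categories.
    observed = [e for e in events if e["category"] in selected]
    # A made-up count threshold stands in for real suspicion analysis.
    return [e for e in observed if e.get("count", 0) > 100]
```

Because events outside the selected categories are dropped before any analysis, the cost of observation scales with the selected subset rather than with everything the device could report.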
[0152] Examples of mobile device inter-process communications that may be selected in block 902 and observed in block 904 include, for example, monitoring intents to crucial services (browser, contacts provider, etc.), monitoring the degree of inter-process communications, and monitoring pop-up windows.
[0153] Examples of mobile device driver statistics that may be selected in block 902 and observed in block 904 include, for example, statistics from drivers for one or more of cameras, sensors, electronic displays, WiFi communication components, data controllers, memory controllers, system controllers, access ports, peripheral devices, wireless communication components, and external memory chips.
[0154] Examples of mobile device hardware component status that may be selected in block 902 and observed in block 904 include, for example, cameras, sensors, electronic displays, WiFi communication components, data controllers, memory controllers, system controllers, access ports, timers, peripheral devices, wireless communication components, external memory chips, voltage regulators, oscillators, phase-locked loops, peripheral bridges, and other similar components used to support the processors and clients running on the mobile computing device.
[0155] Examples of mobile device hardware counters that may be selected in block 902 and observed in block 904 include, for example, hardware counters that denote the state or status of the mobile computing device and/or mobile device sub-systems, and special-purpose registers of processors/cores that are configured to store a count or state of hardware-related activities or events.
[0157] Examples of mobile device actions or operations of software applications that may be selected in block 902 and observed in block 904 include, for example, monitoring of information used by software applications, including one or more of location information, camera information, inertia information, browser information, content of browser-based communications, content of voice-based communications, short range radio communications, content of text-based communications, content of recorded audio files, phonebook or contact information, contacts lists, calendar information, location information (LocationComm), recorded audio information, notifications communicated to and from a software application, user verifications, and a user password.
[0158] Examples of mobile device software downloads that may be selected in block 902 and observed in block 904 include, for example, software downloads from an application download server, and a first software application requesting the downloading and/or install of a second software application.
[0159] Examples of changes to device or component settings that may be selected in block 902 and observed in block 904 include, for example, changes to one or more of compass information, mobile device settings, battery life, gyroscope information, pressure sensors, and screen activity.
[0160] Examples of mobile device conditions and events at the application level that may be selected in block 902 and observed in block 904 include, for example, observing the user via facial recognition software, observing social streams, observing notes entered by the user, observing events pertaining to the use of an electronic payment service, such as PassBook/Google Wallet/PayPal, observing events relating to the use of VPNs, synchronization, voice searches, voice control, language translators, offloading of data for computations, video streaming, camera usage without user activity, and microphone
usage without user activity.
[0161] Examples of mobile device conditions and events at the radio level that may be selected in block 902 and observed in block 904 include, for example, determining the presence, existence, or amount of any or all of: user interaction with the mobile device before establishing radio communication links or transmitting information; single, dual, or multiple SIMs or SIM cards; Internet radio; mobile phone tethering; offloading data for computations; device state communications; use of the device as a game controller or home controller; vehicle communications; mobile device synchronization; monitoring the use of radios (WiFi, WiMax, Bluetooth, etc.) for positioning, peer-to-peer (p2p) communications, synchronization, vehicle-to-vehicle communications, and/or machine-to-machine (m2m) communications; and monitoring network traffic usage, statistics, or profiles.
[0162] Examples of mobile device conditions and events at the sensor level that may be selected in block 902 and observed in block 904 include, for example, monitoring magnet sensors, detecting near-field communications, collecting information from a credit card scanner, barcode scanner, or mobile tag reader, detecting the presence of a USB power charging source, detecting that a keyboard or auxiliary device has been coupled to the mobile device, detecting that the mobile device has been coupled to a computing device (e.g., via USB, etc.), determining whether a light emitting diode, flash, flashlight, or light source has been modified or disabled (e.g., maliciously disabling an emergency signaling app, etc.), determining whether a speaker or microphone has been turned on or powered, detecting a charging or power event, detecting that the mobile device is being used as a game controller, collecting information from medical purpose/healthcare sensors or from scanning the user's body, collecting information from an external sensor plugged into the USB/audio jack, collecting information from a tactile
or haptic sensor (e.g., via a vibrator interface, etc.), monitoring communications with and/or behaviors of hardware components coupled to the computing device via the USB or a wireless transceiver (e.g., WiFi, Bluetooth, or NFC), and collecting information pertaining to the thermal state of the mobile device.
[0163] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when location hardware is activated, such as hardware for calculating horizontal dilution of precision (HDoP) for GPS and wireless access point location data, and hardware for measuring round-trip time (RTT) for wireless access point location data. The location hardware may be used to determine location without having to access a location API. The information from the location hardware may be gathered and used by software other than the software of the mobile device, such as cloud-based software, to determine the location of the mobile device. Monitoring the location hardware usage may aid in determining, for example, whether the location of the mobile device is being monitored.
[0164] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when personal area network (PAN) hardware is activated, such as hardware for supporting and implementing Bluetooth, WiFi Direct, ZigBee, and similar short-range wireless networking protocols, and HDoP and RTT hardware. The PAN hardware may be used to determine the devices that are visible to and connected to the mobile device. This information from the PAN hardware may make it possible to determine the location of the mobile device based on knowing the location of the visible or connected devices.
For example, the locations of PAN-enabled devices in a commercial environment used to track or transfer information to and from the mobile device may be used to locate the mobile device. The PAN hardware may also be used to determine the versions and capabilities of the PAN protocols used by the mobile device. Monitoring the PAN hardware usage may aid in determining, for example, whether the location of the mobile device is being monitored, or whether mobile device information is being accessed.
[0165] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when microphone hardware is activated, such as hardware used to support voice-activated commands on the mobile device, including waking up the mobile device from an idle state, hardware used to support listening by the microphone, and hardware used to support ultrasound capabilities. Support for voice-activated commands may keep the microphone hardware in an always-on state, and the information captured that triggers the mobile device to become active or execute other commands may be identified. This information may be used to reproduce signals to cause the mobile device to activate and execute functions not requested by the user. The microphone hardware supporting listening, in some instances in conjunction with the always-on state, may capture information that may be used to record sound, including conversations, and to identify people, venues, and times of the sounds. The microphone hardware for ultrasound capabilities may be used to locate the mobile device within an environment, such as by echolocation.
Monitoring the microphone hardware usage may aid in determining, for example, whether the mobile device and its functions are being inappropriately activated and whether the location of the mobile device is being monitored.
[0166] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when speaker hardware is activated, such as hardware used to support ultrasound capabilities. Similar to the microphone hardware for ultrasound capabilities, the speaker hardware for ultrasound capabilities may be used to locate the mobile device within an environment, such as by echolocation. Monitoring the speaker hardware usage may aid in determining, for example, whether the location of the mobile device is being monitored.
[0167] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when camera hardware is activated, such as hardware for supporting light sensing, hardware for supporting non-touch gesture or motion detection, hardware for supporting computational photography, and hardware for supporting zoom functions. The camera hardware for light sensing may produce readings of the amount of light in the environment around the mobile device, which may be used to determine the type of environment (e.g., indoors or outdoors) in which the mobile device is located. The camera hardware for non-touch gesture or motion detection may produce information causing the mobile device to execute different functions. This information may be used to reproduce signals that may cause the mobile device to execute functions not requested by the user. The hardware for supporting computational photography and zoom functions may be used in an image capture process for the camera.
Images captured by the camera could be offloaded and viewed, used to identify people, environments, or times, and could also be stored. Monitoring the camera hardware usage may aid in determining, for example, whether the environment of the mobile device is being monitored and whether the functions of the mobile device are being inappropriately activated and used to capture information and images.
[0168] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when screen hardware is activated, such as hardware for supporting non-touch input/output and hardware supporting visible light communication. Used in conjunction with the camera hardware for non-touch gesture or motion detection, the screen hardware for non-touch input/output may be used to identify signals that control the screen. This information may be used to reproduce the signals, which may be used to keep the screen deactivated while other processes are executed to avoid user detection of malware operations. The screen hardware for visible light communication may be used to send and receive information. The information from the screen hardware for visible light communication may be used to send information from the mobile device, alter information received by the mobile device, and identify the mobile device. Monitoring the screen hardware usage may aid in determining, for example, whether the functions of the mobile device are being inappropriately controlled and whether communications are being watched or tampered with.
[0169] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when USB hardware is activated.
The information from the USB hardware may be used with the USB version identifier and known bandwidth to determine the amount of available bandwidth on a USB connection. The bandwidth may be monitored by unauthorized software to determine whether unauthorized transfers of data may be executed without affecting the performance of the USB connection. The information may also be used to maliciously throttle the USB connection so that the performance is less than expected. Monitoring the mobile device conditions and events at the hardware level for USB hardware may aid in determining, for example, whether unauthorized data transfer or bandwidth limiting is occurring, such as to or from external hardware components coupled to the computing device through the USB connection.
[0170] Examples of mobile device conditions and events at the hardware level that may be selected in block 902 and observed in block 904 include the number of times, durations, and when synchronization hardware is activated, such as hardware for securing/coding communication channels. The synchronization hardware may be used to identify the type of connection (e.g., WiFi, USB, wired, or wireless), the version of the connection protocol, and the activity level of the connection. This information from the synchronization hardware may be used to determine the bandwidth of the connection and when the connection can be used to transfer information without detection, or to throttle the connection throughput.
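As a sketch of the available-bandwidth reasoning described above, the nominal signaling rates defined by the USB specifications (12 Mbit/s for USB 1.1 full speed, 480 Mbit/s for USB 2.0, 5000 Mbit/s for USB 3.0) can be compared against measured throughput; the 50% detection threshold below is an arbitrary assumption, not a value from the document.

```python
# Sketch: estimate headroom on a USB connection from its version's
# nominal bandwidth and the currently measured throughput.
NOMINAL_MBPS = {"1.1": 12, "2.0": 480, "3.0": 5000}  # USB spec signaling rates

def available_bandwidth_mbps(usb_version, measured_mbps):
    # Headroom = nominal link rate minus what is currently in use.
    return NOMINAL_MBPS[usb_version] - measured_mbps

def throughput_suspiciously_low(usb_version, measured_mbps, expected_mbps):
    # Malicious throttling may show up as sustained throughput far below
    # what both the link and the workload should deliver (50% is arbitrary).
    return measured_mbps < 0.5 * min(expected_mbps, NOMINAL_MBPS[usb_version])
```

A real detector would track these values over time rather than from a single sample, since momentary dips are normal.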
Monitoring the synchronization hardware usage may aid in determining, for example, whether the connection is being used for unauthorized transfers, or whether the connection performance is being degraded.
[0171] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for location hardware drivers include the number of times and/or times of occurrence of: request to send (RTS)/clear to send (CTS) transactions; data null/data acknowledgment transactions; reads of the number of visible location satellites (e.g., GPS satellites); connection attempts of different types when indoors and outdoors; floor messages; and reads of a received signal strength indication (RSSI). A high number of RTS/CTS transactions or data null/data acknowledgment transactions, which are related to location queries, may indicate attempts to determine the location of the mobile device. A high number of reads of the number of visible location satellites may indicate attempts to determine the accuracy of a location of the mobile device. Continued attempts to communicate with location satellites, or a high number of RTT measurements to wireless access points, while the mobile device is indoors may indicate an attempt to determine the location of the mobile device. Similarly, continued RTT measurements to indoor-type wireless access points while the mobile device is outdoors may indicate an attempt to determine the location of the mobile device. A high number of requests for floor information or reads of the RSSI may also indicate an attempt to determine the location of the mobile device. Monitoring the mobile device conditions and events at the driver level for location hardware drivers may aid in determining, for example, whether the location of the mobile device is being monitored.
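The driver-level location heuristics above amount to comparing per-event counters against expected baselines. A minimal sketch (with made-up threshold values) might be:

```python
# Sketch: flag location-driver event counters that exceed baselines.
# Thresholds (events per observation window) are made-up examples.
BASELINES = {
    "rts_cts": 50,
    "visible_satellite_reads": 20,
    "rssi_reads": 30,
    "floor_requests": 10,
}

def suspicious_location_events(counts):
    # Return the names of counters whose observed count exceeds the
    # baseline for the current observation window.
    return [name for name, limit in BASELINES.items()
            if counts.get(name, 0) > limit]
```

In practice the baselines would likely be learned per device and per context (e.g., indoors vs. outdoors) rather than fixed constants.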
[0172] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for personal area network (PAN) hardware drivers include packet exchange statistics and the number of times and/or times of occurrence of: reads of the RSSI; reads of the devices connected or visible to the mobile device; and reads of the versions of the PAN protocols and capabilities of the connected PAN devices. Similar to the location hardware drivers, the number of reads of the RSSI and high numbers and rates of packet exchanges may indicate an attempt to determine the location of the mobile device. The packet exchange statistics may also indicate unauthorized transmissions of data. The number of reads of the connected or visible PAN devices and their wireless protocols and capabilities may indicate an attempt to find the location of the mobile device, as this information may help indicate the range of these connected and visible devices. Monitoring the mobile device conditions and events at the driver level for PAN hardware drivers may aid in determining, for example, whether the location of the mobile device is being monitored, or if mobile device information is being accessed.
[0173] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for near field communication (NFC) hardware drivers include packet exchange statistics and the number of times and/or times of occurrence of: reads of the distance or signal strength between the mobile device and an NFC device; reads of the NFC devices connected or visible to the mobile device; and reads of the versions of the NFC protocols and capabilities of the connected NFC devices. The packet exchange statistics may indicate unauthorized transmissions of data between the mobile device and NFC devices.
The number of reads of the distance or signal strength between the mobile device and an NFC device, the connected or visible NFC devices, and their wireless protocols and capabilities may indicate an attempt to find the location of the mobile device. For example, when the location of an NFC device is known or NFC is used for checking in at a location, connection with the NFC device may indicate the location of the mobile device. Also, connection with an NFC device may alter security levels on the mobile device, putting the device in a lower security state due to the low power and short distance nature of NFC communication. This low security state may leave the mobile device vulnerable to unauthorized access or the introduction of malware. Monitoring the mobile device conditions and events at the driver level for NFC hardware drivers may aid in protecting the mobile device from unauthorized access during a low security level state by indicating the existence of potentially harmful entities, such as software.
[0174] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for microphone hardware drivers include the number of times and/or when input/output control (ioctl) calls to access the microphone or calls for digital communication via an audio port occur. As discussed previously, access to the microphone may be used for surreptitious recording and echolocation. Unauthorized access to the microphone drivers may be identified by an unusually high number of ioctl clients running concurrently. In many cases, it may be unusual for even more than one ioctl client to be running for the microphone. Audio ports may be used as inputs for receiving information from connected peripheral devices, such as magnetic strip readers for processing credit card information. Unauthorized access to the communications over audio ports may compromise this information.
As with the microphone, monitoring the number of clients reading the data from the audio port may identify whether unauthorized access to communications on the audio ports is occurring. Monitoring the mobile device conditions and events at the driver level for microphone hardware drivers may aid in determining, for example, whether the mobile device and its functions are being inappropriately activated and whether the location of the mobile device is being monitored.
[0175] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for speaker hardware drivers include the number of times or when input/output control (ioctl) calls to access the speaker occur. As discussed previously, the speaker may be used to echolocate the mobile device. Much like the microphone and the audio port, the number of clients accessing the speaker is likely to be limited, and an unusually high number of clients accessing the speaker may be indicative of unauthorized access. Monitoring the mobile device conditions and events at the driver level for speaker hardware drivers may aid in determining, for example, whether the location of the mobile device is being monitored.
[0176] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for camera hardware drivers include the number of times and/or when image capture, computational photography, flashlight, and zoom functions are used. These functions of the camera may be used to capture images. Images captured by the camera could be offloaded and viewed, used to identify people, environments, or times, and could also be stored.
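The ioctl client-count heuristic described above for the microphone, audio-port, and speaker drivers might be sketched as follows; the expected-client limits are assumptions, reflecting only the observation that even more than one concurrent client is often unusual for these devices.

```python
# Sketch: track concurrent ioctl clients per audio device and flag
# counts above what is normally expected (limits are assumptions).
EXPECTED_MAX_CLIENTS = {"microphone": 1, "speaker": 1, "audio_port": 1}

class IoctlClientMonitor:
    def __init__(self):
        # Map each monitored device to the set of client PIDs using it.
        self.clients = {dev: set() for dev in EXPECTED_MAX_CLIENTS}

    def on_open(self, device, pid):
        self.clients[device].add(pid)

    def on_close(self, device, pid):
        self.clients[device].discard(pid)

    def suspicious_devices(self):
        # Devices with more concurrent clients than expected.
        return [dev for dev, pids in self.clients.items()
                if len(pids) > EXPECTED_MAX_CLIENTS[dev]]
```

The same pattern extends naturally to the camera and gyroscope drivers discussed in the surrounding paragraphs, with per-device limits tuned to normal usage.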
Monitoring the mobile device conditions and events at the driver level for camera hardware drivers may aid in determining, for example, whether the functions of the mobile device are being inappropriately activated and used to capture information and images.
[0177] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for gyroscope hardware drivers include the number of times and/or when input/output control (ioctl) calls to access the gyroscope occur. The information accessible when the gyroscope is active may include positional data related to the mobile device, including the tilt of the mobile device in a three-dimensional space. Such information may be used to deduce the location of the mobile device. For example, a substantially flat tilt in the axis perpendicular to the ground may indicate that the mobile device is on a table. Similarly, a substantially vertical tilt in the axis perpendicular to the ground may indicate that the mobile device is docked in a peripheral device or holder. Monitoring the mobile device conditions and events at the driver level for gyroscope hardware drivers may aid in determining whether the location of the mobile device is being monitored, such as when active operations or functions are inconsistent with the orientation of the mobile device.
[0178] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for browser-supporting hardware drivers include the number of times and/or when HTML5 or JavaScript is utilized, and graphics processing units (GPUs) or digital signal processors (DSPs) are utilized. Some World Wide Web Consortium (W3C) standardized languages, such as HTML5, and scripting languages, such as JavaScript, may be able to access the processors, such as the GPU or DSP, of the mobile device.
These languages may also have access to the sensors on the mobile device via the Internet, and the information from the sensor may be offloaded to a cloud server. The languages may be used to access information from the processors and sensors. The processors may also be used to run unauthorized code. Monitoring the mobile device conditions and events at the driver level for browser supporting hardware drivers may aid in determining, for example, whether unauthorized monitoring of the sensors and the processors of the mobile device is occurring, or the processors are being used to run unauthorized code.[0179] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for battery hardware drivers include the number of times and/or when the instantaneous discharge rate or charging state indicators are read. Unauthorized software may track the instantaneous discharge rate and the charging state to determine how much of the resources of the mobile device to use while avoiding impacting the performance of the mobile device, which could lead to detection of the unauthorized software. For example, when the instantaneous discharge rate indicates that the mobile device's battery is depleting at a high rate, the unauthorized software may use minimal resources to avoid increasing the discharge rate. However, if the charging state indicates that the mobile device is charging, the unauthorized software may determine that it may use more resources without adversely affecting the battery charge level.
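The polling pattern described for battery hardware drivers may be sketched as follows. The indicator names, observation window, and baseline read count are illustrative assumptions; the idea is only that an abnormally high read rate of battery-state indicators may itself be an observable event:

```python
# Hypothetical sketch: count how often each battery-state indicator is
# read per observation window and report indicators read far more often
# than an assumed baseline, which may indicate software timing its
# resource use against the battery state.
class BatteryReadMonitor:
    def __init__(self, baseline_reads_per_window=10):
        self.baseline = baseline_reads_per_window
        self.reads = {"discharge_rate": 0, "charging_state": 0}

    def record_read(self, indicator):
        self.reads[indicator] += 1

    def anomalous(self):
        # Indicators whose read count exceeds the baseline this window.
        return {k: v for k, v in self.reads.items() if v > self.baseline}

mon = BatteryReadMonitor(baseline_reads_per_window=10)
for _ in range(50):
    mon.record_read("discharge_rate")   # tight polling loop
mon.record_read("charging_state")       # single legitimate read

print(mon.anomalous())  # {'discharge_rate': 50}
```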
Monitoring the mobile device conditions and events at the driver level for battery hardware drivers may aid in determining, for example, whether unauthorized software is running on the mobile device.[0180] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for universal serial bus (USB) hardware drivers include the number of times or when a connection mode and an activity mode are read. The information from the USB hardware drivers may be used with the USB version identifier and known bandwidth to determine the amount of available bandwidth on a USB connection. The bandwidth may be monitored by unauthorized software to determine whether unauthorized transfers of data may be executed without affecting the performance of the USB connection. The information may also be used to maliciously throttle the USB connection so that the performance is less than expected. Monitoring the mobile device conditions and events at the driver level for USB hardware drivers may aid in determining, for example, whether unauthorized data transfer or bandwidth limiting is occurring.[0181] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for storage hardware drivers include the number of times and/or when data is transferred between the mobile device and a memory, a mode of the memory (e.g., privacy or protected mode) is read, and a type or speed indicator of the memory is read. Unauthorized software may use the information related to the storage hardware drivers to determine when and how to transfer data to and from the memory to reduce the risk of being discovered, such as making transfers when the memory is not otherwise occupied and additional transfers would not cause a perceivable change in the performance.
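The bandwidth reasoning described for USB hardware drivers in paragraph [0180] may be sketched as follows. The nominal maximum rates are the published theoretical signaling rates for each USB version (in Mbit/s); the utilization threshold is an illustrative assumption:

```python
# Hypothetical sketch: derive available bandwidth on a USB connection
# from the version's known nominal maximum and observed activity, so
# that unusual utilization can be flagged.
USB_MAX_MBPS = {"1.1": 12, "2.0": 480, "3.0": 5000}  # nominal signaling rates

def available_bandwidth(version, observed_mbps):
    # Remaining headroom, in Mbit/s, on the connection.
    return USB_MAX_MBPS[version] - observed_mbps

def utilization_suspicious(version, observed_mbps, threshold=0.8):
    # Sustained utilization above the assumed threshold may indicate
    # unauthorized data transfer or deliberate throttling.
    return observed_mbps / USB_MAX_MBPS[version] > threshold

print(available_bandwidth("2.0", 100))     # 380 Mbit/s of headroom
print(utilization_suspicious("2.0", 400))  # True: 400/480 exceeds 0.8
```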
The information could also be used by unauthorized software to maliciously reduce the performance of data transfers with the memory. Monitoring the mobile device conditions and events at the driver level for storage hardware drivers may aid in determining, for example, whether unauthorized data transfer or performance limiting is occurring.[0182] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for user interaction hardware drivers include the number of times or when statistics of keystrokes or touch events by screen area or by frequency are accessed, as well as actions of device sensors used to recognize and react to user gestures (i.e., gesture recognition sensors and modules). User interfaces, such as touchscreens or keyboards, may be used to frequently input sensitive information. For example, users may repeatedly interact with the user interface to unlock the mobile device or login to an account by entering a password or gesture based pattern, or users may frequently enter credit card numbers to make a purchase. Statistical information about how the user interacts with the user interface may be used by the mobile device for predictive input purposes, such as suggesting a word to type, or modifying a virtual keyboard so that the user might type more accurately. This information, when accessed without authorization, may be used to determine common patterns of interaction and deduce the sensitive information the user may have entered. Monitoring the mobile device conditions and events at the driver level for user interaction hardware drivers may aid in determining, for example, whether unauthorized access to the statistics of user interaction with the user interface is occurring.[0183] Examples of observations of user gestures that may be observed in block 904 for user interactions include whether and the frequency at which user movement gestures are recognized and acted upon.
Gesture recognition devices and modules may include cameras and image processing modules, inertia sensors (e.g., accelerometers and gyroscopes) and associated processing, relative position sensors communicating with the computing device (e.g., wrist devices that cooperate with a mobile device to resolve three-dimensional relative positions to enable arm position/movement gestures), and sensors that are capable of detecting and locating parts of the user's body (e.g., fingers or hands) when close but not touching the device. For example, a camera on the computing device positioned to image the user and algorithms executing on the device processor may be configured to recognize when user postures and/or movements match to recognizable gestures correlated to user commands or data inputs. Monitoring the computing device's use or execution of gesture recognition systems and/or analysis modules, particularly in the context of other device states or behaviors, may reveal malicious use of such capabilities (e.g., to monitor images of the user without the user's knowledge).[0184] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for synchronization hardware drivers include the number of times and/or when a type of channel security is read. The information for the synchronization hardware drivers may be used to identify a type of security used to protect communication on a channel (e.g., WPA/WPA2, VPN, and SSL). This information for the synchronization hardware drivers may be used to determine when a connection is secured and how difficult it might be to crack the security protocol protecting the communications. This information may be used to determine when to attempt to read unsecured data transfers, or when it may be easier to crack the security protocol to read the data transfers without authorization.
Monitoring the mobile device conditions and events at the driver level for synchronization hardware drivers may aid in determining, for example, whether unauthorized attempts are being made to read data being transferred to and from the mobile device.[0185] Examples of mobile device conditions and events at the driver level that may be selected in block 902 and observed in block 904 for radio interface hardware drivers include the number of times and/or when a usage mode is read. Such modes may include peer-to-peer, mobile-to-mobile, vehicle-to-vehicle, and infrastructure modes. The mode information may identify the types of communication that may be transferred via the radio interfaces. Unauthorized reading of the various communications during different modes may provide information to relate mobile devices and users with other connected machines. Monitoring the mobile device conditions and events at the driver level for radio interface hardware drivers may aid in determining, for example, whether unauthorized attempts are being made to read data being transferred to and from the mobile device.[0186] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for location hardware include the number of times and/or when the identity of the servers, such as AD servers or PoI servers, the mobile device is trying to access are read. The mobile device may try to access the nearest servers to help reduce lag time in the communications between the mobile device and the servers. The location of the mobile device may be determined based on the identity of the servers it is trying to access by knowing the location of the servers.
Monitoring the mobile device conditions and events at the high level for location hardware may aid in determining, for example, whether unauthorized tracking of the mobile device is occurring.[0187] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for near field communication (NFC) hardware include the number of times and/or when a check-in indicator is read. The mobile device may check in at a location via an NFC communication with an NFC enabled device, such as a payment device to purchase items or a coupon dispenser in a store. The location of the mobile device may be determined based on the identity and location of the NFC device with which the mobile device checks in. Monitoring the mobile device conditions and events at the high level for NFC hardware may aid in determining, for example, whether unauthorized tracking of the mobile device is occurring. [0188] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for screen hardware include the number of times and/or when a screen brightness level is read or a screen capture occurs. Light sensors on the mobile device may indicate when the mobile device is in low or high light areas, which may indicate whether the mobile device is indoors or outdoors. The screen may adjust to the conditions by adjusting its brightness to be brighter when outdoors and darker when indoors. This information may be used to determine the type of environment in which the mobile device is located. Unauthorized software may also take screen captures of what is displayed on the screen. Depending on the timing of such screen captures, sensitive information may be exposed to anyone who views them.
Monitoring the mobile device conditions and events at the high level for screen hardware may aid in determining, for example, whether unauthorized tracking of the mobile device is occurring, or whether unauthorized recording of the information being displayed on the screen is occurring.[0189] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for browser supporting hardware include the number of times and/or when JavaScript statistics are read or sensors are accessed. JavaScript statistics may include CPU and memory usage. Much like other instances of CPU and memory information, these statistics may be used by unauthorized software to determine when to use the CPU and memory to minimize chances of detection by using these resources when they are only managing a lighter load and have little impact on the performance of the mobile device. The sensors of the mobile device (e.g., the camera, an accelerometer, a gyroscope, and the like) may be accessed via the Internet through the browser. The information captured by the sensors may be offloaded to a cloud server through the browser as well. Monitoring the mobile device conditions and events at the high level for browser supporting hardware may aid in determining, for example, whether unauthorized software is being run on the mobile device or whether unauthorized access of the sensors on the mobile device is occurring.[0190] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for storage hardware include the number of times and/or when reads from and writes to the storage device occur. Unauthorized software may read sensitive information from the storage device of the mobile device. The unauthorized software may also write harmful code to or overwrite, thus deleting, data from the storage device.
Monitoring the mobile device conditions and events at the high level for storage hardware may aid in determining, for example, whether unauthorized software is manipulating the storage device of the mobile device, or getting unauthorized access to the data stored on the storage device.[0191] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for inertia sensor components include the number of times and/or when readings of accelerometer data occur. For example, inertia sensor components (e.g., an accelerometer) in the mobile device may detect whenever the mobile device is moved. Certain movements may invoke certain functions of the mobile device, or may be correlated with subsequent functions of the mobile device. For example, a certain gesture may be used to unlock or wake-up the mobile device, or initiate a data transfer to another device. Similarly, a certain movement may commonly occur before a particular function is invoked. For example, the mobile device suddenly moving in a substantially vertical direction may be indicative of a user picking up the mobile device for use, and may be commonly followed by unlocking the mobile device. 
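The movement-then-function correlation just described (e.g., a sudden vertical movement commonly followed by unlocking the device) may be sketched as follows. The event names, timestamps, and the two-second window are illustrative assumptions:

```python
# Hypothetical sketch: estimate how often a given inertia-sensor motion
# is followed, within a short window, by a given function of the device.
def correlate_motion_with_function(events, motion="vertical_lift",
                                   function="unlock", window=2.0):
    """events: time-ordered list of (timestamp_seconds, event_name)."""
    pairs = 0
    motions = 0
    for i, (t, name) in enumerate(events):
        if name != motion:
            continue
        motions += 1
        # Did the function occur within the window after the motion?
        if any(n == function and 0 < t2 - t <= window
               for t2, n in events[i + 1:]):
            pairs += 1
    return pairs / motions if motions else 0.0

log = [(0.0, "vertical_lift"), (1.2, "unlock"),
       (10.0, "vertical_lift"), (10.9, "unlock"),
       (20.0, "vertical_lift"), (30.0, "screen_off")]
print(correlate_motion_with_function(log))  # 2 of 3 lifts followed by unlock
```

A high correlation ratio would tell an observer module that seeing the motion alone is a useful cue to begin monitoring the function that is likely to follow.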
The inertia information may be used to recreate the movements that invoke a function, or to indicate to the observer module to monitor a feature of the device in response to a specific movement in order to glean more information from correlating the movement and the function that is likely to follow. Monitoring the mobile device conditions and events at the high level for inertia sensor components may aid in determining, for example, whether unauthorized function calls are occurring, or unauthorized recordings of actions are occurring.[0192] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for synchronization hardware include the number of times and when changes to the synchronization settings occur. Unauthorized software may modify synchronization settings, such as the destination server, black-listed and white-listed servers, and location and network settings. Changes to the synchronization settings may direct the synchronization procedures to send data to an unauthorized destination, reduce the protection level of the data being transmitted, or cause synchronization errors. Monitoring the mobile device conditions and events at the high level for synchronization hardware may aid in determining, for example, whether data may be compromised by transmitting to the unauthorized destination or transmitting the data in a less secure format, or whether synchronization procedures are failing.[0193] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for dual SIM hardware include the number of times and/or when information flows between secure and unsecure SIM cards occurs. Mobile devices may contain multiple SIM cards for different purposes.
For example, a mobile device may have an unsecure SIM card formatted for regular use of the communication features of the mobile device, and a secure SIM card to provide greater security for transmission and storage of sensitive information. A secure SIM card may invoke encrypting data transmitted from and stored on the mobile device, and invoke decrypting data received by the mobile device. Mobile devices that use secure SIM cards often transmit data to other secure devices with secure SIM cards. The transfer of data from the secure SIM card to the unsecure SIM card may be less common, because the data may then be more easily accessed by an unauthorized party. The number of times data transmissions occur from the secure SIM card to the unsecure SIM card may be indicative of unauthorized transfers of secure data. One such indicator is the number of times the secure SIM card places a call to the unsecure SIM card. The secure and unsecure SIM cards may also be on different mobile devices. Monitoring the mobile device conditions and events at the high level for dual SIM hardware may aid in determining, for example, whether unauthorized transfers of data are occurring.[0194] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for radio interface hardware include the number of times, when, and which radio interfaces are active, and the correlation of traffic statistics across the radio interfaces. Unauthorized software may activate various radio interfaces on the mobile device to execute unauthorized data transfers or to locate the mobile device. Also, the mobile device may be used by unauthorized software to make multiple repeated requests for access to or to communicate with a remote server or other device as part of a denial-of-service (DOS) or distributed denial-of-service (DDOS) attack.
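The secure-to-unsecure transfer count described for dual SIM hardware in paragraph [0193] may be sketched as follows. The flow labels and the small threshold are illustrative assumptions; the premise from the text is only that such transfers are expected to be rare:

```python
# Hypothetical sketch: count data flows from a secure SIM context to an
# unsecure one and flag counts above a small assumed threshold, since
# frequent secure-to-unsecure transfers may indicate unauthorized
# movement of secure data.
def count_secure_to_unsecure(flows, threshold=3):
    """flows: list of (source, destination) labels for observed flows."""
    n = sum(1 for src, dst in flows
            if src == "secure_sim" and dst == "unsecure_sim")
    return n, n > threshold

flows = ([("secure_sim", "secure_peer")] * 5 +    # expected traffic
         [("secure_sim", "unsecure_sim")] * 4)    # unusual transfers
count, flagged = count_secure_to_unsecure(flows)
print(count, flagged)  # 4 True
```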
The correlation of the traffic statistics across the radio interfaces may show when a radio interface has a high level of traffic compared to the other radio interfaces. Greater disparities in traffic levels may occur at different times, such as when the mobile device is otherwise idle and the other radio interfaces have little traffic. A high level of traffic on a particular radio interface may be indicative of unauthorized use of the radio interface as part of some such attack. Monitoring the mobile device conditions and events at the high level for radio interface hardware may aid in determining, for example, whether unauthorized software is causing unauthorized data transmissions or involving the mobile device in an attack on another device.[0195] Examples of mobile device conditions and events at a high level that may be selected in block 902 and observed in block 904 for features unrelated to any specific hardware include the number of times and when: a motion state or a non-motion state is read; a combination of location information and Bluetooth or NFC information are accessed; a connectivity state is checked; microphone functionality is accessed or used; a combination of a camera and communication functions are used; communication NFC details are accessed; and a combination of no prior user interaction and a camera or microphone function are used. Motion state information could be used to determine whether the mobile device is moving, and potentially its speed. For example, a slow rate of movement may indicate that a user is standing with the mobile device because while standing the user may make slow and short movements. Faster movements may indicate that the user of the mobile device may be walking, driving, flying, etc. Similarly, a non-motion state may indicate a relative lack of movement of the mobile device.
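The cross-interface traffic correlation described in paragraph [0194] may be sketched as follows. The interface names, byte counts, and the 0.9 dominance threshold are illustrative assumptions:

```python
# Hypothetical sketch: when the device is otherwise idle, report any
# radio interface carrying a dominant share of the total traffic, which
# may indicate unauthorized use of that interface (e.g., in a DOS/DDOS
# attack or an unauthorized data transfer).
def dominant_interface(traffic_bytes, device_idle, share_threshold=0.9):
    total = sum(traffic_bytes.values())
    if not device_idle or total == 0:
        return None  # disparity is only meaningful while the device is idle
    for iface, b in traffic_bytes.items():
        if b / total >= share_threshold:
            return iface
    return None

stats = {"wifi": 120, "cellular": 50_000, "bluetooth": 30}
print(dominant_interface(stats, device_idle=True))  # cellular dominates
```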
For example, infrequent or lack of movement of the mobile device may indicate that the mobile device is placed on a table or in a docking device, in a pocket or holster of a user who is staying relatively stationary, such as sitting in a chair. Tracking access to this information may aid in determining, for example, whether unauthorized access is occurring to potentially gather information on the movements of the mobile device.[0196] The location or Bluetooth or NFC information alone may be used to identify the location of the mobile device, but the combination of location information and Bluetooth or NFC information may be used to determine the location of the mobile device with increased accuracy. Multiple sources of information to determine the location may be used to determine the correctness of one or more of the information sources, or used in combination to locate the mobile device in an area and then used to further pinpoint the device within the area. For example, location information may be less accurate in a shopping mall than out in the open where multiple cell towers and GPS satellites may be observed, and it may not be possible to identify a vertical position of the mobile device from the location information that may be gleaned from a mobile device within a mall. However, knowing the general location of the mobile device, Bluetooth or NFC information may indicate that a connection to a network has been established by the mobile device via a transceiver within certain stores. The combination of knowing that the mobile device is generally in the shopping mall and that the mobile device is connected to a network belonging to a particular store may allow the mobile device to be located with precision within the shopping mall, possibly by comparing the information from the mobile device with information about the location. 
Tracking access to this information may aid in determining, for example, whether unauthorized access is occurring, potentially to determine the location of the mobile device.[0197] The connectivity state may indicate when the mobile device attempts to or is connected to a network. This information may be used to locate the mobile device, track the data transmitted over the network connection to and from the mobile device, and to transmit data over the network connection. The connectivity state may also indicate the types of communication that the network the mobile device is attempting to connect to, or is connected to, supports, such as cellular, WiFi, Bluetooth, SMS, or any other type of communication with which the mobile device has the necessary radio transceivers. The mobile device location may be determined based on the coverage of the network to which the mobile device is connected, and a series of connections may be used to track the movements of the mobile device over time. The data transmitted to and from the mobile device may be tracked when a connection state indicates an attempt to connect or a connection to a network, as the connection state may trigger software to begin unauthorized monitoring of the data being sent and received via the connection. Similarly, the connection state may prompt software to use the connection to make unauthorized transmissions and receptions of data over the connection. Tracking access to the connection state may aid in determining, for example, whether unauthorized tracking, monitoring of data transmission, or use of the connection is occurring.[0198] The microphone functionality may be used to record sounds directed to or in the environment around the mobile device. The sound recordings may be used to store conversations, identify participants of the conversations, or echolocate the mobile device within its environment. The microphone functionality may be subject to unauthorized use or monitoring when legitimately used.
Tracking access or use of the microphone functionality of the mobile device may aid in determining, for example, whether unauthorized monitoring of the sound captured by the microphone is occurring.[0199] The combination of the camera function and the communication function usage may be used to capture unauthorized light sensing or image data, which may be transmitted to a destination external to the mobile device, like another mobile device or a cloud server. This data may be used to locate the mobile device; for example, by analyzing the light sensing data, either alone or in combination with other data, the mobile device may be determined to be located in the user's pocket, indoors, outdoors, etc. Image analysis may also be used to locate the mobile device. The images may also be stored on a device external to the mobile device. Monitoring the use of the camera and communication functions may aid in determining, for example, whether unauthorized use of these functions is occurring.[0200] The communication NFC details may be closely related to electronic commerce information. Access to the communication NFC details may be used to identify retailers and where, when, how, and what purchases are made. It may also be used to access sensitive information about the authorizations for making the purchases that could be used to make unauthorized purchases. Similarly, communication NFC details may indicate check-ins at secure areas, and identify locations, times, and authorizations for those check-ins. Tracking the access of the communication NFC details may aid in determining, for example, whether unauthorized monitoring of sensitive information communicated over NFC is occurring.[0201] As described above, camera or microphone functions may be used for numerous unauthorized uses.
The combination of the lack of user interaction with the mobile device just before camera or microphone functions are used is an unlikely combination of events in view of normal user interaction with the mobile device. This is because users typically interact with the mobile device through a user interface on the mobile device to initiate the camera or microphone functions. Even in instances of sensor-triggered use of these functions, such as a motion or sound detection setting, which may be suspended or idle for periods of time, user interaction would likely be required to initially set up the use of these settings. By tracking the combination of the use of the camera or microphone functions and the lack of user interaction with the mobile device just before these camera or microphone functions are used, a mobile device may identify the unauthorized use of these functions.[0202] FIG. 9B illustrates another example method 910 for performing dynamic and adaptive observations in accordance with an aspect. In block 912, the mobile device processor may perform coarse observations by monitoring/observing a subset of a large number of factors/behaviors that could contribute to the mobile device's degradation. In block 913, the mobile device processor may generate a behavior vector characterizing the coarse observations and/or the mobile device behavior based on the coarse observations. In block 914, the mobile device processor may identify subsystems, processes, and/or applications associated with the coarse observations that may potentially contribute to the mobile device's degradation. This may be achieved, for example, by comparing information received from multiple sources with contextual information received from sensors of the mobile device. In block 916, the mobile device processor may perform behavioral analysis operations based on the coarse observations.
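The camera-or-microphone-without-prior-interaction combination described in paragraph [0201] may be sketched as follows. The event names, timestamps, and the 30-second interaction window are illustrative assumptions:

```python
# Hypothetical sketch: flag camera or microphone activations that are
# not preceded by any user interaction within a recent window, an
# unlikely combination under normal use of the device.
def unattended_activations(events, window=30.0):
    """events: time-ordered list of (timestamp_seconds, event_name)."""
    last_interaction = None
    flagged = []
    for t, name in events:
        if name == "user_interaction":
            last_interaction = t
        elif name in ("camera_on", "microphone_on"):
            if last_interaction is None or t - last_interaction > window:
                flagged.append((t, name))  # no recent user interaction
    return flagged

log = [(0.0, "user_interaction"), (5.0, "camera_on"),
       (200.0, "microphone_on")]
print(unattended_activations(log))  # only the microphone activation
```

Here the camera activation follows a user interaction within the assumed window and is treated as normal, while the microphone activation long after any interaction is flagged.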
In determination block 918, the mobile device processor may determine whether suspicious behaviors or potential problems can be identified and corrected based on the results of the behavioral analysis. When the mobile device processor determines that the suspicious behaviors or potential problems can be identified and corrected based on the results of the behavioral analysis (i.e., determination block 918 = "Yes"), in block 928, the processor may initiate a process to correct the behavior and return to block 912 to perform additional coarse observations. [0203] When the mobile device processor determines that the suspicious behaviors or potential problems cannot be identified and/or corrected based on the results of the behavioral analysis (i.e., determination block 918 = "No"), in determination block 919 the mobile device processor may determine whether there is a likelihood of a problem. In an embodiment, the mobile device processor may determine that there is a likelihood of a problem by computing a probability of the mobile device encountering potential problems and/or engaging in suspicious behaviors, and determining whether the computed probability is greater than a predetermined threshold. When the mobile device processor determines that the computed probability is not greater than the predetermined threshold and/or there is not a likelihood that suspicious behaviors or potential problems exist and/or are detectable (i.e., determination block 919 = "No"), the processor may return to block 912 to perform additional coarse observations.[0204] When the mobile device processor determines that there is a likelihood that suspicious behaviors or potential problems exist and/or are detectable (i.e., determination block 919 = "Yes"), in block 920, the mobile device processor may perform deeper logging/observations or final logging on the identified subsystems, processes or applications.
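The coarse-to-deep observation loop of method 910 (blocks 912-928) may be sketched as follows, under assumed interfaces: observe() returns a behavior vector at a given observation depth, and analyze() returns a verdict. Both function names, the verdict labels, and the depth limit are illustrative assumptions:

```python
# Hypothetical sketch of the adaptive observation loop: start with
# coarse observations, and deepen the level of observation detail each
# time the analysis says a problem is likely but not yet identifiable.
def adaptive_observation(observe, analyze, max_depth=3):
    depth = 0
    while depth <= max_depth:
        vector = observe(depth)           # coarse at depth 0, deeper after
        verdict = analyze(vector, depth)  # "identified", "likely", "benign"
        if verdict == "identified":
            return ("corrected", depth)   # block 928: correct the behavior
        if verdict == "benign":
            return ("no_problem", depth)  # return to coarse observations
        depth += 1                        # "likely": perform deeper logging
    return ("unresolved", max_depth)

# Toy stand-ins: the problem only becomes identifiable at depth 2.
def observe(depth): return {"depth": depth}
def analyze(vector, depth): return "identified" if depth >= 2 else "likely"
print(adaptive_observation(observe, analyze))  # ('corrected', 2)
```

The point of the structure is the one made in the text: detail is added only when the coarser analysis justifies it, so most of the time the device pays only for coarse observation.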
In block 922, the mobile device processor may perform deeper and more detailed observations on the identified subsystems, processes or applications. In block 924, the mobile device processor may perform further and/or deeper behavioral analysis based on the deeper and more detailed observations. In determination block 918, the mobile device processor may again determine whether the suspicious behaviors or potential problems can be identified and corrected based on the results of the deeper behavioral analysis. When the mobile device processor determines that the suspicious behaviors or potential problems cannot be identified and corrected based on the results of the deeper behavioral analysis (i.e., determination block 918 = "No"), the processor may repeat the operations in blocks 920-924 until the level of detail is fine enough to identify the problem or until it is determined that the problem cannot be identified with additional detail or that no problem exists.[0205] When the mobile device processor determines that the suspicious behaviors or potential problems can be identified and corrected based on the results of the deeper behavioral analysis (i.e., determination block 918 = "Yes"), in block 928, the mobile device processor may perform operations to correct the problem/behavior, and the processor may return to block 912 to perform additional operations.[0206] In an aspect, as part of blocks 912-928 of method 910, the mobile device processor may perform real-time behavior analysis of the system's behaviors to identify suspicious behavior from limited and coarse observations, to dynamically determine the behaviors to observe in greater detail, and to dynamically determine the precise level of detail required for the observations. This enables the mobile device processor to efficiently identify and prevent problems from occurring, without requiring the use of a large amount of processor, memory, or battery resources on the device.[0207] FIG.
10 illustrates an example observer method 1000 for performing dynamic and adaptive observations on a mobile device processor in accordance with an aspect. The observer method 1000 may be implemented as part of an observer module in the mobile device's kernel space, user space, or a combination thereof. In block 1002, the observer module operating on the processor may receive data, control, and/or context information from various sources, which may include an analyzer unit (e.g., analyzer module 204 described in FIG. 2), application APIs, Driver APIs, kernel threads, user threads, processes, programs, mobile device sensors, etc. In block 1004, the observer module operating on the processor may adaptively and intelligently filter the received information to generate a smaller subset of the received information. In block 1006, the observer module operating on the processor may throttle the filtered information to control/prevent flooding or overloading. In block 1008, the observer module operating on the processor may perform spatial and temporal correlations to detect/identify high level behaviors that may cause the device to perform at sub-optimal levels. In block 1010, the observer module operating on the processor may generate a behavior vector that describes the behaviors of a particular process, application, or sub-system. In block 1012, the observer module operating on the processor may store the generated behavior vector in a secure buffer.[0208] FIG. 11A illustrates another example method 1100 for performing dynamic and adaptive observations by a mobile device processor in accordance with another aspect. In block 1102, the mobile device processor may dynamically identify the relevant behaviors that are to be observed on the mobile device. In block 1104, the mobile device processor may dynamically determine the level of detail at which the identified behaviors are to be observed.
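The observer pipeline of blocks 1002-1012 may be sketched end to end as follows. The relevance predicate, the throttle cap, the behavior-vector layout, and the plain-list "secure buffer" are all illustrative assumptions standing in for the filtering, throttling, correlation, and secure storage the text describes:

```python
# Hypothetical sketch of the observer pipeline: receive events, filter
# to a relevant subset, throttle to a cap, fold the remainder into a
# simple behavior vector, and store the vector in a buffer.
def observer_pipeline(events, relevant, throttle_cap=100):
    filtered = [e for e in events if relevant(e)]   # block 1004: filter
    throttled = filtered[:throttle_cap]             # block 1006: throttle
    vector = {}                                     # blocks 1008-1010
    for source, value in throttled:
        vector[source] = vector.get(source, 0) + value
    secure_buffer = [vector]                        # block 1012: store
    return secure_buffer

events = [("camera", 1), ("camera", 1), ("noise", 9), ("radio", 3)]
buf = observer_pipeline(events, relevant=lambda e: e[0] != "noise")
print(buf)  # [{'camera': 2, 'radio': 3}]
```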
In optional block 1106, the mobile device processor may dynamically adapt to what is being observed. In optional block 1108, the mobile device processor may dynamically change or update the parameters, factors, behaviors, processes, applications, and/or subsystems that are to be observed. The operations of blocks 1102-1108 may be repeated continuously or as necessary to improve the mobile device performance (e.g., battery power consumption, processing speed, network communication speeds, etc.).

[0209] FIG. 11B illustrates an aspect method 1110 that may be performed as part of the operations of block 1102 described above with reference to FIG. 11A. In order to dynamically identify relevant behaviors, the mobile device processor may observe any of the mobile device behaviors described above over a period of time in block 1112. This observation may be for a set period of time or may be cumulative, such as in a continuous learning process. Thus, the longer that the mobile device operates, the more behavioral observations may be collected. In block 1114, the processor may identify inconsistent behaviors of the mobile device, which may be indicative of a performance-limiting condition. This may include performing any of the methods described herein. The inconsistent behaviors may be suspicious or potentially performance-degrading mobile device behaviors.

[0210] In block 1116, the mobile device processor may correlate the observed mobile device behaviors with the identified inconsistent behaviors in order to identify correlations or patterns. For example, the processor may identify those observed mobile device behaviors that occur only during or immediately before identified inconsistent behaviors. As another example, the processor may identify those observed mobile device behaviors that occur frequently (though not necessarily always) during or immediately before identified inconsistent behaviors.
As a further example, the processor may identify sets of observed behaviors which only or frequently occur together when inconsistent behaviors are identified. In block 1118, the processor may select mobile device behaviors for observation from among the subset of behaviors that the processor has identified as associated or correlated with inconsistent behaviors. Thus, the selection of mobile device behaviors for observation may be dynamic, and the selection process may improve over time as more mobile device behaviors are observed and more inconsistent behaviors are identified. In this manner, the longer the mobile device operates, the better the processor may be able to identify those few behaviors that are most closely correlated or associated with inconsistent or undesirable behaviors. That is, the longer that the mobile device processor observes these mobile device behaviors, the more accurate its classifications of suspicious or potentially performance-degrading mobile device behaviors become.

[0211] FIG. 11C illustrates an aspect method 1120 that may be performed as part of the operations of block 1116 described above with reference to FIG. 11B. As part of the process of identifying correlations between observed mobile device behaviors and inconsistent behaviors, the processor may receive behavior inputs from one or more of a high-level application, the system kernel, and a driver API in block 1122. In an embodiment, these inputs may first be filtered in optional block 1121 by an adaptive filter that screens out those inputs that the processor can determine are not associated with suspicious or inconsistent behaviors.

[0212] In block 1124, the processor may receive context information regarding ongoing operations of the mobile device as described above. In block 1126, the processor may perform correlations (e.g., spatial correlations, etc.) of the received behavior inputs and the received context information as described above.
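The co-occurrence-based selection described in blocks 1114-1118 above, keeping for continued observation only those behaviors that occur exclusively or frequently alongside identified inconsistent behaviors, could be sketched roughly as follows. This is a minimal illustration only: the function name, the log format, and the behavior names are hypothetical and do not appear in the specification.

```python
from collections import Counter

def select_behaviors_for_observation(observation_log, min_support=0.5):
    """Select behaviors that frequently co-occur with inconsistent behavior.

    observation_log: list of (observed_behaviors, was_inconsistent) pairs,
    where observed_behaviors is a set of behavior names recorded during or
    immediately before one observation window.
    """
    co_occurrence = Counter()   # times each behavior accompanied an inconsistency
    inconsistent_windows = 0

    for behaviors, was_inconsistent in observation_log:
        if was_inconsistent:
            inconsistent_windows += 1
            co_occurrence.update(behaviors)

    if inconsistent_windows == 0:
        return set()  # nothing inconsistent observed yet

    # Keep behaviors present in at least min_support of the inconsistent
    # windows; the threshold can tighten as more observations accumulate.
    return {b for b, n in co_occurrence.items()
            if n / inconsistent_windows >= min_support}
```

With `min_support` raised toward 1.0, only behaviors present in every inconsistent window survive, mirroring the distinction the text draws between behaviors that occur "only" versus "frequently (though not necessarily always)" alongside inconsistent behavior.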
Optionally, in optional block 1128, the processor may also perform additional correlations (e.g., temporal correlations) of the received behavior inputs and the received context information in order to identify those observed behaviors that are related. For example, the processor may perform temporal correlations to identify behaviors that are related in time (e.g., preceding closely in time versus simultaneous) with inconsistent behaviors. Using this information, the processor may generate a behavior vector that succinctly describes the observed mobile device behaviors in block 1130 as described above. Such a behavior vector may include information collected from APIs at various operational software levels and from various software/hardware modules of the mobile device.

[0213] A behavior vector generated in block 1130 may include, for example, information related to one or more of library API calls, system calls, file-system and network sub-system operations, sensor device state changes, file system activity, network activity, telephone activity, memory access operations, a state of the mobile device, a power on/off state of an electronic display, a locked/unlocked state of the mobile device, the amount of battery power remaining, inter-process communications (IPC), driver statistics, and hardware counters.

[0214] A behavior vector generated in block 1130 may have a vector data structure that includes a series of numbers, each of which signifies a feature or behavior of the mobile device.
Such numbers may include binary flags (i.e., a single bit having a value of either 1 or 0), such as a flag indicating whether a camera of the mobile device is in use, and counter values, such as the amount of network traffic that has been generated by the mobile device or the number of Internet messages that have been sent by the mobile device within a period of time.

[0215] A behavior vector generated in block 1130 may also include one or more of call information, text messaging information, media messaging information, user account information, location information, camera information, inertia sensor information, and browser information. As discussed above, the information used to generate the behavior vector may include information collected at an application level of the mobile device, at a radio level of the mobile device, at a sensor level of the mobile device (e.g., a camera or microphone), at a hardware level, at a driver level, and at a high level.

[0216] Example components and modules of an exemplary, non-limiting aspect of such a mobile device 102 are illustrated in FIG. 12. The mobile computing device 102 may include a circuit board 1202 of electronic components, some or all of which may be integrated into an on-chip system, that includes a control processor 1201 coupled to memory 1204. The control processor 1201 may further be coupled to a digital signal processor 1206 and/or an analog signal processor 1208, which may also be coupled together. In some embodiments, the control processor 1201 and the digital signal processor 1206 may be the same component or may be integrated into the same processor chip. A display controller 1210 and a touchscreen controller 1212 may be coupled to the control processor 1201 and to a display/touchscreen 1214 within or connected to the mobile computing device 102.
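Returning to the behavior vector of paragraphs [0213]-[0215], one minimal sketch of such a fixed-layout numeric vector, mixing binary flags with counter values, might look like the following. The field names and vector layout are hypothetical placeholders chosen for illustration; the specification does not define a particular encoding.

```python
def build_behavior_vector(observations):
    """Flatten an observed-state dict into a fixed-position numeric vector.

    Each position holds one feature: binary flags for on/off states and
    counters for quantities such as network traffic. Field names here are
    illustrative only, not taken from the specification.
    """
    return [
        1 if observations.get("camera_in_use") else 0,          # binary flag
        1 if observations.get("display_on") else 0,             # binary flag
        int(observations.get("network_bytes_sent", 0)),         # counter
        int(observations.get("messages_sent", 0)),              # counter
        int(observations.get("battery_percent_remaining", 0)),  # counter
    ]
```

Because each feature always occupies the same position, vectors collected over time can be compared element-by-element by the analyzer, e.g. `build_behavior_vector({"camera_in_use": True, "network_bytes_sent": 4096})` yields `[1, 0, 4096, 0, 0]`.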
[0217] The control processor 1201 may also be coupled to removable memory 1216 (e.g., an SD memory or SIM card in the case of mobile computing devices) and/or to external memory 1218, such as one or more of a disk drive, CD drive, and a DVD drive. The control processor 1201 may also be coupled to a Universal Serial Bus (USB) controller 1220 which couples to a USB port 1222. Other devices (not shown) may be coupled to the control processor 1201 through the USB port 1222 and USB controller 1220. For example, an external microphone (not shown) may be coupled to the control processor 1201 via the USB port 1222 and USB controller 1220. The various aspects may include monitoring of processes involving external hardware via the USB port 1222 and USB controller 1220.

[0218] In various aspects, a power supply 1221 may be coupled to the circuit board 1202 through the USB controller 1220 or through different electrical connections to provide power (e.g., DC power) to the various electronic components.

[0219] The control processor 1201 may also be coupled to a video encoder 1224, e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder. Further, the video encoder 1224 may be coupled to a video amplifier 1226, which may in turn be coupled to the display/touchscreen 1214. Also, a video port 1228 may be coupled to the video amplifier 1226 to enable connecting the mobile computing device 102 to an external monitor, television or other display (not shown).

[0220] The control processor 1201 may be coupled to a radio frequency interface hardware component 1230, such as via the analog signal processor 1208. The radio interface hardware component 1230 may be coupled to an RF antenna 1218 for transmitting and receiving RF signals. In the example illustrated in FIG.
12, a single radio interface hardware component 1230 is configured to support multiple different RF technologies and protocols. For example, the radio interface hardware component 1230 may be a multifunction radio module that is configured to support RF communications over multiple frequencies, networks and protocols, including, for example, cellular telephone (e.g., 3G, UMTS, CDMA, etc.), WiFi, WiMax, Near Field Communication (NFC), and Bluetooth, or a subset of those example protocols.

[0221] While FIG. 12 shows a single radio interface hardware component 1230, multiple different types of radio interface hardware components and/or multifunction RF transceivers may be coupled to the control processor 1201 in order to transmit and receive communication signals of a number of different wireless communication protocols including, for example, cellular telephone (e.g., 3G, UMTS, CDMA, etc.), WiFi, WiMax, Near Field Communication (NFC), and Bluetooth. Also, the control processor 1201 may be coupled to external hardware (e.g., Bluetooth headsets or microphones) and to external systems (e.g., a point of sale device via an NFC RF transceiver), as well as to Internet servers and systems, via the radio interface hardware component(s) 1230 and RF antenna 1218. The various aspects may include monitoring of processes involving external hardware, systems and services connected via the radio interface hardware component(s) 1230 and RF antenna 1218.

[0222] The control processor 1201 may further be coupled to a network card 1232 which may be coupled to a network connector 1231 and/or the RF transceiver 1230 and configured to enable communications via an external network (e.g., local area networks, the Internet, an intranet, WiFi networks, Bluetooth networks, personal area networks (PANs), etc.). The network card 1232 may be in the form of a separate chip or card, or may be implemented as part of the control processor 1201 or the RF transceiver 1230 (or both) as a full-solution communication chip.
[0223] A number of analog devices may be coupled to the control processor 1201 via the analog signal processor 1208, such as a keypad 1234. In other implementations, a keypad or keyboard may include its own processor so that the interface with the control processor 1201 may be via direct connection (not shown), via a network connection (e.g., via the network card), or via the USB port 1222.

[0224] In some implementations, a digital camera 1236 may be coupled to the control processor 1201. In an exemplary aspect, the digital camera 1236 may be a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera. The digital camera 1236 may be built into the mobile computing device 102 or coupled to the device by an external cable.

[0225] In some implementations, an audio CODEC 1238 (e.g., a stereo CODEC) may be coupled to the analog signal processor 1208 and configured to send sound signals to one or more speakers 1240 via an audio amplifier 1242. The audio CODEC 1238 may also be coupled to a microphone amplifier 1244 which may be coupled to a microphone 1246 (e.g., via a microphone jack). A headphone jack 1248 may also be coupled to the audio CODEC 1238 for outputting audio to headphones.

[0226] In some implementations, the mobile computing device 102 may include a separate RF receiver circuit 1250 which may be coupled to an antenna 1252 for receiving broadcast wireless communication signals. The receiver circuit 1250 may be configured to receive broadcast television signals (e.g., EBMS broadcasts), and provide received signals to the DSP 1206 for processing.
In some implementations, the receiver circuit 1250 may be configured to receive FM radio signals, in which case the received signals may be passed to the audio CODEC 1238 for processing.

[0227] In an aspect, processor-executable instructions for accomplishing one or more of the method operations described above may be stored in the internal memory 1204, removable memory 1216 and/or non-volatile memory 1218 (e.g., on a hard drive, CD drive, or other storage accessible via a network). Such processor-executable instructions may be executed by the control processor 1201 in order to perform the methods described herein.

[0228] The various aspects may be implemented on a variety of mobile computing devices, an example of which is illustrated in FIG. 13 in the form of a smartphone. A smartphone 1300 may include a processor 1301 coupled to internal memory 1302, a display 1303, and a speaker. Additionally, the smartphone 1300 may include an antenna 1304 for sending and receiving electromagnetic radiation, which may be connected to a wireless data link and/or cellular telephone transceiver 1305 coupled to the processor 1301. The smartphone 1300 typically also includes menu selection buttons or rocker switches 1306 for receiving user inputs.

[0229] A typical smartphone 1300 also includes a sound encoding/decoding (CODEC) circuit 1312, which digitizes sound received from a microphone into data packets suitable for wireless transmission and decodes received sound data packets to generate analog signals that are provided to the speaker to generate sound. Also, one or more of the processor 1301, wireless transceiver 1305 and CODEC 1312 may include a digital signal processor (DSP) circuit (not shown separately).
As mentioned above, the processor 1301 may also be coupled to external hardware through a data network wireless transceiver 1307, such as a WiFi transceiver, a Bluetooth transceiver or an NFC transceiver.

[0230] Portions of the aspect methods may be accomplished in a client-server architecture, with some of the processing occurring in a server, such as maintaining databases of normal operational behaviors, which may be accessed by a mobile device processor while executing the aspect methods. Such aspects may be implemented on any of a variety of commercially available server devices, such as the server 1400 illustrated in FIG. 14. Such a server 1400 typically includes a processor 1401 coupled to volatile memory 1402 and a large capacity nonvolatile memory, such as a disk drive 1403. The server 1400 may also include a floppy disc drive, compact disc (CD) or DVD disc drive 1411 coupled to the processor 1401. The server 1400 may also include network access ports 1404 coupled to the processor 1401 for establishing data connections with a network 1405, such as a local area network coupled to other broadcast system computers and servers.

[0231] The processors 1301, 1401 may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various aspects described above. In some mobile devices, multiple processors 1301 may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory 1302, 1402, 1403 before they are accessed and loaded into the processor 1301, 1401.
The processor 1301, 1401 may include internal memory sufficient to store the application software instructions.

[0232] Many mobile computing device operating system kernels are organized into a user space (where non-privileged code runs) and a kernel space (where privileged code runs). This separation is of particular importance in Android® and other general public license (GPL) environments, where code that is part of the kernel space must be GPL licensed, while code running in the user space need not be GPL licensed. It should be understood that the various software components/modules discussed here may be implemented in either the kernel space or the user space, unless expressly stated otherwise.

[0233] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various aspects must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing aspects may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the," is not to be construed as limiting the element to the singular.

[0234] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

[0235] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.

[0236] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium.
Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

[0237] The preceding description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
A semiconductor chip having a plurality of flash memory devices, shallow trench isolation in the periphery region, and LOCOS isolation in the core region. A hard mask is used first to create the shallow trench isolation. The LOCOS isolation is then created. Subsequent etching is used to remove stringers. The flash memory is able to use shallow trench isolation to limit encroachment. The flash memory may also have a nitridated tunnel oxide barrier layer. A hard mask is used to prevent nitride contamination of the gate oxide layer. Periphery stacks have gate oxide layers of different thicknesses.
1. A flash memory chip, comprising:
a semiconductor substrate having a core region and a periphery region;
at least one shallow trench isolation (STI) formed in the periphery region only of said substrate;
at least one local oxidation of silicon (LOCOS) isolation formed in the core region only of said substrate;
a plurality of core memory devices formed on said core region only of the semiconductor substrate, each core memory device of the plurality of core memory devices comprising:
a nitridated tunnel oxide barrier layer formed on the surface of the substrate;
a first polysilicon layer formed on the nitridated tunnel oxide barrier layer;
an interpoly dielectric layer formed on the first polysilicon layer; and
a second polysilicon layer formed on the interpoly dielectric layer; and
a plurality of periphery memory devices formed on the periphery region only of the semiconductor substrate, each periphery memory device of the plurality of periphery memory devices comprising:
a gate oxide layer formed on the surface of the substrate, the gate oxide layer being un-nitridated, and part of the substrate surface under the gate oxide layer being un-nitridated; and
a first polysilicon layer formed over the gate oxide layer.

2. A flash memory chip, as recited in claim 1, wherein some of the gate oxide layers of the plurality of periphery memory devices are thicker than other gate oxide layers of the plurality of periphery memory devices.
FIELD OF THE INVENTION

The present invention relates to nonvolatile memory devices. Even more particularly, the present invention relates to flash memory utilizing periphery and core stacks.

BACKGROUND OF THE INVENTION

Memory devices such as flash memory or electrically erasable programmable read only memory (EEPROM) are known. Memory devices such as flash memory comprise core stacks, which hold the erasable programmable data, and periphery stacks, which are used to program the core stacks. Manufacturing periphery stacks and core stacks on the same chip is advantageous and is done in the related art. However, sometimes using local oxidation of silicon (LOCOS) on part of the flash memory and shallow trench isolation (STI) on other parts of the flash memory is desirable. For instance, where shallow trench isolation is used for the periphery stacks, corner recesses, which are detrimental to the periphery stacks, form around the shallow trench isolation. In addition, core stacks and periphery stacks require different manufacturing steps. Some of these different processing steps for the core stacks are harmful to the periphery stacks and vice versa. One example of these problems is related to the use of a nitrogen implant or other nitridation methods to improve the functionality of the tunnel oxide of the core stacks. In the related art, such a nitrogen implant tends to contaminate the gate oxide of the periphery stack, thereby diminishing the performance of the gate oxide. Manufacturing periphery stacks and core stacks on a single chip is desirable. Thus, minimizing damage to the periphery stacks and core stacks from the different processes required to manufacture the different stacks is also desirable.
Also, having periphery stacks with gate oxides of different thicknesses is a desirable condition.

BRIEF SUMMARY OF THE INVENTION

Accordingly, the present invention involves the use of successive hard masks to provide STI and LOCOS isolation on a single chip and the fabrication of a flash memory device on a substrate by using a hard mask to protect the periphery before forming a nitridated tunnel oxide. Advantages of the present invention include the capability of fabricating a plurality of semiconductor devices on a single chip, wherein some of the devices are separated by shallow trench isolation and other devices are separated by local oxidation of silicon; the capability of fabricating a flash memory with reduced contamination of the gate oxide; and the capability of fabricating a flash memory device with improved stack isolation. Other features of the present invention are disclosed or apparent in the section entitled "DETAILED DESCRIPTION OF THE INVENTION."

BRIEF DESCRIPTION OF DRAWINGS

For a fuller understanding of the present invention, reference is made to the below-referenced accompanying drawings. Reference numbers refer to the same or equivalent parts of the present invention throughout the several figures of the drawing.

(1) FIG. 1 is a cross-sectional view of a semiconductor substrate used in a preferred embodiment of the present invention.
(2) FIG. 2 is a cross-sectional view of the substrate shown in FIG. 1, with trenches.
(3) FIG. 3 is a cross-sectional view of the substrate shown in FIG. 2, with a trench oxide.
(4) FIG. 4 is a cross-sectional view of the substrate shown in FIG. 3, with corner recesses.
(5) FIG. 5 is a cross-sectional view of the substrate shown in FIG. 4, before LOCOS.
(6) FIG. 6 is a cross-sectional view of the substrate shown in FIG. 5, with LOCOS.
(7) FIG. 7 is a cross-sectional view of the substrate shown in FIG. 6, after the removal of the hard mask used for LOCOS.
(8) FIG. 8 is a cross-sectional view of the substrate shown in FIG.
7, with a hard mask layer.
(9) FIG. 9 is a cross-sectional view of the substrate shown in FIG. 8, with a tunnel oxide and first polysilicon layer.
(10) FIG. 10 is a cross-sectional view of the substrate shown in FIG. 9, with an interpoly dielectric layer.
(11) FIG. 11 is a cross-sectional view of the substrate shown in FIG. 10, with a first gate oxide layer.
(12) FIG. 12 is a cross-sectional view of the substrate shown in FIG. 11, where the first gate oxide layer is etched back.
(13) FIG. 13 is a cross-sectional view of the substrate shown in FIG. 12, after the photoresist mask has been removed.
(14) FIG. 14 is a cross-sectional view of the substrate shown in FIG. 13, with thin and thick oxide layers.
(15) FIG. 15 is a cross-sectional view of the substrate shown in FIG. 14, with periphery stacks and core stacks.

DETAILED DESCRIPTION OF THE INVENTION AND BEST MODE OF THE INVENTION

FIG. 1 is a cross-sectional view of a semiconductor substrate 10 used in a preferred embodiment of the invention. A pad oxide layer 12 is formed over a surface of the semiconductor substrate 10. A 1000-Å to 2000-Å first hard mask layer 14 is formed over the pad oxide layer 12. In the preferred embodiment of the invention, the first hard mask layer 14 is a material selected from a group consisting of silicon oxynitride (SiON), silicon nitride (Si3N4), and polysilicon (poly-Si). A photoresist mask 16 is formed over the first hard mask layer 14. Regions of the first hard mask layer 14 that are not covered by the photoresist mask 16 are etched away to form apertures 18 in the first hard mask layer 14. In this embodiment, the apertures 18 are disposed only over the periphery region and interface region of the semiconductor substrate 10.

The photoresist mask 16 is removed, and the semiconductor substrate 10 is subjected to an etch, which creates shallow trenches 20 in the semiconductor substrate 10 below the apertures 18 in the first hard mask layer 14, as shown in FIG. 2.
In the preferred embodiment, the depth of the trenches 20 into the substrate 10 surface is in a range of approximately 0.15 µm to 0.35 µm. A trench oxide 22 is formed in the trenches 20, as shown in FIG. 3.

The semiconductor substrate 10 is then subjected to an etch for removing the first hard mask layer 14, as shown in FIG. 4. In the preferred embodiment, the substrate 10 is then subjected to a cleaning step. The top of the trench oxide 22 has corner recesses 24 greater than about 50 Å deep. In the related art, such corner recesses could extend severely below the silicon surface.

A second hard mask 26, which in the preferred embodiment is about 1000 Å to 2000 Å thick, is formed over the surface of the trench oxide 22 and pad oxide 12, as shown in FIG. 5. In the preferred embodiment of the invention, the second hard mask 26 is a material selected from a group consisting of silicon oxynitride (SiON), silicon nitride (Si3N4), and polysilicon (poly-Si). A photoresist mask (not shown) is used to form apertures 28 in the second hard mask 26 over the core region and interface region of the substrate 10. The photoresist mask (not shown) is then removed. The semiconductor substrate 10 is subjected to a cleaning step to remove greater than about 30 Å of oxide.

The semiconductor substrate 10 is then subjected to a low temperature oxidation at about 1050° C. to form LOCOS oxides 30, as shown in FIG. 6. The second hard mask 26 is then removed, and the remaining oxides 12, 22, 30 are subjected to a cleaning step using hydrofluoric acid (HF) to remove any remaining stringers in the oxide, as shown in FIG. 7.
The semiconductor substrate 10 has both STI and LOCOS isolation on a single substrate 10 and is ready for the manufacture of periphery and core stacks between the LOCOS oxides 30 and the trench oxide 22.

To begin the manufacture of the periphery and core stacks, a 100-Å to 500-Å third hard mask layer 42 is placed on the pad oxide 12 over both the periphery region and core region, as shown in FIG. 8. In the preferred embodiment of the invention, the third hard mask layer 42 is a material selected from a group consisting of silicon oxynitride (SiON), silicon nitride (Si3N4), and polysilicon (poly-Si). A photoresist layer (not shown) is placed over the top surface of the third hard mask layer 42 and then etched back to form a photoresist mask 44 that does not cover the core section of the semiconductor substrate 10, as shown in FIG. 8. The trench oxide 22, pad oxide 12, and LOCOS oxide 30 are not drawn to scale so that more features may be shown in the figure.

The semiconductor substrate 10 is subjected to an etching process, which removes the third hard mask layer 42 and the pad oxide 12 over the core region, as shown in FIG. 9. The photoresist mask 44 is then removed. A tunnel oxide layer 46 is formed over the core region. The tunnel oxide layer 46 may also be formed over the third hard mask layer 42. Various methods are known for forming the tunnel oxide layer 46, such as growing an oxide layer or depositing an oxide layer. In the preferred embodiment, the tunnel oxide layer 46 is nitridated (i.e., doped with nitrogen). Various methods are known for nitridating a tunnel oxide layer, such as providing nitrous oxide (N2O) or nitric oxide (NO), or implanting nitrogen (N2) into the tunnel oxide layer. A first polysilicon layer 48 is formed over the tunnel oxide layer 46.
A photoresist mask 49 is placed over parts of the first polysilicon layer 48 over the core region.

The semiconductor substrate 10 is subjected to an etching process, which removes parts of the first polysilicon layer 48 and tunnel oxide layer 46, as shown in FIG. 10. The photoresist mask 49 is removed. An interpoly dielectric layer 50 is formed over the substrate 10, third hard mask 42, and first polysilicon layer 48. In the preferred embodiment, the interpoly dielectric layer 50 is an oxide-nitride-oxide (ONO) layer. A photoresist mask 52 is formed over the interpoly dielectric layer 50 over the core region.

The semiconductor substrate 10 is then subjected to a two-step etch that (1) first removes the portion of the interpoly dielectric layer 50 over the periphery region and (2) then removes the third hard mask 42 and the remaining pad oxide, as shown in FIG. 11. The photoresist mask 52 is then removed. The semiconductor substrate 10 is then subjected to a first thermal oxidation, which forms a first gate oxide layer 54 over the semiconductor substrate 10 in the periphery region. In the preferred embodiment, the first gate oxide layer 54 is about 100 Å thick. A photoresist mask 56 is formed over portions of the first gate oxide layer 54 in the periphery region and over the interpoly dielectric layer 50.

The parts of the first gate oxide layer 54 not covered by the photoresist mask 56 are etched away, as shown in FIG. 12. The photoresist layer 56 is then stripped away, as shown in FIG. 13, with the remaining first gate oxide layer 54 becoming the thick oxide regions 58. The semiconductor substrate 10 is then subjected to a second thermal oxidation, which forms thin oxide layers 60 in the uncovered regions of the substrate 10 and thick oxide layers 62 at the thick oxide regions 58, as shown in FIG. 14. In the preferred embodiment, the thin oxide layers 60 are 40 Å to 80 Å thick and the thick oxide layers 62 are 100 Å to 150 Å thick.
A second polysilicon layer 64 is placed on the substrate 10, the thin oxide layers 60, the thick oxide layers 62, and the interpoly dielectric layer 50. The second polysilicon layer 64 is then etched back to form periphery stacks 66 with thin gates 60, periphery stacks 68 with thick gates 62, and core stacks 70, as shown in FIG. 15.

Conventional processes are then used to complete the flash memory structure. The inventive method allows the production of periphery stacks 66 with thin gates 60 and periphery stacks 68 with thick gates 62 to provide gates having different threshold voltages. In addition, core stacks 70 with nitridated tunnel oxide layers are provided without contaminating the gate oxide layers, and the method allows the use of STI and LOCOS isolation on a single chip.

While the information herein shown and described in detail is fully capable of attaining the above-described object of the invention, it is to be understood that it is the presently preferred embodiment of the present invention and is, thus, representative of the subject matter which is broadly contemplated by the present invention; that the scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art; and that the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described preferred embodiment that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Moreover, a device or method need not address each and every problem sought to be solved by the present invention for such problem to be encompassed by the present claims.
Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."
The invention relates to built-in self-testing of a programmable vision accelerator for a system on chip. In various examples, the VPU and associated components may be optimized to improve VPU performance and throughput. For example, a VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization allowing inter-lane sharing, transposed load/store with stride parameter functionality, load with permutation and zero insertion functionality, hardware, logic, and memory layout functionality to allow two-point and two-by-two-point lookups, and a per-memory-bank load cache functionality. Further, a decoupled accelerator may be used to offload VPU processing tasks to improve throughput and performance, and a hardware sequencer may be included in a DMA system to reduce the programming complexity of the VPU and DMA systems. The DMA and the VPU may execute a VPU configuration mode that allows the VPU and the DMA to operate without a processing controller for performing dynamic region-based data movement operations.
1. A system comprising:
a direct memory access (DMA) system;
a processing controller; and
a multiple input signature register (MISR) hardware component comprising processing circuitry for:
receiving a plurality of channels of data from the DMA system based on sequencing by the processing controller;
calculating a plurality of MISR values by performing a cyclic redundancy check (CRC) calculation on each channel of the plurality of channels;
calculating a final MISR value using the plurality of MISR values;
comparing the final MISR value with a signature value; and
outputting a MISR status based at least in part on the comparison.

2. The system of claim 1, further comprising a memory, wherein the data from the DMA system comprises at least one of data retrieved from the memory using the DMA system or address data corresponding to the data retrieved from the memory.

3. The system of claim 1, wherein a seed value for each CRC calculation is programmed for each channel using a MISR control of the MISR hardware component, the MISR control being configured using the processing controller.

4. The system of claim 1, further comprising a vector processing unit (VPU), wherein the plurality of channels of data from the DMA system are retrieved from vector memory (VMEM) after the VPU processes a MISR test.

5. The system of claim 4, wherein the MISR test includes test data and test instructions, and the VPU processes the test data according to the test instructions and writes the output of the MISR test into the VMEM.

6. The system of claim 1, further comprising a safety processor, wherein the MISR status is output to the safety processor when the MISR status indicates an error or a timeout.

7.
The system of claim 1, wherein the MISR hardware component is further configured to mask one or more of the plurality of channels based at least in part on a configuration of a channel mask register of the MISR hardware component, the channel mask register being configured using the processing controller.

8. The system of claim 1, wherein the system is included in at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing deep learning operations;
a system implemented on a system on chip (SoC);
a system including a programmable vision accelerator (PVA);
a system including a vision processing unit;
a system implemented using an edge device;
a system implemented using a robot;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.

9. A multiple input signature register (MISR) hardware component comprising processing circuitry for:
receiving a plurality of channels of data from a direct memory access (DMA) system based on sequencing by a processing controller;
calculating a plurality of MISR values by performing a cyclic redundancy check (CRC) calculation on each channel of the plurality of channels;
calculating a final MISR value using the plurality of MISR values;
comparing the final MISR value with a signature value; and
outputting a MISR status based at least in part on the comparison.

10. The MISR hardware component of claim 9, wherein the data from the DMA system comprises at least one of data retrieved from memory using the DMA system or address data corresponding to the data retrieved from the memory.

11. The MISR hardware component of claim 9, wherein a seed value for each CRC calculation is programmed for each channel using a MISR control of the MISR hardware component, the MISR control being configured using the processing controller.

12.
The MISR hardware component of claim 9, wherein the plurality of channels of data from the DMA system are retrieved from a vector memory (VMEM) after a vector processing unit (VPU) processes a MISR test.

13. The MISR hardware component of claim 12, wherein the MISR test includes test data and test instructions, and the VPU processes the test data according to the test instructions and writes the output of the MISR test into the VMEM.

14. The MISR hardware component of claim 9, wherein the MISR status is output to a safety processor when the MISR status indicates an error or a timeout.

15. The MISR hardware component of claim 9, wherein the processing circuitry is further for masking one or more of the plurality of channels based at least in part on a configuration of a channel mask register of the MISR hardware component, the channel mask register being configured using the processing controller.

16. The MISR hardware component of claim 9, wherein the MISR hardware component is included in at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing deep learning operations;
a system implemented on a system on chip (SoC);
a system including a programmable vision accelerator (PVA);
a system including a vision processing unit;
a system implemented using an edge device;
a system implemented using a robot;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.

17.
A method comprising:
receiving a plurality of channels of data, one channel at a time, from a direct memory access (DMA) system based on sequencing by a processing controller;
calculating a plurality of multiple input signature register (MISR) values by performing a cyclic redundancy check (CRC) calculation on each channel;
calculating a final MISR value using the plurality of MISR values;
comparing the final MISR value with a signature value; and
outputting a MISR status based at least in part on the comparison.

18. The method of claim 17, wherein the data from the DMA system comprises at least one of data retrieved from a memory using the DMA system or address data corresponding to the data retrieved from the memory.

19. The method of claim 17, wherein a seed value for each CRC calculation is programmed for each channel using a MISR control of a MISR hardware component, the MISR control being configured using the processing controller.

20. The method of claim 17, wherein the plurality of channels of data from the DMA system are retrieved from a vector memory (VMEM) after a vector processing unit (VPU) processes a MISR test.
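The claimed flow (a per-channel CRC with a programmable seed, combination of the per-channel values into a final MISR value, and comparison against a signature) can be modeled in software. The following is a hedged sketch: the CRC-32 polynomial, the XOR combination of per-channel values, and all names are illustrative assumptions, not the actual hardware behavior.

```python
import zlib

def misr_status(channels, seeds, golden_signature):
    """Software model of the claimed MISR flow: one CRC per data
    channel (each with its own programmable seed), a final MISR value
    combined from the per-channel values, and a pass/fail comparison
    against a precomputed golden signature.

    CRC-32 and XOR-combining are illustrative assumptions; the actual
    polynomial and combination scheme are hardware-specific.
    """
    per_channel = [zlib.crc32(bytes(data), seed) & 0xFFFFFFFF
                   for data, seed in zip(channels, seeds)]
    final_misr = 0
    for value in per_channel:
        final_misr ^= value  # combine per-channel MISR values
    return "pass" if final_misr == golden_signature else "fail"

# Usage: compute the golden signature once from known-good data, then
# compare against it on every self-test run.
channels = [b"tile-0 output", b"tile-1 output"]
seeds = [0, 0]
golden = 0
for data, seed in zip(channels, seeds):
    golden ^= zlib.crc32(data, seed) & 0xFFFFFFFF

assert misr_status(channels, seeds, golden) == "pass"
assert misr_status([b"corrupted!!!!", b"tile-1 output"], seeds, golden) == "fail"
```

In the hardware described in the detailed description below, the golden signature would be precomputed from known-good test data, and the comparison result drives the pass/fail MISR status reported to the safety processor.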
Built-in Self-Test for Programmable Vision Accelerators for SoCs

Background

A vector processing unit (VPU) is used to execute single instruction multiple data (SIMD) operations in parallel. Popular uses of VPUs include operations such as image processing, computer vision, signal processing, deep learning (for example, for convolution operations), and others.

In some computer vision applications, for example, the dynamic range of intermediate values is well understood. Therefore, to detect anomalies, the calculated values can be compared with these dynamic ranges. However, conventional solutions for detecting and analyzing these minima and maxima include writing all values to memory and then analyzing the values in memory, which requires additional processing cycles. Additionally, high clock rate processors may perform software pipelining and/or loop unrolling in order to achieve high throughput despite load-to-use latencies. However, in cases where the original iteration count is not evenly divided by the unroll factor, some iterations may remain after the unrolled loop is complete, requiring an additional remainder loop to compute the values for one or more final iterations. This remainder loop increases the code size and latency of the system—for example, because the remainder loop cannot be unrolled for optimal performance. In conventional single instruction multiple data (SIMD) operations, each SIMD unit can operate in parallel, and independently of the others, in its own data lane. Some architectures may allow sharing between adjacent neighbors, but this limited sharing is restrictive, and implementations of many operations therefore require copying the same operands into each data lane for processing. In addition, vector SIMD processors may require each memory read operation to use standard or consistent units—e.g., equal to the vector processing width—which may be inefficient if the memory banks are wide.
For example, reading elements 4 through 67 from a memory with a width of 64 bytes may require two memory reads—for example, one from 0 to 63 and one from 64 to 127. However, this causes many additional values to be read—for example, values 0-3 and values 68-127—even though those values are not required for the current operation. In traditional instruction sets that require additional data manipulation, additional instructions are used to operate on memory data in registers after the data has been read out and stored in the registers. For example, this might require loading data, performing a permutation on the data, and then performing operations with the restructured data. As a result, data manipulation requires additional cycles and increases latency. When using an existing VPU to perform a table lookup, the table can be duplicated so that each individual value can be extracted from a copy of the table, or an additional read port can be added to each memory bank to allow multiple values to be read from the same table in the same bank. However, copying the table for each value requires additional memory and processing, and adding additional read ports requires additional space on the chip. In a traditional VPU, since the VPU is programmed to execute a smaller set of highly optimized code, data caching may not be available because the programmer manages the contents of the local data memory. As a result, each access requires reading a value from each memory bank, even if the data for the next iteration overlaps with one or more previous read operations.

To optimize the performance of a processor (such as a VPU), the instruction set architecture (ISA) can be enhanced to create custom instructions to speed up common operations—such as table lookups, convolution operations, etc.
However, using the ISA in this way requires the processor itself to perform these operations, which means the processor is busy during the execution of these enhanced instructions.

Additionally, the VPU may use a direct memory access (DMA) system to retrieve data for processing by the VPU. The DMA system can operate as a data movement engine, but can also perform additional operations such as image padding, address manipulation, overlapping data management, traversal order management, frame size management, etc. However, as DMA resources (e.g., descriptors, channels, triggers, etc.) increase, so does the complexity of programming the DMA system and VPU. In cases where the tiles of a frame contain spatial or temporal dependencies, the dynamic updating of DMA resources becomes a processing burden on the system. Traditional DMA systems require a processing controller (e.g., an R5 or ARM processing core) to intervene in a processing cycle when unknown or data-dependent data is fetched, in order to determine the updated information that guides the next processing iteration. For example, in object or feature tracking, the VPU can calculate the next location of the object or feature, after which the processing controller intervenes, updates the memory addressing information, and then triggers the DMA system to use the updated information. However, processing controller intervention adds latency and requires more complex programming to operate with region-dependent data movement algorithms.

Furthermore, in safety-critical applications, such as autonomous and semi-autonomous machine applications, there are stringent requirements for permanent fault detection and isolation.
For example, when deep learning, computer vision, sensor processing, and/or other applications are implemented in machines, permanent fault detection must be performed regularly within the allotted time budget for accurate testing, while also allowing the application to perform correctly—for example, with low latency. For this, end-to-end coverage may be required, with low latency, while meeting the run-time budget of each specific application. Traditional approaches use built-in self-test (BIST) to identify faults, but these BIST techniques may not include sufficient coverage, may introduce excessive latency into the system, and/or may not meet the run-time budget of some applications.

Summary

Embodiments of the present disclosure relate to improvements to vector processing units (VPUs), decoupled accelerators that can handle processing offloaded from the VPUs, and direct memory access (DMA) systems that support data movement between memory and the VPUs. To address various shortcomings of traditional or existing solutions, the VPU of the present disclosure may include a min/max hardware collector in the data path from the VPU to memory, allowing min/max values to be collected as values are stored to memory. In this way, the min/max values can be available immediately after the memory write operation is complete, reducing the delay in determining the min/max values after storing the values in memory. In addition, the VPU can include automatic store predication functionality that applies a predicate flag by setting a predicate bit for each value computed in iterations beyond the final true iteration. As a result, each set of iterations may include the same number of executed iterations, but one or more values from the final set of iterations may not be written out to memory due to the predicate flag.
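The automatic store predication (prediction) behavior described above can be modeled with a minimal software sketch. The vector width, padding scheme, and helper names are illustrative assumptions, not the VPU's actual interface:

```python
def predicated_store_sweep(src, vector_width=8):
    """Sketch of automatic store predication: the loop always executes
    full vector-width iterations (no scalar remainder loop); elements
    past the true length get a false predicate bit, so their results
    are computed but never written back to memory.
    """
    n = len(src)
    out = [None] * n
    padded = src + [0] * (-n % vector_width)   # pad the final partial vector
    for base in range(0, len(padded), vector_width):
        for lane in range(vector_width):
            i = base + lane
            predicate = i < n                   # auto-generated predicate bit
            result = padded[i] * 2              # stand-in SIMD computation
            if predicate:                       # store suppressed when False
                out[i] = result
    return out

# 5 elements with a 4-wide vector: two full vector iterations execute,
# but the 3 padded lanes of the second iteration are never stored.
assert predicated_store_sweep([1, 2, 3, 4, 5], vector_width=4) == [2, 4, 6, 8, 10]
```

The point of the sketch is that both vector iterations do the same amount of work, so the unrolled loop body never needs a separately compiled remainder loop.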
To address the limited sharing between data lanes in existing solutions, the SIMD architecture of the present disclosure can define slices in the processor, where each slice includes multiple lanes, and the lanes within a slice can be configured to communicate with one another. This way, operands from one lane can be used by other lanes, eliminating the requirement to copy each operand into each lane for processing. To address the inefficiency of loading from a single wide memory bank, the VPU may include multiple smaller memory banks to allow for smaller alignments—e.g., 16-bit alignment, where the memory banks are 16 bits each. This way, the example of reading values 4 through 67 can happen in one memory read instead of two memory reads for 0-63 and 64-127. In addition to this memory bank organization, the VPU may include transposed load and/or store functionality to allow stored values to be offset across the memory banks so that memory bank conflicts do not occur and more data can be read or written each cycle. To address the data manipulation deficiencies of traditional instruction sets, a load with permutation instruction can be used to send permutation patterns along with memory addresses to local memory, to retrieve data from memory according to the permutation or data manipulation pattern. This way, data manipulation and data loading can be performed in the same cycle, reducing latency. To address the disadvantages of per-value table duplication or of additional read ports for table lookups, two-point or two-by-two-point lookups can be performed such that each table lookup can return two or four points per cycle, respectively. To achieve this, per-memory-bank table and offset address patterns, with the associated logic and routing, can be used to allow parallel lookups of two or four points. In an embodiment, each memory bank may include an associated data cache, which may be enabled or disabled according to a given operation.
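The transposed store mentioned above can be sketched in software: each row of a logical tile is rotated across the banks before it is written, so that a later column read hits every bank exactly once. The rotate-by-row-index offset pattern below is an illustrative assumption, not the hardware's actual mapping:

```python
def transposed_store(matrix):
    """Sketch of a transposed store across N memory banks: row r of the
    logical tile is rotated by r before being written, so that a column
    read later hits N distinct banks in a single cycle instead of
    conflicting on one bank.
    """
    n = len(matrix[0])
    banks = []
    for r, row in enumerate(matrix):
        banks.append([row[(c - r) % n] for c in range(n)])  # rotate row r by r
    return banks

def transposed_column_read(banks, col):
    """Read logical column `col`: row r's element lives in bank
    (col + r) % N, so all N elements come from distinct banks."""
    n = len(banks[0])
    return [banks[r][(col + r) % n] for r in range(n)]

tile = [[ 0,  1,  2,  3],
        [10, 11, 12, 13],
        [20, 21, 22, 23],
        [30, 31, 32, 33]]
banks = transposed_store(tile)
assert transposed_column_read(banks, 1) == [1, 11, 21, 31]
# The bank indices touched by the column read are all distinct.
assert sorted((1 + r) % 4 for r in range(4)) == [0, 1, 2, 3]
```

With an ordinary (unrotated) store, every element of a logical column would sit at the same offset in the same bank position and the column read would serialize; the rotation spreads the accesses out.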
For example, for filter operations with significant data overlap between iterations, a data cache can be used to store values from one or more previous lookups so that each memory bank requires a minimal number of reads, conserving energy and power for the system.

To address the shortcomings of traditional ISAs for VPUs or other processor types, the systems and methods of the present disclosure can use decoupled accelerators that are configured by the VPU and communicate with the VPU through shared memory, but that perform specific tasks independently of the VPU, allowing the VPU to continue other processing tasks in parallel with the accelerator. For example, a decoupled lookup table (DLUT) accelerator can be used to improve system performance when performing table lookups. The DLUT accelerator can identify conflicts, resolve them, and increase the throughput of the system, rather than the VPU performing memory bank conflict detection and resolution inline.

To address the shortcomings of conventional DMA systems, the systems and methods of the present disclosure may include a hardware sequencer that operates on frame data including command sequences for the hardware sequencer. For example, a hardware sequencer can operate at the frame level instead of the tile level, and can perform sequencing for the DMA engine, removing the complexity of programming the DMA engine to perform the same operations (such as padding, address manipulation, etc.). In some embodiments, the DMA system may include a DMA trigger mode, where the DMA engine controls tile movement to vector memory (VMEM) rather than requiring the VPU to trigger the DMA to load the next tile. As such, the command sequence is reversed, and the DMA becomes a trigger for the VPU. To address the shortcomings of region-dependent data movement operations in DMA systems, the DMA and the VPU can operate in a tightly coupled loop without processing controller intervention.
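A tightly coupled loop of this kind can be sketched as follows. The shared-VMEM mailbox, tile addressing, and motion model below are illustrative assumptions, not the actual DMA descriptor format:

```python
def tracking_loop(frame, start, steps):
    """Sketch of the tightly coupled VPU/DMA loop: the VPU writes the
    next region-of-interest location into shared VMEM, and the DMA
    reads it back to update its own descriptor for the next tile
    fetch, with no processing controller in the loop.
    """
    vmem = {"next_xy": start}           # shared vector memory (mailbox)
    trace = []
    for _ in range(steps):
        x, y = vmem["next_xy"]          # DMA: read the VPU-updated location
        tile = frame[y][x]              # and fetch the corresponding tile
        trace.append(tile)              # VPU: process the tile, then compute
        vmem["next_xy"] = (x + 1, y)    # the object's next location and
                                        # write it back to VMEM
    return trace

frame = [[10, 11, 12, 13]]
assert tracking_loop(frame, (0, 0), 3) == [10, 11, 12]
```

Each iteration's fetch address depends on the previous iteration's result, which is exactly the data-dependent case that would otherwise require the processing controller to intervene between DMA transfers.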
For example, the VPU can update location information in VMEM for various features and/or objects being tracked, and the DMA can use this updated information to update descriptors in descriptor memory, so that the data provided to the VPU for the next processing iteration corresponds to the next location of the feature or object. This process can be repeated until processing is complete, eliminating the need for processing controller intervention and reducing system latency.

Furthermore, to address the deficiencies of conventional approaches to BIST, the present systems and methods can implement multiple input signature register (MISR) BIST—for example, to perform fault detection of a programmable vision accelerator (PVA) of a system on chip (SoC). For example, in various embodiments of the present disclosure, a PVA may include one or more DMA systems and one or more VPUs controlled using one or more processing controllers (or control processors, such as R5 and ARM processing cores, CPUs, and/or the like). Therefore, each component of the PVA may need to be tested, and the present systems and methods perform MISR BIST to detect permanent faults in an end-to-end manner. In this way, permanent fault detection can be performed to cover end-to-end blocks of control and data logic, with errors reported directly to the safety processor to reduce latency, and can be tailored to specific applications to meet associated run-time budgets.

Brief Description of the Drawings

The present systems and methods for improving a vector processing unit (VPU) are described in detail below with reference to the accompanying drawings, in which:

FIG. 1A is an example min/max collection system, according to some embodiments of the present disclosure;
FIG.
1B is a flowchart illustrating a method for min/max collection, according to some embodiments of the present disclosure;
FIG. 2A is an example system of a processor including an address generation unit with automatic predication capabilities, according to some embodiments of the present disclosure;
FIG. 2B is a table showing a sequence of state changes over time, according to some embodiments of the present disclosure;
FIG. 2C is a flowchart illustrating a method for automatic store predication, according to some embodiments of the present disclosure;
FIG. 3A is a diagram of an example single instruction multiple data (SIMD) data path organization, according to some embodiments of the present disclosure;
FIGS. 3B-3D illustrate operand sharing between slices of SIMD architectures for filter operations, dot product operations, and sort operations with payloads, respectively, according to some embodiments of the present disclosure;
FIG. 3E includes a flowchart of a method of computing output using shared operands across lanes of a SIMD architecture, according to some embodiments of the present disclosure;
FIG. 4A is a logical view of transposed loads for reading and writing memory, and a memory bank view of transposed loads corresponding to the logical view, according to some embodiments of the present disclosure;
FIG. 4B is a logical view of transposed loads with various row spacing and stride parameters for reading and writing memory, and a memory bank view of transposed loads corresponding to the logical view, according to some embodiments of the present disclosure;
FIG. 4C is a flowchart illustrating a method of configuring a write operation of a transposed load with a stride parameter, according to some embodiments of the present disclosure;
FIG. 4D is a flowchart illustrating a method of performing a write operation of a transposed load using a stride parameter, according to some embodiments of the present disclosure;
FIGS. 5A-5B illustrate data and coefficient layout tables in SIMD architectures for different functions, according to some embodiments of the present disclosure;
FIG. 5C illustrates a hardware architecture for performing a load with permutation and zero insertion, according to some embodiments of the present disclosure;
FIG. 5D illustrates an example use of the hardware architecture of FIG. 5C, according to some embodiments of the present disclosure;
FIG. 5E is a flowchart illustrating a method of utilizing a load with permutation, according to some embodiments of the present disclosure;
FIG. 6A illustrates a 16-way parallel table organization for single-point lookups, according to some embodiments of the present disclosure;
FIG. 6B shows an 8-way parallel table organization for two-point lookups, according to some embodiments of the present disclosure;
FIG. 6C shows a logical view of a 2-way parallel word type table for 2x2 point lookups, according to some embodiments of the present disclosure;
FIG. 6D shows a memory view of a 2-way parallel word type table for the 2x2 point lookup of FIG. 6C, according to some embodiments of the present disclosure;
FIG. 6E illustrates a layout for processing lane pairs using horizontal mixing with interleaved data operations, according to some embodiments of the present disclosure;
FIG. 6F shows intermediate and final results of horizontal mixing with interleaved data operations, according to some embodiments of the present disclosure;
FIG. 6G is a flowchart of a method for performing a multipoint lookup, according to some embodiments of the present disclosure;
FIG. 7A shows elements of data and coefficient arrays, according to some embodiments of the present disclosure;
FIGS. 7B-7C illustrate read operations required for data operands and coefficient operands, respectively, using data caches for memory banks, according to some embodiments of the present disclosure;
FIG. 7D illustrates a memory bank organization for use with a load cache, according to some embodiments of the present disclosure;
FIG. 7E illustrates a hardware architecture for using data caches in memory banks, according to some embodiments of the present disclosure;
FIG. 7F is a flowchart of a method of using data caches for memory banks, according to some embodiments of the present disclosure;
FIG. 8A illustrates a system including one or more decoupled accelerators, according to some embodiments of the present disclosure;
FIG. 8B is a flowchart of a method of performing one or more operations using decoupled accelerators, according to some embodiments of the present disclosure;
FIG. 9A illustrates a system including a decoupled lookup table accelerator, according to some embodiments of the present disclosure;
FIG. 9B is a table illustrating the actions of different components of a decoupled lookup table accelerator in performing various operations, according to some embodiments of the present disclosure;
FIG. 9C is a flowchart of a method of performing one or more operations using a decoupled lookup table accelerator, according to some embodiments of the present disclosure;
FIG. 10A is a visualization illustrating filling a frame with fill values, according to some embodiments of the present disclosure;
FIG. 10B is a visualization illustrating address manipulation of a frame's descriptor, according to some embodiments of the present disclosure;
FIG. 10C is a visualization showing overlapping data between tiles of a frame, according to some embodiments of the present disclosure;
FIG. 10D includes visualizations showing various raster traversal orders, according to some embodiments of the present disclosure;
FIG. 10E is a visualization showing a three-pass order, according to some embodiments of the present disclosure;
FIG. 10F includes visualizations showing various vertical traversal orders, according to some embodiments of the present disclosure;
FIG. 10G is a visualization showing various image sizes in a pyramid configuration, according to some embodiments of the present disclosure;
FIG. 10H is a direct memory access (DMA) system including a hardware sequencer, according to some embodiments of the present disclosure;
FIG. 10I is a frame format for storing sequence commands controlled by a hardware sequencer for the DMA system of FIG. 10H, according to some embodiments of the present disclosure;
FIG. 10J is an example of the frame format of FIG. 10I for a raster scan sequence, according to some embodiments of the present disclosure;
FIG. 10K is an example tile structure with hardware sequencing in a raster scan sequence using the example frame format of FIG. 10J for frame address processing, according to some embodiments of the present disclosure;
FIG. 10L is a flowchart of a method of using a hardware sequencer in a DMA system, according to some embodiments of the present disclosure;
FIG. 11A shows a data flow diagram of a process for configuring a direct memory access (DMA) system using a vector processing unit (VPU), according to some embodiments of the present disclosure;
FIG. 11B is a table showing the format of the VPU configuration written by the VPU into vector memory (VMEM) and read by the DMA system, according to some embodiments of the present disclosure;
FIG. 11C is a flowchart of a method of configuring a DMA system using a VPU, according to some embodiments of the present disclosure;
FIG. 12A is a built-in self-test (BIST) system diagram for performing cyclic redundancy check (CRC) calculations of a programmable vision accelerator (PVA), according to some embodiments of the present disclosure;
FIG. 12B is a BIST system diagram for parallel-channel CRC calculation of a PVA, according to some embodiments of the present disclosure;
FIG. 12C is a flowchart of a method of implementing BIST for permanent fault detection in a PVA, according to some embodiments of the present disclosure;
FIG. 13A is an illustration of an example autonomous vehicle, according to some embodiments of the present disclosure;
FIG. 13B is an example of camera positions and fields of view for the example ego vehicle of FIG. 13A, according to some embodiments of the present disclosure;
FIG. 13C is a block diagram of an example system architecture of the example autonomous vehicle of FIG. 13A, according to some embodiments of the present disclosure;
FIG. 13D is a system diagram for communication between one or more cloud-based servers and the example autonomous vehicle of FIG. 13A, according to some embodiments of the present disclosure;
FIG. 14 is a block diagram of an example computing device suitable for implementing some embodiments of the present disclosure; and
FIG. 15 is a block diagram of an example data center suitable for implementing some embodiments of the present disclosure.

Detailed Description

Systems and methods are disclosed relating to various components of a system on chip (SoC), such as a vector processing unit (VPU), a direct memory access (DMA) controller, and a hardware accelerator (e.g., a programmable vision accelerator (PVA), such as a PVA including one or more pairs of a VPU and a DMA system). For example, in various embodiments of the present disclosure, a PVA may include one or more DMA systems and one or more VPUs controlled using one or more processing controllers (or control processors, such as R5 and ARM processing cores, CPUs, and/or the like). Although the present disclosure (including various components of the SoC) may be described with respect to an example autonomous vehicle 1300 (also referred to herein as "vehicle 1300" or "ego vehicle 1300", examples of which are described with reference to FIGS. 13A-13D), this is not intended to be limiting.
For example, the systems and methods described herein may be implemented by, but not limited to, non-autonomous vehicles, semi-autonomous vehicles (e.g., in one or more advanced driver assistance systems (ADAS)), driving and non-driving robots or robotic platforms, warehouse vehicles, off-road vehicles, vehicles coupled to one or more trailers, aircraft, boats, shuttles, emergency response vehicles, motorcycles, electric or motorized bicycles, construction vehicles, underwater vehicles, drones, and/or other vehicle types. Additionally, while the present disclosure may be described with respect to computer vision, machine learning, artificial intelligence, image processing, etc., this is not intended to be limiting, and the systems and methods described herein may be used in augmented reality, virtual reality, mixed reality, robotics, security and surveillance, autonomous or semi-autonomous machine applications, and/or any other technology space in which vector processing units (VPUs), direct memory access (DMA) systems, instruction set architectures (ISAs), programmable vision accelerators (PVAs), decoupled accelerators, decoupled lookup tables, hardware sequencers, single-instruction, multiple-data (SIMD) architectures, and/or one or more other components of a SoC may be used. Furthermore, although the components and related processes described herein may be described with respect to a SoC, this is not intended to be limiting, and these components may be implemented as stand-alone components, discrete components of a system, and/or integrated components of a SoC. In some embodiments, the systems, components, features, functions, and/or methods of the present disclosure may be integrated into the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.
Min/Max Hardware Collector for Anomaly Detection
In computer vision applications, especially in safety-critical vision applications, computing the dynamic range of intermediate results is an important task. For example, to detect noise or errors in intermediate calculations, known or expected ranges of dynamic values can be used to identify values that fall outside of those ranges. In such examples, where values fall outside a known or expected dynamic range, the values may be flagged as corresponding to noise, error, and/or another problem. Therefore, it may be necessary to collect minimum (min) and maximum (max) values of intermediate results to detect data anomalies. In practice, these anomalies can be caused by, but are not limited to, noise in an image sensor, algorithmic extremes, or data corruption in memory or interconnects. To address these issues, collecting min/max values is an effective way to detect outliers in the data. Min/max values are also used directly in some algorithms.
As a specific example, in an autonomous vehicle application, a runtime exception, such as an infinity or a not-a-number (NaN), may be an invalid value or may produce an error, a malfunction, or another undesired outcome. With this in mind, algorithms executed as part of the autonomous vehicle platform can be evaluated to determine the range of values (intermediate or otherwise) that may be generated during processing. Once the range of values is known, the actual calculated values can be compared to the known range, and values outside a minimum or maximum threshold can be flagged as errors. Where an error is flagged, in-process changes can be performed, such as ignoring the data for a given iteration, identifying and fixing the problem, etc.
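The range check just described can be sketched as follows (a minimal illustration; the function name and threshold values are assumptions, not part of the disclosure):

```c
/* Flag a computed value as anomalous if it falls outside the known or
 * expected dynamic range for the algorithm (bounds are illustrative). */
int is_anomalous(int value, int min_ok, int max_ok) {
    return value < min_ok || value > max_ok;
}
```

When a value is flagged, the caller can then skip the data for that iteration, as described above.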
In this way, potential runtime exceptions are accounted for, so runtime exceptions are not permitted to propagate and the autonomous vehicle does not depend on values that contain them.
As another example, min/max gathering can be used in some algorithms to normalize intermediate results to a range of values, allowing for greater accuracy in processing (for example, block floating point). The normalization process may include a dynamic-range collection step of collecting minimum and/or maximum values of an array, and an adjustment step of applying a scaling factor to the array. However, to collect min/max values, the traditional process requires writing all values to memory, then reading the values back to determine the min/max and adjust the scaling.
Therefore, these traditional methods for min/max evaluation are performed in software and require additional processing cycles. For example, the algorithm itself can be run to calculate the values, and then software can be run to determine the min/max and compare the min/max to a known range of values to identify anomalies. The software needs to execute additional instructions to read the elements of the intermediate result array and then perform min/max operations. As a result, the run time of the system for detecting anomalies increases, as the algorithm is executed to completion and then additional processes are performed to calculate the min/max values of the algorithm output. This may cause downstream processing to be delayed until the min/max values are calculated and compared to thresholds, or may cause downstream tasks to start performing calculations on data that contains errors while the min/max evaluation takes place. Not only does this increase run time, but it also increases the processing requirements and energy consumption of the system, as these additional cycles are executed to identify anomalous data.
Reference is made to FIG.
1A, which is an example processor architecture 100 for min/max collection, according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Furthermore, many of the elements described herein are functional entities that may be implemented as discrete or distributed components, or in combination with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be performed by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, architecture 100 may include components, features, and/or functionality similar to those of the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.
To address the deficiencies of conventional min/max evaluation procedures such as those described herein, the present disclosure includes systems and methods for min/max collection using hardware. For example, during computation, computed values may be written out to memory 106 (e.g., local memory) and used in downstream computations within the same algorithm or another algorithm. To reduce run time and processing, min/max collection hardware (e.g., min/max collector 104) can be used to capture min/max values before or as they are written out to memory 106, rather than waiting for the values to be read back out of memory 106 and analyzed for min/max values afterward.
For example, an enable bit may be used to enable the min/max collection function of the min/max collector 104, and once enabled, as values are calculated using the processor 102 and written out to memory 106 (e.g., prior to or concurrently with storing to memory 106), the min/max collector 104 may update the min/max values. In an embodiment, the enable bit may indicate the type of array being computed, for example, signed or unsigned, so that the min/max collector 104 is configured to collect the min/max values for a particular type of array value. For example, an enable bit or another type of control feature can be used to disable the min/max collector 104 and/or to configure the min/max collector 104 to collect unsigned min/max values or to collect signed min/max values. The min/max collection logic of the min/max collector 104 may be included in the data store datapath, reading values as they are calculated using the processor 102 and stored from the register file, in order to update or maintain the min/max values.
For example, during operation, the current minimum and/or current maximum may be maintained in the min/max collector 104, and the current minimum and/or current maximum may be updated to a new, lower minimum value and/or a new, higher maximum value as values are written out to memory 106. Where a newly calculated value is greater than the current minimum value and/or less than the current maximum value, the current minimum and/or maximum value may be maintained by the min/max collector 104. In this way, the min/max collector 104 can maintain the current min and/or max as each value is calculated throughout the computation.
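As a rough software model of this behavior (the struct and function names are hypothetical, not the actual hardware interface), the min/max update on the store path might look like:

```c
#include <limits.h>

/* Hypothetical model of the min/max collector: each store to memory also
 * updates the running min/max, so no second pass over the data is needed. */
typedef struct {
    int enabled;
    int min;   /* current minimum of all values stored so far */
    int max;   /* current maximum of all values stored so far */
} MinMaxCollector;

void collector_reset(MinMaxCollector *c) {
    c->enabled = 1;
    c->min = INT_MAX;
    c->max = INT_MIN;
}

/* Store a computed value to memory while updating the min/max in the
 * same datapath. */
void store_with_collection(MinMaxCollector *c, int *mem, int idx, int value) {
    mem[idx] = value;
    if (c->enabled) {
        if (value < c->min) c->min = value;
        if (value > c->max) c->max = value;
    }
}
```

After the last store of an iteration, the collected min and max are immediately available for threshold comparison.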
Once the computation for a given iteration is complete, the min/max values are immediately available in the min/max collector 104, and software and/or hardware can be used to compare these stored values with minimum and/or maximum thresholds that are associated with the particular algorithm or calculation performed, to determine whether an anomaly exists. For example, a mechanism may be included to allow the collected min/max values to be read out for evaluation. Therefore, in contrast to prior approaches, no additional loop is needed to calculate min/max values after the algorithm has fully executed, since the min/max values are immediately available. Furthermore, in an embodiment, the min/max collector 104 (e.g., including hardware and/or logic) may be aware of store prediction, such that if a particular data item is prohibited from being stored to memory 106 via, for example, per-lane store prediction, then the min/max collection may exclude that particular data item. For example, where the address from the address generator includes a store prediction flag, the computed value may be ignored both for storing to memory 106 and for updating the min/max collector 104.
In some embodiments, the min/max collector 104 may be implemented as a feature of a system that includes an address generator, such as one or more of the address generators described in U.S. Nonprovisional Application No. 15/141,703, filed April 28, 2016, the entire contents of which are incorporated herein by reference. The address generator can be included in any type of processor or other processing unit, such as a vector processing unit (VPU), a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a data processing unit (DPU), and/or another type of processing unit (such as those described with respect to FIGS. 13A-13D, 14, and/or 15).
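The store-prediction awareness described above can be sketched as follows (a standalone illustration with assumed names; the real interface is not specified here):

```c
#include <limits.h>

/* A lane whose store predicate is off is neither written to memory nor
 * allowed to update the collected statistics. */
typedef struct { int min, max; } Stats;

void predicated_store(Stats *s, int *mem, int idx, int value, int store_pred) {
    if (!store_pred)
        return;                 /* excluded from memory AND from min/max */
    mem[idx] = value;
    if (value < s->min) s->min = value;
    if (value > s->max) s->max = value;
}
```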
In some embodiments, one or more VPUs may be included in a programmable vision accelerator (PVA) and/or as part of a system on a chip (SoC).
As a non-limiting example, inputs for a particular sensor type or algorithm may be limited to 16-bit units. To determine the dynamic range of that particular sensor and/or algorithm, the operations associated with the algorithm processing the sensor input can be evaluated. In such an example, assuming the first operation is the addition of two 16-bit numbers, the first intermediate result is a 17-bit number. The 17-bit number can then be multiplied by a 5-bit number to produce a 22-bit number. If this is the end of the algorithm, it can be determined that the output will not exceed 22 bits. Similarly, the minimum value can be evaluated. So, during deployment, the output may be flagged if the min/max values are outside this known range (e.g., 22 bits).
In some embodiments, the store datapath (e.g., between processor 102 and memory 106) may include saturation and/or rounding logic 108 to constrain the values stored to memory 106 between certain upper and lower limits or thresholds, or to round them according to certain conventions. In traditional methods, therefore, the evaluation of min/max values may be performed after saturation and/or rounding. In the presence of anomalies, these traditional methods may fail to detect the anomalies, because saturation and/or rounding may hide them; for example, anomalously low and/or high values may be saturated to within the upper and lower bounds configured in the saturation logic.
However, for a particular implementation, unsaturated, unrounded, or absolute min/max values may be desired, for example, in addition to or instead of saturated min/max values.
Accordingly, the min/max collector 104 of the present disclosure may collect min/max values from raw or unsaturated data (e.g., before the values are manipulated using saturation/rounding logic 108) for anomaly detection. In an embodiment, collection of an average value of the data or an average absolute value of the data may also be performed. The average can be calculated, for example, by summing the elements, reading the sum back from an address generator configuration register, and dividing by the number of data items stored out (which may be known by the application). In this way, absolute min/max values, a sum of values, and/or a sum of absolute values can be added to the processor's memory datapath, and configuration and collection of the resulting statistics can be performed; for example, these can be added to the address generator configuration feature set, or can be managed individually. In some embodiments, the min/max collector 104 may collect values before and/or after saturation, rounding, or other calculations using the saturation/rounding logic 108.
Referring now to FIG. 1B, each block of the method 110 described herein includes a computational process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. The method 110 may also be embodied as computer-usable instructions stored on a computer storage medium. The method 110 may be provided by a stand-alone application, a service or hosted service (stand-alone or in combination with another hosted service), or a plug-in to another product, to name a few. Although described with respect to the architecture 100 of FIG. 1A, the method 110 may be performed by any one system or any combination of systems, including but not limited to those described herein.
FIG. 1B is a flowchart illustrating a method 110 for min/max collection, according to some embodiments of the present disclosure.
At block B102, the method 110 includes calculating one or more values. For example, when executing one or more algorithms (e.g., neural networks, computer vision algorithms, filtering algorithms, etc.), the processor 102 may be used to calculate one or more values.
At block B104, the method 110 includes comparing a value of the one or more values to a currently stored minimum value and a currently stored maximum value. For example, the min/max collector 104 may compare each of any number of values to be stored in memory 106 (e.g., values in a register file) with a currently stored minimum value and a currently stored maximum value (e.g., as currently stored by the hardware min/max collector 104). In such examples, the min/max collector may compare the value to the currently stored minimum and/or maximum value as the value is calculated and before or while the value is stored in memory. In one or more embodiments, the min/max collector may be included along a datapath between a hardware unit that computes the one or more values and a memory unit that stores the one or more values.
At block B106, the method 110 includes determining whether the value is greater than the currently stored maximum value or less than the currently stored minimum value. For example, based on the comparison at block B104, the system (e.g., the hardware min/max collector 104) may determine whether each value to be stored to memory is greater than the currently stored maximum value or less than the currently stored minimum value.
At block B108, the method 110 includes updating the currently stored minimum value to the value based on the value being less than the currently stored minimum value.
For example, in the event that the calculated value to be stored to memory is less than the minimum value currently stored by the hardware min/max collector, the hardware min/max collector may update the currently stored minimum value to the calculated value.
At block B110, the method 110 includes updating the currently stored maximum value to the value based on the value being greater than the currently stored maximum value. For example, in the event that the calculated value to be stored to memory is greater than the maximum value currently stored by the hardware min/max collector, the hardware min/max collector may update the currently stored maximum value to the calculated value.
In this way, the min/max values can be dynamically updated during the storing of the values, such that once some (e.g., all) of the values are stored, the min/max values can be read from the min/max collector and are immediately available.
Automatic Store Prediction
In high-clock-rate processors, a popular implementation is to configure the processor into multiple pipeline stages. Thus, there may be a delay between issuing an instruction to load a register from local memory and the time the register becomes available for another instruction operation, e.g., a load-to-use delay. To achieve high throughput in the presence of load-to-use latency, processor compilers and application developers can use software pipelining and/or loop unrolling. For example, software pipelining can be used to overlap the execution of multiple loop iterations, and loop unrolling can be used to expand the loop body by repeating its contents multiple times. Together, these techniques can allow multiple iterations of the loop's contents to be executed concurrently, reducing idle cycles in the schedule (ideally to none). When performing loop unrolling, the compiler divides the loop iteration count by the unrolling factor.
For example, the compiler may assume that the original iteration count is a multiple of the unrolling factor, so that the unrolled loop performs with equivalent functional behavior. In such an example, if the original iteration count is 60 and the loop is to be unrolled by a factor of 6, the unrolled loop can run for 10 iterations. However, if the original loop iteration count is 64, then by normal integer division 64/6 also results in 10, so the loop would not execute enough times (e.g., the additional 4 iterations would not be executed), resulting in different code behavior that may cause the application to fail. In some techniques, assert statements are added to ensure that the iteration count is indeed a multiple of the unroll factor.
A collection of steps or operations in a loop body can have a narrow range of optimal or desired unrolling factors. For example, the lower bound of the unrolling factor may be the minimum number of copies of the loop code that must be scheduled to fill the gaps due to various latencies and achieve optimal performance, and the upper bound may be the maximum number of copies that can be scheduled within the limited capacity of the register file; exceeding it could lead to excessive register spills (saves to and restores from the stack) and suboptimal scheduling. As another example, because a combination of tile width and tile height can often be chosen so that the iteration count is a power of 2 (e.g., 2, 4, 8, etc.), unrolling by a power of 2 is acceptable for many applications. However, in an embodiment, a loop body may also optimally be unrolled 6 or 7 times, while unrolling 4 or 8 times may not be efficient. In any case, loop unrolling to achieve optimal scheduling may impose an inconvenient limit on the number of iterations.
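The iteration-count arithmetic discussed above can be made concrete (function names are illustrative):

```c
/* With plain integer (floor) division, 64/6 == 10, so an unrolled loop
 * would run too few times; rounding up to the next multiple of the
 * unrolling factor runs 11 times and covers all 64 iterations. */
int unrolled_trips_floor(int niter, int factor) {
    return niter / factor;
}
int unrolled_trips_ceil(int niter, int factor) {
    return (niter + factor - 1) / factor;
}
```

For niter = 60 both give 10 trips; for niter = 64, floor division loses 4 iterations while rounding up gives 11 trips.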
Therefore, conventional techniques for solving this problem may result in performance degradation and increased code size.
For example, because a limit on the number of iterations is inconvenient, a programmer may write two loops, say, a "many times" loop and a "remainder" loop, where ideally there should be no such limit on the number of iterations. As an example, the following illustrative sample code snippets show: Code 1, a vector addition loop without loop unrolling; Code 2, the same loop unrolled by 6, which only works when the iteration count is a multiple of 6; and Code 3, the double-loop solution, which works for any iteration count, but whose remainder loop is not unrolled, so it is less efficient and also results in a larger code size due to the additional loop and iteration count calculations.
Code 1:
Code 2:
Code 3:
Using the vector processing unit (VPU) of the present disclosure, Code 1 can achieve 6 cycles per iteration, Code 2 can achieve 1 cycle per iteration, and the performance of Code 3 can depend on the iteration count. For an iteration count (niter) of 60 (a multiple of 6, so the remainder loop does not run), Code 3 may achieve 1.0 cycles per iteration, and for niter = 64 (the remainder loop runs 4 times), Code 3 may achieve an average of 1.3125 cycles per iteration (e.g., (60*1+4*6)/64 = 84/64 = 1.3125).
Referring to FIG. 2A, FIG. 2A is an example system 200 including a processor 202 (e.g., a VPU) that includes an address generation unit with automatic prediction capability, according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely.
Furthermore, many of the elements described herein are functional entities that may be implemented as discrete or distributed components, or in combination with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be performed by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, processor 202 may be included in, and/or may include, components, features, and/or functionality similar to those of the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.
In embodiments of the present disclosure, loads and stores in code segments may use the address generator 204 in the processor 202 (e.g., a VPU). For example, on each load and store, address generator (agen) parameters (agen_a, agen_b, agen_c) may be provided to the load/store functions. The arguments may identify address generator registers that contain parameters used for address calculations for a particular load and/or store operation, e.g., address pointers, numbers of iterations, current loop variable values, etc. In some embodiments, the VPU may be designed so that each address generator register supports 6 (or another number of) addressing dimensions, thus including 6 (or another number of) iteration counts and 6 (or another number of) loop variables.
To address the limitation of loop unrolling on the number of iterations, the systems and methods of the present disclosure may include an address generator 204 with logic (e.g., a prediction flag or bit 208) for automatically predicting stores from the address generator 204. For example, prediction can be used to provide an indication of conditional execution, such as whether (or not) to do something.
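One plausible software model of such a multi-dimensional address generator register follows (the field names and odometer-style behavior are assumptions for illustration, not the actual register layout):

```c
#define AGEN_DIMS 6

/* Assumed shape: per-dimension iteration counts N1..N6 and loop
 * variables I1..I6, advanced like an odometer on each load/store. */
typedef struct {
    int n[AGEN_DIMS];   /* iteration counts N1..N6 */
    int i[AGEN_DIMS];   /* current loop variables I1..I6 */
} AgenReg;

/* Advance the loop variables; returns 1 while iterations remain,
 * 0 once all dimensions are exhausted. */
int agen_advance(AgenReg *g) {
    for (int d = 0; d < AGEN_DIMS; ++d) {
        if (++g->i[d] < g->n[d])
            return 1;
        g->i[d] = 0;
    }
    return 0;
}
```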
The value of the prediction bit 208 (e.g., 0 for store and 1 for prevent store, or vice versa) may be used to indicate whether the instruction will be executed. Here, execution may not refer to the actual execution of the iteration, but to whether the resulting value of the iteration's execution is stored to memory. Thus, in an embodiment, an instruction that is not executed due to a prediction flag may refer to an instruction or iteration that is executed, but whose result is prevented or precluded from making changes to the state of memory 206. Prediction can include instruction-level prediction and lane-level prediction. Instruction-level prediction can be used to indicate whether an entire instruction should be executed, while lane-level prediction can be used to indicate which data lanes should or should not be executed.
For example, Code 5 described below shows an example implementation of the prediction flag calculation.Code 5:Code 5 can compile to 1.5 loops per iteration in the VPU of the present disclosure, so automatic prediction can include a performance advantage over predictions computed in loops. In an embodiment, the VPU may include a 7-way Very Long Instruction Word (VLIW) instruction scheme, and each cycle may include 2 scalar slots for scalar operations required for predictive computation. If the loop has more vector operations per iteration, there may be enough scalar slots so that speculative computations can fit in the available slots without incurring a performance penalty. Even in compute loops where real-time computation predictions have no impact on performance, automatic predictions may still have advantages in terms of code size and energy consumption.Thus, software can be used to configure multiple iterations (eg, N1-N6), and software can cause address generator based loads/stores to be performed - typically in a loop. The address generator hardware can maintain loop variables (eg, variables I1-I6) and can advance address pointers as appropriate. When an address generator based load/store has been performed for more than a preconfigured number of iterations, the address pointer may get stuck at the last valid address, and automatic prediction can be turned off (e.g. by setting the prediction flag) to block subsequent stores to memory . As such, an "auto-prediction off" internal Boolean state may be included in the address generator 204, and the loop variable iteration logic may be configured to support turning off auto-prediction. For example, and with respect to FIG. 2B, when initializing the address generator, in addition to the loop variables I1-I6, the value of the parameter auto-pred off ("auto_pred_off") (e.g., predict bit 208) may be initialized or reset to "0 ". 
auto_pred_off may be updated to "1" after the loop variable has exhausted the programmed iteration count. As a result of the predict bit being "1," any subsequent execution of the store instruction can then be automatically predicted and further writes to memory can be prevented.In the example of FIG. 2B , the number of iterations of the address generator of registers N1-N6 can be programmed as N1=4, N2=2, N3=N4=N5=N6=1. The total programming iteration count can thus be 4*2*1*1*1*1=8, and as a result the sequence shown in Figure 2B can be executed. As shown, the initial state and subsequent 7 executions (e.g., the first 8 iterations) may correspond to an auto_pred_off bit with a value of 0, and the 8th and 9th executions (e.g., the last 2 iterations) may correspond to The auto_pred_off bit with a value of 1 prevents the results of the 9th and 10th executions from being stored in memory.In practice, a VPU may be configured to handle a number of vector units working simultaneously - eg, 8, 16, etc - so the VPU may require an array that is a multiple of the number of vector units. This setup works well if the array is a multiple of the number of vector elements. Often, however, an array may not be a multiple of vector units (e.g. because there is no guarantee that data will be computed against arrays of the same size), and therefore, arrays are padded so that processing is always performed in batches of the same size. For example, the remaining iterations could be filled with "0" values, but this would still require an additional cycle in software to process the filled values. As a result, padding can be inefficient because the added data is computationally wasteful and also complicates the software—a common problem in single-instruction, multiple-data (SIMD) software. 
Therefore, automatic store prediction can be used to solve this problem.
As a non-limiting example, where batches of 16 are used, as many full batches of 16 as possible can be generated from an array, and the remaining values can be included in the final batch, with the prediction flag turned off for the unused or remaining slots within that batch of 16. As a concrete example, if the array size is 82, 5 complete sets (of 16 each) may be generated, and in the last iteration the remaining 2 elements may be included while the other 14 slots are automatically predicated off, thus minimizing the computational waste of padding the batch with 14 values and performing unnecessary calculations on the padding data. As another example, where the vector processing granularity includes a width of 32 and the array has 100 elements, 3 full 32-element vectors can be processed, and the remaining 4 elements can be processed through 4 of the 32 lanes (e.g., with the prediction flag turned on for those lanes), while the other 28 lanes are predicated off. In this way, the programmer may be able to vectorize arrays that are not a multiple of the vector width. For example, for each store, the hardware may calculate the number of elements actually to be written to memory and communicate this information to the store unit. Therefore, even though the mathematical operations on padded or additional elements could be performed and stored, this additional computation and storage is inefficient. Thus, the prediction flag can be set such that no additional reads are required and writing of values computed from fill values to memory does not occur (e.g., it is blocked or excluded).
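The batch arithmetic in these examples can be sketched as (function names are illustrative):

```c
/* For vector width w and array length n (n not a multiple of w), the
 * final vector has n % w active lanes; the rest are predicated off. */
int full_vectors(int n, int w) {
    return n / w;
}
int active_last_lanes(int n, int w) {
    int r = n % w;
    return r ? r : w;
}
```

For the examples above: 82 elements with width 16 give 5 full vectors and a final vector with 2 active lanes (14 predicated off); 100 elements with width 32 give 3 full vectors and a final vector with 4 active lanes (28 predicated off).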
This automatic predication can happen at the instruction level, and software can additionally be used to perform lane-level predication. Also, for automatic predication, additional information may not be needed, since the address generator can be programmed for multiple iterations (so the address generator already has the state needed to support automatic predication), and software instructions can be added to switch between automatically predicated stores and predication-off stores. This way, on the final iteration, the hardware can determine when to store a full result or when to store less than a full result (for example, because predication is turned off or otherwise signaled), and this can be done at zero cost while maintaining performance. With software alone, the process would require additional cycles, slowing down the process.

In some embodiments, predication can be used at a per-lane level, so that these implementations can handle not only iteration counts that are not multiples of the loop unrolling factor, but also efficiently handle any problem size that is not a multiple of the vector width. In such embodiments, vector registers can be used to drive per-lane predication, which can provide the advantage of computing the predication information in real time, and a shortcut can be achieved that eliminates the requirement to copy from a vector register to a scalar predicate register before the predication flags are applied on a per-lane basis. For example, per-lane predication can be driven directly from vector registers, which can be beneficial when the per-lane predication information is itself computed in a loop, because that computation can be vectorized. For example, to perform some replacement of values in an array, such as replacing any value over 100 with 999, the code could be written as a simple scalar loop. While such code may be functionally correct, it may result in poor performance.
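The scalar code referenced above is elided in this excerpt; a hypothetical reconstruction, along with a per-lane-predicated analog, might look like the following sketch. The function names are illustrative, not from the source, and the second function only models the idea of driving predication directly from a vector comparison.

```python
# Hypothetical sketch of the replacement example above: replace any
# value over 100 with 999.

def replace_scalar(arr):
    # Functionally correct, but processes one element per iteration.
    out = list(arr)
    for i in range(len(out)):
        if out[i] > 100:
            out[i] = 999
    return out

def replace_predicated(arr):
    # Per-lane predication analog: compute a predicate per lane from
    # the data itself (as a vector compare would), then select the
    # stored value; no copy to a scalar predicate register is modeled.
    pred = [v > 100 for v in arr]          # vectorized compare
    return [999 if p else v for p, v in zip(pred, arr)]
```

Note that 100 itself is not replaced, since the condition is strictly "over 100."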
Therefore, the code can be vectorized using per-lane predication. When the predication computations are vectorized in this way, but per-lane predicates can only be transferred through scalar predicate registers, the predication information must be copied from the vector registers to the scalar predicate registers, increasing execution time. However, rather than performing bit-packing and moving the predication mask from vector lane 0 to a scalar register, it is possible in this example to use per-lane predication driven directly from the vector register, as described herein.

Referring now to FIG. 2C, each block of the method 220 described herein comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be carried out by a processor executing instructions stored in memory. The method 220 may also be embodied as computer-usable instructions stored on computer storage media. The method 220 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, although described with respect to the system 200 of FIG. 2A, the method 220 may be executed by any one system, or any combination of systems, including, but not limited to, those described herein.

FIG. 2C is a flow diagram showing a method 220 for automatic store predication, in accordance with some embodiments of the present disclosure. At block B202, the method 220 includes determining a total number of iterations. For example, the address generator 204 may determine the total number of iterations to be performed for a given instruction. At block B204, the method 220 includes dividing the total number of iterations into sets of iterations.
For example, the address generator 204 may separate the iterations by an unrolling factor to generate a loop body that includes multiple iterations of the loop. At block B206, the method 220 includes determining that a set of iterations of the sets of iterations includes a first number of iterations that is less than a second number of iterations corresponding to other sets of iterations of the sets of iterations. For example, the address generator 204 may determine that, after separating the iterations by the unrolling factor, one set of iterations includes fewer iterations than the other sets. For example, with an unrolling factor of 6 and an iteration count of 62, there may be 11 sets of iterations: 10 sets of 6 iterations and 1 set of 2 iterations. In this way, the address generator 204 may determine that the 2 remaining iterations of the final set should be performed and the other 4 iterations of that set should be predicated off. At block B208, the method 220 includes, during execution of the set of iterations, generating a predication flag corresponding to at least one iteration of the set of iterations. For example, upon determining that the set of iterations does not include a complete set of the same number of iterations as the other sets of iterations, the address generator 204 may enable a predication flag (e.g., change the value of the predication-off bit 208) to indicate that the results of the excess iterations should not be stored or written to memory. At block B210, the method 220 includes preventing writing of a value corresponding to the at least one iteration of the set of iterations to memory based at least in part on the predication flag.
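The flow of blocks B202-B210 can be sketched as follows. This is a hedged software model; the function name `plan_sets` is illustrative, and only the set-splitting and flag arithmetic of the method are modeled.

```python
# Sketch of method 220: split a total iteration count by an unrolling
# factor, and predicate off the excess iterations of the short final set.

def plan_sets(total_iterations, unroll_factor):
    full_sets, remainder = divmod(total_iterations, unroll_factor)
    sets = [unroll_factor] * full_sets
    predicated_off = 0
    if remainder:
        sets.append(remainder)                       # the short final set
        predicated_off = unroll_factor - remainder   # flagged; not stored
    return sets, predicated_off

# 62 iterations with an unrolling factor of 6: 11 sets total
# (10 sets of 6, 1 set of 2), with 4 iterations predicated off.
sets, off = plan_sets(62, 6)
```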
For example, based on the set predication flag, computed values may be prevented or blocked from being written to memory.

Enhanced SIMD datapath organization for vector processors

In a conventional single instruction multiple data (SIMD) architecture, each SIMD processing unit operates on its own data lane, in parallel with and independently of the others. Some machines allow each SIMD processing unit to communicate directly with its neighbors (e.g., left and right neighbors in a linear array of processing units, or north, south, east, and west neighbors in a two-dimensional (2D) array of processing units). However, communicating only between adjacent data paths is limiting, and makes operations requiring multiple input operands expensive to implement. For example, convolution is a common operation in image processing, computer vision, machine learning, and the like. During convolution, various filters can be applied to adjacent pixels, such as, as a non-limiting example, three-tap one-dimensional (1D) filtering involving three data operands and three coefficient operands. If these operands cannot be shared between the data lanes of a SIMD architecture, six operands need to be brought into each data lane to produce the result for that particular lane. With this in mind, some common approaches implement multiple read ports on a register file, but this requires additional surface area for the SIMD architecture as well as additional operating power. To address the deficiencies of conventional SIMD architectures, the SIMD architecture of the present disclosure may allow communication between lanes by defining slices in a processor, such as a vector processing unit (VPU), that include groups of multiple lanes.
As a non-limiting example, in a processor, a SIMD lane organization may include a hierarchical organization in which a 384-bit data path may be divided into, for example, eight 48-bit (extended word) lanes, sixteen 24-bit (extended half-word) lanes, or thirty-two 12-bit (extended byte) lanes. In such an example, each byte may be extended by 4 bits. The first level of communication above the individual lanes may be referred to as a SIMD slice, and may be (for example, and without limitation) 96 bits wide, consisting of two extended word lanes (e.g., two 48-bit lanes), four extended half-word lanes (e.g., four 24-bit lanes), or eight extended byte lanes (e.g., eight 12-bit lanes). In a non-limiting embodiment, the entire processor data path may include four SIMD slices, and the second level of communication may be global, across all four (or another number of) SIMD slices and all lanes. In this way, operand sharing between the lanes of each slice can be achieved, which can be useful in instructions such as filtering, dot product, payload sorting, etc. A SIMD architecture may be included in a VPU or another processor type, such as a processor of the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15. Due to the physical routing of the SIMD architecture, the instruction set architecture (ISA) of the SIMD may allow sharing between a certain number (e.g., 8) of lanes within a slice. For example, as shown in FIG. 3A, within each slice, communication between 32-bit word data types, 16-bit half-word data types, and 8-bit byte data types is possible. As a result, in an example filter operation such as that shown in FIG. 3B, an 8-bit by 8-bit multiply-and-accumulate can be performed in half-words with four input data operands and four coefficients, where the coefficients can be shared across lanes together with data from different lanes.
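The lane hierarchy described above can be checked with a few constants. This trivial sketch only encodes the widths stated in the text (384-bit data path, 96-bit slices, 48/24/12-bit extended lanes):

```python
# Sketch checking the SIMD lane hierarchy described above.

DATA_PATH_BITS = 384
SLICE_BITS = 96
LANE_BITS = {"extended word": 48, "extended halfword": 24, "extended byte": 12}

slices = DATA_PATH_BITS // SLICE_BITS                         # 4 slices
lanes_per_slice = {k: SLICE_BITS // v for k, v in LANE_BITS.items()}
lanes_total = {k: DATA_PATH_BITS // v for k, v in LANE_BITS.items()}
```

The totals (8, 16, or 32 lanes across the data path; 2, 4, or 8 lanes per slice) match the figures stated in the text.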
In a conventional SIMD architecture, each lane would need to load all eight operands to perform the same computation that can be performed using only three input operands in the SIMD architecture of the present disclosure. Because each read port is associated with increased surface area and power consumption, requiring only three read ports to execute such instructions saves space and power. In operation, four accumulators (e.g., 0, 1, 2, and 3) may be filled with the results of the following calculations due to the sharing between lanes within a slice:

ACC[0] += D[0]*C[0] + D[1]*C[1] + D[2]*C[2] + D[3]*C[3]
ACC[1] += D[1]*C[0] + D[2]*C[1] + D[3]*C[2] + D[4]*C[3]
ACC[2] += D[2]*C[0] + D[3]*C[1] + D[4]*C[2] + D[5]*C[3]
ACC[3] += D[3]*C[0] + D[4]*C[1] + D[5]*C[2] + D[6]*C[3]

As shown, ACC[0], for example, can access other lanes of src1, including D[1], D[2], and D[3], and can also access other lanes of src2, including C[1], C[2], and C[3]. Similarly, the other accumulators (ACC) can access the individual lanes of src1 and src2. This type of operation is not possible in conventional vector processors with limited or minimal sharing between lanes. These calculations may implement a sliding-window method, where each accumulator includes the result of shifting the sliding window by one element relative to the previous accumulator. For example, the first accumulator operates on D[0], D[1], D[2], and D[3]; the second accumulator operates on D[1], D[2], D[3], and D[4]; and so on. Each accumulator uses the same coefficients C[0], C[1], C[2], and C[3]. This is possible because of the shared physical routing between the lanes of a slice of the SIMD architecture. As another example implementation of the SIMD architecture of the present disclosure, and with respect to the diagram of FIG. 3C, the dot product in a vector multiplication operation can be performed using lane sharing. In such an example, the two indices (e.g., of D[0][0]) indicate which lane the data belongs to and which output set the data belongs to.
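Before turning to the dot product, the sliding-window accumulation above can be sketched as follows (a software model only; the hardware performs these four accumulations in parallel via lane sharing):

```python
# Hedged sketch of the four-tap sliding-window accumulation above:
# each accumulator applies the same coefficients C[0..3] to a data
# window shifted by one element relative to the previous accumulator.

def filter_accumulate(acc, d, c):
    # acc[i] += sum over j of d[i + j] * c[j], for i = 0..3
    return [acc[i] + sum(d[i + j] * c[j] for j in range(4)) for i in range(4)]

d = [1, 2, 3, 4, 5, 6, 7]        # D[0]..D[6]
c = [1, 0, 0, 1]                 # C[0]..C[3]
acc = filter_accumulate([0, 0, 0, 0], d, c)
# ACC[0] = D[0]*C[0] + D[3]*C[3] = 1 + 4 = 5, and so on.
```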
For dot-product calculations, each lane uses only the data operands from its own lane, but the coefficients are shared between lanes. Therefore, the output from each lane may use all four coefficients at some point during the dot-product operation. In operation, four accumulators (e.g., 0, 1, 2, and 3) may be filled with the results of the following calculations due to the sharing between lanes within a slice:

ACC[0] += D[0][0]*C[0] + D[1][0]*C[1] + D[2][0]*C[2] + D[3][0]*C[3]
ACC[1] += D[0][1]*C[0] + D[1][1]*C[1] + D[2][1]*C[2] + D[3][1]*C[3]
ACC[2] += D[0][2]*C[0] + D[1][2]*C[1] + D[2][2]*C[2] + D[3][2]*C[3]
ACC[3] += D[0][3]*C[0] + D[1][3]*C[1] + D[2][3]*C[2] + D[3][3]*C[3]

As another example operation that may benefit from the SIMD architecture of the present disclosure, the two-point sort operation of FIG. 3D may be performed. In a two-point sort, a payload is sorted using two values. The two-point sort exploits communication between pairs of lanes within a slice and is useful, for example, in various computer vision applications. For example, the key for entry 0 may be in lane 0, the corresponding payload in lane 1, and so on, and the payloads can be sorted based on key comparisons performed for each key/payload pair.

Referring now to FIG. 3E, each block of the method 300 described herein comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be carried out by a processor executing instructions stored in memory. The method 300 may also be embodied as computer-usable instructions stored on computer storage media. The method 300 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few.
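The per-pair key/payload comparison described for the two-point sort (whose code is elided in this excerpt) might resemble the following hypothetical sketch; the alternating key/payload lane layout is taken from the text, while the function itself is illustrative:

```python
# Hypothetical sketch of the two-point sort above: keys occupy even
# lanes, payloads occupy the adjacent odd lanes, and each pair of
# entries is compared so that key/payload pairs end up in key order.

def two_point_sort(lanes):
    out = list(lanes)
    # Compare entry 0 (lanes 0-1) against entry 1 (lanes 2-3), etc.
    for i in range(0, len(out) - 3, 4):
        k0, p0, k1, p1 = out[i:i + 4]
        if k1 < k0:                      # swap both key and payload
            out[i:i + 4] = [k1, p1, k0, p0]
    return out

# Keys 9 and 3 with payloads 'a' and 'b': the pair with key 3 sorts first.
sorted_lanes = two_point_sort([9, "a", 3, "b"])
```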
Although described with respect to the SIMD architecture of the present disclosure, the method 300 may be executed by any one system, or any combination of systems, including, but not limited to, those described herein.

FIG. 3E includes a flow diagram of a method 300 for computing outputs using shared operands across lanes of a SIMD architecture, in accordance with some embodiments of the present disclosure. At block B302, the method 300 includes dividing a first bit width of a processor into a plurality of data slices, each data slice comprising a second bit width less than the first bit width, and each data slice of the plurality of data slices comprising a plurality of lanes, each lane comprising a third bit width less than the second bit width. For example, a vector processor may be divided into some number (e.g., 4) of slices, and each slice may include some number of lanes. At block B304, the method 300 includes loading a first vector into a first vector register such that a first lane of the plurality of lanes includes a first operand of the first vector and a second lane of the plurality of lanes includes a second operand of the first vector. For example, with respect to FIG. 3B, the first data operand D[0] of the first vector may be loaded into the first lane, and the second data operand D[1] of the first vector may be loaded into the second lane. At block B306, the method 300 includes loading a second vector into a second vector register such that the first lane of the plurality of lanes includes a third operand of the second vector and the second lane of the plurality of lanes includes a fourth operand of the second vector.
For example, with respect to FIG. 3B, the first coefficient operand C[0] of the second vector may be loaded into the first lane, and the second coefficient operand C[1] of the second vector may be loaded into the second lane. At block B308, the method 300 includes computing an output using an instruction based at least in part on the first operand, the second operand, the third operand, and the fourth operand. For example, with respect to FIG. 3B, the first accumulator (ACC[0]) may receive the result of the calculation ACC[0] += D[0]*C[0] + D[1]*C[1] + D[2]*C[2] + D[3]*C[3], which includes the values D[0], D[1], C[0], C[1], and others. This computation is possible because of the internal sharing and routing between the lanes of each slice. At block B310, the method 300 includes storing the output to a register. For example, with respect to FIG. 3B, the output of the calculation may be stored to the accumulator register ACC[0], which may then be stored to memory.

Transpose load and store operations with stride parameters

In a conventional vector single instruction multiple data (SIMD) processor, the local data memory may be sized to match the vector processing width. For example, for a 256-bit vector SIMD processor capable of processing 32 8-bit lanes, 16 16-bit lanes, or 8 32-bit lanes, the local data memory may include, for example, 256-bit wide memory or 512-bit wide memory (e.g., two times the processing bit width). In such an example, the local data memory is organized as a single memory bank with full-width memory words. However, a wide vector SIMD processor with a single full-width memory word can be inefficient, especially for unaligned memory accesses.
For example, to load a 16-element array of 32-bit values at byte addresses 4 through 67, the processor may require two memory reads: one read of addresses 0 through 63 (including addresses 0 through 3, whose data is unnecessary for the current operation) and a second read of addresses 64 through 127 (including addresses 68 through 127, whose data is also not required for the current operation). Thus, without the banked memory architecture of the present disclosure, such access patterns must be implemented with multiple loads or stores, which can result in slowdown of the computing cores, reduced performance, and increased power consumption.

With this in mind, a single wide memory bank can instead be organized into multiple memory banks, e.g., 16-bit memory banks (e.g., 32 16-bit memory banks providing 512 bits of memory bandwidth per clock cycle). In this way, read and/or write operations can occur on any 16-bit aligned range, thereby reducing the number of redundant read/write operations such as those described in the above example. With such a memory organization, reading addresses 4 through 67 may require only one memory read. In addition to a memory bank organization comprising smaller individual memory banks, transposed load and/or store functionality may also be implemented. For example, a lane offset parameter K can be used to define a row address offset to be applied to each subsequent lane in memory. A lane size may correspond to a data element size, e.g., 8 bits, 16 bits, 32 bits, etc. When a 2D array is stored in memory with a row pitch of W*K+1 elements, an interleaved access pattern may be converted to a vertical pattern, where K is the offset parameter and W is 64 (e.g., the memory width in bytes) divided by the lane size (or data element size) in bytes. For example, for 32-bit data elements, the row pitch may be 16*K+1. In some embodiments, such a SIMD processor may be included as a component of, and/or may be similar to, the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG.
14, and/or the example data center 1500 of FIG. 15, or may include components, features, and/or functionality similar thereto.

As an example, and as shown with respect to the diagram in FIG. 4A, table 400 may include an illustration of a logical view and a memory bank view of a transposed load with a row pitch of 17 (a row spacing exceeding 256 bits). Although the memory is depicted as 18 individual 16-bit banks in the memory bank view, this is for illustration purposes only. For example, the memory banks may total 256 bits, 512 bits, or some other total number of bits, e.g., with each memory bank being 16 bits wide. In the memory bank view using the transposed load, with a row pitch of 17, a single load operation can be performed to retrieve each highlighted value of the array. Although transposed loads using this technique are beneficial for many operations, certain algorithms, such as some computer vision algorithms, may require other data access patterns to be fetched and/or written. For example, instead of loading a 16-high vertical vector, it may be desirable to load an 8-high by 2-element-wide sub-matrix, a 4-high by 4-element-wide sub-matrix, or another matrix or sub-matrix size. For example, in a dot-product operation, the accumulation may be performed on two rows of 16 elements of 16 bits each, so when storing the output, a T16 transposed-store option with an appropriate row pitch may be desired so that the two rows can be written out as a single memory write transaction. To accomplish this, a stride parameter can be used with transposed loads and/or stores. In some embodiments, the stride parameter may include a power-of-two stride (although this is not limiting), such as a stride of 2, 4, 8, 32, etc., which may be referred to as T2, T4, T8, T32, and so on. An example of different transposed loads with a stride parameter is shown in table 410 of FIG. 4B, which includes a logical view and a memory bank view of the transposed loads. The example of FIG. 4A, mirrored in FIG.
4B, includes a stride parameter of 1, while the other examples use stride parameters that are powers of two. For example, T2 has a row pitch of 18, allowing a 2-element-wide by 8-high matrix to be stored for transposed access such that each value can be retrieved using a single load transaction. Similarly, for T4, with a row pitch of 20 and a stride of 4, a 4-element-wide by 4-high matrix can be stored so that each value can be retrieved with a single load transaction, and so on. Although described as load transactions, this type of format can also be used for store transactions, storing data to memory according to the transposition plus the stride parameter. In such examples, the row pitch constraint can be adjusted according to the stride. For a T transposed access, the row pitch may be 16K+1; for a T2 transposed access (e.g., for a stride of 2), the row pitch may be 16K+2; for a T4 transposed access (e.g., for a stride of 4), the row pitch may be 16K+4; and so on. Thus, the row pitch can be equal to 16K plus the stride value, or 16K+1+(T-1), where T is the stride parameter.

In operation, the architecture of the VPU's vector memory (VMEM) and the VPU's instruction set architecture (ISA) can be configured to perform transposed load and/or store operations, with or without a stride parameter, to allow data organized in columns of the logical view to be read or written in a single operation. For example, the ISA may be configured to receive an indication of a starting address to read data from or write data to (e.g., for reading data into or writing data from a register file), an indication of the operation type (e.g., a transposed operation, with or without a stride parameter), a row pitch value (e.g., the K value in 16*K+1), and/or a stride parameter value. Note that the value 16 corresponds to the number of data elements in a particular implementation; the value 16 (or W) may vary in different embodiments.
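The bank-conflict-avoidance property behind these row pitches can be checked numerically. The sketch below (illustrative names; a pure software model of the addressing) assumes W = 16 banks and maps each element address of a transposed access to its bank via address modulo W:

```python
# Hedged sketch of banked addressing: with W banks and a row pitch of
# W*K + stride, the W elements of one transposed access (stepping
# `stride` elements per row) land in distinct banks, so a transposed
# load needs only a single read cycle.

W = 16  # number of banks (data elements per memory word)

def column_banks(pitch, stride):
    # Element addresses touched by one transposed access with the
    # given stride, mapped to bank numbers.
    addrs = [row * pitch + lane
             for row in range(W // stride)
             for lane in range(stride)]
    return [a % W for a in addrs]

# T (stride 1): pitch 16*1+1 = 17; T4 (stride 4): pitch 16*1+4 = 20.
# In both cases all 16 banks are hit exactly once: no conflicts.
assert sorted(column_banks(17, 1)) == list(range(W))
assert sorted(column_banks(20, 4)) == list(range(W))
```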
Thus, when writing data to memory according to a transposed write operation, the ISA may receive the starting address, the row pitch, and/or the stride parameter for the write into VMEM. As a result, when values are written, rather than being written out as a single data column to a single memory bank, the data can be written out according to a transposition or offset such as those shown in FIGS. 4A and 4B. Where a stride parameter is used, the first value can be written to memory, followed by the next number of elements corresponding to the stride, and the row pitch can then be applied to write the next set of values to the memory banks, so that each value can later be accessed in a single cycle. Similarly, for a read operation, the ISA may receive a starting address, a load type (e.g., a transposed load, with or without a stride parameter), a row pitch value (e.g., the K value), a stride parameter value, and/or a data type indicator (e.g., byte, half-word, etc.). The ISA can then access the data from the various memory banks according to the transposed load instruction (and/or the stride parameter) to retrieve a column (or columns) of data in a single read cycle. In this way, a single vector can be returned from a single read operation by retrieving one element from each memory bank.

Referring now to FIGS. 4C-4D, each block of the methods 420 and 430 described herein comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be carried out by a processor executing instructions stored in memory. The methods 420 and 430 may also be embodied as computer-usable instructions stored on computer storage media. The methods 420 and 430 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few.
Although described with respect to the SIMD architecture of the present disclosure, the methods 420 and 430 may be executed by any one system, or any combination of systems, including, but not limited to, those described herein.

FIG. 4C includes a flow diagram of a method 420 for configuring a transposed store operation using a stride parameter, in accordance with some embodiments of the present disclosure. At block B402, the method 420 includes determining dimensions of a matrix. For example, the width of the matrix may be determined. At block B404, the method 420 includes determining, based on the dimensions, a stride parameter and a row pitch for storing the matrix. For example, the row pitch may be determined as 16K plus the stride value, and the stride value may be determined based on the width of the matrix. At block B406, the method 420 includes causing the values of the matrix to be stored to memory using the stride parameter and the row pitch. For example, once the row pitch and the stride are determined, the values of the matrix may be stored to memory such that the row pitch and stride parameter values do not cause memory bank conflicts when the matrix values are read from memory.

Referring now to FIG. 4D, FIG. 4D includes a flow diagram of a method 430 for performing a transposed load operation using a stride parameter, in accordance with some embodiments of the present disclosure.
At block B408, the method 430 includes receiving data representing a row pitch and a starting memory address, the starting memory address corresponding to an element, of a plurality of elements, in a memory bank of a plurality of memory banks, the plurality of elements corresponding to a column. At block B410, the method 430 includes, in a single read operation and based at least in part on the row pitch, reading the plurality of elements from the plurality of memory banks, each element of the plurality of elements being read from a corresponding memory bank of the plurality of memory banks.

Load with permutation and zero insertion in a single instruction

In a conventional processor instruction set, a load instruction may form a memory address through some index calculation, read the requested memory data from local memory, and store the memory data into a register. If an application requires additional data manipulation, additional instructions can be used to operate on the memory data in the registers. In some cases, the data manipulation may include simple data reorganization. In conventional processors, even this simple manipulation of data in the register file requires additional instructions, and thus additional latency. For example, a conventional system may load data, perform a permutation on the loaded data, and then perform one or more operations using the reorganized data. If the load instruction is enhanced with this data reorganization capability, some processing time can be saved, and the computing kernels can execute with higher performance and lower power consumption. To address these shortcomings, the systems and methods of the present disclosure add a load-with-permutation instruction that sends a permutation pattern to the local memory along with the memory address. As a result, the existing data routing and multiplexing used to handle unaligned loads can be used to perform the permutation without significant additional logic.
In addition to saving the instructions that would otherwise be required (e.g., 5 instructions for a permutation with a two-vector input and a two-vector output), the overall latency of the permutation operation may be reduced. For example, there is no load-to-use delay followed by a separate compute delay (e.g., for performing the permutation); the only delay incurred is the load-to-use delay itself. In some embodiments, the loads with permutation and/or zero insertion described herein may be included in, or may be similar to, components, features, and/or functionality of the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.

Thus, the load-with-permutation feature can be used to manipulate loaded data from memory into a desired format for an operation. As an example, the coefficient data required by various filter and dot-product instructions may include a specific repeating pattern, which may be achieved by a load with permutation. Regarding filter operations, such as those described with respect to FIG. 3C, coefficients 0, 1, 2, and 3 may be repeated over the vector width (e.g., 16 lanes), e.g., as shown in FIG. 5A. In such an example, a write to the first register could start at D[0]-D[15], and then a sliding window of 4 could be used to start the next register at D[4]-D[19], and so on. In this filter example, the coefficients C[0]-C[3] may repeat across the width of the vector, so using a permuted load may help write the coefficients in this order directly from the load, rather than loading all of the data, performing a permutation, and then writing the vector to a register. Thus, in this example, the permutation pattern for the coefficient data may include {0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3}. In the same example, the permutation pattern for the data operands may be {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19}.
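The effect of such a load can be sketched as a simple indexed gather. This is a software model only (the function name is illustrative); the hardware performs the reordering inside the load itself, with no separate permute instruction:

```python
# Hedged sketch of a load with permutation: the permutation pattern is
# applied as part of the load, so the destination register receives
# the reorganized data directly.

def load_with_permute(memory, pattern):
    return [memory[i] for i in pattern]

coeffs = [10, 11, 12, 13]                 # C[0]..C[3] in memory
coeff_pattern = [0, 1, 2, 3] * 4          # repeats across the vector
reg = load_with_permute(coeffs, coeff_pattern)
# The register now holds C[0..3] repeated four times, matching the
# coefficient pattern {0,1,2,3,0,1,2,3,...} from the text.
```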
In this way, the data operands and coefficient operands can be read out in permuted order, rather than being read out sequentially and then permuted before being written to a register for computation. As another example, as shown in FIG. 5B, a filter instruction may include a double-vector coefficient operand, and thus may use a permutation pattern such as {0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3,4,5,6,7,4,5,6,7,4,5,6,7,4,5,6,7}. The permutation pattern can be static or fixed, or can be calculated dynamically by an algorithm, which allows the permutation pattern to be flexible and dynamic. Where the pattern is a repeating pattern, in embodiments, a first instance of the repeating elements may be loaded, then copied, and then written out to the SIMD lanes of the SIMD unit.

In some cases, it may be preferable to mask certain portions of the memory data with zero values. For example, zeros can be inserted for unused entries for easier visualization in software development, or to consume less energy (e.g., compared to retaining random data values). In other examples, zeros may be inserted to delineate blocks of data in a data structure, such as where each block of data is not a fixed length. In such an example, a zero value may indicate a gap between two data blocks. When processing constant-sized image patches, for example, when extracting some variable-length information (e.g., the locations of feature points) from each image patch, zeros can be used to fill the remaining data that does not correspond to the extracted information. In practice, a permutation index may typically include 32 or 16 elements in a read, e.g., in the range 0-31 or 0-15, respectively. To include zero values in the read, the permutation operation can allow negative index values in the load, such that zeros are written in the corresponding lanes of the destination register.
Thus, for a lane whose permutation index is negative, a zero may be written into the corresponding lane of the SIMD architecture instead of a value from memory. As an example, a 30-wide by 30-high image patch may be processed by vector operations using 16 consecutive entries at a time. Since the width of 30 is not divisible by 16, each row can be processed by two vector operations: the first on a full vector width of 16 entries, and the second on a partial vector width of 14 entries. In such examples, it may be beneficial if the load of the second, 14-entry vector is zero-padded in the last two vector lanes, rather than containing whatever random data values may currently be in memory. In one or more embodiments, the padding zeros may be inserted into the desired lane locations of the SIMD architecture, e.g., to save the processing time otherwise required to write zeros to those lane locations. Where there are 16 lanes, a normal permutation pattern may consist of 16 lane indices, e.g., 0-15. In such an example, if the values are {100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115} and the index permutation pattern is {0,1,2,3,4,5,6,7,8,9,10,11,12,13,-1,-1}, the final values loaded into the destination register would be {100,101,102,103,104,105,106,107,108,109,110,111,112,113,0,0}. Thus, the two indices of -1 are automatically converted to 0 values in the destination register based on the permutation pattern including negative values. Under the previous approach, the pattern would instead include indices 14 and 15, and the values at locations 14 and 15 in memory would be written to the register; however, those may be random values that may require additional processing time compared to including 0 values. To implement the load-with-permutation feature, routing and multiplexing in the memory logic can be used, e.g., routing and logic similar to that used to perform unaligned memory loads.
For example, to support loading a full memory width (e.g., 32 16-bit lanes) from any 16-bit address (or 16 32-bit lanes from any 32-bit address), the memory logic may include multiplexing logic to select any one of the 32 lanes of the memory data and route it to any destination register lane. For example, an unaligned memory load may be driven according to the following logic:

output_lane[0] = select(start_lane, memory_lane[0..31]);
output_lane[1] = select((start_lane + 1) % 32, memory_lane[0..31]);
output_lane[2] = select((start_lane + 2) % 32, memory_lane[0..31]);
…
output_lane[31] = select((start_lane + 31) % 32, memory_lane[0..31]);

In an embodiment, a modulo operator (%) may be used to wrap around the total number of lanes. So, for example, where the starting lane is lane 3, lanes 3, 4, 5, ..., 31, 0, 1, 2 would be used as the outputs to the register lanes.

For loads with a permute feature, this same logic can essentially be reused, but modified to perform the permutation. An example of the modified logic is as follows:

output_lane[0] = select((start_lane + permute[0]) % 32, memory_lane[0..31]);
output_lane[1] = select((start_lane + permute[1]) % 32, memory_lane[0..31]);
output_lane[2] = select((start_lane + permute[2]) % 32, memory_lane[0..31]);
…
output_lane[31] = select((start_lane + permute[31]) % 32, memory_lane[0..31]);

As an example, and as shown in the diagram of FIG. 5C, data can be retrieved from anywhere in memory 512 and driven to any lane of the SIMD unit through corresponding multiplexers (muxes) 514A-514N. In this way, any of the 16 inputs (or of another memory or register width) may be written to any of the 16 output locations or lanes. This may help with unaligned accesses, so that load operations can start at any address and then be aligned down. For example, a read of memory locations 2-18 can return the data from locations 2-18 aligned to lanes 0-16 (e.g., location 2 goes into lane 0, location 3 into lane 1, and so on).
This is not possible in legacy systems, where vector loads need to start at positions that are multiples of 16, such as 0, 16, 32, and so on. As shown in FIG. 5C, permutation can also be performed, because data from any memory index can be output to any lane of a SIMD unit such as a VPU. The multiplexers 518 can be used to inject or insert a permutation control for each lane, telling the multiplexers 514 of the crossbar 510 which memory location to fetch data from based on the starting position (which may be aligned or unaligned) and the permutation pattern. Therefore, instead of simply fetching data from aligned locations, the permutation pattern can be used to update the memory read locations so that each multiplexer 514 sends the correct data to each lane of the SIMD unit. Additionally, the multiplexers 516 may be used to insert zeros for permutation patterns that include negative values, or other values indicative of zero insertion (e.g., where values other than negative values are used to trigger zero insertion). Thus, once the memory access locations are sent from the multiplexers 518 to the crossbar 510, and the values from the memory access are sent to the multiplexers 516 for zero insertion, any value corresponding to a negative value in the permutation pattern can be converted to zero to fill the corresponding SIMD lane. Although only four sets of lanes, multiplexers, and memory indices are shown in FIG. 5C, this is not limiting, and any number of sets may be included without departing from the scope of this disclosure.

FIG. 5D illustrates an example use of the hardware architecture 500. For example, the illustration in FIG.
5D may be based on the following information:

crossbar_mode = 1;
start_lane = 2;
permute pattern = {3,1,-1,...,2} = {011b,001b,111b,...,010b};
mem read bus = {100,101,102,...,103};
permute_low = {3,1,3,...,2}; // lower 2 bits of the permutation
permute_sign = {0,0,1,...,0}; // bit 3 of the permutation
read data output = {103,101,0,...,102}

In addition, the following C code can describe the logic circuit of the hardware architecture of FIGS. 5C and 5D:

Thus, in the example of FIG. 5D, a bit value of 1 in a multiplexer 518 may indicate that a load permutation value should be selected, and these values {3,1,3,...,2} may be passed to the corresponding multiplexers 514 of the crossbar 510. As such, the values {103,101,103,...,102} can be read from memory and sent to the multiplexers 516; because the permutation pattern includes a third value of -1, the third value of 103 can be converted to 0 by zero insertion. Thus, the final values {103,101,0,...,102} can be read back into the vector register.

Referring now to FIG. 5E, each block of the method 550 described herein includes a computational process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be carried out by a processor executing instructions stored in memory. The method 550 may also be embodied as computer-usable instructions stored on a computer storage medium. The method 550 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. Furthermore, by way of example, the method 550 is described with respect to the hardware architecture of FIG. 5C. However, the method 550 may additionally or alternatively be executed by any one system, structure, or component, or any combination of systems, structures, or components, including but not limited to those described herein. FIG.
5E is a flowchart illustrating a method 550 for performing a load with a permute operation, according to some embodiments of the present disclosure. At block B502, the method 550 includes determining a permutation pattern for loading data from memory. For example, the permutation pattern can be static or dynamically computed. Permutation patterns can be aligned (e.g., 0 to 16, or 0 to 32), unaligned (e.g., 2 to 18), repeating (e.g., 0, 1, 2, 3, 0, 1, 2, 3, ..., etc.), and/or other pattern types.

At block B504, the method 550 includes determining a memory address location for each of a plurality of lanes based at least in part on the permutation pattern. For example, the permutation pattern may indicate the memory address location from which the data for a particular lane or register should be loaded. The permutation pattern can be applied using the multiplexers 518, so that the correct memory address according to the permutation pattern is sent to the crossbar 510.

At block B506, the method 550 includes loading a value into each of the plurality of lanes based at least in part on the memory address locations. For example, based on a memory address location, a multiplexer 514 of the crossbar 510 may retrieve the corresponding value from memory to write to one or more lanes within one or more vector registers. In some embodiments, the multiplexers 516 may also be used to convert values associated with negative values in the permutation pattern (or other values indicative of zero padding) to zeros. Thus, where one or more negative values are included in the permutation pattern, the corresponding values loaded from memory may be converted to zero before being written to the vector register.

At block B508, the method 550 includes performing one or more operations within each lane of the plurality of lanes using the values and at least one instruction.
For example, once a vector register or processing lane of a SIMD unit is filled, one or more operations, such as arithmetic instructions, logic instructions, shift/rotate instructions, bit manipulation instructions, comparison instructions, conversion instructions, constant generation instructions, and/or the like, may be performed using one or more processing units corresponding to one or more processing lanes.

Multipoint lookup with blending for performing table lookups

In a conventional processor with vector SIMD computation, the local memory may have a bit width matching that of the vector SIMD. As a result, these processors may often only support read and/or write alignment and granularity corresponding to that bit width. However, table lookups are a common technique in embedded environments, such as digital signal processing (DSP) and computer vision, for implementing various nonlinear functions. For example, the square root, logarithm, sine, and cosine functions may require a table lookup. To implement these functions, the input space can be sampled uniformly on a one-dimensional (1D) grid, and the outputs at these input points can be recorded in a 1D table. However, when implementing nonlinear functions using table lookups, there is often a tradeoff between table size (e.g., the number of entries in the table) and accuracy. To improve accuracy when a large table size is not desired, an interpolated lookup can be performed, where two points around a fractional index are looked up for linear interpolation, or three points around a fractional index are looked up for quadratic interpolation.

As an example, where the sine function is implemented using a lookup table and the sine values are tabulated at integer degrees, then table[0] = sin(0 degrees), table[1] = sin(1 degree), table[2] = sin(2 degrees), and so on.
In such an example, to evaluate sin(1.7 degrees), the fraction can be used to linearly interpolate between the two integer-degree entries as table[1]*0.3 + table[2]*0.7. In this example, the second entry, table[2], is weighted by the fraction, and the first entry by 1 minus the fraction, so the closer the fraction is to 1.0 (the position of the second entry), the more heavily the second entry is weighted.

As another example, an image or a patch of an image may be resampled, which may involve finding the available pixels around some fractional pixel coordinate and then performing an interpolated lookup. In such an example, the table may include the image patch and may be two-dimensional. In this case, bilinear interpolation can be performed to interpolate in two dimensions, with each dimension interpolated linearly. For example, a patch value at position Y = 5.1, X = 7.6 can be interpolated according to the following calculation:

(patch[5][7]*0.4 + patch[5][8]*0.6)*0.9 + (patch[6][7]*0.4 + patch[6][8]*0.6)*0.1

However, performing this type of interpolated lookup is expensive in conventional processors, because a separate lookup needs to be performed for each value in each table. To speed up this process, a table can be replicated to allow a number of lookups to be made simultaneously using different instances of the table. For example, in the example above, when looking up patch entries at 5, 6, 7, and 8, the table might be replicated at least 4 times to allow parallel lookups across the four tables. In the case of a processor (such as a VPU) that supports 32-way parallelism, the table might be replicated 32 times.
However, while replicating tables may increase per-cycle throughput, replication also requires additional memory capacity and usage, which may not be available or optimal in some implementations.

With this in mind, the systems and methods described herein use two-point and/or two-by-two (2x2) point lookup operations to increase throughput (or match, e.g., 32-way parallel throughput) while saving memory space. For example, using a per-memory-bank address bus and associated logic and routing, parallel lookups of two points or 2x2 points (e.g., 4 points) can be performed with less memory usage. Thus, a single lookup of the table may yield two points in a two-point lookup, or four points in a 2x2 point lookup. This can be accomplished based on the hardware configuration, e.g., bank addresses, logic, routing, etc., and the layout of the data in memory, allowing multiple data values to be read without bank conflicts. As noted above, without these features, implementing, e.g., a 32-way parallel lookup would require the table to be replicated 32 times. For example, such a 32-way parallel lookup can be performed using the following C code:

In this example, the lookup portion of the loop can perform 32 lookups over two cycles (the lookups and blends are executed separately in the memory and vector math slots, and each iteration is pipelined over two cycles), with interpolation producing 32 outputs. The whole lookup/interpolation is thus 16 outputs per cycle and requires 32 copies of the table.

As a further example, and with reference to FIG. 6A, a 16-way parallel table organization is shown for performing a single-point lookup with the index vector {0,1,2,3,4,5,4,3,...}. In such an example, using conventional architectures and memory layout techniques, the first and second lookups would need to be performed sequentially in order to read two entries from each memory bank.
For example, the first memory bank, T0, contains the values at T0[0] and T0[1] to be read in a lookup operation, but because these values are in the same memory bank, T0 (which may include only a single read port), the first value T0[0] is read in a first pass and the second value T0[1] is read in a second, sequential pass. With such a memory layout, if two reads fall in the same memory bank, bank conflicts can occur, which can cause processing delays and/or prevent algorithms or other computations from executing correctly.

Using the architecture of the present disclosure, however, the same 32 lookups may require only 16 copies of the table for a two-point lookup, or 8 copies for a 2x2 point lookup. For example, for a two-point lookup, the same performance of 16 outputs per clock cycle can be achieved with 16 copies of the table, reducing the memory footprint by a factor of two. The 16-way parallel variant of the instruction can return a double vector, with the first entries in the lower single vector and the second entries in the upper single vector. In C code, this 16-way parallel lookup and interpolation can be expressed as follows:

In such an example, the lookup and interpolation parts of the loop may require only a single clock cycle (the lookup and blend are executed in the memory and vector math slots, respectively, and pipelined to one cycle per iteration), with the interpolation producing 16 outputs. Lookup/interpolation is thus 16 outputs per cycle. As an example, and with reference to FIG. 6B, an 8-way parallel table organization is illustrated for performing a two-point lookup with the index vector {0,1,2,3,4,5,4,3,...}. In such an example, since each memory bank T0, T1, T2, etc. contains only a single value to be read during a lookup operation, all 16 values can be read in a single pass, in contrast to the example of FIG. 6A, in which only 8 values can be read in each of two passes due to memory bank conflicts.
To this end, in an embodiment, an instruction for the lookup may include a single index and a pattern that includes not only the lookup index but also the lookup index plus one location. Thus, one instruction may cause two values to be read for a two-point lookup, and the values may be written to the lookup table in a format that allows that single read to be performed without memory bank conflicts.

As an example, when performing vector operations, each lane of the VPU may process a set of pixel values retrieved from memory. In some cases, a lane may need multiple values from the same memory bank, which can lead to memory bank conflicts because a memory bank may include only a single read port. Thus, the methods and systems of the present disclosure distribute values among the memory banks such that memory bank conflicts do not occur and, for example, each processing lane of the VPU can access each of its corresponding values in a single read cycle.

In conventional systems that perform 2D bilinear interpolated lookups, each output requires four lookups (e.g., 2x2), allowing an optimal throughput of 8 outputs per clock cycle with 32 copies of the table. With 2x2 point lookups, 8 outputs per cycle can be achieved with only 8 copies of the table (compared to 32), reducing the memory footprint required for the parallel subtables by a factor of four. For example, for a 2x2 point lookup, two entries can be read from one row of the 2D table, and then two entries from the next row. To avoid memory bank conflicts in any memory bank, the row pitch in the 2D table can be constrained to m*k+2, where m is the number of entries stored horizontally in each subtable, and k is any integer large enough for the pitch to hold a row of the table. For an 8-way parallel 16-bit table, m = 32 (16-bit memory words) / 8 (degree of parallelism) = 4.
For a 2-way parallel 32-bit table, m = 16 (32-bit memory words) / 2 (degree of parallelism) = 8.

As an example, and with respect to FIGS. 6C-6D, the row pitch constraint can be used to avoid memory bank contention. In this example, a 2-way parallel table organization for a 2x2 point lookup with a line pitch of 10 is illustrated. The number of consecutive elements in each subtable (m) is 8, with A[0][0...7] placed consecutively in a subtable, and the line pitch of 10 conforms to the formula 8k+2, where k can be any integer. Therefore, no matter which index value a lookup starts at, the 2x2 points to be retrieved fall in different banks, which is guaranteed by the arithmetic. For example, the bank numbers of the 2x2 points, relative to the subtable, work out as follows:

index % 8,
(index + 1) % 8,
(index + line_pitch) % 8 = (index + 8k + 2) % 8 = (index + 2) % 8,
(index + line_pitch + 1) % 8 = (index + 8k + 2 + 1) % 8 = (index + 3) % 8

In general, the four entries retrieved by a 2x2 lookup have bank numbers, relative to the subtable, of index % m, (index+1) % m, (index+2) % m, and (index+3) % m. As long as m >= 4, there should be no bank conflicts. In the example of FIGS. 6C-6D, the lookup may use 2D indices of (0,1) and (1,3), using Y then X as the convention for storing pixels in row-major order. In FIG. 6C, a logical view of the two two-dimensional tables is shown, and in FIG. 6D, a memory layout view of the values from the tables is shown. In the logical view, the lookups are 2x2 as shown, and the memory layout view shows the four points each residing in a different memory bank (a different column in the diagram), so that each of these values can be read in a single memory cycle or pass. Based on the instruction and the read pattern using the indices (e.g., (0,1) and (1,3)), the values in the table can be stored in memory in such a way that each value can be read from memory in a single pass.
Therefore, using this memory layout and read instruction, four entries of each subtable can be returned per cycle in the following format:

Lower destination single vector: A[0][1], A[0][2], B[1][3], B[1][4], (rest filled with zeros)
Higher destination single vector: A[1][1], A[1][2], B[2][3], B[2][4], (rest filled with zeros)

Although FIG. 6C shows two 10-element-wide by 3-high 2D tables, the A table and the B table, this is not limiting, and the tables may be of any width and/or height depending on the embodiment. Similarly, the memory layout in FIG. 6D includes a 16-element-wide by 3-high layout, but this is not limiting, and the memory width and/or height can be of any configuration depending on the embodiment.

In some implementations, for example when sampling an image patch, interpolation between fractional pixel positions may be performed. In some embodiments, to interpolate the looked-up values without additional data-manipulation instructions, a vector horizontal interleaved blend (VHBlend_I) instruction may be executed, which performs a horizontal blend with interleaving. For example, with this instruction, bilinear interpolation can be completed in the same loop following the lookup. The instruction may process each lane pair according to the layout of the table of FIG. 6E. In this arrangement, the outputs Y0 and Y1 can be computed as follows:

Y0 = x*(1 - alpha0) + y*alpha0
Y1 = z*(1 - alpha1) + w*alpha1

Thus, this instruction performs a horizontal blend between the lane pairs x and y, and z and w, and causes the outputs to be interleaved in the destination register. For example, the following C code snippet can be used to achieve optimal performance on an 8-way parallel table using a 2x2 point lookup.

In this 8-way parallel table organization, the subtables are designated A, B, ..., H, and the loop can perform the lookups and interpolation, producing 16 outputs per iteration.
In such an example, the input could be organized as follows:

idx.lo = {idx0, idx1, idx2, idx3, idx4, idx5, idx6, idx7, (rest ignored)}
idx.hi = {idx8, idx9, idx10, idx11, idx12, idx13, idx14, idx15, (rest ignored)}
x_frac.lo = {xf0, xf0, xf1, xf1, ..., xf7, xf7} // note the repeating pattern
x_frac.hi = {xf8, xf8, xf9, xf9, ..., xf15, xf15} // note the repeating pattern
y_frac = {yf0, yf8, yf1, yf9, ..., yf15} // note the interleaved pattern

An illustration of the intermediate and final results of this instruction is provided in FIG. 6F, which includes arrows indicating the blend and interleave patterns of the data.

Referring now to FIG. 6G, each block of the method 600 described herein includes a computational process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be carried out by a processor executing instructions stored in memory. The method 600 may also be embodied as computer-usable instructions stored on a computer storage medium. The method 600 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. Furthermore, the method 600 may be executed by any one system, structure, or component, or any combination of systems, structures, or components, including but not limited to those described herein.

FIG. 6G is a flowchart illustrating a method 600 for performing a multipoint lookup (e.g., in a single clock cycle in a decoupled lookup table (DLUT) accelerator, such as described with respect to FIGS. 9A-9C), according to some embodiments of the present disclosure.
At block B602, the method 600 includes copying a table to memory such that a first value is stored at a first physical address in a first memory bank and a second value is stored at a second physical address in a second memory bank, where the first value and the second value are included in the same column in a logical memory view of the table. For example, a table can be copied to memory any number of times to take advantage of the memory access parallelism of the system. The table may include a first value at a first logical address and a second value at a second logical address in the same column as the first value; if stored to memory in that configuration, this could result in a memory bank conflict because the two values might be stored in the same memory bank. Thus, when copying the table to memory, a write instruction can place the second value at a physical address adjacent to the first value but in another memory bank, thereby making it possible to retrieve both values in the same cycle.

At block B604, the method 600 includes determining a first index corresponding to the first physical address in memory. For example, a read operation may use an index indicating the first location in memory from which to start reading values.

At block B606, the method 600 includes reading, during a single cycle, the first value at the first physical address and the second value at the second physical address based at least in part on a read instruction corresponding to the multipoint lookup. For example, when copying the table to memory, the table may be copied such that pairs of points in the same column of the table (e.g., corresponding to pixels in the same column of pixels) are stored in separate memory banks.
Therefore, using a read instruction for a two-point lookup, which uses the index of the first point of a point pair to read the first point and the adjacent second point stored in a different memory bank, the first value and the second value can be read in a single cycle from the first memory bank storing the first value and the second memory bank storing the second value. This operation can be performed for each pair of values in each replicated table to produce a high vector comprising the first value from each table and a low vector comprising the second value from each table, and these vectors can be written to vector registers and consumed by the instructions that generate the output (e.g., interpolation).

At block B608, the method 600 includes performing one or more operations using the first value and the second value. For example, the first and second values may be loaded into one or more lanes of the VPU, and square root, logarithm, sine, and cosine functions may be evaluated, linear or bilinear interpolation may be performed, and/or other types of operations may be executed. Where interpolation is performed with a table copied 16 times, for example, 16 two-point lookup operations may occur to produce 32 values, 2 values per vector lane of the VPU, on which interpolation can be performed to output 16 results. Therefore, only 16 copies of the table are needed to produce 16 interpolated outputs per cycle. This is a consequence of using two-point lookups, as the table containing the values may only need to be replicated half as many times as for a traditional single-point lookup operation (e.g., 16 times instead of 32) to deliver the same throughput of 32 values with half the memory footprint.

Per-memory-bank load cache in vector memory

In a conventional processor, a data cache may have a width of, for example, 32 bytes per cache line. A cache line is the unit of data tracked by the hardware.
For example, the hardware can track cache line usage information in a tag memory, including the full system address, whether the cache line has been written to, and when the cache line was last read relative to other cache lines, in order to determine when the cache line is evicted. In some implementations, the data cache is a local memory, or a portion of local memory, used to temporarily map larger data structures stored in external memory into local memory, so that the data can be accessed without incurring the long latencies of directly accessing external memory. This type of data cache is often used in traditional desktop or laptop computers.

Programmable vision accelerators and/or VPUs include, as non-limiting examples, embedded processors designed to run smaller sets of highly optimized code. In such processor types, a data cache may not be provided because the programmer can manage the contents of the local data memory. The systems and methods of the present disclosure may include local memory managed by the programmer rather than cached, but may also include additional data caching capability in one or more (e.g., each) memory banks. The data cache may be narrow, such as, but not limited to, 16 bits wide, as compared to more traditional data caches of, for example, 32 bytes. This data cache may be used primarily to reduce power consumption, in contrast to traditional data caches whose primary goal is to reduce latency.

For example, in computer vision processing, data access patterns often have some degree of locality (e.g., staying in a certain neighborhood for a while before moving on to the next neighborhood). For example, when performing 7x7 2D filtering using the VFilt4HHW instruction described herein (which computes 4 taps at a time), the data read stream can perform 3 memory reads from one neighborhood, then move to another neighborhood and read 3 more times, and so on.
In the coefficient reads of the operation, the same array of zero-padded values (e.g., 7*2*4 = 56 halfwords) can be used, advancing four halfwords at a time until the last set of 4 halfwords is read, and then starting again from the beginning of the 56-halfword array until the filtering kernel is complete.

Therefore, to take advantage of these local access patterns and reduce the power consumed by memory accesses, a load data cache can be implemented in each memory bank with two-way set associativity (holding, e.g., a total of 64 halfwords). When the load cache is enabled, the most recently read sets of read data (e.g., the most recent, the most recent two, the most recent three, etc.) are tracked in a tag memory. As a result, when the same memory address is read again, there may be a cache hit, and the cache can serve the data instead of requiring it to be read again from local memory. In an embodiment, the load cache may be located between the memory logic and the memory itself, so that whenever there is a cache hit, the memory read for that particular address or value is suppressed or does not occur, to save power.

Using this cache structure, and for the 7x7 2D filtering example above, the load cache can allow the system to skip almost two-thirds of the data reads and almost all of the coefficient reads in steady state. An illustration of the use of the data cache in each memory bank is provided in FIGS. 7A-7C. For example, the VFilt4HHW instruction may perform 4 taps of a potentially larger filtering task, and may consume two single halfword data vectors, for example, data[0-15] and data[4-19], and a single halfword vector of coefficients, for example, coef[0-3], repeated four times to fill a single vector of 16 elements. In a 7x7 2D filter implementation using the VFilt4HHW instruction in two vector math slots, the data element and coefficient arrays of FIG. 7A can be used.
Since the VPU of the present disclosure can be configured to read double vectors, data[y][0-15] and data[y][16-31] can be read as a double vector. Similarly, data[y][4-19] and data[y][20-35], and data[y][8-23] and data[y][24-39], can each be read as double vectors. Likewise, the data and coefficient read patterns may correspond to those of FIGS. 7B-7C, assuming a row pitch of 100 for the data and a row pitch of 8 for the coefficients.

FIG. 7D illustrates the memory bank organization. For example, a 2-entry fully associative cache holds two positions' worth of data in any supergroup, and the data and coefficients can be placed into different supergroups to allow the cache to work efficiently. In a coefficient read, memory banks 0-3 may first hold coefficient elements 0-3, then add elements 32-35, and then reading elements 64-67 will evict elements 0-3, with this pattern repeating in the subsequent coefficient reads. In steady state with the load cache enabled, only four memory banks need to be read per scan in the coefficient read pattern. Therefore, the memory bank reads saved by using the load cache may be (3*32 - (32+4+4)) / (3*32) = 58.3% for the data, and (14*16 - 4) / (14*16) = 98.2% for the coefficients.

Therefore, in some algorithms, such as computer vision algorithms with sliding windows, the load cache can be used to save power. For example, without the load cache, each memory bank needs to be read every cycle, even though most of the data is the same. In an example where 512 bits are read out per iteration, the first 512 bits may be read out, then another 512, and so on. If the sliding window advances by only 8 bytes, then only 64 bits are new each iteration and the remaining 448 bits are the same. Without a data cache, those 448 bits need to be read from the memory banks again. With the per-memory-bank data cache, however, these 448 bits can be fetched from the load cache, and only the 64 new bits need to be read from the other memory banks.
Thus, the power required to read those 448 bits from the memory banks is saved. Examples of algorithms that can benefit from using a load cache include spatial filtering operations, deep learning inference operations (such as convolution operations), and the like.

With respect to FIG. 7E, the hardware architecture or logic for a memory bank with a load cache is shown. For example, sliding-window data access can be accelerated by the unaligned access support in memory (e.g., vector memory (VMEM)). This is a key memory access pattern for many computer vision algorithms, including filtering and convolution. For sliding-window vector loads, most of the data from a random access memory (RAM) bank 702 remains unchanged. In such an example, when sliding by 4B, only 4B of data changes in a 64B vector load, so only 4B of new data is read from the RAM banks 702. To optimize the power of the VMEM RAM, tiny caches called "load caches" can be attached to each bank of each supergroup, for a total of 3 supergroups x 32 banks = 96 load caches per VMEM. In a non-limiting embodiment, the configuration of each load cache may include a two-line (2x2B = 4B) size, full associativity, and a pseudo-least-recently-used (pLRU) replacement policy.

The data cache in which the most recent accesses are stored is divided into two parts, a tag store 706 and a data store 704. In the tag store 706, the cache addresses and control information corresponding to previous accesses may be stored, and in the data store 704, the data from previous accesses may be stored. The control information in the tag memory 706 may include a valid flag (e.g., whether the entry is valid), a dirty flag (e.g., whether the entry has been modified and needs to be written back to memory), and/or a last-used flag (e.g., if an entry is to be replaced, a least-recently-used policy uses this flag to indicate which entry to replace). Since the cache is a load cache, writing data may not update the cache, but the valid and last-used flags may still be included in the tag store 706.
A valid flag or bit can be used to qualify address matching, and any write should invalidate the matching entry. On each access, the last-used flag may be updated.

As described herein, for the caching scheme to be effective, the storage capacity of the load cache should be much smaller than the storage capacity of the memory or RAM bank 702, to reduce access time and save power. In one embodiment, each load cache may correspond to a single RAM bank 702, each of which may be a 2048x16-bit memory, and the load caches may each include a 2x16-bit data store 704 with a 23-bit tag store 706 (e.g., 2 entries x (11-bit address + 1-bit valid) + 1-bit last-used).

In operation, offset 722, row address 724, and delta 726 may be used to generate the memory addresses for memory accesses. Each generated memory address may be compared against tag store 706, e.g., against some number of previously accessed addresses (e.g., the 2 previous accesses). The arrows entering the top of tag store 706 may represent memory addresses. In some embodiments, tag store 706 may compare the entire memory address against the stored addresses of previous accesses. In other embodiments, a subset of the address bits from a memory address can be used to address a subset of the tags, so that only a subset of tags is compared against the memory address. For example, where a large number of previously accessed tags are stored in tag store 706, only a subset of the tags may be compared, using a subset of the memory address bits, to reduce area and save power. In a load cache design with fewer tags (for example, tags corresponding to the two previous accesses), the entire tag of each entry can be compared against the entire memory address. The "==?" decision block 720 compares the current memory address for RAM bank 702 with the addresses stored in tag store 706.
When there is a miss (e.g., no tag matches the memory address), the read of RAM bank 702 can be enabled using read enable 708, and the read data multiplexer (rd data mux) 712 can select the RAM bank 702 output to send to staging flip-flop 716. When there is a hit (e.g., a tag matches the memory address), data store 704 can be addressed with 0 or 1 (in an embodiment with two entries) to indicate which previous access the hit corresponds to, and the corresponding entry in the data store may be sent to staging flip-flop 716 through read data multiplexer 712. Staging flip-flop 716 may return the read-back data to the processor pipeline for eventual routing to the destination scalar or vector register of the load instruction.

Staging flip-flop 714 may correspond to a parity check. For example, a memory may need to be large enough to include parity bits (e.g., at parity terminal 710) to allow error detection and/or error correction. In the memory (e.g., VMEM), error detection may be used, and/or error correction logic may be implemented on the read-back data.

Thus, the load cache may include tag bits in tag store 706 for way 0 and way 1, where each way's tag may include 11 address bits and 1 valid bit. The load cache may also include a 1-bit pLRU, and data bits for way 0 and way 1 in data store 704, each including 16 bits of data and 2 bits of parity. When the load cache is enabled, lookups occur in the D1 stage. To minimize power consumption, the load cache may only be enabled for the RAM banks 702 participating in the load. For example, for a single vector load, only 16 of the 32 load caches may be looked up. On a load hit (e.g., where the load cache includes the data to be accessed), the read enable for a given RAM bank 702 may be suppressed, thereby preventing that RAM bank 702 from being activated. The pLRU may also be updated in the D1 stage.
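The hit/miss behavior described above can be summarized with a small behavioral model. This is a sketch, not RTL: it models one two-entry, fully associative load cache with valid bits folded into the tags, a single LRU bit standing in for the pLRU (which, with two ways, is exact LRU), store-hit invalidation, and suppression of the RAM-bank read on a hit. All names are illustrative.

```python
# Behavioral sketch of one two-entry fully associative load cache.
class LoadCache:
    def __init__(self, ram_bank):
        self.ram_bank = ram_bank   # backing store, e.g. a list of words
        self.tags = [None, None]   # address or None (valid bit folded in)
        self.data = [None, None]
        self.lru = 0               # index of the least-recently-used way

    def load(self, addr):
        """Returns (word, hit). On a hit the RAM-bank read is suppressed."""
        for way in (0, 1):
            if self.tags[way] == addr:
                self.lru = 1 - way         # update LRU: other way is now LRU
                return self.data[way], True
        # Miss: read the RAM bank and fill the victim (LRU) way.
        word = self.ram_bank[addr]
        victim = self.lru
        self.tags[victim], self.data[victim] = addr, word
        self.lru = 1 - victim
        return word, False

    def store(self, addr, word):
        """Stores do not update the cache; a store hit invalidates the way."""
        self.ram_bank[addr] = word
        for way in (0, 1):
            if self.tags[way] == addr:
                self.tags[way] = None
```

For example, two consecutive loads of the same address produce a miss then a hit, and a store to a cached address forces the next load of that address back to the RAM bank.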
During the D2 stage, data and parity bits can be read from the hit way of the load cache and multiplexed with the RAM result.

On a load cache miss, in the D1 stage, the victim entry to be evicted to make room for the new entry can be determined based on the valid bits and the pLRU. The tag of the victim way may then be updated with the miss address, and the read enable 708 of RAM bank 702 is not suppressed. In the D2 stage, the data/parity from RAM bank 702 is not only sent to the read data crossbar, but also fills the evicted cache line. Stores can also look up the load caches when enabled and participating: a store hit invalidates the hit way, and a store miss may be ignored.

On a hit in the load cache, the power of reading RAM bank 702 is saved. On the other hand, a miss in the load cache not only incurs the power of reading RAM bank 702, but also consumes power looking up the load cache and filling the victim way. Since not all types of memory access patterns achieve high hit rates in the load cache, especially indexed addressing modes accessing the supergroups, only vector linear loads may be looked up in the load cache.

When enabled, all stores may be looked up in the load cache to ensure that the load cache never falls out of sync with, for example, the data in the VMEM RAM banks 702. For a given supergroup, software can be used to disable the load caches for that supergroup's RAM banks 702 to minimize unnecessary lookups, as described in more detail below.

For example, in some embodiments, the use of the data cache may not provide a benefit. In an operation where the access pattern does not repeat, the data cache may not be useful, so performing the additional task of checking the cache prior to each read may waste time and/or power, since the read will have to access the RAM bank for the correct data anyway.
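One simple way to decide when caching is worthwhile is to measure the address overlap between consecutive reads, as the enable/disable heuristic discussed next does. The sketch below is illustrative: the function name, the representation of reads as sets of byte addresses, and the default threshold are all assumptions, not the actual hardware logic.

```python
# Sketch of an overlap-based enable/disable heuristic for the load cache.
def should_enable_load_cache(prev_read, curr_read, threshold=0.5):
    """prev_read / curr_read: sets of byte addresses touched by two
    consecutive reads. Enable caching only above the overlap threshold."""
    if not curr_read:
        return False
    overlap = len(prev_read & curr_read) / len(curr_read)
    return overlap >= threshold

# Sliding an 8-byte window over a 64-byte (512-bit) vector load:
# 56 of 64 bytes (448 of 512 bits) are shared, so overlap = 87.5%.
prev = set(range(0, 64))   # byte addresses 0..63
curr = set(range(8, 72))   # window slid by 8 bytes
print(should_enable_load_cache(prev, curr))  # True
```

With no overlap at all (e.g., a non-repeating access pattern), the same check returns False and the cache would stay disabled.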
Thus, the load cache can be enabled or disabled, avoiding power loss on access patterns with high load cache miss rates while still allowing the load cache to be used for access patterns where caching saves power. In some embodiments, enabling or disabling can be programmed in application code, so a programmer can enable the data cache when it is beneficial and disable it when it is not. In other embodiments, enabling or disabling may be performed by hardware that analyzes read patterns and detects overlap. For example, the hardware can enable the load cache when there is a threshold amount of overlap between consecutive read operations, and disable it when the overlap is less than the threshold. As non-limiting examples, the threshold may be 25%, 40%, 50%, 75%, or a different amount of overlap between reads.

When the load cache is disabled, and as shown with respect to FIG. 7E, tag store 706 may not be accessed, and read enable 708 may be set such that reads of RAM bank 702 are enabled for every read. Similarly, data store 704 may not be accessed, and read data multiplexer 712 may always pass data from RAM bank 702 to staging flip-flop 716.

Furthermore, in some embodiments, a memory bank structure may include multiple supergroups (e.g., three supergroups), and each supergroup may enable or disable its load caches according to the access patterns within that supergroup. For example, with three supergroups, each supergroup may include 32 RAM memory banks, and the data cache for each memory bank may include two entries, where each entry is a 16-bit word. Where two or more supergroups are used, the supergroups can be of any size, different sizes, the same size, or a combination thereof. For example, the first supergroup may be 128KB, the second supergroup may be 256KB, and the third supergroup may be 512KB.

Referring now to FIG.
7F, each block of method 750 described herein includes a computational process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. Method 750 may also be embodied as computer-usable instructions stored on a computer storage medium. Method 750 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. Furthermore, method 750 may be performed by any one system or any combination of systems, structures, or components, including but not limited to those described herein.

FIG. 7F is a flowchart illustrating a method 750 for using a data cache for read operations, according to some embodiments of the present disclosure. At block B702, method 750 includes receiving data representing a memory read address. For example, after a first read operation using some set of memory banks, a second read operation may be performed that includes one or more of the same memory banks, in addition to one or more additional or different memory banks. Because the first read operation may have stored the output of the read in the data cache corresponding to each respective memory bank, those values may be reused rather than requiring another read of the memory banks. In this way, the memory read address corresponding to the next read operation can be received, and the load cache can be accessed (when enabled) to determine whether any of the data is stored in the load cache.

At block B704, method 750 includes comparing the memory read address to a load cache memory address corresponding to a previous memory read stored in the load cache. For example, after a previous memory read, data from that read may be stored in the load cache corresponding to a particular RAM bank 702.
To track this information, tag store 706 may include one or more previous memory addresses corresponding to reads from RAM bank 702.

At block B706, method 750 includes determining that the memory read address at least partially overlaps with the load cache memory address. For example, the memory read address may be compared to the previous memory read addresses stored in tag store 706. If there is a hit, the load cache may be used to read at least some of the data corresponding to the memory read address of the current memory read.

At block B708, method 750 includes reading at least a portion of the data corresponding to the memory read address from the load cache. For example, due to a hit in the load cache determined from tag store 706, the portion of the data at the overlapping memory addresses may be read from the load cache, and the remainder of the data, if any, may be read from RAM bank 702.

Decoupled Configurable Accelerators

To optimize a processor's performance for specific applications, such as real-time applications, the instruction set architecture (ISA) can be enhanced with custom instructions to speed up common operations. This allows the processor to reduce the number of cycles required to perform a particular task, and the process of customizing the ISA is repeated until the performance goals of the system are met. However, these new instructions are conventionally added to operate on data in the processor's register file, or directly on memory operands, and are implemented using the existing processor controller and existing memory addressing and access hardware. In such examples, a new instruction must fit the processor's register file read/write operand count (e.g., reuse existing ports), fit the register file width (e.g., fit the processor's data types), and fit the processor's pipeline stages.
Because of these requirements for successfully adding instructions to the ISA, the flexibility to add new instructions is limited. Furthermore, when creating an ISA for a pipeline that has many stages (e.g., 30, 40, 50, etc.), the configuration of the ISA becomes complicated.

Furthermore, processors offer a high degree of flexibility at the expense of power consumption, as each added instruction requires fetching, decoding/dispatching, reading/writing register files and/or memory, and so on. Adding additional functional units to implement these custom instructions therefore increases the pressure on the register file read/write ports, resulting in additional area (e.g., additional read/write ports may be required) and power (e.g., additional load may be placed on the register file). In addition, the processing pipeline of an embedded application often has multiple stages, where the output of one stage feeds the input of the next stage. Techniques such as executing multiple threads in a processor (e.g., one thread for each stage of processing) can overlap the processing of different stages, providing reduced latency. However, multithreading comes at the expense of hardware: instructions must be fetched/decoded/scheduled from multiple threads, state information must be kept for each thread (e.g., in a register file), and control logic must be included to handle multiple threads in the processor. This results in increased area and power requirements, while making verification and programming of the processor more complex.
Thus, while various methods exist for reducing latency in the processing pipeline, existing methods require additional processor hardware area, consume additional power due to that hardware, and increase the complexity of programming the processor to perform various tasks.

To address the limitations of main processor configurations and the deficiencies of multi-threaded processors, the systems and methods of the present disclosure use, in addition to a main processor or one or more units of a main processor (for example, a single-threaded processor such as a VPU), one or more domain-specific accelerators or coprocessors that are decoupled from the main processor and communicate with the main processor through shared memory, such as vector memory (VMEM). In this way, an accelerator can operate as a sub-unit of the main processor, but once configured, the accelerator can execute independently rather than requiring processor instructions to execute. For example, accelerator access instructions can be used to allow the host processor to configure and command the accelerators, and the shared memory can allow inter-stage data structures to be shared between the host processor and the accelerators. Once the host processor starts or turns on an accelerator (e.g., via the common accelerator interface, and using one or more load/store instructions), the host processor is free to process different stages (thus providing the ability to work concurrently and reduce run time) or to transition to a low- or lowest-power state while waiting for the accelerator to complete processing (e.g., minimizing power consumption when not actively processing). In this way, once configured by the host processor, each of the one or more accelerators can operate independently of, and concurrently with, the host processor.
The main processor and the accelerator can synchronize during processing through a handshaking interface, so that the main processor knows when the accelerator has finished processing and/or is ready to perform a new task, or vice versa. The shared memory can store configuration messages (e.g., for configuring the accelerator when configuration instructions cannot be efficiently sent through the accelerator interface due to size limitations), input buffers (e.g., storing data for processing by the accelerator), and/or the output results of the accelerator (for example, after processing is complete, data from the accelerator's register file may be stored back to the location in shared memory indicated in the configuration instruction from the host processor). Thus, once triggered, the accelerator can read configuration parameters and/or input data structures from shared memory, and can write output result data structures back to shared memory.

As a result, this combined system of host processor, shared memory, and decoupled accelerators allows the flexibility of a programmable host processor while approaching the power consumption levels of fixed-function hardware (e.g., because highly computational processing stages of a processing pipeline can be implemented in an accelerator) without significantly increasing the complexity of the main processor (for example, because the main processor may only need additional accelerator configuration or access instructions to program the accelerator).
For example, the accelerator's pipeline and data types (e.g., data widths) can be independent of those of the main processor, allowing further customization and optimization; because new instructions would otherwise need to fit the processor's register file read/write operand count, register file widths, and pipeline stages, this level of customization is not achievable by the main processor alone.

In some embodiments, the accelerator and the main processor may be coupled at instruction execution time to achieve some of the accelerator's power savings while coupling execution to the main processor pipeline. However, in such an embodiment, the ability to process different stages of the pipeline concurrently is reduced, because instructions are interleaved between the accelerator and the main processor. In one or more embodiments, the accelerator and main processor may instead be coupled through a higher-level second-level (L2) memory rather than through a shared memory connection. However, in such embodiments, the higher degree of decoupling (e.g., moving the coupling up from shared memory to a higher memory level) may increase the communication overhead with the host processor.

Decoupled accelerators can be used for any task in any domain. As non-limiting examples, a decoupled lookup table accelerator may perform 1D, 2D, etc. lookups while detecting and resolving memory bank conflicts and performing 1D/2D interpolation; accelerators may be used for computer vision algorithms, such as feature tracking, object tracking, image warping, and pyramid creation; for sensor processing, such as matrix multiplication or other operations on LiDAR data, RADAR data, and/or the like; and for machine learning or deep learning applications. Thus, the topology described herein can be applied to any processing pipeline in which a portion of the processing can be offloaded to an accelerator.

Depending on the implementation, there may be any number of decoupled accelerators on one or more chips that communicate with one or more main processors through shared memory.
For example, a system on a chip (SoC) or other integrated circuit (IC) may include a main processor and one or more accelerators, each of which may be used to accelerate the performance of one or more tasks. Although the main processor is primarily described as a VPU, this is not intended to be limiting, and the main processor may include any processor type, such as a CPU, GPU, DPU, or other processor, without departing from the scope of this disclosure.

Reference is now made to FIG. 8A, which illustrates a system 800 including one or more decoupled accelerators, according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, functional groupings, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Furthermore, many of the elements described herein are functional entities that can be implemented as discrete or distributed components, or in combination with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be performed by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, system 800 can be included in, and/or can include components, features, and/or functionality similar to, the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.

System 800 may include a processor 802 (e.g., a main processor), such as a VPU, CPU, GPU, DPU, etc., a decoupled accelerator 804, and/or a shared memory 806 (e.g., vector memory or VMEM). Processor 802 may be coupled to an instruction cache (I-cache) 810, which may cache instructions for execution by processor 802.
Processor 802 may include general-purpose input/output (GPIO) 808 (e.g., digital signal pins on an IC that may be used as input, output, or both, and that may be controllable at runtime), and IC configurator 812. In some embodiments, as shown, processor 802 may communicate on-chip using an Advanced eXtensible Interface (AXI), such as but not limited to a 256-bit AXI interface. IC configurator 812 may be used to configure system 800.

Processor 802 may communicate directly with decoupled accelerator 804, e.g., via a coprocessor or accelerator interface, such as an Advanced Peripheral Bus (APB) interface, and/or a handshaking, programming, or event interface. For example, processor 802 may configure accelerator 804 using an accelerator interface (or configuration bus), initiate or trigger processing on accelerator 804 using an event interface, and synchronize with accelerator 804 using a handshaking or event interface. As such, each accelerator 804 may include mechanisms configured to communicate with processor 802 through a corresponding accelerator interface or configuration bus. For example, when processing is complete, accelerator 804 may indicate as much to processor 802 via a handshake mechanism, or, while processor 802 is waiting for accelerator 804 to complete processing, processor 802 may periodically poll accelerator 804 to request a status or completion time. In some embodiments, the accelerator interface may include a 32-bit interface (or another smaller-size interface) over which configuration instructions may be communicated to accelerator 804. However, in some embodiments, configuration messages may be large (e.g., greater than 32 bits, or some multiple thereof); such configuration messages may instead be stored in shared memory 806.
The location in memory 806 may then be sent to accelerator 804, e.g., via the accelerator interface, to indicate where to retrieve the configuration information.

The configuration bus may thus configure accelerator 804, and the event (or programming) interface may be used to allow processor 802 to trigger or initiate processing on accelerator 804. Once triggered or started, accelerator 804 may operate on its own while processor 802 waits for processing to complete and/or executes different processing tasks or stages. For example, an application programmer can program processor 802 and accelerator 804 knowing what each is capable of, so that the application can be divided into parts: some parts for processor 802 and some parts for accelerator 804. Therefore, in embodiments, processing may be performed in parallel between processor 802 and accelerator 804 to reduce run time and increase efficiency. Configuration messages, sent through the accelerator interface and/or shared through shared memory 806, may be generated by processor 802 and used to indicate to accelerator 804 where in shared memory 806 the data to be processed begins, how much data to process, and where in shared memory 806 to write the results back. Processor 802 may generate an input buffer at a specified location in shared memory 806 that includes the data for accelerator 804 to operate on. Once the configuration messages are sent and the input buffers are stored in shared memory 806, accelerator 804 may receive a trigger signal from processor 802 through the event interface (e.g., a programming interface), and accelerator 804 may begin processing the data. Once accelerator 804 is triggered, processor 802 can then perform other work or enter a low-power state, and once accelerator 804 finishes processing, it can indicate as much to processor 802 and can wait for additional work.

Processor 802 may thus set up an input buffer or input data structure for processing by accelerator 804 and store it in memory 806.
Accelerator 804 may be configured using load/store operations of processor 802 dedicated to configuring and communicating with accelerator 804. A configuration message may configure various registers of accelerator 804 (e.g., 256 32-bit registers, in one embodiment). For example, for a decoupled lookup table accelerator (as described in more detail herein), the configuration information may indicate whether the lookup is a 1D lookup with interpolation, a 2D lookup with bilinear interpolation, and/or another type of lookup. Once accelerator 804 knows the particular mode or function, it can configure its registers to read data from memory 806, process the data, and write data back to memory 806.

In some embodiments, processor 802 may configure accelerator 804 to execute multiple tasks at a time to improve efficiency. For example, where accelerator 804 will perform various smaller tasks, configuring accelerator 804 individually for each task may increase run time, because each task may complete quickly, requiring processor 802 to stop its own processing to configure accelerator 804 for the next task, and so on. To this end, a first task message may include the address of a second task message, thereby allowing multiple tasks to be self-linked. In this way, processor 802 can generate configuration messages for multiple tasks at once, along with the configuration information and input buffers for each task, so that accelerator 804 can perform several tasks consecutively before indicating to processor 802 that processing is complete and that accelerator 804 is ready to receive more work. Additionally, for increased efficiency, accelerator 804 can be configured to overlap tasks, such that when a first task is nearly complete, accelerator 804 can begin decoding and configuring registers for the next task.
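The self-linking of task messages can be sketched as a linked list walked by the accelerator. This is a behavioral illustration only: the field names (`input_addr`, `length`, `output_addr`, `next_task_addr`) and the dictionary standing in for shared memory are assumptions, not the actual message layout.

```python
# Sketch of self-linked task messages in shared memory.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskMessage:
    input_addr: int                        # start of input buffer in shared memory
    length: int                            # how much data to process
    output_addr: int                       # where to write results back
    next_task_addr: Optional[int] = None   # link to the next task message, if any

def run_task_chain(shared_mem, first_addr, process):
    """Walk the linked task messages, processing each input buffer in turn,
    as the accelerator might after a single trigger from the processor."""
    addr = first_addr
    while addr is not None:
        msg = shared_mem[addr]
        data = shared_mem[msg.input_addr][:msg.length]
        shared_mem[msg.output_addr] = process(data)
        addr = msg.next_task_addr

# Two chained tasks: the first message links to the second, so both run
# without the processor re-configuring the accelerator in between.
mem = {100: TaskMessage(0, 3, 10, next_task_addr=101),
       101: TaskMessage(1, 2, 11),
       0: [1, 2, 3, 4], 1: [5, 6, 7]}
run_task_chain(mem, 100, lambda d: [x * 2 for x in d])
print(mem[10], mem[11])  # [2, 4, 6] [10, 12]
```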
Ultimately, by including separate instructions for processor 802 and accelerator 804, accelerator 804 may be able to operate on data formats or types that processor 802 may not itself support. This may be a result of the architecture and layout of the registers of accelerator 804 being different from those of processor 802 and specialized for particular processing tasks.

In embodiments, processor 802 may communicate with shared memory 806 through any number of memory interfaces (e.g., a 512-bit static random access memory (SRAM) interface). Similarly, accelerator 804 may communicate with shared memory 806 through any number of memory interfaces (e.g., a 512-bit SRAM interface), as shown. Arbiter 814 may decide, for each cycle, which of processor 802 and/or accelerator 804 is allowed to access shared memory 806.

Referring now to FIG. 8B, each block of method 850 described herein includes a computational process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. Method 850 may also be embodied as computer-usable instructions stored on a computer storage medium. Method 850 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, although method 850 is described with respect to system 800 of FIG. 8A, method 850 may be performed by any one system or any combination of systems, structures, or components, including but not limited to those described herein.

FIG. 8B is a flowchart illustrating a method 850 for using decoupled accelerators, according to some embodiments of the present disclosure. At block B802, method 850 includes receiving configuration information for one or more first processing tasks of a processing pipeline.
For example, accelerator 804 may receive configuration information from processor 802 (e.g., a configuration message via the accelerator interface).

At block B804, method 850 includes configuring one or more registers of the accelerator based at least in part on the configuration information. For example, accelerator 804 may configure one or more registers based on the configuration information.

At block B806, method 850 includes reading data from an input buffer in memory based at least in part on an indication, included in the configuration information, of the starting location of the input buffer. For example, the configuration information may include an indication of where in memory 806 the input buffer is stored, and accelerator 804 may read data from the input buffer into a register.

At block B808, method 850 includes processing the data from the input buffer to compute output data. For example, accelerator 804 may process the data from the input buffer to generate or compute an output.

At block B810, method 850 includes writing the output data to memory at a location determined based at least in part on the configuration information. For example, accelerator 804 may write the results of the computation out to memory 806 and may indicate to processor 802 that processing is complete. Processor 802 may then use the output data to perform one or more second processing tasks of the processing pipeline.

Decoupled Lookup Table Accelerator

Parallel processing is used to speed up many computing tasks, including but not limited to computer vision applications, deep learning applications, sensor processing applications, and/or other applications that benefit from parallelism (e.g., where processing tasks are independent of other processing tasks). For example, a vector processor can operate on multiple elements in the same operation to achieve the efficiency needed to execute these types of parallel processing algorithms in real time while consuming low power.
For example, a common operation in computer vision or deep learning tasks is to perform a lookup from a lookup table, image patch, or surface based on an index or coordinate position. To do this efficiently, a single vector load or store operation can be used to access data for multiple elements. Unless the indices being looked up are regular (e.g., contiguous, or fixed integer strides in the horizontal, vertical, or depth direction), this results in random index accesses in memory.

To support regular but unaligned vector accesses from memory, processors can construct vector memory from smaller RAM banks. In this way, the hardware is able to create useful addressing modes for the vector memory by independently generating a unique address for each RAM bank. For irregular indexed vector load operations in memory, because the indices of different vector elements may be independent of each other, memory bank conflicts may arise in one or more memory banks of RAM. Bank conflicts cannot be determined statically because they are data dependent, which prevents the compiler from scheduling around them.

In some conventional systems, various architectural designs may be implemented to support irregular index vector load operations. For example, multiple read ports can be added to each RAM bank. In such an example, if the hardware handles 32 vector lanes, each bank would require 32 read ports, which would increase expense, area, and power, and increase place-and-route congestion around the RAM banks. Another example includes reducing the throughput of index lookups to perform a single scalar lookup per load. However, this creates a bottleneck for vector execution and becomes the limiting factor in execution time. Another example involves making multiple copies of the data structure in memory, such that each vector lane can access data from a single bank.
While this example can solve some of the throughput issues of the other methods, memory capacity is limited by the fact that the data structure takes up N times the space (where N is the number of entries to be accessed in parallel), which can reduce the overall performance of the associated algorithm, in addition to the overhead of making the copies. In the case of small data structures, however, this approach may be appropriate. In some examples, conflicts can be detected and resolved dynamically by serializing conflicting lookups. However, this leads to increased hardware complexity, since bank conflicts must be detected and resolved on the fly. Furthermore, the extra pipeline stages increase the load-to-use latency of these operations, affecting the compiler's ability to efficiently schedule code, and data-dependent execution delays may be introduced, which further complicate efficient scheduling by the compiler. In some examples, combinations of these approaches can be used.

To address these shortcomings of other architectures, the systems and methods of the present disclosure include a decoupled lookup table accelerator configured to support irregular index vector load operations. A decoupled lookup table accelerator may be included as accelerator 804 of system 800 and may communicate with processor 802, e.g., a VPU, through shared memory 806. A decoupled lookup table (DLUT) accelerator can support multiple modes for performing table lookups, such as a 1D lookup mode, a 2D lookup mode, a 2D conflict-free lookup mode, a 1D lookup with interpolation mode, a 2D lookup with interpolation mode, table reformatting modes, and/or other modes. In any lookup mode, the DLUT can accept an index array in VMEM, which can be in 1D (x) format or 2D (x, y) format. For example, each index element may consist of 16 bits or 32 bits, and may be unsigned.
The DLUT may then perform prescribed index calculations, which may include, as non-limiting examples, 2D-to-1D mapping, truncation/rounding, integer/fraction splitting, and/or valid-range detection. For example, the DLUT can detect or coalesce duplicate reads, detect bank conflicts within a set of indices, and issue read requests to VMEM for the requested table entries. Each table element can consist of 8 bits, 16 bits, or 32 bits, and may be signed or unsigned. The DLUT can then perform interpolation post-processing as configured and write the output back to VMEM. Each of these processing operations can be performed in a pipeline to increase throughput, reduce latency, and reduce power consumption.

As a result, the DLUT accelerator overcomes the shortcomings of implementing dynamic conflict detection and resolution in the processor pipeline, allowing the compiler to schedule around the deterministic execution latencies of all memory operations while avoiding the complexity of inline conflict detection. Since the accelerator operates as a tightly coupled accelerator, for example via a VMEM shared with the VPU, the processor can configure and start the accelerator while continuing to process other independent parts or stages of the processing pipeline or algorithm. In some embodiments, the accelerator may include additional features to further reduce the load on the host processor, such as offloading index generation for patches with specific lookup patterns, performing optional 1D blending and 2D interpolation on the looked-up data, and/or providing table reformatting support without lookups or interpolation. In practice, the entire system (including processor 802 and accelerator 804 for performing the lookups) has been shown to accelerate the processing of various computer vision algorithms (e.g., feature tracking, object tracking, image warping, pyramid creation, etc.)
by a factor of two, while reducing energy consumption by more than 50% compared to executing the entire algorithm on the main processor alone.

Referring now to FIG. 9A, FIG. 9A illustrates a system 900 including a decoupled lookup table (DLUT) accelerator, according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, functional groupings, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Furthermore, many of the elements described herein are functional entities that can be implemented as discrete or distributed components or in combination with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be performed by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, system 900 may be included in, and/or may include components, features, and/or functionality similar to those of, system 800 of FIG. 8A, example autonomous vehicle 1300 of FIGS. 13A-13D, example computing device 1400 of FIG. 14, and/or example data center 1500 of FIG. 15.

System 900 may include one or more processors 902 (which may correspond to processor 802 of FIG. 8A), memory 904 (which may correspond to shared memory 806 of FIG. 8A), and a decoupled lookup table (DLUT) accelerator 906 (which may be included as accelerator 804 of FIG. 8A). In an embodiment, the processor 902 may include a VPU and the memory 904 may include a VMEM.
DLUT accelerator 906 (or "DLUT 906") may include a processing unit (PU) interface (I/F) 908 for communicating with processor 902, a controller 912 for communicating with processor 902, and a configurator 910 for configuring DLUT 906 based on information shared across PU interface 908 from processor 902 and/or retrieved from memory 904 based on an indication from processor 902 of where the configuration messages or information reside in memory 904. For example, PU interface 908 and controller 912 may correspond to an advanced peripheral bus (APB) and an event or programming interface of system 800, respectively. The controller 912 may receive a start or trigger command or signal from the processor 902 (e.g., via the arrow labeled "START") indicating that the DLUT 906 may begin processing, and/or may receive a polling signal from the processor 902 to help keep the processor 902 synchronized with DLUT 906. Additionally, when DLUT 906 finishes processing one or more assigned tasks, DLUT 906 can generate a signal to processor 902 (e.g., via the arrow labeled "DONE") so that processor 902 can begin configuring DLUT 906 for the next task.

During configuration, processor 902 may configure DLUT 906 directly via PU interface 908 and/or indirectly by indicating the location of configuration information in memory 904 via PU interface 908. In the latter example, DLUT 906 may retrieve the configuration information from memory via, for example, shared read port strm1_dm_rd, and may use the stored configuration information to configure DLUT 906 (e.g., its subunits, such as the IAU, CDRU, PPU, etc., and/or other components of DLUT 906) to perform one or more tasks. For example, processor 902 may set up in memory 904 the data structures required by DLUT 906 to perform one or more tasks.
For example, for a 1000-coordinate lookup, processor 902 may set up a data structure in memory 904 with each of the 1000 coordinates, and may further allocate a buffer in memory 904 into which DLUT 906 writes the output. Processor 902 may also instruct DLUT 906 which operations to perform, e.g., 1D or 2D lookup, with or without interpolation, table reformatting, etc., and DLUT 906 may use this information to configure its subunits. The configuration information set by the processor 902 may also include an indication of the bit width of a coordinate index, an indication of the bit width of the entries of a table, and the like. Thus, once the input and output buffers are set up in memory 904 and configuration information such as bit widths, operation type, etc. has been sent to DLUT 906, processor 902 can start or trigger DLUT 906 to begin processing. The processor 902 can then perform other tasks while the DLUT 906 performs lookups, interpolation, table reformatting, etc., thereby reducing run time and increasing efficiency compared to systems relying solely on the processor 902.

In operation, DLUT 906 may receive from memory 904 a list of indices corresponding to coordinates, and DLUT 906 may fetch the values from the table corresponding to the indices (e.g., where the indices are integer values) and/or may fetch the values surrounding a fractional index (for example, the left and right values for a 1D lookup, or the upper-left, lower-left, upper-right, and lower-right values for a 2D lookup) and perform interpolation or other operations on the surrounding values. Once the final values are determined (e.g., directly by a lookup without post-processing, or after processing by the post-processing unit (PPU) 930), these values can be written to an output buffer in memory 904 in one-to-one correspondence with the indices in the input buffer.
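Functionally, and ignoring banking, FIFOs, and pipelining, the integer-index path just described behaves like a gather whose output order matches the input index buffer. A software model of that behavior, under those simplifying assumptions (names are illustrative, not the hardware interface):

```python
def dlut_gather(table, indices):
    """Software model of a plain (integer-index) 1D table lookup:
    output[i] corresponds one-to-one with indices[i], regardless of the
    internal order in which the hardware services the reads."""
    return [table[i] for i in indices]

table = [10, 20, 30, 40, 50]
assert dlut_gather(table, [4, 0, 2, 2]) == [50, 10, 30, 30]
```

The hardware may reorder and coalesce the actual memory reads, but the output buffer it produces matches this model.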
To perform these tasks efficiently, in an embodiment, an index address unit (IAU) 922, a conflict detection and resolution unit (CDRU) 924, a control (CTL) first-in-first-out (FIFO) 928, a fractional (FRAC) FIFO 926, a post-processing unit (PPU) 930, a data consolidation unit (DCU) 932, and/or other components may be used.

For example, index (IDX) stream 916 may include a stream of indices read from memory 904 (e.g., via read port strm1_dm_rd) that are to be looked up in one or more lookup tables, and the values corresponding to the indices may be read from memory 904 via a lookup table (LUT) stream 918 (e.g., via read port strm0_dm_rd). An output (OUT) stream 920 may include the values written back to memory 904 (e.g., via write port strm0_dm_wr) after processing using DLUT 906.

Processor 902 may instruct IDX stream 916 during configuration how to access the data structures used for the indices. For example, for a one-dimensional lookup where the interface to memory 904 is 64 bytes wide, 64 bytes can be read out in each cycle. When performing a 1D lookup, a single coordinate can be read for each index (e.g., an (x) value), while for a 2D lookup, two coordinates can be read for each index (e.g., an (x, y) value). In a non-limiting example, each index may be 16 bits or 32 bits, so there may be 8, 16, or 32 coordinates from the IDX stream 916 in each 64-byte read.

The IDX stream 916 data can be sent to the IAU 922 in raw format as raw indices, and each coordinate can be an integer value or a fractional value. Where an index is a fractional value, the IAU 922 may split off the fractional bits and provide them to the FRAC FIFO 926 to help the PPU 930 blend the surrounding values found in the table. The IAU 922 can then determine a set of indices to send to the CDRU 924, where the number of indices sent can correspond to the number of lookups that the LUT stream 918 can perform in a single cycle.
For example, if LUT stream 918 can perform, e.g., 32 lookups in one cycle (based on the bit width of each value in the lookup table), IAU 922 can send 32 indices to CDRU 924 on each iteration. In some examples, such as where the values from IDX stream 916 to IAU 922 are integer values, IAU 922 may send each set of indices without any processing. However, where the values from IDX stream 916 are fractional values, IAU 922 can determine which indices need to be looked up to obtain each of the required surrounding values (e.g., 2 indices for 1D interpolation or 4 indices for 2D interpolation) so that interpolation or other operations can produce blended values corresponding to the fractional indices. For example, where the fractional value is (5.3, 6.2), corresponding to (x, y) coordinates for a 2D lookup with interpolation, the IAU 922 may determine that lookups should occur at (5, 6), (5, 7), (6, 6), and (6, 7), and the PPU 930 can then blend these values to generate a final value corresponding to index (5.3, 6.2). For example, the values can be blended with equal weights, or can be blended using bilinear interpolation so that values closer to (5, 6) than to (6, 7) are weighted more heavily when computing the final value for (5.3, 6.2).

The lookups can be sent to CDRU 924 in an order corresponding to the index order in the input buffer in memory 904 read using IDX stream 916 (e.g., 32 lookup indices at a time where LUT stream 918 is capable of reading 32 values in each read cycle). CDRU 924 then performs conflict detection and resolution by identifying the bank conflicts that would result if the lookup table reads in LUT stream 918 occurred in the order received from IAU 922, and resolves them by changing the order of the indices to avoid memory bank conflicts.
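The (5.3, 6.2) example above can be made concrete. This sketch splits each coordinate into integer and fractional parts (as the IAU does) and blends the four neighbors with standard bilinear weights; the hardware's exact fixed-point rounding is not modeled, and the function name and row-major table layout are assumptions for illustration:

```python
import math

def bilinear_lookup(table2d, x, y):
    """Fetch the four integer neighbors of (x, y) and blend bilinearly."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0            # fractional parts, as split by the IAU
    v00 = table2d[y0][x0]              # (5, 6) in the example
    v10 = table2d[y0][x0 + 1]          # (6, 6)
    v01 = table2d[y0 + 1][x0]          # (5, 7)
    v11 = table2d[y0 + 1][x0 + 1]      # (6, 7)
    top = v00 * (1 - fx) + v10 * fx    # blend along x at y0
    bot = v01 * (1 - fx) + v11 * fx    # blend along x at y0 + 1
    return top * (1 - fy) + bot * fy   # blend along y
```

With a table whose entries are the linear function x + 10*y, the blended value at (5.3, 6.2) comes out to 5.3 + 62 = 67.3, as bilinear interpolation reproduces linear data exactly.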
For example, when a lookup in a set of indices would result in a bank conflict, and another (e.g., later or earlier) set of indices is available for another lookup cycle, the CDRU 924 can find a non-conflicting lookup from the other cycle and swap it with the conflicting lookup for the current cycle. As a result, one or more bank conflicts can be avoided, thereby increasing throughput. For example, where the IAU sends 32 indices per cycle and 6 indices for a given cycle have bank conflicts, the CDRU 924 can select up to 6 indices from another set that do not conflict with the current lookups, and 32 lookups may still be performed, e.g., 26 lookups from the original 32 and 6 lookups from another set sent by the IAU 922. Once a set of lookups is determined (e.g., with or without substitutions to resolve conflicts), the lookups can be read from memory 904 using LUT stream 918.

To account for out-of-order lookups where substitution occurs, CDRU 924 may use CTL FIFO 928 to indicate to the data consolidation unit the order of lookups from IAU 922 for each set of lookups. For example, for an initial set of 32 lookups, the DCU can determine that 8 were performed in the first cycle, then 8 in another cycle, then 16 in another cycle, and can then determine that the entire set of 32 has been processed; the 32 results can then be pushed to the PPU 930 for post-processing, if applicable, or pushed directly to the OUT stream 920 to be written to an output buffer in memory 904. This additional information, indicating the actual order of the lookups determined by the CDRU 924 and issued to the LUT stream 918, may be communicated to the DCU 932 via the CTL FIFO 928. Therefore, whatever changes the CDRU 924 makes to the order of the indices received from the IAU 922, the DCU 932 can take them into account. CTL FIFO 928 is useful because the number of cycles through IAU 922, CDRU 924, etc. is indeterminate and data dependent.
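The swap-based resolution described above can be modeled simply in software. Real hardware tracks the reordering through CTL FIFO 928; here the returned lists play that role, and the bank rule, names, and single-lookahead policy are illustrative assumptions:

```python
def schedule_cycle(current, lookahead, num_banks=32):
    """Issue at most one index per bank from `current`; fill freed slots
    with non-conflicting indices borrowed from `lookahead`.

    Returns (issued_this_cycle, deferred_from_current, remaining_lookahead).
    """
    issued, deferred, used_banks = [], [], set()
    for idx in current:
        bank = idx % num_banks
        if bank in used_banks:
            deferred.append(idx)        # conflicts; retry in a later cycle
        else:
            used_banks.add(bank)
            issued.append(idx)
    remaining = []
    for idx in lookahead:
        bank = idx % num_banks
        if len(issued) < len(current) and bank not in used_banks:
            used_banks.add(bank)
            issued.append(idx)          # borrowed to keep the cycle full
        else:
            remaining.append(idx)
    return issued, deferred, remaining

# Indices 0 and 4 collide on bank 0 (4 banks); index 7 is borrowed instead.
assert schedule_cycle([0, 4, 1, 2], [7, 8], num_banks=4) == ([0, 1, 2, 7], [4], [8])
```

Because the issue order no longer matches the input order, downstream logic (the DCU 932 in the hardware) must reassemble results into input order before they are written out.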
For example, since conflicts are not known ahead of time (e.g., because the data may be non-deterministic) and depend on the program's data, there is no way to completely avoid conflicts, so CTL FIFO 928 helps the DCU 932 organize the lookups.

PPU 930 may calculate the final values for each index that are read out to memory 904 as needed, e.g., where additional operations need to be performed on the lookup table values. Where post-processing is not required, the PPU 930 may not need to process the collected results. For example, where a plain 1D or 2D lookup is performed on integer-valued indices that map directly to locations in the lookup table, the PPU 930 and FRAC FIFO 926 may not be used to perform additional processing. When performing interpolation (e.g., linear for 1D lookups or bilinear for 2D lookups) and/or other operations, PPU 930 and FRAC FIFO 926 may be used to convert the collected results into updated results or values to write out to memory 904.

In some embodiments, DLUT 906 may be used in a table reformatting mode. For example, IDX stream 916 and OUT stream 920 may be used to update addresses for access and/or transposition. In such an example, where there is a buffer in memory 904 whose entries are to be transposed, the operation may be offloaded to DLUT 906 (instead of having the address generation unit of processor 902 perform the transposition). Configuration information from processor 902, e.g., from an address generation unit, may indicate a read pattern for reading from a buffer in memory 904 and a write pattern for writing the data back to memory 904 in a different pattern.
For example, where the programmer knows that many conflicts will result from a particular access pattern, the programmer can program the processor 902 to configure the DLUT 906 to perform table reformatting to rearrange the data so that fewer or no conflicts are likely to occur.

As another example, the DLUT 906 may be used for out-of-range detection with sentinel return values, or out-of-range prediction that turns off output writes. So, for example, where a coordinate in the IDX stream 916 is outside a given image block and the corresponding value should not be written, the DLUT 906 can write out a sentinel value, which in turn can indicate to the processor 902, when processing the information in the output buffer, not to rely on or use those values in its processing. In some embodiments, out-of-range prediction may instead prevent such values from being written to memory at all, so that values identified as erroneous are not stored.

Accordingly, DLUT 906 may be implemented as a pipeline of subunits that work together to perform a particular task or operation. Each subunit can operate independently and communicate with other subunits through a shared interface. Referring to FIG. 9B, table 940 illustrates the tasks of the various subunits of DLUT 906 during the processing of a particular operation.

Thanks to the DLUT accelerator described herein, processor pipelines can be kept deterministic by offloading dynamic conflict detection and resolution to a decoupled accelerator. In addition, the accelerator can run independently of and concurrently with the main processor (such as a VPU), thereby reducing runtime. The DLUT accelerator can also allow 1D and/or 2D lookups from a common table with conflict detection/resolution.
The accelerator can perform various post-processing operations, such as 1D lookup with linear interpolation, 2D lookup with bilinear interpolation, out-of-range detection with sentinel return (1D and 2D), and/or out-of-range prediction that turns off output writes (1D and 2D). The DLUT accelerator can be configured to perform interpolation using a configurable number of fractional bits and can support various index and data formats, such as 8-, 16-, and 32-bit signed and unsigned data formats and 16- and 32-bit 1D and 2D coordinate index formats. The DLUT accelerator can also convert between global and local coordinates using configurable X/Y offsets. The DLUT accelerator can also include dataflow units to read index buffers from VMEM, perform lookups from VMEM, and write results (of lookups or interpolation) to VMEM. The dataflow units can support up to 2D addressing for linear and transposed access. To optimize the number of cycles required for lookups/interpolation, lookup indices may be reordered to minimize bank conflicts (for example, if VMEM supports N lookups, the accelerator can consider MxN indices to maximize the ability to resolve detected conflicts), and the accelerator can perform duplicate detection to filter out duplicate indices that would otherwise be guaranteed to create conflicts. Furthermore, the 2D lookup and interpolation modes of the DLUT accelerator can include indices automatically generated within the accelerator based on several parameters (referred to as auto-index mode), as opposed to the programmer providing a block of index data. This offloads preparation of the indices from the main processor to the accelerator.

Referring now to FIG. 9C, each block of the method 950 described herein includes a computing process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory.
Method 950 may also be embodied as computer-usable instructions stored on a computer storage medium. Method 950 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, method 950 is described with respect to system 900 of FIG. 9A, but method 950 may be performed by any one system or combination of systems, structures, or components, including but not limited to those described herein.

FIG. 9C is a flowchart illustrating a method 950 for using a decoupled lookup table accelerator, according to some embodiments of the present disclosure. At block B902, the method 950 includes configuring one or more subunits of a DLUT accelerator based at least in part on configuration information generated using a processor. For example, DLUT 906 may use information received from processor 902 and/or retrieved from memory 904 to configure the subunits of DLUT 906.

At block B904, method 950 includes determining, from a first set of indices of a stream of indices read from memory, a first subset of indices without bank conflicts. For example, IAU 922 can generate a set of indices for CDRU 924 to process for conflicts, and CDRU 924 can determine a subset of the set of indices that has no memory bank conflicts.

At block B906, the method 950 includes determining, from a second set of indices of the stream of indices read from memory, a second subset of indices that does not have a bank conflict with the first subset of indices. For example, IAU 922 may generate another set of indices for CDRU 924 to process for conflicts, and CDRU 924 may determine to replace one or more conflicting indices of the first set with one or more indices from the second set that do not cause conflicts with the first set of indices.
At block B908, method 950 includes performing a lookup into one or more lookup tables using the first subset of indices and the second subset of indices in a single read cycle from memory to retrieve a plurality of values. For example, DLUT 906 may read values from memory 904 into LUT stream 918 using the subset of indices from the first set together with the indices from the second set that were determined not to conflict with that subset.

At block B910, the method 950 includes writing the plurality of values to memory. For example, values from LUT stream 918 may be written to memory 904 in OUT stream 920. Before being written out, the DCU 932 may reorganize the data so that it is in one-to-one order with the indices read from the input buffer of the IDX stream 916. In some embodiments, PPU 930 may perform one or more operations, such as interpolation, on the retrieved values before writing the final values out to memory 904 in OUT stream 920.

Hardware Sequencer for Direct Memory Access Systems

Direct memory access (DMA) systems can be used to move data between different memory locations without requiring a central processing unit (CPU). For example, a DMA can operate as a data movement engine, moving data from a source, such as external memory (e.g., DRAM) or a vector memory such as an L2 buffer, to a destination such as the vector memory (VMEM) of a vector processing unit (VPU). DMA systems can perform additional operations in practice, such as, but not limited to, padding frame data, manipulating addresses, managing overlapping data, managing different traversal orders, and accounting for different frame sizes.

In digital signal processing, multiple DMA resources can be used to describe the movement of structured tile data between external memory and a processor (e.g., a VPU).
For example, these DMA resources may include descriptors, channels, triggers, and/or registers. A descriptor may describe a tile movement, such as source location, destination location, line pitch, tile width, tile height, circular buffer arrangement, and the like. However, tile data movement for image surfaces with spatial and temporal dependencies presents additional programming model challenges for users and requires a varying number of DMA configuration resources. These tile data dependencies also complicate control code and control sequences in processor (e.g., VPU) code. For example, a typical processing operation may include filtering, such as 3x3 filtering. This type of operation introduces spatial dependencies, since each output pixel depends on the values of the 3x3 pixels surrounding it. In such an operation, filtering may be performed using a 3x3 matrix of values, and the operation may be referred to as a spatially dependent operation. In practice, each tile of a frame may be of the same size, for example 64x64, to reduce programming challenges. However, if a 3x3 filter is used on a 64x64 tile, additional pixels from adjacent tiles are required at the tile borders, e.g., as shown in the shaded areas of FIG. 10C. Therefore, this information needs to be encoded in the DMA resources to allow data to be fetched correctly across tiles, which imposes an additional programming burden.

Referring to FIGS. 10A-10G, FIGS. 10A-10G illustrate various challenges of data movement when using a DMA system. For example, visualization 1000 of FIG. 10A may correspond to padded frame data. In visualization 1000, there may be nine sections: a top-left section, a top section, a top-right section, a left section, a center section, a right section, a bottom-left section, a bottom section, and a bottom-right section.
In such an example, each section may include one or more tiles; e.g., the top-left section may include one tile, while the top section may include, for example, four tiles. Therefore, to precisely define this frame, existing methods require nine descriptors (e.g., one for each section), three channels (e.g., one for the left column, one for the center column, and one for the right column), and three triggers (e.g., one for each channel).

Regarding padding: due to spatial dependencies, when performing operations on data near the border of a tile or section of a frame, the DMA system can pad, or fabricate, values for pixels outside the border of the image. This may be because, in some implementations, requesting data outside of the memory region for an image may trigger a fault. Therefore, the DMA can be used to fill in fabricated values after fetching the image data from the corresponding memory region, to avoid triggering a fault. Without padding, the structure of the data may not match the kernel size, for example when a filtering operation is performed. The fetched data with the additional padded values can then be sent to the destination (e.g., the VPU) so that the VPU can process the data according to its configuration, and can process the data in the same way over the entire (padded) frame. When padding, zero padding can be used (for example, where each new data point has a zero value), duplicated values can be used (for example, pixel values of adjacent pixels are copied from the fetched data), and/or another padding mechanism can be used. Additionally, padding can be added to any side of the frame, and different amounts of padding can be added on different sides. For example, in FIG. 10A, the padding region 1002 may be larger on the right than on the left, top, or bottom of the frame.
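A software sketch of the per-side padding just described, supporting both zero padding and replicate (copy-nearest-edge) padding; the function name and argument layout are illustrative, not the DMA system's actual interface:

```python
def pad_frame(frame, left, right, top, bottom, mode="zero"):
    """Pad a 2D frame (list of rows) by the given amount on each side.

    mode="zero" fabricates zeros; mode="replicate" copies edge pixels,
    mirroring the two padding mechanisms described in the text.
    """
    def pad_row(row):
        if mode == "zero":
            return [0] * left + row + [0] * right
        return [row[0]] * left + row + [row[-1]] * right

    rows = [pad_row(r) for r in frame]
    if mode == "zero":
        blank = [0] * (left + len(frame[0]) + right)
        return ([blank[:] for _ in range(top)] + rows
                + [blank[:] for _ in range(bottom)])
    return ([rows[0][:] for _ in range(top)] + rows
            + [rows[-1][:] for _ in range(bottom)])

# Asymmetric padding, as in FIG. 10A where the right side is padded more.
assert pad_frame([[1, 2], [3, 4]], 1, 2, 1, 0, "zero") == [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0],
    [0, 3, 4, 0, 0],
]
```

The destination (e.g., the VPU) can then treat the padded frame uniformly, with no special cases at the borders.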
Padding adds complexity to DMA programming when moving data from source to destination (e.g., from memory to VMEM), and also adds complexity to VPU programming when handling the larger padded frames.

Referring now to FIG. 10B, visualization 1010 of FIG. 10B corresponds to address manipulation in a DMA system. For example, different descriptor addresses can be manipulated and programmed to fetch contiguous frame data. For the DMA to perform efficiently, the address descriptions for data movement should be contiguous. Therefore, the address of each descriptor may be manipulated, and this manipulation must be carried from one descriptor to the next. For example, when values are padded as shown, the start address of each descriptor can be manipulated so that the fetched data includes the padded values. To do this, the programmer uses the start address, the tile width, and the number of tiles in each section to generate the next descriptor address. For example, the first descriptor may cause data to be fetched starting from the top-left section, then the top, then the top-right, then the left, then the center, and so on, as indicated by the arrows in FIG. 10B. However, manipulating the start descriptor addresses adds complexity to DMA programming when moving data to a destination such as VMEM.

As another example, and with respect to FIG. 10C, to ensure continuous data processing, a DMA system may be required to read vertically and horizontally overlapping data from adjacent tiles. For example, as shown in the shaded regions of FIG. 10C, it may be necessary to read overlapping data from a tile in the top-left section and an adjacent tile in the top section in the same operation. Similarly, overlapping data from a tile in the top-left section and an adjacent tile in the left section may need to be read in the same operation. To do this, the descriptors need to be updated or moved to include the overlap.
For example, the base descriptor might include the address where the top section starts, but in order to capture data from an adjacent tile in the top-left section, the descriptor for the top section needs to be updated (e.g., moved to the left) to capture data from the top-left tile. Such updates require additional programming complexity, especially as the number of descriptors increases.

Also, with respect to FIGS. 10D-10F, a DMA system may need to support different traversal orders in order to read data from memory in a sequential manner. For example, the appropriate traversal order may differ depending on whether filtering, convolution, matrix multiplication, and/or other operations are performed. With this in mind, various traversal orders can be supported, such as those shown in FIG. 10D, including a raster traversal order (visualization 1034) and/or a raster traversal order starting from the bottom right (visualization 1036). Similarly, with respect to visualization 1038 of FIG. 10E, for cube images, the DMA system can support various cube traversal orders. FIG. 10F shows various vertical mining traversal orders that may be supported by a DMA system, such as a vertical mining traversal order from the top left (visualization 1040), a vertical mining traversal order from the top right (visualization 1042), a vertical mining traversal order (visualization 1046), and/or a vertical mining traversal order from the bottom right (visualization 1048). Supporting each of these different traversal orders for moving data to memory (e.g., VMEM) adds to the complexity of DMA programming.

With respect to FIG. 10G, DMA systems may also need to support different frame sizes, such as moving multiple frames of different sizes (e.g., Luma/Chroma composites or different pyramid levels). For example, a processor (such as a VPU) can process frames of different sizes to generate the final desired output. FIG.
10G illustrates an example visualization 1048 corresponding to pyramid processing of frames used for an optical flow estimation operation. In such an example, the pixel shifts could first be computed at a smaller frame size, then hints from the output at the smaller frame size could be used to compute a larger frame size, then hints from that larger frame size could be used to compute a still larger frame size, and so on. Therefore, a DMA system can support fetching frame data of various frame sizes, but this capability requires additional programming complexity in the DMA system. For example, descriptors must be programmed or updated for each different frame size.

To simplify the programming of these various operations supported by the DMA system, the DMA systems and methods of the present disclosure can use a hardware sequencer in conjunction with the DMA engine to address these data movement issues. For example, the data movement of a complete image can be explicitly and completely described in a hardware sequencing mode, which has a simplified programming model that handles tile sequencing (triggering), padding, overlap (offsets), traversal order, and different frame sizes (for example, the image structure of a frame, such as shown in FIG. 10I). A hardware sequencer can reduce DMA resource usage (e.g., reduce the number of required descriptors, triggers, channels, etc.), offload control processing from the VPU, and reduce the complexity of DMA programming. This can be accomplished by loading an image or frame descriptor view (e.g., as shown in FIG. 10I) as a sequence of commands from local programmable memory. These hardware sequence commands can incorporate every operation that would otherwise increase programming complexity, as described herein, including image padding, tile overlap or offset, frame offset, image traversal order, and image size at tile granularity.
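To make the contrast with per-tile descriptors concrete, a single frame-level command of the kind described (padding, tile offset/overlap, traversal order, and frame size at tile granularity) could be modeled as below. The field names are illustrative and do not reproduce the actual frame format 1070:

```python
from dataclasses import dataclass

@dataclass
class FrameCommand:
    tiles_x: int                        # frame size, in tiles
    tiles_y: int
    tile_w: int                         # tile dimensions, in pixels
    tile_h: int
    overlap: int = 0                    # extra pixels fetched from neighbors
    raster_from_bottom_right: bool = False

def tile_fetches(cmd):
    """Expand one frame-level command into per-tile (x0, y0, w, h) fetches:
    the sequencing a hardware sequencer performs without per-tile descriptors."""
    order = [(tx, ty) for ty in range(cmd.tiles_y) for tx in range(cmd.tiles_x)]
    if cmd.raster_from_bottom_right:
        order.reverse()
    fetches = []
    for tx, ty in order:
        x0 = tx * cmd.tile_w - cmd.overlap   # shifted to include the overlap
        y0 = ty * cmd.tile_h - cmd.overlap
        fetches.append((x0, y0,
                        cmd.tile_w + 2 * cmd.overlap,
                        cmd.tile_h + 2 * cmd.overlap))
    return fetches

# One command describes the whole frame; overlap widens each fetch by 1 pixel
# per side (negative coordinates would be satisfied by padding).
assert tile_fetches(FrameCommand(2, 1, 64, 64, overlap=1)) == [
    (-1, -1, 66, 66), (63, -1, 66, 66)]
```

One such command replaces the many per-section descriptors, channels, and triggers of the conventional approach.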
In addition to descriptor information (for example, from the image commands or from a separate descriptor memory or SRAM), the hardware sequencer can also read image commands from memory and sequence the tile movements to traverse and fetch the entire frame.

Referring now to FIG. 10H, FIG. 10H illustrates a DMA system 1050 including a hardware sequencer, according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, functional groupings, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Furthermore, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in combination with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be performed by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, the system 1050 can be included in, and/or can include components, features, and/or functionality similar to those of, the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.

System 1050 may include DMA engine 1056, register control 1058, hardware (HW) sequencer controller 1060, descriptor SRAM 1052, and/or hardware (HW) sequencer command SRAM 1054. Existing systems may include only a DMA engine 1056 and a descriptor SRAM 1052 that stores frame descriptors. Therefore, as described herein, when sending data from a source to a destination, the DMA engine 1056 needs to perform all padding, address manipulation, etc.
in advance, with the VPU or other source required to perform sequencing by handshaking with the DMA system (e.g., with the VPU as the primary node and the DMA as the secondary node). In such an example, the DMA engine 1056 processes at the tile level, using descriptors for sections of the frame, each section comprising one or more tiles, to retrieve one tile at a time for sending to the destination, and subsequent tiles are retrieved according to the descriptor based on an instruction from the VPU to retrieve the next tile.

Using the system 1050 of FIG. 10H, however, enables processing of frames at the frame level. For example, a single descriptor can be used for the frame shown in FIG. 10H, which previously required nine descriptors. In practice, then, when the DMA engine 1056 attempts to load a descriptor from the descriptor SRAM 1052 (or, more generally, descriptor memory 1052), the HW sequencer control 1060 can intercept the descriptor load and use a command-sequence processing structure to handle multiple frames, tile rows/columns, and multiple descriptors. For this, a frame format 1070 (FIG. 10I) can be used, which describes higher-level frames by processing tile rows/columns (depending on traversal order) in hardware rather than at the tile level. For example, instead of padding individual tiles, the frame format 1070 can be used to pad an entire frame, thereby handling the padding of many tiles with a single padding command. In this way, the frame can be understood as a whole, e.g., where to pad, where to overlap, how to manipulate addresses automatically, and so on.
Further, because the DMA engine 1056 can fetch descriptors directly from the descriptor SRAM 1052 without the intervention of the hardware sequencer control 1060, legacy formats can still be supported for operations that may not benefit from the HW sequencer control 1060.

The HW sequencer control 1060 may operate, for example, as a state machine that reads the HW sequencer command SRAM 1054 (or, more generally, HW sequencer command memory 1054), where the frame format 1070 including the sequenced commands is stored. A processing controller (e.g., an R5 processor, CPU, ARM processor, etc.) may program or configure the hardware sequencer command SRAM 1054 and the descriptor SRAM 1052 using programming code and/or settings from a higher-level engine.

The descriptor SRAM 1052 may include one or more descriptors that may define tile dimensions (e.g., tile width dx and tile height dy), image or frame origin (e.g., top-left, bottom-right, etc.), trigger type, and/or other micro-level information such as the scan type of the descriptor.

The HW sequencer command SRAM 1054 may store the frame format 1070 defining the frame as a whole, the size of the frame, frame padding, and so on. For example, the frame format 1070 may include a frame header for header control, offset control, and padding control, and may include column headers and/or row headers for the columns or rows of a frame (e.g., column headers for a vertical scan pattern and row headers for a raster scan pattern). The frame header controls may include a frame repetition factor to identify how many times a particular frame is to be repeated, and the number of descriptor rows and/or descriptor columns. The frame header offset controls may include a frame tile offset (e.g., the offset from tile to tile) and a frame offset (e.g., the offset between two or more frames that can be read using a single channel; for example, a YUV frame may be processed as three separate planes).
The frame padding header can indicate how many lines or pixels of padding to add at the frame level (as opposed to the per-tile level of existing approaches), such as padding the left side of the frame, the top of the frame, the right side of the frame, and/or the bottom of the frame, thereby padding the frame as a whole rather than padding each tile within each portion of the frame at the tile level.

Column headers may be used when the traversal order is vertical, while row headers may be used when the traversal order is raster or horizontal. Column headers and/or row headers may include a column or row offset (e.g., the offset between each column or row), a column or row repetition factor (e.g., how many times the same column or row is repeated across the frame, e.g., N-1 times, where N is the number of times a column or row is processed), and the number of descriptors used for each column or row (e.g., a single descriptor can be used to repeat the same tile across a row or column, or a first descriptor can be used to traverse one portion of a row, a second descriptor can be used to traverse another portion of the row, and so on). Descriptor IDs may be included so that descriptors, e.g., stored in the descriptor SRAM 1052, can be pulled and used to describe the rows or columns. For example, a descriptor ID may indicate which descriptor is used for a particular column and/or row, and how many times that descriptor is repeated (e.g., N-1 times, where N is the total number of times the descriptor is used). In an embodiment, there may be a set of descriptors (e.g., 64), and the descriptor ID may be used to determine which descriptor should be used for a particular column and/or row. In this way, the hardware sequencer controller 1060 layers a superstructure of the frame on top of the base descriptors from the descriptor SRAM 1052, which allows the resources the DMA engine 1056 requires to implement the same data transfer to be reduced.
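The header fields described above might be organized as in the following C sketch. The field names and widths are illustrative assumptions for exposition, not the hardware encoding of frame format 1070.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout of the frame format 1070 fields described above.
 * Field names and widths are assumptions, not the hardware encoding. */
typedef struct {
    uint8_t  frame_repeat;      /* frame repetition factor */
    uint8_t  desc_rows;         /* number of descriptor rows */
    uint8_t  desc_cols;         /* number of descriptor columns */
    int32_t  frame_tile_offset; /* tile-to-tile offset */
    int32_t  frame_offset;      /* offset between frames (e.g., YUV planes) */
    uint16_t pad_left, pad_top, pad_right, pad_bottom; /* frame-level padding */
} frame_header_t;

typedef struct {
    int32_t offset;         /* offset between successive rows/columns */
    uint8_t repeat;         /* row/column repeated N-1 more times */
    uint8_t num_descs;      /* descriptors used in this row/column */
    uint8_t desc_id[8];     /* descriptor IDs (e.g., into a set of 64) */
    uint8_t desc_repeat[8]; /* per-descriptor repetition count (N-1) */
} line_header_t;            /* row header (raster) or column header (vertical) */

/* Tiles described by one row/column header: each descriptor is used
 * desc_repeat+1 times. */
static int line_tile_count(const line_header_t *lh)
{
    int n = 0;
    for (int i = 0; i < lh->num_descs; i++)
        n += lh->desc_repeat[i] + 1;
    return n;
}
```

With a single descriptor repeated 15 additional times, as in the raster example of FIG. 10K, one row header describes 16 tiles.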
Additionally, the hardware sequencer control 1060 can prefetch tiles early (e.g., using the register control 1058) to reduce latency, so that tile data can be immediately available when requested by the DMA engine 1056.

In operation, the HW sequencer control 1060 may read the image structure (e.g., frame format 1070) from the HW sequencer command SRAM 1054 and the descriptor information from the descriptor SRAM 1052, and may combine this information for the DMA engine 1056 for frame sequencing. The HW sequencer control 1060 can thus read the image structure, pull in the descriptors, and sequence the frame with the DMA engine 1056 in the correct descriptor format, rather than requiring each descriptor, trigger, channel, etc. to be individually coded into the DMA engine 1056. In an embodiment, the register control 1058 may help control traversal order, prefetching, and/or other frame addressing controls. The HW sequencer control 1060 further simplifies the VPU's code, since the VPU does not have to account for multiple channels. Instead, the VPU can request one tile, then the next, then the next, and so on. The HW sequencer control 1060 knows the current position in the frame, and therefore the next tile to fetch for the DMA engine 1056, so the DMA engine 1056 does not have to track this information internally.

The system 1050 can thus be backwards compatible with previous approaches, as the system can still support the use of various descriptors, triggers, channels, etc., but the data movement can also be understood at the frame level to reduce complexity. The system 1050 may support image padding at all corners of frames with different pixel padding sizes, vertically and/or horizontally overlapping tiles to allow the VPU to access adjacent tiles for processing along tile boundaries, and iterating over frames in different traversal orders. Additionally, the system 1050 can support automatic tile offset adjustments by the hardware sequencer control 1060 at the VMEM destination.
Because the descriptors in a frame are linked in hardware, the user does not need to link or stitch the descriptors together. The hardware sequencer control 1060 can manage address sequencing of descriptors/tiles across frames without additional programming complexity, and the hardware sequencer control 1060 can prefetch tiles to improve performance.

In some embodiments, descriptors may be included in the image or frame structure rather than stored separately in the descriptor SRAM 1052. For example, where legacy compatibility is not implemented, the entire sequencing structure and tile structure can be described in the frame structure. In such an example, the frame format of FIG. 10I can be extended to include the additional information for the descriptor, such as tile width, trigger type, etc., so that the same information is available to the HW sequencer control 1060.

Reference is now made to FIG. 10J, which is an example of the frame format 1070 of FIG. 10I when implemented for a raster scan sequence, according to some embodiments of the present disclosure. For example, frame format 1070A is an example of a frame format in raster mode, with frame address processing, using a single channel, a single trigger, and a single descriptor. In this example, the tile structure may be 16x8. FIG. 10K is an example of such a tile structure with hardware sequencing in a raster scan sequence, using the example frame format 1070A for frame address processing, according to some embodiments of the present disclosure. For example, for each row of tiles, the same descriptor (e.g., tile dimensions) can be used (as indicated by "D1" in visualization 1072), such that the same tile is applied 16 times along each row (from C1 to C16), and this is then repeated for 8 rows from top to bottom (from R1 to R8). The sequence may comprise 20 bytes, as shown in frame format 1070A, and each row may comprise N*2+ bytes, where N represents the number of entries per row (as shown in FIG. 10J).
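The 16x8 raster traversal of FIG. 10K, with a single descriptor D1 repeated along each row and the rows repeated top to bottom, can be emulated in software as follows. The pixel dimensions of the tile and the coordinate convention are illustrative assumptions.

```c
#include <assert.h>

/* Emulates the raster-order tile sequencing of frame format 1070A:
 * `rows` rows (R1..R8), `cols` columns (C1..C16), a single descriptor
 * D1 repeated along each row, and each row offset by the tile height.
 * Returns the number of tiles emitted. */
static int sequence_raster(int rows, int cols, int tile_w, int tile_h,
                           int xs[], int ys[])
{
    int n = 0;
    for (int r = 0; r < rows; r++)        /* row repeated rows-1 more times */
        for (int c = 0; c < cols; c++) {  /* D1 repeated cols-1 more times */
            xs[n] = c * tile_w;           /* tile-to-tile offset */
            ys[n] = r * tile_h;           /* per-row offset = tile height */
            n++;
        }
    return n;
}
```

For an 8x16 grid this emits 128 tiles from a single descriptor, a single trigger, and a single channel, which is the resource reduction the hardware sequencer provides.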
Thus, to sequence the frame as shown in visualization 1072, frame format 1070A may specify: no frame repetition; a number of descriptor rows of zero; no tile offset; no frame offset; 3 lines of pixel frame padding on the left (PL), right (PR), top (PT), and bottom (PB); a row repeated 7 times (8 rows in total), with each row offset by the tile height (Ty) (so that each row is offset by one tile height); and one descriptor, with descriptor ID D1, repeated 15 times in each row (16 times in total). In practice, therefore, the HW sequencer control can use the descriptor corresponding to D1 (which includes the tile height and tile width) from the descriptor SRAM 1052, and can use the image structure of frame format 1070A stored in the HW sequencer command SRAM 1054, to sequence the image tiles tile by tile (16 tiles per row), row by row (from R1 to R8), for the target processor (e.g., VPU). In this way, a single descriptor, a single trigger, and a single channel can be used, reducing programming complexity while still allowing the DMA system 1050 to be the primary or controlling component in the interaction between the DMA system 1050 and the VPU.

In some embodiments, as an extension of the HW sequencer control 1060, a DMA trigger mode may be used to reduce software intervention in VPU programming by having the DMA system 1050 command the descriptor sequence. For example, the DMA system 1050 may read images from external memory, tile the images, and sequence the tiles for the VPU. To facilitate this, the VPU may expose start and done signals. The VPU start may be driven by the DMA system 1050, and the VPU may send a done signal to the DMA system 1050 when the VPU completes processing a block of instructions. Thus, the DMA system 1050 (e.g., the hardware sequencer control 1060) and the VPU can participate in a handshake mechanism in which the DMA system 1050 is the primary node and the VPU is the secondary node.
This DMA trigger mode can minimize VPU tile control overhead and simplify the DMA engine 1056 programming model. For example, specific code for double-buffered DMA data movement may not be required, and the DMA kernel code may be independent of the VPU kernel code. The DMA trigger mode thus simplifies the VPU code, since the DMA system uses the HW sequencer control 1060 to handle tile sequencing. The sample code below illustrates the VPU code before and after the DMA trigger is added.

Before:

After:

As a result, where the VPU previously requested that a tile be moved to VMEM, now, because the HW sequencer control 1060 controls the sequencing, the DMA system 1050 can trigger moving a tile to VMEM with the VPU as the target. In this way, the DMA system 1050 can fetch data to be processed by the VPU ahead of time, and when the VPU indicates that processing is complete, the DMA system 1050 can make the next data to be processed immediately available (e.g., in VMEM) and can indicate the same to the VPU.

When performing processing of one or more frames, the HW sequencer control 1060 may retrieve one or more descriptors (which may indicate tile size, trigger type, etc.) from the descriptor SRAM 1052 and may retrieve the image structure from the HW sequencer command SRAM 1054. The HW sequencer control 1060, in conjunction with the register control 1058, may then begin traversing the first row or column according to the traversal order, moving on to the second descriptor, if any, after the number of repetitions encoded for the first (and, in an embodiment, only) descriptor has been completed (e.g., 1-N), and so on. As each tile is determined, the DMA engine 1056 may retrieve the tile data from the data source and write the tile data to the data destination (e.g., in VMEM). Once the data is written to the data destination, the processor (e.g., VPU) may be notified by the hardware sequencer control 1060 that the data is available so that the processor may begin processing.
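The start/done handshake of the DMA trigger mode described above might be modeled by the following single-threaded sketch. The function names and the in-memory tile representation are illustrative assumptions; in hardware, start and done are signals, not function calls, and the elided "Before"/"After" samples above are not reproduced here.

```c
#include <assert.h>

#define NUM_TILES 8

static int vmem_tile;       /* stands in for the tile staged in VMEM */
static int tiles_processed;

/* Illustrative stand-in for the VPU's per-tile work; returning models
 * the VPU asserting its done signal. */
static void vpu_process(int tile)
{
    (void)tile;
    tiles_processed++;
}

/* DMA-primary handshake: the DMA system sequences the tiles, stages
 * each tile in VMEM, drives the VPU start signal, and waits for the
 * VPU done signal before staging the next tile. */
static int dma_trigger_mode(void)
{
    for (int t = 0; t < NUM_TILES; t++) {
        vmem_tile = t;          /* DMA writes the next tile into VMEM */
        vpu_process(vmem_tile); /* "start" asserted; returns on "done" */
    }
    return tiles_processed;
}
```

Note that the VPU body contains no channel or descriptor bookkeeping at all, which is the simplification the trigger mode provides.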
During processing, then, the DMA system 1050 can fetch the next tile of data according to the sequence from the hardware sequencer control 1060 and write the data to the data destination, such that when the processor indicates that processing is complete, the hardware sequencer control 1060 can indicate to the VPU (via the handshake mechanism) that the next data to be processed is available, and so on, until processing is complete.

Referring now to FIG. 10L, each block of the method 1080 described herein comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The method 1080 may also be embodied as computer-usable instructions stored on computer storage media. The method 1080 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, the method 1080 is described with respect to the system of FIG. 10H; however, the method 1080 may be executed by any one system, or any combination of systems, structures, or components, including, but not limited to, those described herein.

FIG. 10L is a flow diagram of a method 1080 for a DMA system including a hardware sequencer, according to some embodiments of the present disclosure. At block B1002, the method 1080 includes retrieving a tile structure from a descriptor memory and a frame structure corresponding to a frame from a hardware sequencer command memory. For example, the hardware sequencer control 1060 may retrieve descriptors from the descriptor SRAM 1052.

At block B1004, the method 1080 includes sequencing the retrieval of the tiles of the frame from a source memory.
For example, the hardware sequencer control 1060 (in embodiments, in conjunction with the register control 1058) may sequence the retrieval of tiles from the source memory by the DMA engine 1056 according to the frame (or image) structure and the tile description from the descriptor.

At block B1006, the method 1080 includes writing retrieved data corresponding to the tiles to a destination memory. For example, the DMA engine 1056 may write the retrieved data corresponding to the tiles to a destination memory (e.g., VMEM) for processing by a destination processor (e.g., VPU).

At block B1008, the method 1080 includes providing an indication to a processor associated with the destination memory that the retrieved data is stored in the destination memory. For example, the HW sequencer control 1060 may indicate to the processor that the next tile's data is ready for processing.

At block B1010, the method 1080 includes receiving an indication that processing of the retrieved data is complete. For example, after processing is complete, the processor may indicate to the DMA system 1050 that processing is complete, at which point the next tile of data may be loaded into (or may already have been preloaded into) the destination memory, and the DMA system 1050 may indicate the same to the processor.

Configuring a DMA System for Region-Dependent Data Movement Using the VPU

A processing controller can configure a direct memory access (DMA) system, and a processor (e.g., a vector processing unit (VPU)) can trigger and sequence the DMA, when fetching known data patterns. However, when dealing with different data points or features having irregular or unknown data patterns, reconfiguring the data movement may introduce challenges, since the feature or object locations are computed dynamically.
For example, object tracking algorithms, feature tracking algorithms, object detection algorithms, deep learning algorithms using variable-sized regions of interest (ROIs), and/or other algorithms with region-dependent data movement need to dynamically adjust address and data pairs so that the DMA system can retrieve the appropriate information for processing by a processor (e.g., VPU). In traditional systems, when fetching unknown data patterns, such as in object tracking, the processing controller (e.g., an R5 processor core used to control a programmable vision accelerator (PVA)) may require interrupts to intervene in the processing cycle, in order to determine the updated information computed by the processor (e.g., VPU) and to reconfigure the DMA for the next iteration. The processing controller therefore introduces additional latency into, for example, tracking algorithms, which require short response times.

To address these shortcomings of conventional systems that require processing controller intervention, the systems and methods of the present disclosure can use a DMA and a processor (e.g., a VPU) to configure a tightly coupled processing loop that allows the DMA to reconfigure its descriptors based on the processor's output. The DMA can thus be dynamically reprogrammed at runtime to handle certain algorithms that require region-dependent data movement. This VPU configuration mode can be used to update the DMA's descriptors based on the tracked feature data (including location) computed by the VPU at runtime. The VPU can thus specify a list of address and data pairs in memory (e.g., VMEM), and then trigger the DMA to update its own descriptors to collect data from the regions at the newly computed addresses. By relying on the interface between the VPU and the DMA, once the processing controller initially configures the VPU and DMA to begin processing, no processing controller (e.g., R5 or ARM processing core) intervention may be required.
This batched, fast, and synchronous MMIO access for updating feature descriptors thus reduces latency for object tracking, feature tracking, object detection, deep learning, and/or other algorithms with region-dependent data movement.

Referring now to FIG. 11A, FIG. 11A illustrates a data flow diagram for a process 1100 of configuring a direct memory access (DMA) system using a vector processing unit (VPU), according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, functional groupings, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. In some embodiments, the process 1100 may be executed by a system that includes components, features, and/or functionality similar to those of the example autonomous vehicle 1300 of FIGS. 13A-13D, the example computing device 1400 of FIG. 14, and/or the example data center 1500 of FIG. 15.

A system for performing the process 1100 may include a processing controller 1102 (e.g., R5 processor, ARM processing core, instruction set architecture (ISA), X86 architecture, etc.), a direct memory access (DMA) system 1104, a vector processing unit (VPU) 1108 (or another processor type), vector memory (VMEM) 1110 (or another memory type), and descriptor RAM 1106. In practice, the VPU configuration mode can configure the DMA to fetch a descriptor by writing a series of non-consecutive address/data pairs to the DMA descriptor SRAM.
The process 1100 may be described with respect to an example feature or object tracking algorithm. However, this is not intended to be limiting, and the process 1100 and underlying system may be used to execute any type of algorithm, such as algorithms with region-dependent data movement.

For example, a first operation may include the processing controller 1102 configuring the DMA 1104 and the VPU 1108 to perform processing on some data, and then triggering both the DMA 1104 and the VPU 1108 to perform the processing. For example, the processing controller 1102 may load the descriptor RAM 1106 as a starting point for the processing, and may configure the registers of the VPU 1108 for the particular type of operation the VPU 1108 is to perform on the data.

For a second operation, the VPU 1108 may trigger the DMA 1104 to read the initial feature data points into the VMEM 1110. For example, to begin work, the VPU 1108 needs data from the DMA 1104, so the VPU 1108 configures the DMA 1104 to load the data points into the VMEM 1110 at locations where the VPU 1108 knows to retrieve the data for processing.

In a third operation, the VPU 1108 may process the current set of feature data and compute the next tracked object or feature location. As a result, the VPU 1108 may now have computed a new or updated position for the tracked feature or object.

In a fourth operation, the VPU 1108 may update the VMEM 1110 with the updated location using the VPU configuration format (described with respect to FIG. 11B), and may then trigger the DMA 1104 to update its descriptors in the descriptor RAM 1106. For example, FIG. 11B is a table 1120 illustrating a VPU configuration format written by the VPU into vector memory (VMEM) and read by the DMA system, according to some embodiments of the present disclosure.
For example, for each address/data pair, the format may include four bytes of address and four bytes of data.

In a fifth operation, the DMA 1104 may update the descriptors in the descriptor RAM 1106 to retrieve the appropriate data for the next processing iteration of the VPU 1108. For example, the DMA 1104 may read the address/data pairs in the VPU configuration format to patch the operating descriptors with the updated locations. In an embodiment, there may be a one-to-one correspondence between feature points and descriptors, such that each tracked feature, object, or point may include an associated descriptor. In this way, the address/data pair for each tracked feature, object, or point can be updated over time using a separate descriptor.

In a sixth operation, the DMA 1104 may use the newly updated descriptors in the descriptor RAM 1106 to fetch the new feature data for a location. For example, the DMA 1104 can indicate to the VPU 1108 that the descriptors have been updated, and the VPU 1108 can trigger the DMA 1104 to read the new data into the VMEM 1110, and so on.

As a result, after the first configuration operation by the processing controller, operations two through six can be repeated to form a tightly synchronized VPU configuration loop that requires no processing controller intervention, thereby reducing latency to meet the short response times required by tracking or detection algorithms. Also, because the DMA 1104 overwrites the addresses in memory with the newly updated addresses, the DMA 1104 is updating the very code that the DMA 1104 looks at to determine what to fetch next. By doing so, throughput is increased compared to conventional systems that rely on a control bus to update registers with addresses and data. Thus, the benefit of defining an address/data protocol is achieved, in which variable address locations with variable amounts of data can be updated, along with how the address/data pairs are updated.
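The 4-byte-address/4-byte-data pair format of FIG. 11B, and the descriptor-patching step of the fifth operation, can be sketched as follows. The descriptor-RAM layout and the meaning of the patched words are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* VPU configuration format (FIG. 11B): each entry is a 4-byte address
 * followed by 4 bytes of data, so one pair occupies 8 bytes. */
typedef struct {
    uint32_t addr; /* descriptor-RAM byte address to patch */
    uint32_t data; /* value to write (e.g., updated ROI position) */
} vpu_cfg_pair_t;

/* Illustrative descriptor patch: apply each address/data pair to a
 * flat word view of the descriptor RAM, skipping out-of-range pairs. */
static void patch_descriptors(uint32_t *desc_ram, size_t words,
                              const vpu_cfg_pair_t *pairs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (pairs[i].addr / 4 < words)
            desc_ram[pairs[i].addr / 4] = pairs[i].data;
}
```

Because each pair is 8 bytes, a 512-bit transfer can carry 8 such pairs at once, which is the batching advantage over a 32-bit control bus noted below.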
This protocol allows the DMA 1104, which may be wider than the control bus (e.g., 512 bits versus 32 bits, respectively), to update, for example and without limitation, 8 address/data pairs at a time (where each address/data pair is defined using 8 bytes, as shown in FIG. 11B).

Further, although the DMA is illustrated as being updated using the VPU configuration mode of the process 1100, additional or alternative elements or components of the system may be updated. For example, the instruction cache of the VPU 1108 may be updated by the VPU using a similar approach. As another example, an updated hardware sequencer program may be written to the hardware sequencer memory by supplying address/data pairs. This would essentially amount to writing a new program to the hardware sequencer RAM, such as the hardware sequencer command SRAM 1054 of the hardware sequencer controller 1060 of FIG. 10H.

Referring now to FIG. 11C, each block of the method 1150 described herein comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The method 1150 may also be embodied as computer-usable instructions stored on computer storage media. The method 1150 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, the method 1150 is described with respect to the system of FIG. 11A; however, the method 1150 may be executed by any one system, or any combination of systems, structures, or components, including, but not limited to, those described herein.

FIG. 11C is a flow diagram of a method 1150 for configuring a DMA system using a VPU, according to some embodiments of the present disclosure.
At block B1102, the method 1150 includes computing, using a processor and based at least in part on first data written to memory using a DMA system, a first output corresponding to one or more first updated locations of a tracked feature. For example, the VPU 1108 may access data from the VMEM 1110 that was written to the VMEM 1110 using the DMA 1104, and may process the data to compute one or more locations corresponding to tracked features, objects, points, etc.

At block B1104, the method 1150 includes updating, using the processor, the memory to include second data representative of one or more address/data pairs corresponding to the one or more first updated locations. For example, after computing the one or more locations, the VPU 1108 may update the VMEM 1110 with address/data pairs in a format such as the format described with respect to FIG. 11B.

At block B1106, the method 1150 includes updating, using the DMA system and based at least in part on the one or more address/data pairs, one or more descriptors corresponding to the tracked feature. For example, the DMA 1104 may access the address/data pairs from the VMEM 1110 and use the address/data pairs to update the descriptors in the descriptor RAM 1106 for the next read operation.

At block B1108, the method 1150 includes writing third data to the memory using the DMA system and based at least in part on the one or more descriptors. For example, the DMA 1104 may write updated data corresponding to the address/data pairs identified using the descriptors to the VMEM 1110.

At block B1110, the method 1150 includes computing, using the processor and based at least in part on the third data, a second output corresponding to one or more second updated locations of the tracked feature.
For example, once the updated data is in the VMEM 1110, the VPU 1108 may compute the next set of updated address/data pairs corresponding to the tracked features, objects, points, etc., and this process may repeat until processing is complete.

Permanent Fault Detection in a Programmable Vision Accelerator (PVA)

In safety-critical applications, such as autonomous and semi-autonomous machine applications, there are stringent requirements for permanent fault detection and isolation. For example, when deep learning, computer vision, sensor processing, and/or other applications are implemented in machines, permanent fault detection must be performed regularly, within an allotted time budget for accurate testing, while also allowing the applications to execute correctly, e.g., with low latency. With respect to Automotive Safety Integrity Level (ASIL) D, applications executing in autonomous or semi-autonomous machines may require permanent fault coverage of 90% or more. End-to-end coverage may therefore be required, with low latency, while meeting the runtime budget of each particular application. Traditional approaches use built-in self-test (BIST) to identify faults, but these BIST techniques either do not provide sufficient coverage, introduce excessive latency into the system, and/or do not meet the runtime budgets of some applications.

To address the deficiencies of these conventional approaches, the present systems and methods can implement a multiple-input signature register (MISR) BIST, for example, to perform fault detection for a programmable vision accelerator (PVA) of a system-on-chip (SoC). For example, in various embodiments of the present disclosure, a PVA may include one or more DMA systems and one or more VPUs that are controlled using one or more processing controllers (or control processors, such as R5 processors, ARM processors, CPUs, and/or the like).
Therefore, each component of the PVA may need to be tested, and the present systems and methods perform MISR BIST to detect permanent faults in an end-to-end manner. In this way, permanent fault detection can be performed so as to cover the control and data logic end to end, to report errors directly to a safety processor to reduce latency, and to be tailored to specific applications to meet the associated runtime budgets.

In various embodiments, MISR can be used in the PVA to implement a software logic BIST for permanent fault detection. The MISR hardware (described herein with respect to FIGS. 12A and/or 12B) may include cyclic redundancy check (CRC) hardware that is initialized (e.g., using a known seed value) by the processing controller. When executing a PVA application, the processing controller can allocate a portion of the timing budget (e.g., about 10% or less) to run a known software MISR test, with known inputs, that has a deterministic precomputed output, where the deterministic precomputed output has the correct signature or golden value. For example, where the timing budget corresponds to 30 frames per second, a timing budget corresponding to 3 frames or less may be allocated to the MISR test. At the allotted time, the processing controller may initiate the MISR test and wait for the test to complete before terminating the MISR CRC calculation. Once the test is complete, the MISR hardware may read back the final CRC value and check the final CRC value against the precomputed golden value. In the case of a mismatch, the MISR hardware can report the error directly to the SoC's safety processor, which may take further action to handle the safety error, for example, causing the application's output to be ignored, addressing or resolving the permanent fault, etc.

Accordingly, the MISR hardware in the DMA block may monitor the transactions on one or more (e.g., all, in an embodiment) of the PVA's Advanced eXtensible Interface (AXI) master ports.
By checking all of the output stages from the PVA, in an embodiment, the safety integrity of the PVA can be checked for permanent faults that may corrupt the output stage (e.g., the output information), which may be consumed by the PVA and/or another engine when the application executes. The MISR hardware can thus detect errors across the different blocks of the PVA (e.g., the processing controller, VPU, and DMA system), as these components all cooperate and interact in generating the output stage. The signatures computed in the MISR hardware can represent the state of these different PVA blocks during the MISR test.

In an embodiment, the MISR scheme may include a CRC check on both the write addresses (e.g., 40-bit control) and the write data (e.g., 512-bit data) leaving the AXI master ports. This feature may allow control path failures (e.g., addressing errors) to be isolated from data path failures (e.g., computation errors). Due to the configuration of the MISR hardware (as described herein), each DMA AXI port may be capable of being checked. In an embodiment, a control bit may be used to disable the writing of the address and data outputs of all channels participating in the MISR calculation, in order to save bandwidth consumption in the memory subsystem and during memory allocation. In addition, the MISR scheme may include per-channel control register bits to exclude or mask specific channels from the MISR calculation, for example, to isolate non-safety channels. In an embodiment, the DMA can compute the MISR CRC using the IEEE 802 and MPEG CRC-32 primitive polynomial: x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1. The MISR SET register can be used to set the initial CRC value (e.g., seed value) for the address and data CRC calculations.
The MISR REF register can be used to hold the comparison CRC value for the address and data CRC calculations.

To support MISR for the 512-bit data, 8:1 bit data compression can be applied; for example, each data byte can be compressed to 1 data bit by an 8:1 exclusive-or (XOR) operation, to form 2x32-bit message data. To support MISR for the 40-bit address, the 9 most significant bits can be compressed; for example, the 9 most significant bits can be compressed via a 9:1 XOR operation to form a 32-bit message address. Variations in test patterns and instructions can be used to cover compression-related aliasing. The likelihood of aliasing is likely to be low, since errors escape the data CRC only when there is an even number of error bits within a byte of the output image. Also, aliasing may be less likely since, for an error to go undetected, the reference CRC would have to be computed on an output image with the same pattern at the same even error bit positions throughout the MISR test. During experiments, aliasing was shown to cause an average coverage loss of 0.25%. In an embodiment, data compression with such low aliasing is valuable given the width of the bus (e.g., 512 bits); without compression, the MISR test might not meet the latency or run-time budget of the system.

The MISR timer register can be used to time out the MISR calculation, and the MISR timer register can be decremented on every AXI clock. The timeout feature may help in the event of a failure that causes the MISR test to hang, which might otherwise prevent the MISR hardware from reporting errors. When the MISR test is over, the processing controller can use a software event to stop the MISR calculation. The DMA system can compare the MISR REF value with the MISR VAL value computed from the tested data and address outputs, and the DMA hardware can update the MISR status register based on the comparison result.
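The 8:1 data compression and 9:1 address compression described above reduce to simple parity operations, sketched below. The big-endian bit packing and the function names are assumptions for illustration; the text does not specify the packing order.

```python
def compress_data_512(data: bytes) -> tuple[int, int]:
    """Compress 64 bytes (512 bits) to 2x32-bit message words: each byte is
    XOR-reduced (8:1) to a single parity bit."""
    assert len(data) == 64
    bits = 0
    for byte in data:
        parity = bin(byte).count("1") & 1  # XOR of the byte's 8 bits
        bits = (bits << 1) | parity
    return (bits >> 32) & 0xFFFFFFFF, bits & 0xFFFFFFFF

def compress_addr_40(addr: int) -> int:
    """Compress a 40-bit address to one 32-bit message word: the 9 most
    significant bits are XOR-reduced (9:1) to one bit; the low 31 bits are kept."""
    parity = bin((addr >> 31) & 0x1FF).count("1") & 1
    return (parity << 31) | (addr & 0x7FFFFFFF)
```

Note the aliasing behavior discussed above: compress_data_512 maps any byte with an even number of flipped bits to an unchanged parity bit, which is why varied test patterns and instructions are used to keep the coverage loss low.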
For example, the MISR status register can hold one of the following values: 0: idle; 1: done: failed data; 3: busy; 4: done: both address and data failed; 5: done: failed timeout; 6: RSVD; and 7: done: passed. In the case of a MISR timeout error, the DMA can generate a timeout signal to the safety processor, and in the case of a CRC check error on the data and/or address, the DMA can assert a safety error to the safety processor.

Referring to FIG. 12A, FIG. 12A is a diagram of a built-in self-test (BIST) system for performing cyclic redundancy check (CRC) calculations of a programmable vision accelerator (PVA), according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Furthermore, many of the elements described herein are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, in any suitable combination and location. Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, the MISR hardware 1200 may include components, features, and/or functions similar to those of the DMA system 1050 of FIG. 10H, the example autonomous vehicle 1300 of FIGS. 13A-13D, and/or the example computing device 1400 of FIG. 14. For example, the MISR hardware 1200 may be included in the DMA block of the PVA, as shown in FIG. 10H. In this way, the MISR hardware 1200 can tap the output of the DMA engine 1056 at the output stage for both data movement and addressing (or control).
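The status encoding listed above, and the error-routing behavior that accompanies it, can be modeled directly. This is an illustrative decode: the names are assumptions, and the meaning of value 2, which is absent from the list in the text, is an inference from the neighboring values.

```python
# Illustrative decode of the 3-bit MISR status register values listed above.
MISR_STATUS = {
    0: "idle",
    1: "done: failed data",
    2: "done: failed address",  # inferred; value 2 is not stated in the text
    3: "busy",
    4: "done: both address and data failed",
    5: "done: failed timeout",
    6: "RSVD",
    7: "done: passed",
}

def routes_safety_error(status: int) -> bool:
    """True for statuses that assert a safety error or timeout signal to the
    SoC's safety processor (CRC errors on data and/or address, or a timeout)."""
    return status in (1, 2, 4, 5)
```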
As shown in FIG. 12A, there may be 16 AXI data channels and 16 AXI address (or control) channels. However, this is not intended to be limiting, and any number (and/or type) of data and/or address channels may be used, according to the embodiment.

In operation, the processing controller may control the DMA system 1050 and the MISR hardware 1200, as well as one or more processing components of the system, such as a VPU. When performing a MISR test on the DMA system 1050, in an embodiment, the test code may include all zeros, all ones, alternating zeros and ones, and/or a random code sequence. In this way, high coverage of the DMA system 1050 can be achieved. When testing a VPU, by contrast, the test code may include application-specific or custom code. For example, to achieve test coverage of a particular application, the components or portions of the VPU that the application uses (e.g., registers, logic, etc.) can be determined, and test code can be generated so that those specific components or portions of the VPU are exercised during execution of the test code. For example, random data with different instructions can be included in the test code, sequencing the tests through different instructions so as to exercise different areas of the VPU logic. In this way, coverage of the VPU in general, and coverage of the specific applications executing on the VPU in particular, is increased. By performing DMA and VPU testing in this manner, and since the processing controller is involved in the control of, and interaction between, the various components (e.g., the DMA system 1050 and the VPU), the processing controller itself can also achieve high coverage, because the output and addressing of the data movement are affected by the processing controller's interactions.

During testing, where different code patterns are used, the code patterns can be used in an alternating pattern, or one code can be used for a first time frame (e.g., a time equivalent to 30 fps), another code for a second time frame (e.g.,
a time equivalent to 30 fps), another code for a third time frame (e.g., a time equivalent to 30 fps), and so on. For example, in the DMA code example, an all-zeros code could be used for the first time frame, then an all-ones code for the second time frame, then alternating zeros and ones (e.g., 0101010101...) for the third time frame, then a random code (e.g., 011100100010...) for the fourth time frame, after which these four codes may repeat, and so on.

In practice, when testing DMA, for example, the processing controller may interact with the MISR controller 1206 to write set reference values into the MISR data set register 1210 and the MISR address set register 1216. These values may differ for data and addresses, and may be referred to as seed values for the CRC calculation. The processing controller can then initialize the channels on which the data movement is performed in the DMA engine 1056, and, since the location of the test code in memory is known to the processing controller, descriptors (e.g., configured by the processing controller in the descriptor SRAM 1052) may be used to sequence the DMA engine 1056 through the data for the MISR test. The processing controller can set a timer 1226 on the MISR hardware 1200 to enable the MISR test, and can then trigger the channels of the DMA engine 1056 to start reading test data from the source locations and outputting the data, which the MISR hardware 1200 taps for MISR testing. Thus, when testing DMA, the data movement itself (e.g., correct addressing and correct data at the addressed location) is being tested, so the MISR hardware 1200 can tap the output of the DMA engine 1056 while the DMA engine executes the data movement of the test code. This tap into the output stage, indicated in FIG. 12A as toward external memory, can be funneled in sequence by the processing controller (one data channel at a time, and one address channel at a time).
For example, for the data channels, the processing controller can sequence through each of, say, 16 data channels, and the corresponding AXI write data (wdata) for each channel can be fed through the CH0-CH16 data CRC calculation 1202, for example, in series. For example, the processing controller may configure the channel output registers 1220 to pass the channels through one at a time, according to a sequence configured by the processing controller. In an embodiment, the channel mask registers 1208 (e.g., programmed by the MISR controller 1206 based on interaction with the processing controller) may be configured to mask or remove various channels (e.g., channels not under test) from the CRC calculation. In an embodiment, this masking may be performed using AND gates. In the event that one or more channels are masked out, the golden value in the MISR data reference register 1222 (which may be provided by the processing controller to the MISR controller 1206) may correspond only to the CRC calculation for the unmasked channels. For each unmasked channel, the data on the channel (generated using test code read from memory) can be applied (e.g., compressed or uncompressed) to the polynomial in the CRC data calculation 1202 to generate the MISR data value 1214 for that channel. Once the channel has finished its calculation, the processing controller can receive an indication and can cause the next channel of data to be sent to the CRC calculation 1202 to calculate the next MISR data value 1214, and so on, until each unmasked channel has a corresponding MISR data value 1214.
Once each MISR data value 1214 for a particular iteration has been calculated, these values 1214 can be combined to generate a final MISR data value, which can be compared to the golden value in the MISR data reference register 1222 to generate a MISR data status determination (e.g., one of the states corresponding to the values 0-7 above).

As another example, for the address channels, the processing controller may sequence through each of, for example, 16 address or control channels, and the corresponding AXI write address (waddress) for each channel may be fed through the CH0-CH16 address CRC calculation 1204, for example, in series. In an embodiment, the channel mask register 1208 may be configured by the processing controller to mask or remove various channels (e.g., channels not under test) from the CRC calculation. In an embodiment, this masking may be performed using AND gates. In the event that one or more channels are masked out, the golden value in the MISR address reference register 1224 may correspond only to the CRC calculation for the unmasked channels. For each unmasked channel, the address on the channel (generated using test code read from memory) may be applied (e.g., compressed or uncompressed) to the polynomial of the CRC address calculation 1204 to generate the MISR address value 1218 for that channel. Once the channel has finished its calculation, the processing controller can receive an indication and can cause the next channel of address data to be sent to the CRC calculation 1204 to calculate the next MISR address value 1218, and so on, until each unmasked channel has a corresponding MISR address value 1218.
Once each MISR address value 1218 for a particular iteration has been calculated, these values 1218 can be combined to generate a final MISR address value, which can be compared to the golden value in the MISR address reference register 1224 to generate a MISR address status determination (e.g., one of the states corresponding to the values 0-7 above).

In some embodiments, MISR testing can be iterative, such that a first code can be processed and its output tested, that output can then be used as the input for a next iteration, which is tested in turn, and so on. In such an embodiment, the MISR test may include multiple stages, and a completed MISR test may include performing each stage.

In the case where the MISR hardware 1200 is used specifically to test a VPU, for example, the DMA system 1050 can move the test code into VMEM, the VPU can process the test code and write the results back to VMEM, and the DMA engine 1056 can read the results from VMEM back to a destination location. When the results are written back to the destination location, the MISR hardware 1200 can tap the DMA output and perform MISR on the output (e.g., including data and address), similar to that discussed herein. In this way, the interaction of the VPU with the test code can be tested using the MISR hardware 1200.

After completing the MISR test, the processing controller may receive an interrupt. For example, the processing controller may receive a completion interrupt and, in the absence of errors, may wait for the next MISR test cycle. Where the interrupt is an error interrupt, the type of error can be determined (e.g., failed data, failed address, failed both, etc.), and a safety error can be asserted to the safety processor.
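The serial flow of FIG. 12A, with per-channel CRC values computed one channel at a time and then combined into a final value checked against a golden reference, can be modeled as below. This is a behavioral sketch only: the standard library CRC-32 stands in for the hardware CRC block, and the way per-channel values are combined into the final value is an assumption, not the documented hardware behavior.

```python
import zlib  # zlib.crc32 stands in for the hardware CRC calculation


def channel_crc(seed: int, bursts: list[bytes]) -> int:
    """Per-channel MISR value: fold each data burst into the channel CRC."""
    crc = seed
    for burst in bursts:
        crc = zlib.crc32(burst, crc)
    return crc


def serial_misr(channels: dict[int, list[bytes]], masked: set[int],
                seed: int, golden: int) -> str:
    """Process one channel at a time, skip masked channels, combine the
    per-channel values into a final MISR value, and compare with the golden value."""
    final = seed
    for ch, bursts in sorted(channels.items()):
        if ch in masked:
            continue  # masked channels do not contribute to the final value
        value = channel_crc(seed, bursts)
        final = zlib.crc32(value.to_bytes(4, "big"), final)
    return "done: passed" if final == golden else "done: failed data"
```

A golden value would be precomputed offline by running the same test code through the same model, so that any divergence in the data movement shows up as a mismatch.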
For example, in some embodiments, where the MISR hardware 1200 hangs or idles (e.g., with a timeout error), the DMA may generate a timeout signal to the SoC's safety processor.

In some embodiments, to speed up the MISR calculation, CRCs may be calculated on one or more (e.g., 16, in an embodiment) channels without serializing the channel MISR calculations: the channels may be demultiplexed based on the AXI ID field for parallel per-channel calculation. For example, since the CRC calculations complete at different rates, the approach of FIG. 12A processes one channel after another in series; using the system of FIG. 12B, described below, these calculations can instead be done in parallel. When the processing controller terminates the MISR computation, the MISR controller can sequence through all channel outputs to compute a final signature, which can be compared to the reference or golden values for the address and data outputs. This feature can speed up permanent fault detection without requiring an additional programmer register interface, for example, because the same control registers can be used for all channels.

Referring now to FIG. 12B, FIG. 12B is a diagram of a built-in self-test (BIST) system for parallel-channel cyclic redundancy check (CRC) calculations of a programmable vision accelerator (PVA), according to some embodiments of the present disclosure. It should be understood that this and other arrangements described herein are set forth by way of example only. Other arrangements and elements (e.g., machines, interfaces, functions, orders, groupings of functions, etc.) may be used in addition to or instead of those shown, and some elements may be omitted entirely. Furthermore, many of the elements described herein are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, in any suitable combination and location.
Various functions described herein as being performed by entities may be carried out by hardware, firmware, and/or software. For example, various functions may be performed by a processor executing instructions stored in memory. In some embodiments, the MISR hardware 1250 may include components, features, and/or functions similar to those of the DMA system 1050 of FIG. 10H, the example autonomous vehicle 1300 of FIGS. 13A-13D, and/or the example computing device 1400 of FIG. 14. For example, the MISR hardware 1250 may be included in the DMA block of the PVA, as shown in FIG. 10H. In this way, the MISR hardware 1250 can tap the output of the DMA engine 1056 at the output stage for both data movement and addressing (or control). As shown in FIG. 12B, there may be 16 AXI data channels and 16 AXI address (or control) channels. However, this is not intended to be limiting, and any number (and/or type) of data and/or address channels may be used, according to the embodiment.

The MISR hardware 1250 may operate similarly to the MISR hardware 1200 of FIG. 12A, except that the MISR hardware 1250 can be configured for parallel data channel and parallel address channel CRC calculations. For example, the processing controller may configure the MISR data set register 1256 to set a seed or reference value for each data CRC calculation 1260A-1260N (corresponding to AXI data channels 0-15, respectively), and may configure the MISR address set register 1258 to set seed or reference values for each address CRC calculation 1262A-1262N (corresponding to AXI address channels 0-15, respectively). The processing controller, similar to that described with respect to FIG.
12A, can then trigger data movement (e.g., for DMA testing) and/or VPU processing (e.g., for VPU-specific testing) on the DMA system 1050 to move the data, and the MISR hardware 1250 can tap the output stage for testing.

Thus, the processing controller may cause the 16 channels of data to be sent to multiplexer (mux) 1252 and the 16 channels of address data to be sent to multiplexer (mux) 1254. Mux 1252 can then provide each channel's data to the corresponding CRC calculation 1260A-1260N (e.g., channel 0 AXI data to channel 0 CRC calculation 1260A, channel 1 data to channel 1 CRC calculation 1260B, etc.), and each CRC calculation 1260 can use the data and a CRC polynomial with the reference value to calculate a MISR data value 1284A-1284N (e.g., channel 0 CRC calculation 1260A may calculate the MISR data 0 value 1284A, channel 1 CRC calculation 1260B may calculate the MISR data 1 value 1284B, etc.). The MISR data values 1284A-1284N may then be sequenced out of multiplexer (mux) 1264 according to the MISR sequence from the MISR control 1270, as configured by the processing controller. In an embodiment, such as described with respect to FIG. 12A, one or more channels may not be included in a particular MISR test, so the channel mask register 1268 may be configured by the processing controller to update the MISR sequence so that the MISR data values 1284 corresponding to the one or more masked channels are not provided to the channel 0-16 data CRC calculation 1274 used to calculate the final CRC value. For the unmasked channels, multiplexer 1264 may output the MISR data values 1284 according to the MISR sequence. In this way, the different channels and the different computation times of the CRC calculations 1260 are accounted for, since the MISR data values 1284 are forced to be output according to the MISR sequence, rather than being sent to the CRC calculation 1274 according to the timing at which each CRC calculation happens to complete.
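The data-path half of this parallel scheme can be sketched as follows: per-channel CRCs run concurrently (threads here stand in for the parallel hardware), and the final CRC is computed over the per-channel results in the configured MISR sequence rather than in completion order. The names and the combining step are illustrative assumptions.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor


def parallel_misr(channels: dict[int, bytes], misr_sequence: list[int],
                  seed: int) -> int:
    """Compute per-channel CRCs in parallel, then combine them into the
    final MISR value in a fixed, configured order."""
    with ThreadPoolExecutor() as pool:
        futures = {ch: pool.submit(zlib.crc32, data, seed)
                   for ch, data in channels.items()}
        per_channel = {ch: f.result() for ch, f in futures.items()}
    final = seed
    # Masked channels are simply left out of misr_sequence.
    for ch in misr_sequence:
        final = zlib.crc32(per_channel[ch].to_bytes(4, "big"), final)
    return final
```

Because the combination walks misr_sequence, the final value is independent of which channel's CRC happened to finish first, which is the property the sequencing multiplexers enforce in hardware.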
Once the MISR sequence of MISR data values 1284 is output by multiplexer 1264 to the CRC calculation 1274, the CRC calculation 1274 may calculate a final CRC value and store it in the VAL register 1276. The final CRC value in the VAL register 1276 may then be compared to the golden value in the MISR data reference register 1272 (configured by the MISR control 1270 from the processing controller) to determine the MISR data status.

Similarly, the processing controller can cause the 16 address channels to be sent to multiplexer (mux) 1254, which can then provide each address channel to the corresponding CRC calculation 1262A-1262N (e.g., channel 0 AXI address to channel 0 CRC calculation 1262A, channel 1 address to channel 1 CRC calculation 1262B, etc.), and each CRC calculation 1262 may use the address and a CRC polynomial with the reference value to calculate a MISR address value 1286A-1286N (e.g., channel 0 CRC calculation 1262A may calculate the MISR address 0 value 1286A, channel 1 CRC calculation 1262B may calculate the MISR address 1 value 1286B, etc.). The MISR address values 1286A-1286N may then be sequenced out of multiplexer (mux) 1266 according to the MISR sequence from the MISR control 1270, as configured by the processing controller. In an embodiment, such as described with respect to FIG. 12A, one or more channels may not be included in a particular MISR test, so the channel mask register 1268 may be configured by the processing controller to update the MISR sequence so that the MISR address values 1286 corresponding to the one or more masked channels are not provided to the channel 0-16 address CRC calculation 1280 used to calculate the final CRC value. For the unmasked channels, the MISR address values 1286 may be output by multiplexer 1266 according to the MISR sequence.
In this way, the different channels and the different computation times of the CRC calculations 1262 are accounted for, since the MISR address values 1286 are forced to be output according to the MISR sequence, rather than being sent to the CRC calculation 1280 according to the timing at which each CRC calculation happens to complete. Once multiplexer 1266 outputs the MISR sequence of MISR address values 1286 to the CRC calculation 1280, the CRC calculation 1280 may calculate a final CRC value and store it in the VAL register 1282. The final CRC value in the VAL register 1282 may then be compared to the golden value in the MISR address reference register 1278 (configured by the MISR control 1270 from the processing controller) to determine the MISR address status.

The MISR data status and the MISR address status can be checked and used similarly to that described above with respect to FIG. 12A.

Referring now to FIG. 12C, each block of the method 1290 described herein comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For example, various functions may be carried out by a processor executing instructions stored in memory. The method 1290 may also be embodied as computer-usable instructions stored on computer storage media. The method 1290 may be provided by a standalone application, a service or hosted service (standalone or in combination with another hosted service), or a plug-in to another product, to name a few. In addition, the method 1290 is described with respect to the system of FIG. 12A; however, the method 1290 may be executed by any one system, or any combination of systems, structures, or components, including, but not limited to, those described herein.

FIG. 12C is a flow diagram showing a method 1290 for implementing a built-in self-test (BIST) for permanent fault detection in a PVA, in accordance with some embodiments of the present disclosure.
At block B1202, the method 1290 includes receiving a plurality of channels of data from a DMA system, one channel at a time, based on sequencing by a processing controller. For example, the MISR hardware 1200 may receive one data channel (or one address data channel) at a time, according to an order determined by the processing controller.

At block B1204, the method 1290 includes calculating a plurality of MISR values by, for each channel, performing a CRC calculation using the polynomial of the CRC calculation and the data corresponding to the channel. For example, for each channel, the CRC calculation 1202 (or 1204 for addresses) may use the data (or address) from the channel and the polynomial of the CRC calculation to calculate the MISR data value 1214 (or the MISR address value 1218 for the address), starting from the seed value in the MISR data set register 1210 (or the MISR address set register 1216).

At block B1206, the method 1290 includes calculating a final MISR value using the plurality of MISR values. For example, the MISR data values 1214 from each channel (or the MISR address values from each channel) may be combined to generate the final MISR value.

At block B1208, the method 1290 includes comparing the final MISR value to a signature value. For example, the final MISR value generated from the individual MISR data values 1214 (or address values 1218) may be compared to the signature or golden value in the MISR data reference register 1222 (or the MISR address reference register 1224 for addresses).

At block B1210, the method 1290 includes outputting a MISR status based at least in part on the comparison. For example, based on the comparison at block B1208, a status can be determined (e.g., failed data, failed address, failed both, done, etc.), and this status can be used to notify the SoC's safety processor where an error status is generated.

Example Ego Vehicle

FIG. 13A is an illustration of an example ego vehicle 1300, in accordance with some embodiments of the present disclosure.
The ego vehicle 1300 (alternatively referred to herein as the "vehicle 1300") may include, without limitation, a passenger vehicle, such as a car, a truck, a bus, a first responder vehicle, a shuttle, an electric or motorized bicycle, a motorcycle, a fire truck, a police vehicle, an ambulance, a boat, a construction vehicle, an underwater craft, a drone, a vehicle coupled to a trailer, and/or another type of vehicle (e.g., an unmanned vehicle and/or a vehicle that accommodates one or more passengers). Autonomous vehicles are generally described in terms of automation levels, defined by the National Highway Traffic Safety Administration (NHTSA), a division of the U.S. Department of Transportation, and the Society of Automotive Engineers (SAE) "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" (Standard No. J3016-201806, published on June 15, 2018, Standard No. J3016-201609, published on September 30, 2016, and previous and future versions of this standard). The vehicle 1300 may be capable of functionality in accordance with one or more of level 3 to level 5 of the autonomous driving levels. For example, depending on the embodiment, the vehicle 1300 may be capable of conditional automation (level 3), high automation (level 4), and/or full automation (level 5).

The vehicle 1300 may include components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. The vehicle 1300 may include a propulsion system 1350, such as an internal combustion engine, a hybrid electric power plant, an all-electric engine, and/or another propulsion system type. The propulsion system 1350 may be connected to a drive train of the vehicle 1300, which may include a transmission, to enable the propulsion of the vehicle 1300.
The propulsion system 1350 may be controlled in response to receiving signals from the throttle/accelerator 1352.

A steering system 1354, which may include a steering wheel, may be used to steer the vehicle 1300 (e.g., along a desired path or route) while the propulsion system 1350 is operating (e.g., while the vehicle is in motion). The steering system 1354 may receive signals from a steering actuator 1356. The steering wheel may be optional for full automation (level 5) functionality.

The brake sensor system 1346 may be used to operate the vehicle brakes in response to receiving signals from brake actuators 1348 and/or brake sensors.

One or more controllers 1336, which may include one or more systems on chips (SoCs) 1304 (FIG. 13C) and/or one or more GPUs, may provide signals (e.g., representative of commands) to one or more components and/or systems of the vehicle 1300. For example, the controllers may send signals to operate the vehicle brakes via one or more brake actuators 1348, to operate the steering system 1354 via one or more steering actuators 1356, and to operate the propulsion system 1350 via the throttle/accelerator 1352. The one or more controllers 1336 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals and output operation commands (e.g., signals representing commands) to enable autonomous driving of the vehicle 1300 and/or to assist a human driver in driving the vehicle 1300. The one or more controllers 1336 may include a first controller 1336 for autonomous driving functions, a second controller 1336 for functional safety functions, a third controller 1336 for artificial intelligence functionality (e.g., computer vision), a fourth controller 1336 for infotainment functionality, a fifth controller 1336 for redundancy in emergency conditions, and/or other controllers.
In some examples, a single controller 1336 may handle two or more of the above functionalities, two or more controllers 1336 may handle a single functionality, and/or any combination thereof.

The one or more controllers 1336 may provide the signals for controlling one or more components and/or systems of the vehicle 1300 in response to sensor data (e.g., sensor inputs) received from one or more sensors. The sensor data may be received from, for example and without limitation, GNSS sensors 1358 (e.g., Global Positioning System sensors), RADAR sensors 1360, ultrasonic sensors 1362, LIDAR sensors 1364, inertial measurement unit (IMU) sensors 1366 (e.g., accelerometers, gyroscopes, magnetic compasses, magnetometers, etc.), microphones 1396, stereo cameras 1368, wide-view cameras 1370 (e.g., fisheye cameras), infrared cameras 1372, surround cameras 1374 (e.g., 360-degree cameras), long-range and/or mid-range cameras 1398, speed sensors 1344 (e.g., for measuring the speed of the vehicle 1300), vibration sensors 1342, steering sensors 1340, brake sensors (e.g., as part of the brake sensor system 1346), and/or other sensor types.

One or more of the controllers 1336 may receive inputs (e.g., represented by input data) from an instrument cluster 1332 of the vehicle 1300, and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (HMI) display 1334, an audible annunciator, a loudspeaker, and/or via other components of the vehicle 1300. The outputs may include information such as vehicle velocity, speed, time, map data (e.g., the HD map 1322 of FIG. 13C), and information about objects and the status of objects as perceived by the controllers 1336. For example, the HMI display 1334 may display information about the presence of one or more objects (e.g., a street sign, a caution sign, a traffic light changing, etc.), and/or
information about driving maneuvers (e.g., changing lanes now, taking exit 34B in two miles, etc.).

The vehicle 1300 further includes a network interface 1324, which may use one or more wireless antennas 1326 and/or modems to communicate over one or more networks. For example, the network interface 1324 may be capable of communication over LTE, WCDMA, UMTS, GSM, CDMA2000, etc. The one or more wireless antennas 1326 may also enable communication between objects in the environment (e.g., vehicles, mobile devices, etc.) using one or more local area networks, such as Bluetooth, Bluetooth LE, Z-Wave, ZigBee, etc., and/or one or more low-power wide-area networks (LPWANs), such as LoRaWAN, SigFox, etc.

FIG. 13B is an example of camera locations and fields of view for the example ego vehicle 1300 of FIG. 13A, in accordance with some embodiments of the present disclosure. The cameras and respective fields of view are one example embodiment and are not intended to be limiting. For example, additional and/or alternative cameras may be included and/or the cameras may be located at different locations on the vehicle 1300.

The camera types for the cameras may include, but are not limited to, digital cameras that may be adapted for use with the components and/or systems of the vehicle 1300. The cameras may operate at automotive safety integrity level (ASIL) B and/or at another ASIL. The camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on the embodiment. The cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. In some examples, the color filter array may include a red clear clear clear (RCCC) color filter array, a red clear clear blue (RCCB) color filter array, a red blue green clear (RBGC) color filter array, a Foveon X3 color filter array, a Bayer sensor (RGGB) color filter array, a monochrome sensor color filter array, and/or another type of color filter array.
In some embodiments, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity.

In some examples, one or more of the cameras may be used to perform advanced driver assistance system (ADAS) functions (e.g., as part of a redundant or fail-safe design). For example, a multi-function mono camera may be installed to provide functions including lane departure warning, traffic sign assist, and intelligent headlamp control. One or more of the cameras (e.g., all of the cameras) may record and provide image data (e.g., video) simultaneously.

One or more of the cameras may be mounted in a mounting assembly, such as a custom-designed (3-D printed) assembly, in order to cut out stray light and reflections from within the car (e.g., reflections from the dashboard reflected in the windshield mirrors) that may interfere with the camera's image data capture abilities. With reference to wing-mirror mounting assemblies, the wing-mirror assemblies may be custom 3-D printed so that the camera mounting plate matches the shape of the wing mirror. In some examples, one or more cameras may be integrated into the wing mirrors. For side-view cameras, the cameras may also be integrated within the four pillars at each corner of the cabin.

Cameras with a field of view that includes portions of the environment in front of the vehicle 1300 (e.g., front-facing cameras) may be used for surround view, to help identify forward-facing paths and obstacles, as well as to aid, with the help of one or more controllers 1336 and/or control SoCs, in providing information critical to generating an occupancy grid and/or determining preferred vehicle paths. Front-facing cameras may be used to perform many of the same ADAS functions as LIDAR, including emergency braking, pedestrian detection, and collision avoidance.
Front cameras may also be used for ADAS functions and systems, including lane departure warning ("LDW"), autonomous cruise control ("ACC"), and/or other functions such as traffic sign recognition. A wide variety of cameras can be used in front configurations including, for example, monocular camera platforms including CMOS (complementary metal oxide semiconductor) color imagers. Another example may be a wide-angle camera 1370, which may be used to sense objects entering the field of view from the periphery (eg, pedestrians, intersection traffic, or bicycles). Although only one wide-angle camera is illustrated in FIG. 13B, any number of wide-angle cameras 1370 may be present on vehicle 1300. In addition, long-range cameras 1398 (eg, a long-view stereo camera pair) can be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. Long-range cameras 1398 can also be used for object detection and classification and basic object tracking. One or more stereo cameras 1368 may also be included in the front configuration. Stereo camera 1368 may include an integrated control unit that includes a scalable processing unit that may provide a multi-core microprocessor and programmable logic (FPGA) with an integrated CAN or Ethernet interface on a single chip. Such a unit can be used to generate a 3-D map of the vehicle's environment, including distance estimates for all points in the image. An alternative stereo camera 1368 may include a compact stereo vision sensor that may include two camera lenses (one each on the left and right) and an image processing chip that may measure the distance from the vehicle to the target object and use the resulting information (eg, metadata) to activate the autonomous emergency braking and lane departure warning functions.
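The distance measurement performed by such a stereo vision sensor can be illustrated with the standard pinhole-stereo relation Z = f·B/d (depth equals focal length times baseline over disparity). The focal length and baseline below are hypothetical calibration constants chosen only for the sketch; real values come from per-unit calibration.

```python
# Standard pinhole stereo relation used by sensors like those described
# above: depth Z = focal_length * baseline / disparity. The focal length
# and baseline below are hypothetical calibration values for illustration.
FOCAL_LENGTH_PX = 1400.0  # focal length in pixels
BASELINE_M = 0.30         # distance between the two lenses, meters

def depth_from_disparity(disparity_px: float) -> float:
    """Distance (meters) to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(21.0))  # a 21-pixel disparity -> 20.0 m away
```

Note that depth resolution degrades quadratically with distance, which is why long-range stereo pairs use a wider baseline.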
Other types of stereo cameras 1368 may be used in addition to or alternatively to those described herein. A camera (eg, a side-view camera) with a field of view that includes portions of the environment to the sides of the vehicle 1300 may be used for surround view, providing information used to create and update occupancy grids and to generate side-impact collision warnings. For example, surround cameras 1374 (eg, four surround cameras 1374 as shown in FIG. 13B) may be placed on vehicle 1300. Surround cameras 1374 may include wide-angle cameras 1370, fisheye cameras, 360-degree cameras, and/or the like. For example, four fisheye cameras can be placed on the front, rear, and sides of the vehicle. In an alternative arrangement, the vehicle may use three surround cameras 1374 (eg, left, right, and rear), and may utilize one or more other cameras (eg, a forward-facing camera) as a fourth surround-view camera. A camera with a field of view that includes portions of the environment behind the vehicle 1300 (eg, a rear view camera) may be used for parking assistance, surround view, rear collision warnings, and creating and updating occupancy grids. A wide variety of cameras may be used, including but not limited to cameras that are also suitable as front cameras as described herein (eg, long-range and/or mid-range cameras 1398, stereo cameras 1368, infrared cameras 1372, etc.). Figure 13C is a block diagram of an example system architecture for the example autonomous vehicle 1300 of Figure 13A, according to some embodiments of the present disclosure. It should be understood that this arrangement and others described herein are set forth by way of example only. Other arrangements and elements (eg, machines, interfaces, functions, sequences, groupings of functions, etc.) may be used in addition to or in place of those shown, and some elements may be omitted entirely.
Further, many of the elements described herein are functional entities, which may be implemented as discrete or distributed components or in combination with other components, and in any suitable combination and location. Various functions described herein as being performed by entities may be implemented by hardware, firmware, and/or software. For example, various functions may be implemented by a processor executing instructions stored in a memory. Each of the components, features, and systems of vehicle 1300 in FIG. 13C are illustrated as being connected via bus 1302. Bus 1302 may include a controller area network (CAN) data interface (alternatively referred to herein as a "CAN bus"). CAN may be a network within the vehicle 1300 used to aid in the control of various features and functions of the vehicle 1300, such as actuation of brakes, acceleration, steering, windshield wipers, and the like. A CAN bus can be configured with tens or even hundreds of nodes, each with its own unique identifier (eg, CAN ID). The CAN bus can be read to find steering wheel angle, ground speed, engine revolutions per minute (RPM), button positions, and/or other vehicle status indicators. The CAN bus can be ASIL B compliant. Although bus 1302 is described herein as a CAN bus, this is not intended to be limiting. For example, FlexRay and/or Ethernet may be used in addition to or alternatively to the CAN bus. Furthermore, although bus 1302 is represented by a single line, this is not intended to be limiting. For example, there may be any number of buses 1302, which may include one or more CAN buses, one or more FlexRay buses, one or more Ethernet buses, and/or one or more other types of buses using different protocols. In some examples, two or more buses 1302 can be used to perform different functions and/or can be used for redundancy. For example, the first bus 1302 can be used for collision avoidance functions, and the second bus 1302 can be used for drive control.
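Reading vehicle status indicators off the CAN bus, as described above, amounts to matching frames by CAN ID and decoding their payload bytes. The sketch below is a minimal illustration: the CAN IDs, byte layouts, and scale factors are invented for the example (a real vehicle defines them in its DBC signal database), and raw frames would normally arrive from a CAN interface library rather than be constructed by hand.

```python
import struct

# Hypothetical CAN IDs and payload layouts -- real vehicles define these
# in a DBC signal database; the values below are purely illustrative.
STEERING_ID = 0x025   # signed 16-bit, 0.1 degree per bit
SPEED_RPM_ID = 0x0C9  # unsigned 16-bit km/h*100, unsigned 16-bit RPM

def decode_frame(can_id: int, data: bytes) -> dict:
    """Decode one raw CAN frame into named vehicle-status signals."""
    if can_id == STEERING_ID:
        (raw_angle,) = struct.unpack_from(">h", data, 0)
        return {"steering_deg": raw_angle * 0.1}
    if can_id == SPEED_RPM_ID:
        raw_speed, raw_rpm = struct.unpack_from(">HH", data, 0)
        return {"speed_kph": raw_speed / 100.0, "engine_rpm": raw_rpm}
    return {}  # frame from a node we do not decode

frame = struct.pack(">HH", 4523, 2100)  # 45.23 km/h, 2100 RPM
print(decode_frame(SPEED_RPM_ID, frame))
```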
In any example, each bus 1302 can communicate with any component of the vehicle 1300, and two or more buses 1302 can communicate with the same component. In some examples, each SoC 1304, each controller 1336, and/or each computer within the vehicle may have access to the same input data (eg, input from sensors of the vehicle 1300) and may be connected to a common bus, such as the CAN bus. Vehicle 1300 may include one or more controllers 1336, such as those described herein with respect to FIG. 13A. Controller 1336 can be used for a variety of functions. Controller 1336 may be coupled to any of the other various components and systems of vehicle 1300 and may be used for control of vehicle 1300, artificial intelligence for vehicle 1300, infotainment for vehicle 1300, and/or the like. Vehicle 1300 may include one or more system-on-chip (SoC) 1304. SoC 1304 may include CPU 1306, GPU 1308, processor 1310, cache 1312, accelerator 1314, data storage 1316, and/or other components and features not shown. The SoC 1304 can be used to control the vehicle 1300 in a variety of platforms and systems. For example, one or more SoCs 1304 can be combined in a system (such as that of vehicle 1300) with an HD map 1322, which can obtain map refreshes and/or updates via network interface 1324 from one or more servers (such as one or more servers 1378 of FIG. 13D). CPU 1306 may include a CPU cluster or CPU complex (alternatively referred to herein as "CCPLEX"). CPU 1306 may include multiple cores and/or L2 cache. For example, in some embodiments, CPU 1306 may include eight cores in a coherent multiprocessor configuration. In some embodiments, CPU 1306 may include four dual-core clusters, each with a dedicated L2 cache (eg, 2MB L2 cache).
CPU 1306 (eg, CCPLEX) may be configured to support simultaneous cluster operation such that any combination of clusters of CPU 1306 can be active at any given time. The CPU 1306 can implement power management capabilities that include one or more of the following features: each hardware block can be clock-gated automatically when idle to save dynamic power; each core clock can be gated when the core is not actively executing instructions; each core can be independently power-gated; each core cluster can be independently clock-gated when all of its cores are clock-gated or power-gated; and/or each core cluster can be independently power-gated when all of its cores are power-gated. The CPU 1306 may further implement an enhanced algorithm for managing power states, where allowed power states and expected wake-up times are specified, and the hardware/microcode determines the best power state to enter for the core, cluster, and CCPLEX. The processing cores can support simplified power state entry sequences in software, with the work offloaded to microcode. GPU 1308 may include an integrated GPU (alternatively referred to herein as an "iGPU"). GPU 1308 can be programmable and efficient for parallel workloads. In some examples, GPU 1308 may use an enhanced tensor instruction set. GPU 1308 may include one or more streaming microprocessors, where each streaming microprocessor may include an L1 cache (eg, an L1 cache with at least 96KB of storage capacity), and two or more of these streaming microprocessors may share an L2 cache (eg, an L2 cache with 512KB of storage capacity). In some embodiments, GPU 1308 may include at least eight streaming microprocessors. GPU 1308 may use a computing application programming interface (API).
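The power-state selection described above for the CPU complex, where allowed states and an expected wake-up time are specified and the deepest viable state is chosen, can be illustrated with a toy policy. The state names, power figures, and exit latencies here are invented for the sketch; the actual microcode policy is not described in this text.

```python
# Hypothetical power states: (name, relative power draw, exit latency in us).
# The real CCPLEX microcode policy is not public; this only illustrates the
# idea of picking the deepest allowed state that still meets the wake-up time.
POWER_STATES = [
    ("active",      1.00,    0),
    ("clock_gated", 0.60,   10),
    ("power_gated", 0.05, 1000),
]

def best_power_state(allowed: set, wakeup_deadline_us: int) -> str:
    """Pick the lowest-power allowed state whose exit latency meets the deadline."""
    candidates = [
        (power, name)
        for name, power, exit_us in POWER_STATES
        if name in allowed and exit_us <= wakeup_deadline_us
    ]
    return min(candidates)[1]  # smallest power draw wins

print(best_power_state({"active", "clock_gated", "power_gated"}, 100))
# A 100 us deadline rules out power gating, so clock gating is chosen.
```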
Additionally, GPU 1308 may utilize one or more parallel computing platforms and/or programming models (eg, NVIDIA's CUDA). In the case of automotive and embedded use, the GPU 1308 can be power-optimized for best performance. For example, GPU 1308 may be fabricated with fin field-effect transistors (FinFETs). However, this is not intended to be limiting, and GPU 1308 may be fabricated using other semiconductor fabrication processes. Each streaming microprocessor may incorporate a number of mixed-precision processing cores divided into multiple blocks. For example and without limitation, 64 FP32 cores and 32 FP64 cores may be divided into four processing blocks. In such an example, each processing block can be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA tensor cores for deep learning matrix arithmetic, an L0 instruction cache, a warp scheduler, a dispatch unit, and/or a 64KB register file. In addition, streaming microprocessors can include independent parallel integer and floating-point data paths to provide efficient execution of workloads with a mix of computation and addressing calculations. Streaming microprocessors may include independent thread scheduling capability to allow finer-grained synchronization and cooperation between parallel threads. Streaming microprocessors may include a combined L1 data cache and shared memory unit to increase performance while simplifying programming. GPU 1308 may include high bandwidth memory (HBM) and/or a 16GB HBM2 memory subsystem providing, in some examples, a peak memory bandwidth of approximately 900GB/s.
In some examples, synchronous graphics random-access memory (SGRAM), such as graphics double data rate type five synchronous random-access memory (GDDR5), may be used in addition to or instead of HBM memory. GPU 1308 may include unified memory technology that includes access counters to allow more precise migration of memory pages to the processors that access them most frequently, thereby improving efficiency for memory ranges shared between processors. In some examples, address translation service (ATS) support may be used to allow GPU 1308 to directly access CPU 1306 page tables. In such an example, when the GPU 1308 memory management unit (MMU) experiences a miss, an address translation request may be transmitted to the CPU 1306. In response, CPU 1306 may look up the virtual-to-physical mapping for the address in its page tables and transmit the translation back to GPU 1308. In this way, unified memory technology may allow a single unified virtual address space for the memory of both CPU 1306 and GPU 1308, thereby simplifying GPU 1308 programming and the porting of applications to GPU 1308. Additionally, GPU 1308 may include access counters that may track how often GPU 1308 accesses the memory of other processors. Access counters can help ensure that memory pages are moved to the physical memory of the processor that accesses those pages most frequently. SoC 1304 may include any number of caches 1312, including those caches described herein. For example, cache 1312 may include an L3 cache available to both CPU 1306 and GPU 1308 (eg, connected to both CPU 1306 and GPU 1308). Cache 1312 may include a write-back cache, which may track the state of a line, for example, by using a cache coherence protocol (eg, MEI, MESI, MSI, etc.).
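The access-counter-driven page migration described above can be sketched as a small model: each page counts accesses per processor and migrates once a remote processor clearly dominates. The threshold and page abstraction are invented for the illustration; real unified-memory migration policies live in the driver and hardware.

```python
from collections import Counter

# Toy model of access-counter-driven page migration. The threshold and
# page granularity are illustrative; real unified-memory migration
# policies are implemented in the driver and hardware.
MIGRATE_THRESHOLD = 3

class Page:
    def __init__(self, home: str):
        self.home = home         # which processor's physical memory holds it
        self.counts = Counter()  # accesses per processor

    def access(self, processor: str) -> None:
        self.counts[processor] += 1
        hottest, n = self.counts.most_common(1)[0]
        # Migrate once a remote processor clearly dominates the accesses.
        if hottest != self.home and n >= MIGRATE_THRESHOLD:
            self.home = hottest
            self.counts.clear()

page = Page(home="cpu")
for _ in range(3):
    page.access("gpu")  # repeated GPU touches trip the counter
print(page.home)        # page has migrated to GPU memory -> "gpu"
```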
Depending on the embodiment, the L3 cache may include 4MB or more, although smaller cache sizes may also be used. SoC 1304 may include one or more arithmetic logic units (ALUs) that may be utilized in performing any of a variety of tasks or operations with respect to vehicle 1300, such as processing DNNs. Additionally, SoC 1304 may include a floating point unit (FPU) (or other math coprocessor or numeric coprocessor type) for performing mathematical operations within the system. For example, SoC 1304 may include one or more FPUs integrated as execution units within CPU 1306 and/or GPU 1308. SoC 1304 may include one or more accelerators 1314 (eg, hardware accelerators, software accelerators, or a combination thereof). For example, SoC 1304 may include a hardware accelerator cluster, which may include optimized hardware accelerators and/or large on-chip memory. This large on-chip memory (eg, 4MB SRAM) can enable the hardware accelerator cluster to accelerate neural networks and other computations. The hardware accelerator cluster can be used to complement GPU 1308 and to offload some tasks from GPU 1308 (eg, freeing up more cycles of GPU 1308 for performing other tasks). As one example, the accelerator 1314 may be used for targeted workloads (eg, perception, convolutional neural networks (CNNs), etc.) that are stable enough to be amenable to acceleration. As used herein, the term "CNN" may include all types of CNNs, including region-based or regional convolutional neural networks (RCNNs) and fast RCNNs (eg, as used for object detection). Accelerators 1314 (eg, hardware accelerator clusters) may include deep learning accelerators (DLAs). The DLA may include one or more tensor processing units (TPUs) that may be configured to provide an additional 10 trillion operations per second for deep learning applications and inference. The TPU may be an accelerator configured to perform image processing functions (eg, for CNNs, RCNNs, etc.)
and optimized for performing such functions. The DLA can be further optimized for a specific set of neural network types and floating-point operations, as well as inference. The design of the DLA can provide more performance per millimeter than a general-purpose GPU, and vastly exceeds the performance of a CPU. The TPU can perform several functions, including a single-instance convolution function, supporting, eg, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. The DLA can quickly and efficiently execute neural networks, especially CNNs, on processed or raw data for any of a wide variety of functions, such as and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification using data from microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events. The DLA can perform any function of the GPU 1308, and by using an inference accelerator, for example, a designer can target either the DLA or the GPU 1308 for any function. For example, the designer may focus processing of CNNs and floating point operations on the DLA and leave other functions to the GPU 1308 and/or other accelerators 1314. Accelerators 1314 (eg, hardware accelerator clusters) may include programmable vision accelerators (PVAs), which may alternatively be referred to herein as computer vision accelerators. PVAs can be designed and configured to accelerate computer vision algorithms for advanced driver assistance systems (ADAS), autonomous driving, and/or augmented reality (AR) and/or virtual reality (VR) applications. PVA can provide a balance between performance and flexibility.
For example, each PVA may include, for example and without limitation, any number of Reduced Instruction Set Computer (RISC) cores, Direct Memory Access (DMA), and/or any number of vector processors.The RISC core may interface with an image sensor (such as that of any camera described herein), an image signal processor, and/or the like. Each of these RISC cores may include any amount of memory. Depending on the embodiment, the RISC core may use any of several protocols. In some examples, the RISC core can execute a real-time operating system (RTOS). A RISC core may be implemented using one or more integrated circuit devices, application specific integrated circuits (ASICs), and/or memory devices. For example, a RISC core may include an instruction cache and/or tightly coupled RAM.DMA may enable components of the PVA to access system memory independently of the CPU 1306 . The DMA may support any number of features to provide optimizations to the PVA, including but not limited to support for multi-dimensional addressing and/or circular addressing. In some examples, the DMA can support up to six or more dimensions of addressing, which can include block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping.A vector processor can be a programmable processor that can be designed to efficiently and flexibly execute programming for computer vision algorithms and provide signal processing capabilities. In some examples, a PVA may include a PVA core and two vector processing subsystem partitions. A PVA core may include a processor subsystem, one or more DMA engines (eg, two DMA engines), and/or other peripherals. The vector processing subsystem may operate as the main processing engine of the PVA, and may include a vector processing unit (VPU), an instruction cache, and/or a vector memory (eg, VMEM). 
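The multi-dimensional DMA addressing described above (block width, block height, block depth, and horizontal, vertical, and depth steps) can be illustrated by generating the addresses of a 3-D block transfer. The parameter names follow the description; the actual PVA DMA descriptor format is not given in this text.

```python
# Illustrative multi-dimensional DMA address generation. The parameter names
# (block width/height/depth and the three step sizes) follow the description
# above; actual PVA DMA descriptor formats are not public.
def dma_addresses(base, width, height, depth, h_step, v_step, d_step):
    """Yield the byte address of every element in a 3-D block transfer."""
    for z in range(depth):
        for y in range(height):
            for x in range(width):
                yield base + x * h_step + y * v_step + z * d_step

# A 2x2x2 block of 4-byte elements in a 16-byte-wide, 64-byte-per-plane image:
addrs = list(dma_addresses(base=0, width=2, height=2, depth=2,
                           h_step=4, v_step=16, d_step=64))
print(addrs)  # [0, 4, 16, 20, 64, 68, 80, 84]
```

Circular addressing would simply wrap each coordinate modulo its buffer extent; the linear form above is the common case.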
The VPU core may include a digital signal processor such as, for example, a single instruction multiple data (SIMD), very long instruction word (VLIW) digital signal processor. The combination of SIMD and VLIW can enhance throughput and speed. Each of the vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in some examples, each of the vector processors may be configured to execute independently of the other vector processors. In other examples, the vector processors included in a particular PVA may be configured to employ data parallelism. For example, in some embodiments, multiple vector processors included in a single PVA may execute the same computer vision algorithm, but on different regions of an image. In other examples, the vector processors included in a particular PVA may simultaneously execute different computer vision algorithms on the same image, or even execute different algorithms on sequential images or portions of an image. In any example, any number of PVAs may be included in a hardware accelerator cluster, and any number of vector processors may be included in each of these PVAs. In addition, the PVA can include additional error-correcting code (ECC) memory to enhance overall system safety. The accelerator 1314 (eg, a hardware accelerator cluster) may include an on-chip computer vision network and SRAM to provide high-bandwidth, low-latency SRAM for the accelerator 1314. In some examples, the on-chip memory can include at least 4MB of SRAM consisting of, for example and without limitation, eight field-configurable memory blocks, which can be accessed by both the PVA and the DLA. Each pair of memory blocks may include an advanced peripheral bus (APB) interface, configuration circuitry, a controller, and a multiplexer. Any type of memory can be used. The PVA and DLA can access the memory via a backbone that provides the PVA and DLA with high-speed access to memory.
The backbone may include an on-chip computer vision network interconnecting the PVA and DLA to memory (eg, using APBs). The on-chip computer vision network may include an interface that determines that both the PVA and DLA provide ready and valid signals before transmitting any control signals/addresses/data. Such an interface can provide separate phases and separate channels for transmission of control signals/addresses/data, as well as burst communication for continuous data transmission. This type of interface may conform to the ISO 26262 or IEC 61508 standards, although other standards and protocols may also be used. In some examples, SoC 1304 may include a real-time ray tracing hardware accelerator such as described in US Patent Application No. 16/101,232, filed August 10, 2018. This real-time ray tracing hardware accelerator can be used to quickly and efficiently determine the position and extent of objects (eg, within a world model) in order to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for SONAR system simulations, for general wave propagation simulations, for comparison with LIDAR data for the purposes of localization and/or other functions, and/or for other uses. In some embodiments, one or more tree traversal units (TTUs) may be used to perform one or more ray tracing related operations. Accelerators 1314 (eg, clusters of hardware accelerators) have a wide array of uses for autonomous driving. The PVA can be a programmable vision accelerator that can be used for key processing stages in ADAS and autonomous vehicles. The capabilities of the PVA are a good match for algorithmic domains that require predictable processing, low power, and low latency. In other words, the PVA performs well on semi-dense or dense regular computation, even on small data sets, which require predictable run times with low latency and low power.
Thus, in the context of platforms for autonomous vehicles, the PVA is designed to run classic computer vision algorithms, as it is efficient at object detection and operating on integer math. For example, according to one embodiment of the technology, the PVA is used to perform computer stereo vision. In some examples, a semi-global matching-based algorithm may be used, although this is not intended to be limiting. Many applications for level 3-5 autonomous driving require on-the-fly motion estimation/stereo matching (eg, structure from motion, pedestrian recognition, lane detection, etc.). The PVA can perform computer stereo vision functions on inputs from two monocular cameras. In some examples, the PVA can be used to perform dense optical flow. For example, raw RADAR data can be processed (eg, using a 4D Fast Fourier Transform) to provide processed RADAR data. In other examples, the PVA is used for time-of-flight depth processing, such as by processing raw time-of-flight data to provide processed time-of-flight data. The DLA can be used to run any type of network to enhance control and driving safety, including, for example, a neural network that outputs a measure of confidence for each object detection. Such a confidence value may be interpreted as a probability, or as providing a relative "weight" of each detection compared to other detections. This confidence value enables the system to make further decisions regarding which detections should be considered true positive detections rather than false positive detections. For example, the system can set a threshold value for the confidence and consider only the detections exceeding the threshold as true positive detections. In an automatic emergency braking (AEB) system, a false positive detection would cause the vehicle to automatically perform emergency braking, which is clearly undesirable. Therefore, only the most confident detections should be considered as triggers for AEB. The DLA can run a neural network for regressing the confidence value.
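The confidence gating described above, where only detections exceeding a threshold are treated as true positives and allowed to trigger AEB, reduces to a simple filter. The detections, field names, and the 0.9 threshold below are illustrative; a real AEB pipeline would tune the threshold against its false-positive requirements.

```python
# Minimal sketch of confidence thresholding for AEB triggers. The detection
# records and the threshold value are invented for illustration.
def aeb_triggers(detections, threshold=0.9):
    """Keep only detections confident enough to be treated as true positives."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"object": "pedestrian", "confidence": 0.97},
    {"object": "shadow",     "confidence": 0.41},  # likely a false positive
    {"object": "vehicle",    "confidence": 0.93},
]
for d in aeb_triggers(detections):
    print(d["object"])  # pedestrian, then vehicle
```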
The neural network may take at least some subset of parameters as its input, such as bounding box dimensions, a ground plane estimate obtained (eg, from another subsystem), inertial measurement unit (IMU) sensor 1366 output that correlates with the vehicle 1300 orientation, distance, 3D position estimates of the object obtained from the neural network and/or from other sensors (eg, LIDAR sensor 1364 or RADAR sensor 1360), and the like. SoC 1304 may include one or more data stores 1316 (eg, memories). Data storage 1316 may be on-chip memory of SoC 1304, which may store neural networks to be executed on the GPU and/or DLA. In some examples, data store 1316 may be large enough to store multiple instances of a neural network for redundancy and safety. Data storage 1316 may include L2 or L3 cache 1312. References to data storage 1316 may include references to memory associated with PVA, DLA, and/or other accelerators 1314 as described herein. SoC 1304 may include one or more processors 1310 (eg, embedded processors). Processor 1310 may include a boot and power management processor, which may be a dedicated processor and subsystem for handling boot power and management functions and related security enforcement. The boot and power management processor can be part of the SoC 1304 boot sequence and can provide run-time power management services. The boot power and management processor may provide clock and voltage programming, assistance in system low power state transitions, SoC 1304 thermal and temperature sensor management, and/or SoC 1304 power state management. Each temperature sensor may be implemented as a ring oscillator whose output frequency is proportional to temperature, and SoC 1304 may use the ring oscillators to detect the temperature of CPU 1306, GPU 1308, and/or accelerator 1314.
If it is determined that a temperature exceeds a threshold, the boot and power management processor may enter a temperature fault routine and place the SoC 1304 in a lower power state and/or place the vehicle 1300 in a driver-safe-stop mode (eg, bring the vehicle 1300 to a safe stop). Processor 1310 may also include a set of embedded processors that may act as an audio processing engine. The audio processing engine may be an audio subsystem that allows full hardware support for multi-channel audio over multiple interfaces and a wide and flexible range of audio I/O interfaces. In some examples, the audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM. Processor 1310 may also include an always-on processor engine that may provide the necessary hardware features to support low power sensor management and wake use cases. This always-on processor engine may include a processor core, tightly coupled RAM, supporting peripherals (eg, timers and interrupt controllers), various I/O controller peripherals, and routing logic. Processor 1310 may also include a safety cluster engine, which includes a dedicated processor subsystem that handles safety management for automotive applications. The safety cluster engine may include two or more processor cores, tightly coupled RAM, supporting peripherals (eg, timers, interrupt controllers, etc.), and/or routing logic.
In a safety mode, the two or more cores may operate in lockstep mode and function as a single core with comparison logic to detect any differences between their operations. Processor 1310 may also include a real-time camera engine, which may include a dedicated processor subsystem for handling real-time camera management. Processor 1310 may also include a high dynamic range signal processor, which may include an image signal processor, which is a hardware engine that is part of the camera processing pipeline. Processor 1310 may include a video image compositor, which may be a processing block (implemented, for example, on a microprocessor) that implements the video post-processing functions required by the video playback application to produce the final image for the player window. The video image compositor may perform lens distortion correction on the wide-angle camera 1370, surround camera 1374, and/or on the in-cabin surveillance camera sensor. The in-cabin surveillance camera sensor is preferably monitored by a neural network running on another instance of the advanced SoC, configured to recognize in-cabin events and respond accordingly. The in-cabin system can perform lip reading to activate cellular service and place a call, dictate emails, change the vehicle's destination, activate or change the vehicle's infotainment system and settings, or provide voice-activated web surfing. Certain functions are available to the driver only when the vehicle is operating in an autonomous mode, and are otherwise disabled. The video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction. For example, where motion occurs in a video, the noise reduction weights spatial information appropriately, decreasing the weight of the information provided by adjacent frames.
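A motion-adaptive temporal blend of the kind described above might look like the sketch below, where the previous frame is trusted less wherever motion is detected. The motion metric and blend weights are invented for the illustration; a production compositor uses far more elaborate logic.

```python
# Illustrative motion-adaptive temporal noise filter. Frames are modeled as
# flat lists of pixel intensities; the threshold and blend weights are
# assumptions for the sketch.
def temporal_denoise(prev_frame, cur_frame, motion_threshold=20):
    out = []
    for prev_px, cur_px in zip(prev_frame, cur_frame):
        if abs(cur_px - prev_px) > motion_threshold:
            alpha = 1.0   # motion: rely on the current (spatial) pixel only
        else:
            alpha = 0.5   # static: average with the previous frame
        out.append(alpha * cur_px + (1 - alpha) * prev_px)
    return out

prev = [100, 100, 100, 200]
cur  = [104,  96, 100,  40]   # last pixel moved; the others are just noise
print(temporal_denoise(prev, cur))  # [102.0, 98.0, 100.0, 40.0]
```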
Temporal noise reduction performed by the video image compositor may use information from previous images to reduce noise in the current image where the image or a portion of the image does not include motion. The video image compositor may also be configured to perform stereo rectification on input stereo lens frames. The video image compositor can further be used for user interface composition when the operating system desktop is in use and the GPU 1308 is not required to continuously render new surfaces. Even when the GPU 1308 is powered on and actively performing 3D rendering, the video image compositor can be used to offload the GPU 1308 to improve performance and responsiveness. The SoC 1304 may also include a Mobile Industry Processor Interface (MIPI) camera serial interface for receiving video and input from a camera, a high-speed interface, and/or a video input block that may be used for camera and related pixel input functions. SoC 1304 may also include an input/output controller that may be controlled by software and may be used to receive I/O signals uncommitted to a specific role. SoC 1304 may also include a wide range of peripheral interfaces to enable communication with peripherals, audio codecs, power management, and/or other devices. SoC 1304 can be used to process data from cameras (eg, connected via Gigabit multimedia serial link and Ethernet), data from sensors (eg, LIDAR sensor 1364, RADAR sensor 1360, etc., which may be connected via Ethernet), data from bus 1302 (eg, speed of vehicle 1300, steering wheel position, etc.), and data from GNSS sensor 1358 (eg, connected via Ethernet or CAN bus).
The SoC 1304 may also include dedicated high-performance mass storage controllers, which may include their own DMA engines and which may be used to free the CPU 1306 from routine data management tasks. SoC 1304 can be an end-to-end platform with a flexible architecture that spans automation levels 3-5, providing a comprehensive functional safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, and that provides a platform for a flexible, reliable driving software stack, along with deep learning tools. SoC 1304 can be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, when combined with CPU 1306, GPU 1308, and data storage 1316, accelerator 1314 can provide a fast and efficient platform for level 3-5 autonomous vehicles. The technology thus provides capabilities and functionality not achievable by conventional systems. For example, computer vision algorithms can be executed on CPUs, which can be configured using a high-level programming language, such as the C programming language, to execute a wide variety of processing algorithms across a wide variety of visual data. However, CPUs often cannot meet the performance requirements of many computer vision applications, such as those related to execution time and power consumption, for example. In particular, many CPUs cannot execute complex object detection algorithms in real time, which is a requirement for in-vehicle ADAS applications and for practical level 3-5 autonomous vehicles. In contrast to conventional systems, by providing a CPU complex, a GPU complex, and a hardware accelerator cluster, the techniques described herein allow multiple neural networks to be executed simultaneously and/or sequentially, and the results to be combined to enable level 3-5 autonomous driving functionality.
For example, a CNN executing on the DLA or the dGPU (e.g., GPU 1320) may include text and word recognition, allowing the supercomputer to read and understand traffic signs, including signs for which the neural network has not been specifically trained. The DLA may also include a neural network capable of identifying, interpreting, and providing semantic understanding of the sign, and of passing that semantic understanding to the path planning modules running on the CPU complex.

As another example, multiple neural networks may run simultaneously, as required for level 3, 4, or 5 driving. For example, a warning sign consisting of "Caution: flashing lights indicate icing conditions," along with an electric light, may be interpreted by several neural networks independently or collectively. The sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a trained neural network), and the text "flashing lights indicate icing conditions" may be interpreted by a second deployed neural network, which informs the vehicle's path planning software (preferably executing on the CPU complex) that icing conditions exist when flashing lights are detected. The flashing lights may be identified by operating a third deployed neural network over multiple frames, informing the vehicle's path planning software of the presence (or absence) of flashing lights. All three neural networks may run simultaneously, e.g., within the DLA and/or on the GPU 1308.

In some examples, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify the presence of an authorized driver and/or owner of the vehicle 1300. The always-on sensor processing engine may be used to unlock the vehicle when the owner approaches the driver's door and turn on the lights, and, in security mode, to disable the vehicle when the owner leaves the vehicle.
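To make the three-network sign-interpretation scenario concrete, the following is a minimal sketch of how such cooperating classifiers might be combined. The classifier functions here are placeholders operating on simple dictionaries, not real neural networks, and the frame format is an assumption made purely for illustration.

```python
# Illustrative sketch only: three placeholder "networks" cooperating on the
# "flashing lights indicate icing conditions" example from the text.
# Real deployments would use trained models (e.g., on the DLA / GPU 1308).

def detect_sign(frame: dict) -> bool:
    # Placeholder for the first network: is a traffic sign present?
    return frame.get("sign", False)

def read_text(frame: dict) -> str:
    # Placeholder for the second network: recognize the sign's text.
    return frame.get("text", "")

def lights_flashing(frames: list) -> bool:
    # Placeholder for the third network, which operates over multiple
    # frames: the light is "flashing" if it is on in some frames but
    # not all of them.
    on_count = sum(f.get("light_on", False) for f in frames)
    return on_count not in (0, len(frames))

def icing_conditions(frames: list) -> bool:
    """True if the path planner should be told icing conditions exist:
    a sign is present, its text mentions icing, and the light flashes."""
    latest = frames[-1]
    return (detect_sign(latest)
            and "icing" in read_text(latest).lower()
            and lights_flashing(frames))
```

In practice the three models would run concurrently and asynchronously; this sequential sketch only shows how their outputs combine into a single decision for the path planning software.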
In this manner, the SoC 1304 provides security against theft and/or carjacking.

In another example, a CNN for emergency vehicle detection and identification may use data from the microphones 1396 to detect and identify emergency vehicle sirens. In contrast to conventional systems, which use general classifiers to detect sirens and manually extract features, the SoC 1304 uses a CNN to classify environmental and urban sounds, as well as to classify visual data. In a preferred embodiment, the CNN running on the DLA is trained to identify the relative closing speed of the emergency vehicle (e.g., by using the Doppler effect). The CNN may also be trained to identify emergency vehicles specific to the local area in which the vehicle is operating, as identified by the GNSS sensors 1358. Thus, for example, when operating in Europe the CNN will seek to detect European sirens, and when in the United States the CNN will seek to identify only North American sirens. Once an emergency vehicle is detected, a control program may be used, with the aid of the ultrasonic sensors 1362, to execute an emergency vehicle safety routine, slowing the vehicle, pulling over to the side of the road, parking the vehicle, and/or idling the vehicle until the emergency vehicle(s) pass.

The vehicle may include a CPU 1318 (e.g., a discrete CPU, or dCPU) that may be coupled to the SoC 1304 via a high-speed interconnect (e.g., PCIe). The CPU 1318 may include, for example, an X86 processor. The CPU 1318 may be used to perform any of a wide variety of functions, including, for example, arbitrating potentially inconsistent results between ADAS sensors and the SoC 1304, and/or monitoring the status and health of the controllers 1336 and/or the infotainment SoC 1330.

The vehicle 1300 may include a GPU 1320 (e.g., a discrete GPU, or dGPU) that may be coupled to the SoC 1304 via a high-speed interconnect (e.g., NVIDIA's NVLINK).
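The mention of closing speed via the Doppler effect can be illustrated with the standard acoustic Doppler relation. This is a textbook formula, not the CNN described above; the function name and the example siren frequencies are assumptions for illustration.

```python
# Illustrative Doppler calculation (not the trained CNN described in the
# text): estimate the closing speed of an approaching siren from the shift
# between its emitted and observed frequencies.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def closing_speed(observed_hz: float, emitted_hz: float) -> float:
    """Closing speed (m/s) of an approaching source toward a stationary
    observer. From f_obs = f_src * c / (c - v), solve for v:
    v = c * (1 - f_src / f_obs). Positive means the source is approaching."""
    return SPEED_OF_SOUND * (1.0 - emitted_hz / observed_hz)

# A 1000 Hz siren heard at 1030 Hz is closing at roughly 10 m/s.
v = closing_speed(1030.0, 1000.0)
```

A learned model would infer this implicitly from the audio spectrum over time rather than from known emitted frequencies, which is why the document describes training a CNN rather than applying the closed-form relation directly.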
The GPU 1320 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based at least in part on input (e.g., sensor data) from sensors of the vehicle 1300.

The vehicle 1300 may also include a network interface 1324, which may include one or more wireless antennas 1326 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). The network interface 1324 may be used to enable wireless connectivity over the Internet to the cloud (e.g., to the server 1378 and/or other network devices), to other vehicles, and/or to computing devices (e.g., client devices of passengers). To communicate with other vehicles, a direct link may be established between the two vehicles, and/or an indirect link may be established (e.g., across networks and over the Internet). Direct links may be provided using a vehicle-to-vehicle communication link. The vehicle-to-vehicle communication link may provide the vehicle 1300 with information about vehicles in proximity to the vehicle 1300 (e.g., vehicles in front of, to the side of, and/or behind the vehicle 1300). This functionality may be part of a cooperative adaptive cruise control functionality of the vehicle 1300.

The network interface 1324 may include an SoC that provides modulation and demodulation functionality and enables the controllers 1336 to communicate over wireless networks. The network interface 1324 may include a radio frequency front end for up-conversion from baseband to radio frequency, and down-conversion from radio frequency to baseband. The frequency conversions may be performed through well-known processes, and/or may be performed using super-heterodyne processes. In some examples, the radio frequency front-end functionality may be provided by a separate chip.
The network interface may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols.

The vehicle 1300 may also include data storage 1328, which may include off-chip (e.g., off the SoC 1304) storage. The data storage 1328 may include one or more storage elements, including RAM, SRAM, DRAM, VRAM, flash, hard disks, and/or other components and/or devices that can store at least one bit of data.

The vehicle 1300 may also include GNSS sensors 1358. The GNSS sensors 1358 (e.g., GPS, assisted GPS sensors, differential GPS (DGPS) sensors, etc.) assist in mapping, perception, occupancy grid generation, and/or path planning functions. Any number of GNSS sensors 1358 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet-to-serial (RS-232) bridge.

The vehicle 1300 may also include RADAR sensors 1360. The RADAR sensors 1360 may be used by the vehicle 1300 for long-range vehicle detection, even in darkness and/or severe weather conditions. The RADAR functional safety level may be ASIL B. The RADAR sensors 1360 may use the CAN and/or the bus 1302 (e.g., to transmit data generated by the RADAR sensors 1360) for control and to access object tracking data, with access to Ethernet to access raw data in some examples. A wide variety of RADAR sensor types may be used. For example and without limitation, the RADAR sensors 1360 may be suitable for front, rear, and side RADAR use. In some examples, Pulse Doppler RADAR sensors are used.

The RADAR sensors 1360 may include different configurations, such as long-range with a narrow field of view, short-range with a wide field of view, short-range side coverage, and so on. In some examples, long-range RADAR may be used for adaptive cruise control functionality. Long-range RADAR systems may provide a broad field of view realized by two or more independent scans, such as within a 250 m range.
The RADAR sensors 1360 may help distinguish between static and moving objects, and may be used by ADAS systems for emergency brake assist and forward collision warning. Long-range RADAR sensors may include monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennas and a high-speed CAN and FlexRay interface. In an example with six antennas, the central four antennas may create a focused beam pattern, designed to record the surroundings of the vehicle 1300 at higher speeds with minimal interference from traffic in adjacent lanes. The other two antennas may expand the field of view, making it possible to quickly detect vehicles entering or leaving the lane of the vehicle 1300.

As an example, mid-range RADAR systems may include a range of up to 160 m (front) or 80 m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). Short-range RADAR systems may include, without limitation, RADAR sensors designed to be installed at both ends of the rear bumper. When installed at both ends of the rear bumper, such a RADAR sensor system may create two beams that constantly monitor the blind spots in the rear and next to the vehicle. Short-range RADAR systems may be used in ADAS systems for blind spot detection and/or lane change assist.

The vehicle 1300 may also include ultrasonic sensors 1362. The ultrasonic sensors 1362, which may be positioned at the front, rear, and/or sides of the vehicle 1300, may be used for parking assist and/or to create and update an occupancy grid. A wide variety of ultrasonic sensors 1362 may be used, and different ultrasonic sensors 1362 may be used for different ranges of detection (e.g., 2.5 m, 4 m). The ultrasonic sensors 1362 may operate at functional safety level ASIL B.

The vehicle 1300 may include LIDAR sensors 1364. The LIDAR sensors 1364 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions. The LIDAR sensors 1364 may be functional safety level ASIL B.
In some examples, the vehicle 1300 may include multiple LIDAR sensors 1364 (e.g., two, four, six, etc.) that may use Ethernet (e.g., to provide data to a Gigabit Ethernet switch).

In some examples, the LIDAR sensors 1364 may be capable of providing a list of objects and their distances for a 360-degree field of view. Commercially available LIDAR sensors 1364 may have an advertised range of approximately 100 m, with an accuracy of 2 cm-3 cm, and with support for a 100 Mbps Ethernet connection, for example. In some examples, one or more non-protruding LIDAR sensors 1364 may be used. In such examples, the LIDAR sensors 1364 may be implemented as small devices that may be embedded into the front, rear, sides, and/or corners of the vehicle 1300. In such examples, the LIDAR sensors 1364 may provide up to a 120-degree horizontal and 35-degree vertical field of view, with a 200 m range even for low-reflectivity objects. Front-mounted LIDAR sensors 1364 may be configured for a horizontal field of view between 45 degrees and 135 degrees.

In some examples, LIDAR technologies such as 3D flash LIDAR may also be used. 3D flash LIDAR uses a flash of a laser as a transmission source to illuminate the vehicle's surroundings up to approximately 200 m. A flash LIDAR unit includes a receptor that records the laser pulse transit time and the reflected light on each pixel, which in turn corresponds to the range from the vehicle to the objects. Flash LIDAR may allow highly accurate, distortion-free images of the surroundings to be generated with every laser flash. In some examples, four flash LIDAR sensors may be deployed, one on each side of the vehicle 1300. Available 3D flash LIDAR systems include a solid-state 3D staring-array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). A flash LIDAR device may use a 5-nanosecond class I (eye-safe) laser pulse per frame and may capture the reflected laser light in the form of a 3D range point cloud and co-registered intensity data.
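The per-pixel mapping from laser pulse transit time to range mentioned above is just the round-trip time-of-flight relation, range = c * t / 2. The following sketch shows that arithmetic; the function name and example values are illustrative.

```python
# Time-of-flight range computation, as used conceptually by a flash LIDAR
# receptor: a pulse travels out and back, so the one-way range is half the
# distance light covers during the measured transit time.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_range(transit_time_s: float) -> float:
    """Convert a round-trip laser transit time (seconds) to range (meters)."""
    return SPEED_OF_LIGHT * transit_time_s / 2.0

# The ~200 m illumination limit corresponds to roughly 1.33 microseconds
# of round-trip transit time.
r = tof_to_range(1.334e-6)
```

Applying this per pixel over the staring array yields the 3D range point cloud described above; the co-registered intensity comes from the amplitude of the same returns.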
By using flash LIDAR, and because flash LIDAR is a solid-state device with no moving parts, the LIDAR sensors 1364 may be less susceptible to motion blur, vibration, and/or shock.

The vehicle may also include IMU sensors 1366. In some examples, the IMU sensors 1366 may be located at the center of the rear axle of the vehicle 1300. The IMU sensors 1366 may include, for example and without limitation, accelerometers, magnetometers, gyroscopes, magnetic compasses, and/or other sensor types. In some examples, such as in six-axis applications, the IMU sensors 1366 may include accelerometers and gyroscopes, while in nine-axis applications the IMU sensors 1366 may include accelerometers, gyroscopes, and magnetometers.

In some embodiments, the IMU sensors 1366 may be implemented as a miniature, high-performance GPS-aided inertial navigation system (GPS/INS) that combines microelectromechanical system (MEMS) inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. As such, in some examples, the IMU sensors 1366 may enable the vehicle 1300 to estimate heading without requiring input from a magnetic sensor, by directly observing and correlating changes in velocity from the GPS to the IMU sensors 1366. In some examples, the IMU sensors 1366 and the GNSS sensors 1358 may be combined in a single integrated unit.

The vehicle may include microphones 1396 placed in and/or around the vehicle 1300. The microphones 1396 may be used for emergency vehicle detection and identification, among other things.

The vehicle may also include any number of camera types, including stereo cameras 1368, wide-view cameras 1370, infrared cameras 1372, surround cameras 1374, long-range and/or mid-range cameras 1398, and/or other camera types. The cameras may be used to capture image data around an entire periphery of the vehicle 1300.
The types of cameras used depend on the embodiment and requirements for the vehicle 1300, and any combination of camera types may be used to provide the necessary coverage around the vehicle 1300. In addition, the number of cameras may differ depending on the embodiment. For example, the vehicle may include six cameras, seven cameras, ten cameras, twelve cameras, and/or another number of cameras. The cameras may support, as an example and without limitation, Gigabit Multimedia Serial Link (GMSL) and/or Gigabit Ethernet. Each of the cameras is described with more detail herein with respect to Figures 13A and 13B.

The vehicle 1300 may also include vibration sensors 1342. The vibration sensors 1342 may measure vibrations of components of the vehicle, such as the axles. For example, changes in vibrations may indicate a change in road surfaces. In another example, when two or more vibration sensors 1342 are used, the differences between the vibrations may be used to determine friction or slippage of the road surface (e.g., when the difference in vibration is between a power-driven axle and a freely rotating axle).

The vehicle 1300 may include an ADAS system 1338. The ADAS system 1338 may include an SoC, in some examples. The ADAS system 1338 may include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB), lane departure warning (LDW), lane keep assist (LKA), blind spot warning (BSW), rear cross-traffic warning (RCTW), collision warning systems (CWS), lane centering (LC), and/or other features and functionality.

The ACC systems may use RADAR sensors 1360, LIDAR sensors 1364, and/or a camera. The ACC systems may include longitudinal ACC and/or lateral ACC. Longitudinal ACC monitors and controls the distance to the vehicle immediately ahead of the vehicle 1300 and automatically adjusts the vehicle speed to maintain a safe distance from vehicles ahead.
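A common way to express the longitudinal ACC behavior just described is a time-gap policy: the desired following distance grows with ego speed, and the commanded speed tracks the lead vehicle corrected by the gap error. The sketch below is a deliberately simplified illustration; the function name, gains, and parameters are assumptions, not values from the document.

```python
def acc_speed_command(ego_speed: float, gap_m: float, lead_speed: float,
                      set_speed: float = 30.0, time_gap_s: float = 1.8,
                      min_gap_m: float = 5.0, gain: float = 0.5) -> float:
    """Very simplified time-gap longitudinal ACC (illustrative assumptions
    only). Desired gap = standstill margin + time gap * ego speed; the
    commanded speed tracks the lead vehicle, corrected by the gap error,
    and never exceeds the driver's set speed or drops below zero."""
    desired_gap = min_gap_m + time_gap_s * ego_speed
    gap_error = gap_m - desired_gap
    return max(0.0, min(set_speed, lead_speed + gain * gap_error))

# At 25 m/s with a 50 m gap behind a 25 m/s lead vehicle, the desired gap
# (5 + 1.8 * 25 = 50 m) is already met, so the command holds 25 m/s.
cmd = acc_speed_command(25.0, 50.0, 25.0)
```

A production controller would act on acceleration rather than speed, include comfort and safety limits, and fuse RADAR, LIDAR, and camera tracks; the sketch only captures the distance-keeping logic.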
Lateral ACC performs distance keeping, and advises the vehicle 1300 to change lanes when necessary. Lateral ACC is related to other ADAS applications, such as LCA and CWS.

CACC uses information from other vehicles that may be received via the network interface 1324 and/or the wireless antennas 1326 from other vehicles over a wireless link, or indirectly, over a network connection (e.g., over the Internet). Direct links may be provided by a vehicle-to-vehicle (V2V) communication link, while indirect links may be infrastructure-to-vehicle (I2V) communication links. In general, the V2V communication concept provides information about the immediately preceding vehicles (e.g., vehicles immediately ahead of and in the same lane as the vehicle 1300), while the I2V communication concept provides information about traffic farther ahead. CACC systems may include either or both I2V and V2V information sources. Given the information about the vehicles ahead of the vehicle 1300, CACC may be more reliable, and it has the potential to improve the smoothness of traffic flow and reduce congestion on the road.

FCW systems are designed to alert the driver to a hazard, so that the driver may take corrective action. FCW systems use a front-facing camera and/or RADAR sensors 1360, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component. FCW systems may provide a warning, such as in the form of a sound, a visual warning, a vibration, and/or a quick brake pulse.

AEB systems detect an impending forward collision with another vehicle or other object, and may automatically apply the brakes if the driver does not take corrective action within a specified time or distance parameter. AEB systems may use front-facing cameras and/or RADAR sensors 1360, coupled to a dedicated processor, DSP, FPGA, and/or ASIC.
When an AEB system detects a hazard, it typically first alerts the driver to take corrective action to avoid the collision and, if the driver does not take corrective action, the AEB system may automatically apply the brakes in an effort to prevent, or at least mitigate, the impact of the predicted collision. AEB systems may include techniques such as dynamic brake support and/or crash-imminent braking.

LDW systems provide visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert the driver when the vehicle 1300 crosses lane markings. An LDW system does not activate when the driver indicates an intentional lane departure by activating a turn signal. LDW systems may use front-facing cameras, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.

LKA systems are a variation of LDW systems. LKA systems provide steering input or braking to correct the vehicle 1300 if the vehicle 1300 starts to exit the lane.

BSW systems detect and warn the driver of vehicles in an automobile's blind spot. BSW systems may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. The system may provide an additional warning when the driver uses a turn signal. BSW systems may use rear-facing cameras and/or RADAR sensors 1360, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.

RCTW systems may provide a visual, audible, and/or tactile notification when an object is detected outside the rear-camera range while the vehicle 1300 is backing up. Some RCTW systems include AEB to ensure that the vehicle brakes are applied to avoid a crash.
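The warn-then-brake escalation described for AEB is often framed in terms of time-to-collision (TTC). The following is a hedged sketch of that decision logic; the TTC thresholds and function signature are illustrative assumptions, not values from the document or from any production system.

```python
def aeb_action(distance_m: float, closing_speed: float,
               warn_ttc_s: float = 2.5, brake_ttc_s: float = 1.2) -> str:
    """Illustrative time-to-collision based AEB decision.
    Returns 'none', 'warn' (first alert the driver), or 'brake' (the
    driver did not act in time). Thresholds are assumptions."""
    if closing_speed <= 0.0:
        return "none"  # not closing on the object: no hazard
    ttc = distance_m / closing_speed
    if ttc <= brake_ttc_s:
        return "brake"
    if ttc <= warn_ttc_s:
        return "warn"
    return "none"
```

Real AEB implementations layer on driver-reaction models, brake pre-fill (dynamic brake support), and sensor confirmation across camera and RADAR before braking; the sketch isolates only the escalation order the text describes.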
RCTW systems may use one or more rear-facing RADAR sensors 1360, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.

Conventional ADAS systems may be prone to false positive results, which may be annoying and distracting to a driver, but typically are not catastrophic, because the ADAS systems alert the driver and allow the driver to decide whether a safety condition truly exists and act accordingly. In an autonomous vehicle 1300, however, in the event of conflicting results, the vehicle 1300 itself must decide whether to heed the result from a primary computer or a secondary computer (e.g., a first controller 1336 or a second controller 1336). For example, in some embodiments, the ADAS system 1338 may be a backup and/or secondary computer that provides perception information to a backup computer rationality module. The backup computer rationality monitor may run redundant, diverse software on hardware components to detect faults in perception and dynamic driving tasks. Outputs from the ADAS system 1338 may be provided to a supervisory MCU. If outputs from the primary computer and the secondary computer conflict, the supervisory MCU must determine how to reconcile the conflict to ensure safe operation.

In some examples, the primary computer may be configured to provide the supervisory MCU with a confidence score indicating the primary computer's confidence in the chosen result. If the confidence score exceeds a threshold, the supervisory MCU may follow the primary computer's direction, regardless of whether the secondary computer provides a conflicting or inconsistent result.
Where the confidence score does not meet the threshold, and where the primary and secondary computers indicate different results (e.g., a conflict), the supervisory MCU may arbitrate between the computers to determine the appropriate outcome.

The supervisory MCU may be configured to run a neural network that is trained and configured to determine, based at least in part on outputs from the primary computer and the secondary computer, the conditions under which the secondary computer provides false alarms. Thus, the neural network in the supervisory MCU may learn when the secondary computer's output can be trusted, and when it cannot. For example, when the secondary computer is a RADAR-based FCW system, a neural network in the supervisory MCU may learn when the FCW system is identifying metallic objects that are not, in fact, hazards, such as a drainage grate or manhole cover that triggers an alarm. Similarly, when the secondary computer is a camera-based LDW system, a neural network in the supervisory MCU may learn to override the LDW when bicyclists or pedestrians are present and a lane departure is, in fact, the safest maneuver. In embodiments that include a neural network running on the supervisory MCU, the supervisory MCU may include at least one of a DLA or a GPU suitable for running the neural network, with associated memory. In preferred embodiments, the supervisory MCU may comprise, and/or be included as, a component of the SoC 1304.

In other examples, the ADAS system 1338 may include a secondary computer that performs ADAS functionality using traditional rules of computer vision. As such, the secondary computer may use classic computer vision rules (if-then), and the presence of a neural network in the supervisory MCU may improve reliability, safety, and performance. For example, the diverse implementation and intentional non-identity make the overall system more fault-tolerant, especially to faults caused by software (or software-hardware interface) functionality.
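The confidence-threshold rule described above can be sketched as a small decision function. This is an illustration of the stated rule only; the threshold value, function name, and result representation are assumptions.

```python
def supervisory_decision(primary_result: str, secondary_result: str,
                         primary_confidence: float,
                         threshold: float = 0.9) -> str:
    """Sketch of the supervisory-MCU rule: follow the primary computer
    when its confidence clears the threshold; below the threshold, agree
    with both computers if they match, otherwise escalate to arbitration.
    The 0.9 threshold is an illustrative assumption."""
    if primary_confidence >= threshold:
        # High confidence: follow the primary computer regardless of the
        # secondary computer's (possibly conflicting) result.
        return primary_result
    if primary_result == secondary_result:
        return primary_result
    return "arbitrate"  # conflicting low-confidence outputs
```

In the document's design the "arbitrate" branch is where the trained neural network on the supervisory MCU weighs how trustworthy the secondary computer's output is for the current situation.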
For example, if there is a software bug or error in the software running on the primary computer, and non-identical software code running on the secondary computer provides the same overall result, the supervisory MCU may have greater confidence that the overall result is correct, and that the bug in the software or hardware on the primary computer is not causing a material error.

In some examples, the output of the ADAS system 1338 may be fed into the primary computer's perception block and/or the primary computer's dynamic driving task block. For example, if the ADAS system 1338 indicates a forward crash warning due to an object immediately ahead, the perception block may use this information when identifying objects. In other examples, the secondary computer may have its own neural network that is trained and thus reduces the risk of false positives, as described herein.

The vehicle 1300 may also include an infotainment SoC 1330 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as an SoC, the infotainment system may not be an SoC and may include two or more discrete components. The infotainment SoC 1330 may include a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.), video (e.g., TV, movies, streaming, etc.), phone (e.g., hands-free calling), network connectivity (e.g., LTE, WiFi, etc.), and/or information services (e.g., navigation systems, rear-parking assistance, a radio data system, and vehicle-related information such as fuel level, total distance covered, brake fluid level, door open/close, air filter information, etc.).
For example, the infotainment SoC 1330 may include radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, in-vehicle computers, in-car entertainment, WiFi, steering wheel audio controls, hands-free voice control, a heads-up display (HUD), an HMI display 1334, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. The infotainment SoC 1330 may further be used to provide information (e.g., visual and/or audible) to a user of the vehicle, such as information from the ADAS system 1338, autonomous driving information such as planned vehicle maneuvers, trajectories, and surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information.

The infotainment SoC 1330 may include GPU functionality. The infotainment SoC 1330 may communicate over the bus 1302 (e.g., CAN bus, Ethernet, etc.) with other devices, systems, and/or components of the vehicle 1300. In some examples, the infotainment SoC 1330 may be coupled to a supervisory MCU such that the GPU of the infotainment system may perform some self-driving functions in the event that the primary controllers 1336 (e.g., the primary and/or backup computers of the vehicle 1300) fail. In such an example, the infotainment SoC 1330 may put the vehicle 1300 into a safe-stop mode, as described herein.

The vehicle 1300 may also include an instrument cluster 1332 (e.g., a digital dash, an electronic instrument cluster, a digital instrument panel, etc.). The instrument cluster 1332 may include a controller and/or supercomputer (e.g., a discrete controller or supercomputer).
The instrument cluster 1332 may include a set of instrumentation, such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn indicators, gearshift position indicator, seat belt warning lights, parking-brake warning lights, engine-malfunction lights, airbag (SRS) system information, lighting controls, safety system controls, navigation information, and so on. In some examples, information may be displayed and/or shared between the infotainment SoC 1330 and the instrument cluster 1332. In other words, the instrument cluster 1332 may be included as part of the infotainment SoC 1330, or vice versa.

Figure 13D is a system diagram for communication between cloud-based servers and the example autonomous vehicle 1300 of Figure 13A, in accordance with some embodiments of the present disclosure. The system 1376 may include servers 1378, a network 1390, and vehicles, including the vehicle 1300. The servers 1378 may include a plurality of GPUs 1384(A)-1384(H) (collectively referred to herein as GPUs 1384), PCIe switches 1382(A)-1382(H) (collectively referred to herein as PCIe switches 1382), and/or CPUs 1380(A)-1380(B) (collectively referred to herein as CPUs 1380). The GPUs 1384, the CPUs 1380, and the PCIe switches may be interconnected with high-speed interconnects such as, for example and without limitation, the NVLink interface 1388 developed by NVIDIA and/or PCIe connections 1386. In some examples, the GPUs 1384 are connected via NVLink and/or NVSwitch SoCs, and the GPUs 1384 and the PCIe switches 1382 are connected via PCIe interconnects. Although eight GPUs 1384, two CPUs 1380, and two PCIe switches are illustrated, this is not intended to be limiting. Depending on the embodiment, each of the servers 1378 may include any number of GPUs 1384, CPUs 1380, and/or PCIe switches. For example, the servers 1378 may each include eight, sixteen, thirty-two, and/or more GPUs 1384.

The servers 1378 may receive, over the network 1390 and from the vehicles, image data representative of images showing unexpected or changed road conditions, such as recently commenced roadwork.
The servers 1378 may transmit, over the network 1390 and to the vehicles, neural networks 1392, updated neural networks 1392, and/or map information 1394, including information regarding traffic and road conditions. The updates to the map information 1394 may include updates for the HD map 1322, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions. In some examples, the neural networks 1392, the updated neural networks 1392, and/or the map information 1394 may have resulted from new training and/or experiences represented in data received from any number of vehicles in the environment, and/or based on training performed at a datacenter (e.g., using the servers 1378 and/or other servers).

The servers 1378 may be used to train machine learning models (e.g., neural networks) based on training data. The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine). In some examples, the training data is tagged (e.g., where the neural network benefits from supervised learning) and/or undergoes other pre-processing, while in other examples the training data is not tagged and/or pre-processed (e.g., where the neural network does not require supervised learning). Training may be executed according to any one or more classes of machine learning techniques, including, without limitation, classes such as: supervised training, semi-supervised training, unsupervised training, self-learning, reinforcement learning, federated learning, transfer learning, feature learning (including principal component and cluster analyses), multilinear subspace learning, manifold learning, representation learning (including sparse dictionary learning), rule-based machine learning, anomaly detection, and any variants or combinations thereof.
Once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the network 1390), and/or the machine learning models may be used by the servers 1378 to remotely monitor the vehicles.

In some examples, the servers 1378 may receive data from the vehicles and apply the data to up-to-date, real-time neural networks for real-time intelligent inferencing. The servers 1378 may include deep-learning supercomputers and/or dedicated AI computers powered by GPUs 1384, such as the DGX and DGX Station machines developed by NVIDIA. However, in some examples, the servers 1378 may include deep learning infrastructure that uses only CPU-powered datacenters.

The deep-learning infrastructure of the servers 1378 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of the processors, software, and/or associated hardware in the vehicle 1300. For example, the deep-learning infrastructure may receive periodic updates from the vehicle 1300, such as a sequence of images and/or objects that the vehicle 1300 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). The deep-learning infrastructure may run its own neural network to identify the objects and compare them with the objects identified by the vehicle 1300 and, if the results do not match and the infrastructure concludes that the AI in the vehicle 1300 is malfunctioning, the servers 1378 may transmit a signal to the vehicle 1300 instructing a fail-safe computer of the vehicle 1300 to assume control, notify the passengers, and complete a safe parking maneuver.

For inferencing, the servers 1378 may include the GPUs 1384 and one or more programmable inference accelerators (e.g., NVIDIA's TensorRT). The combination of GPU-powered servers and inference acceleration may make real-time responsiveness possible.
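The server-side health check described above, comparing the vehicle's detections against the server's own inference, can be sketched as a simple agreement test over the two sets of detected objects. The agreement metric (intersection over union of the object sets) and the threshold are wholly illustrative assumptions; the document does not specify how the comparison is made.

```python
def vehicle_ai_healthy(vehicle_objects: set, server_objects: set,
                       min_agreement: float = 0.8) -> bool:
    """Hedged sketch of the server-side verification: compare the
    vehicle's reported detections with the server's own inference over
    the same images, and flag a possible malfunction when agreement is
    too low. Metric and threshold are illustrative assumptions."""
    union = vehicle_objects | server_objects
    if not union:
        return True  # nothing detected by either side: nothing to dispute
    agreement = len(vehicle_objects & server_objects) / len(union)
    return agreement >= min_agreement

# If this returns False, the server would signal the vehicle's fail-safe
# computer to assume control, as described above.
ok = vehicle_ai_healthy({"car", "pedestrian"}, {"car", "pedestrian"})
```

A real system would match detections by class, position, and time rather than by set membership; the sketch only conveys the compare-and-escalate structure.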
In other examples, such as where performance is less critical, servers powered by CPUs, FPGAs, and other processors may be used for inferencing.

Example Computing Device

Figure 14 is a block diagram of an example computing device 1400 suitable for use in implementing some embodiments of the present disclosure. The computing device 1400 may include an interconnect system 1402 that directly or indirectly couples the following devices: memory 1404, one or more central processing units (CPUs) 1406, one or more graphics processing units (GPUs) 1408, a communication interface 1410, input/output (I/O) ports 1412, input/output components 1414, a power supply 1416, one or more presentation components 1418 (e.g., display(s)), and one or more logic units 1420. In at least one embodiment, the computing device(s) 1400 may comprise one or more virtual machines (VMs), and/or any of the components thereof may comprise virtual components (e.g., virtual hardware components). As non-limiting examples, one or more of the GPUs 1408 may comprise one or more vGPUs, one or more of the CPUs 1406 may comprise one or more vCPUs, and/or one or more of the logic units 1420 may comprise one or more virtual logic units. As such, the computing device(s) 1400 may include discrete components (e.g., a full GPU dedicated to the computing device 1400), virtual components (e.g., a portion of a GPU dedicated to the computing device 1400), or a combination thereof.

Although the various blocks of Figure 14 are shown as connected via the interconnect system 1402 with lines, this is not intended to be limiting and is for clarity only. For example, in some embodiments, a presentation component 1418, such as a display device, may be considered an I/O component 1414 (e.g., if the display is a touch screen). As another example, the CPUs 1406 and/or the GPUs 1408 may include memory (e.g., the memory 1404 may be representative of a storage device in addition to the memory of the GPUs 1408, the CPUs 1406, and/or other components). In other words, the computing device of Figure 14 is merely illustrative.
Distinction is not made between such categories as "workstation," "server," "laptop," "desktop," "tablet," "client device," "mobile device," "handheld device," "game console," "electronic control unit (ECU)," "virtual reality system," and/or other device or system types, as all are contemplated within the scope of the computing device of FIG. 14.

Interconnection system 1402 may represent one or more links or buses, such as an address bus, a data bus, a control bus, or a combination thereof. Interconnection system 1402 may include one or more bus or link types, such as an Industry Standard Architecture (ISA) bus, an Extended Industry Standard Architecture (EISA) bus, a Video Electronics Standards Association (VESA) bus, a Peripheral Component Interconnect (PCI) bus, a Peripheral Component Interconnect Express (PCIe) bus, and/or another type of bus or link. In some embodiments, there are direct connections between components. As an example, CPU 1406 may be directly connected to memory 1404. Further, CPU 1406 may be directly connected to GPU 1408. Where direct or point-to-point connections exist between components, interconnection system 1402 may include a PCIe link to carry out the connection. In these examples, a PCI bus need not be included in computing device 1400.

Memory 1404 may include any of a variety of computer-readable media. Computer-readable media may be any available media that can be accessed by computing device 1400. Computer-readable media may include both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.

Computer storage media may include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, and/or other data types.
For example, memory 1404 may store computer-readable instructions (e.g., representing program(s) and/or program element(s), such as an operating system). Computer storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by computing device 1400. As used herein, computer storage media does not include signals per se.

Communication media may embody computer-readable instructions, data structures, program modules, and/or other data types in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.

CPU 1406 may be configured to execute at least some of the computer-readable instructions to control one or more components of computing device 1400 to perform one or more of the methods and/or processes described herein. CPUs 1406 may each include one or more cores (e.g., one, two, four, eight, twenty-eight, seventy-two, etc.) that are capable of handling a multitude of software threads simultaneously.
CPU 1406 may include any type of processor, and may include different types of processors depending on the type of computing device 1400 implemented (e.g., processors with fewer cores for mobile devices and processors with more cores for servers). For example, depending on the type of computing device 1400, the processor may be an Advanced RISC Machine (ARM) processor implemented using Reduced Instruction Set Computing (RISC) or an x86 processor implemented using Complex Instruction Set Computing (CISC). Computing device 1400 may include one or more CPUs 1406 in addition to one or more microprocessors or supplementary co-processors, such as math co-processors.

In addition to or alternatively from CPU(s) 1406, GPU(s) 1408 may be configured to execute at least some of the computer-readable instructions to control one or more components of computing device 1400 to perform one or more of the methods and/or processes described herein. One or more of GPUs 1408 may be an integrated GPU (e.g., with one or more of CPUs 1406), and/or one or more of GPUs 1408 may be a discrete GPU. In embodiments, one or more of GPUs 1408 may be a co-processor of one or more of CPUs 1406. GPU 1408 may be used by computing device 1400 to render graphics (e.g., 3D graphics) or to perform general-purpose computations. For example, GPU 1408 may be used for general-purpose computing on GPUs (GPGPU). GPU 1408 may include hundreds or thousands of cores that are capable of handling hundreds or thousands of software threads simultaneously. GPU 1408 may generate pixel data for output images in response to rendering commands (e.g., rendering commands received from CPU 1406 via a host interface). GPU 1408 may include graphics memory (e.g., display memory) for storing pixel data or any other suitable data (e.g., GPGPU data). Display memory may be included as part of memory 1404. GPU 1408 may include two or more GPUs operating in parallel (e.g., via a link).
The link may connect the GPUs directly (e.g., using NVLINK) or may connect the GPUs through a switch (e.g., using NVSwitch). When combined together, each GPU 1408 may generate pixel data or GPGPU data for different portions of an output or for different outputs (e.g., a first GPU for a first image and a second GPU for a second image). Each GPU may include its own memory, or may share memory with other GPUs.

In addition to or alternatively from CPU 1406 and/or GPU 1408, logic unit 1420 may be configured to execute at least some of the computer-readable instructions to control one or more components of computing device 1400 to perform one or more of the methods and/or processes described herein. In embodiments, CPU(s) 1406, GPU(s) 1408, and/or logic unit(s) 1420 may discretely or jointly perform any combination of the methods, processes, and/or portions thereof. One or more of logic units 1420 may be part of and/or integrated in one or more of CPU 1406 and/or GPU 1408, and/or one or more of logic units 1420 may be discrete components or otherwise external to CPU 1406 and/or GPU 1408.
In embodiments, one or more of logic units 1420 may be a co-processor of one or more of CPUs 1406 and/or one or more of GPUs 1408.

Examples of logic units 1420 include one or more processing cores and/or components thereof, such as Data Processing Units (DPUs), Tensor Cores (TCs), Tensor Processing Units (TPUs), Pixel Visual Cores (PVCs), Vision Processing Units (VPUs), Graphics Processing Clusters (GPCs), Texture Processing Clusters (TPCs), Streaming Multiprocessors (SMs), Tree Traversal Units (TTUs), Artificial Intelligence Accelerators (AIAs), Deep Learning Accelerators (DLAs), Arithmetic Logic Units (ALUs), Application-Specific Integrated Circuits (ASICs), Floating Point Units (FPUs), input/output (I/O) elements, Peripheral Component Interconnect (PCI) or Peripheral Component Interconnect Express (PCIe) elements, and/or the like.

Communication interface 1410 may include one or more receivers, transmitters, and/or transceivers that enable computing device 1400 to communicate with other computing devices via an electronic communication network, including wired and/or wireless communications. Communication interface 1410 may include components and functionality to enable communication over any of a number of different networks, such as wireless networks (e.g., Wi-Fi, Z-Wave, Bluetooth, Bluetooth LE, ZigBee, etc.), wired networks (e.g., communicating over Ethernet or InfiniBand), low-power wide-area networks (e.g., LoRaWAN, SigFox, etc.), and/or the Internet. In one or more embodiments, logic unit(s) 1420 and/or communication interface 1410 may include one or more data processing units (DPUs) to transmit data received over a network and/or through interconnection system 1402 directly to (e.g., a memory of) one or more GPUs 1408.

I/O ports 1412 may enable computing device 1400 to be logically coupled to other devices including I/O components 1414, presentation component(s) 1418, and/or other components, some of which may be built into (e.g., integrated in) computing device 1400.
Illustrative I/O components 1414 include a microphone, mouse, keyboard, joystick, game pad, game controller, satellite dish, scanner, printer, wireless device, etc. The I/O components 1414 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with a display of computing device 1400. Computing device 1400 may include depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, touch-screen technology, and combinations of these, for gesture detection and recognition. Additionally, computing device 1400 may include accelerometers or gyroscopes (e.g., as part of an inertial measurement unit (IMU)) that enable detection of motion. In some examples, the output of the accelerometers or gyroscopes may be used by computing device 1400 to render immersive augmented reality or virtual reality.

Power supply 1416 may include a hard-wired power supply, a battery power supply, or a combination thereof. Power supply 1416 may provide power to computing device 1400 to enable the components of computing device 1400 to operate.

Presentation components 1418 may include a display (e.g., a monitor, a touch screen, a television screen, a heads-up display (HUD), other display types, or a combination thereof), speakers, and/or other presentation components. Presentation components 1418 may receive data from other components (e.g., GPU 1408, CPU 1406, etc.) and output the data (e.g., as an image, video, sound, etc.).

Example Data Center

FIG. 15 illustrates an example data center 1500 that may be used in at least one embodiment of the present disclosure. Data center 1500 may include a data center infrastructure layer 1510, a framework layer 1520, a software layer 1530, and/or an application layer 1540.

As shown in FIG. 15, data center infrastructure layer 1510 may include a resource coordinator 1512, grouped computing resources 1514, and node computing resources ("node C.R.s") 1516(1)-1516(N), where "N" represents any whole, positive integer. In at least one embodiment, node C.R.s 1516(1)-1516(N) may include, but are not limited to, any number of central processing units (CPUs) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors or graphics processing units (GPUs), etc.), memory devices (e.g., dynamic read-only memory), storage devices (e.g., solid state or disk drives), network input/output (NW I/O) devices, network switches, virtual machines (VMs), power modules, and/or cooling modules, etc. In some embodiments, one or more node C.R.s from among node C.R.s 1516(1)-1516(N) may correspond to a server having one or more of the above-mentioned computing resources. In addition, in some embodiments, node C.R.s 1516(1)-1516(N) may include one or more virtual components, such as vGPUs, vCPUs, and/or the like, and/or one or more of node C.R.s 1516(1)-1516(N) may correspond to a virtual machine (VM).

In at least one embodiment, grouped computing resources 1514 may include separate groupings of node C.R.s 1516 housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). Separate groupings of node C.R.s 1516 within grouped computing resources 1514 may include grouped compute, network, memory, or storage resources that may be configured or allocated to support one or more workloads.
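The allocation of grouped node computing resources to a workload, as described above for grouped computing resources 1514, might be sketched as follows. The NodeCR fields, names, and the greedy selection policy are illustrative assumptions only.

```python
# Illustrative sketch: group node C.R.s to satisfy a workload's GPU demand.
from dataclasses import dataclass

@dataclass
class NodeCR:
    name: str
    cpus: int
    gpus: int

def group_for_workload(nodes, gpus_needed):
    """Greedily select node C.R.s until the workload's GPU demand is met."""
    selected, total = [], 0
    for node in nodes:
        if total >= gpus_needed:
            break
        selected.append(node)
        total += node.gpus
    if total < gpus_needed:
        raise ValueError("insufficient grouped resources for workload")
    return selected

rack = [NodeCR("cr1516-1", 64, 8), NodeCR("cr1516-2", 64, 8), NodeCR("cr1516-3", 64, 8)]
group = group_for_workload(rack, gpus_needed=12)  # first two nodes suffice
```

A production resource coordinator would also weigh CPU, memory, network, and placement constraints rather than GPU count alone.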
In at least one embodiment, several node C.R.s 1516 including CPUs, GPUs, DPUs, and/or other processors may be grouped within one or more racks to provide compute resources to support one or more workloads. The one or more racks may also include any number of power modules, cooling modules, and/or network switches, in any combination.

Resource coordinator 1512 may configure or otherwise control one or more node C.R.s 1516(1)-1516(N) and/or grouped computing resources 1514. In at least one embodiment, resource coordinator 1512 may include a software design infrastructure (SDI) management entity for data center 1500. Resource coordinator 1512 may include hardware, software, or some combination thereof.

In at least one embodiment, as shown in FIG. 15, framework layer 1520 may include a job scheduler 1533, a configuration manager 1534, a resource manager 1536, and/or a distributed file system 1538. Framework layer 1520 may include a framework to support software 1532 of software layer 1530 and/or one or more applications 1542 of application layer 1540. Software 1532 or applications 1542 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud, and Microsoft Azure. Framework layer 1520 may be, but is not limited to, a type of free and open-source software web application framework, such as Apache Spark™ (hereinafter "Spark"), that may utilize distributed file system 1538 for large-scale data processing (e.g., "big data"). In at least one embodiment, job scheduler 1533 may include a Spark driver to facilitate scheduling of workloads supported by the various layers of data center 1500. Configuration manager 1534 may be capable of configuring different layers, such as software layer 1530 and framework layer 1520 (which includes Spark and distributed file system 1538 for supporting large-scale data processing).
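The role of a job scheduler such as job scheduler 1533, admitting workloads against the capacity of the grouped resources, can be illustrated with a minimal sketch. The FIFO queue discipline and single-dimension capacity model are assumptions for illustration, not how a Spark driver actually schedules.

```python
# Toy job scheduler: admit jobs while capacity remains, queue the rest.
from collections import deque

class JobScheduler:
    def __init__(self, capacity):
        self.capacity = capacity      # e.g., total GPUs in the grouped resources
        self.queue = deque()          # pending (job, demand) pairs
        self.running = []

    def submit(self, job, demand):
        self.queue.append((job, demand))
        self._dispatch()

    def _dispatch(self):
        # Admit jobs in FIFO order while their demand fits the free capacity.
        while self.queue and self.queue[0][1] <= self.capacity:
            job, demand = self.queue.popleft()
            self.capacity -= demand
            self.running.append(job)

sched = JobScheduler(capacity=16)
sched.submit("etl", 8)
sched.submit("training", 8)
sched.submit("inference", 4)   # remains queued: no capacity left
```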
Resource manager 1536 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 1538 and job scheduler 1533. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources 1514 at data center infrastructure layer 1510. Resource manager 1536 may coordinate with resource coordinator 1512 to manage these mapped or allocated computing resources.

In at least one embodiment, software 1532 included in software layer 1530 may include software used by at least portions of node C.R.s 1516(1)-1516(N), grouped computing resources 1514, and/or distributed file system 1538 of framework layer 1520. The one or more types of software may include, but are not limited to, Internet web page search software, email virus scanning software, database software, and streaming video content software.

In at least one embodiment, applications 1542 included in application layer 1540 may include one or more types of applications used by at least portions of node C.R.s 1516(1)-1516(N), grouped computing resources 1514, and/or distributed file system 1538 of framework layer 1520. The one or more types of applications may include, but are not limited to, any number of genomics applications, cognitive computing applications, and machine learning applications, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.), and/or other machine learning applications used in conjunction with one or more embodiments.

In at least one embodiment, any of configuration manager 1534, resource manager 1536, and resource coordinator 1512 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion.
The self-modifying actions may relieve a data center operator of data center 1500 from making possibly bad configuration decisions and may potentially avoid underutilized and/or poorly performing portions of a data center.

According to one or more embodiments described herein, data center 1500 may include tools, services, software, or other resources to train one or more machine learning models or to predict or infer information using one or more machine learning models. For example, the machine learning model(s) may be trained by computing weight parameters according to a neural network architecture using the software and/or computing resources described above with respect to data center 1500. In at least one embodiment, trained or deployed machine learning models corresponding to one or more neural networks may be used to infer or predict information using the resources described above with respect to data center 1500, by using weight parameters computed through one or more training techniques, such as but not limited to those described herein.

In at least one embodiment, data center 1500 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, and/or other hardware (or virtual compute resources corresponding thereto) to perform training and/or inferencing using the above-described resources. Moreover, one or more of the software and/or hardware resources described above may be configured as services to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.

Example Network Environment

A network environment suitable for use in implementing embodiments of the disclosure may include one or more client devices, servers, network attached storage (NAS), other backend devices, and/or other device types. The client devices, servers, and/or other device types (e.g., each device) may be implemented on one or more instances of computing device(s) 1400 of FIG.
14 (e.g., each device may include similar components, features, and/or functionality of computing device(s) 1400). In addition, where backend devices (e.g., servers, NAS, etc.) are implemented, the backend devices may be included as part of a data center 1500, an example of which is described in more detail herein with respect to FIG. 15.

Components of a network environment may communicate with each other via a network(s), which may be wired, wireless, or both. The network may include multiple networks, or a network of networks. By way of example, the network may include one or more wide area networks (WANs), one or more local area networks (LANs), one or more public networks (such as the Internet and/or the public switched telephone network (PSTN)), and/or one or more private networks. Where the network includes a wireless telecommunications network, components such as a base station, a communications tower, or even access points (as well as other components) may provide wireless connectivity.

Compatible network environments may include one or more peer-to-peer network environments (in which case a server may not be included in the network environment) and one or more client-server network environments (in which case one or more servers may be included in the network environment). In peer-to-peer network environments, the functionality described herein with respect to a server(s) may be implemented on any number of client devices.

In at least one embodiment, a network environment may include one or more cloud-based network environments, a distributed computing environment, a combination thereof, etc. A cloud-based network environment may include a framework layer, a job scheduler, a resource manager, and a distributed file system implemented on one or more servers, which may include one or more core network servers and/or edge servers. A framework layer may include a framework to support software of a software layer and/or one or more application(s) of an application layer.
The software or application(s) may respectively include web-based service software or applications. In embodiments, one or more of the client devices may use the web-based service software or applications (e.g., by accessing the service software and/or applications via one or more application programming interfaces (APIs)). The framework layer may be, but is not limited to, a type of free and open-source software web application framework, such as one that may use a distributed file system for large-scale data processing (e.g., "big data").

A cloud-based network environment may provide cloud computing and/or cloud storage that carries out any combination of the computing and/or data storage functions described herein (or one or more portions thereof). Any of these various functions may be distributed over multiple locations from central or core servers (e.g., of one or more data centers that may be distributed across a state, a region, a country, the globe, etc.). If a connection to a user (e.g., a client device) is relatively close to an edge server(s), a core server(s) may designate at least a portion of the functionality to the edge server(s). A cloud-based network environment may be private (e.g., limited to a single organization), public (e.g., available to many organizations), and/or a combination thereof (e.g., a hybrid cloud environment).

The client device(s) may include at least some of the components, features, and functionality of the example computing device(s) 1400 described herein with respect to FIG. 14.
By way of example and not limitation, a client device may be embodied as a personal computer (PC), a laptop computer, a mobile device, a smartphone, a tablet computer, a smart watch, a wearable computer, a personal digital assistant (PDA), an MP3 player, a virtual reality headset, a Global Positioning System (GPS) or device, a video player, a video camera, a surveillance device or system, a vehicle, a boat, a flying vessel, a virtual machine, a drone, a robot, a handheld communications device, a hospital device, a gaming device or system, an entertainment system, a vehicle computer system, an embedded system controller, a remote control, an appliance, a consumer electronic device, a workstation, an edge device, any combination of these delineated devices, or any other suitable device.

The present disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal digital assistant or other handheld device. Generally, program modules including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The present disclosure may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialty computing devices, etc. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.

As used herein, a recitation of "and/or" with respect to two or more elements should be interpreted to mean only one element, or a combination of elements.
For example, "element A, element B, and/or element C" may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, "at least one of element A or element B" may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, "at least one of element A and element B" may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B.

The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of the methods employed, the terms should not be interpreted as implying any particular order among or between the various steps herein disclosed, unless and except when the order of individual steps is explicitly described.
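The "and/or" interpretation set out above, covering every non-empty combination of the listed elements, can be enumerated mechanically. This sketch is purely illustrative:

```python
# Enumerate the combinations covered by "element A, element B, and/or
# element C": every non-empty subset of the listed elements.
from itertools import combinations

def and_or(elements):
    subsets = []
    for r in range(1, len(elements) + 1):
        subsets.extend(combinations(elements, r))
    return subsets

covered = and_or(["A", "B", "C"])
# 7 combinations: A; B; C; A,B; A,C; B,C; A,B,C
```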
A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes displaying at least a target audio signal and an interfering audio signal on the user interface.
CLAIMS 1. A method for displaying a user interface on an electronic device, comprising: presenting a user interface, wherein the user interface comprises a coordinate system, wherein the coordinate system corresponds to physical coordinates based on sensor data; and displaying at least a target audio signal and an interfering audio signal on the user interface. 2. The method of claim 1, further comprising displaying a directionality of at least one of the target audio signal and the interfering audio signal captured by at least one microphone. 3. The method of claim 2, wherein the target audio signal comprises a voice signal. 4. The method of claim 2, further comprising displaying at least one icon corresponding to at least one of the target audio signal and the interfering audio signal. 5. The method of claim 1, further comprising passing the target audio signal. 6. The method of claim 1, further comprising attenuating the interfering audio signal. 7. The method of claim 1, further comprising aligning at least a part of the user interface with a reference plane. 8. The method of claim 7, wherein the reference plane is horizontal. 9. The method of claim 7, wherein aligning at least a part of the user interface comprises mapping a two-dimensional polar plot into a three-dimensional display space. 10. The method of claim 1, wherein the physical coordinates are earth coordinates. 11. The method of claim 1, wherein the coordinate system maintains an orientation independent of electronic device orientation. 12. The method of claim 1, further comprising: recognizing an audio signature; looking up the audio signature in a database; obtaining identification information corresponding to the audio signature; and displaying the identification information on the user interface. 13. The method of claim 12, wherein the identification information is an image of a person corresponding to the audio signature. 
An electronic device, comprising: a display, wherein the display presents a user interface, wherein the user interface comprises a coordinate system, wherein the coordinate system corresponds to physical coordinates based on sensor data; and the display displays at least a target audio signal and an interfering audio signal on the user interface. 15. The electronic device of claim 14, wherein the display displays a directionality of at least one of the target audio signal and the interfering audio signal captured by at least one microphone. 16. The electronic device of claim 15, wherein the target audio signal comprises a voice signal. 17. The electronic device of claim 15, wherein the display displays at least one icon corresponding to at least one of the target audio signal and the interfering audio signal. 18. The electronic device of claim 14, further comprising operation circuitry coupled to the display, wherein the operation circuitry passes the target audio signal. 19. The electronic device of claim 14, further comprising operation circuitry coupled to the display, wherein the operation circuitry attenuates the interfering audio signal. 20. The electronic device of claim 14, wherein the user interface aligns at least a part of the user interface with a reference plane. 21. The electronic device of claim 20, wherein the reference plane is horizontal. 22. The electronic device of claim 20, wherein aligning at least a part of the user interface comprises mapping a two-dimensional polar plot into a three-dimensional display space. 23. The electronic device of claim 14, wherein the physical coordinates are earth coordinates. 24. The electronic device of claim 14, wherein the coordinate system maintains an orientation independent of electronic device orientation. 25. 
The electronic device of claim 14, further comprising audio signature recognition circuitry that recognizes an audio signature, looks up the audio signature in a database, obtains identification information corresponding to the audio signature, and passes the identification information to the display. 26. The electronic device of claim 25, wherein the identification information is an image of a person corresponding to the audio signature. 27. A computer-program product for displaying a user interface, comprising a non-transitory tangible computer-readable medium having instructions thereon, the instructions comprising: code for causing an electronic device to present a user interface, wherein the user interface comprises a coordinate system, wherein the coordinate system corresponds to physical coordinates based on sensor data; and code for causing the electronic device to display at least a target audio signal and an interfering audio signal on the user interface. 28. The computer-program product of claim 27, wherein the instructions further comprise code for causing the electronic device to display a directionality of at least one of the target audio signal and the interfering audio signal captured by at least one microphone. 29. The computer-program product of claim 27, wherein the instructions further comprise code for causing the electronic device to pass the target audio signal. 30. The computer-program product of claim 27, wherein the instructions further comprise code for causing the electronic device to attenuate the interfering audio signal. 31. An apparatus for displaying a user interface, comprising: means for presenting a user interface, wherein the user interface comprises a coordinate system, wherein the coordinate system corresponds to physical coordinates based on sensor data; and means for displaying at least a target audio signal and an interfering audio signal on the user interface. 32.
The apparatus of claim 31, further comprising means for displaying a directionality of at least one of the target audio signal and the interfering audio signal captured by at least one microphone. 33. The apparatus of claim 31, further comprising means for passing the target audio signal. 34. The apparatus of claim 31, further comprising means for attenuating the interfering audio signal.
SYSTEMS AND METHODS FOR DISPLAYING A USER INTERFACE RELATED APPLICATIONS [0001] This application is related to and claims priority from U.S. Provisional Patent Application Serial No. 61/713,447 filed October 12, 2012, for "SYSTEMS AND METHODS FOR MAPPING COORDINATES," U.S. Provisional Patent Application Serial No. 61/714,212 filed October 15, 2012, for "SYSTEMS AND METHODS FOR MAPPING COORDINATES," U.S. Provisional Application Serial No. 61/624,181 filed April 13, 2012, for "SYSTEMS, METHODS, AND APPARATUS FOR ESTIMATING DIRECTION OF ARRIVAL," U.S. Provisional Application Serial No. 61/642,954, filed May 4, 2012, for "SYSTEMS, METHODS, AND APPARATUS FOR ESTIMATING DIRECTION OF ARRIVAL" and U.S. Provisional Application No. 61/726,336, filed November 14, 2012, for "SYSTEMS, METHODS, AND APPARATUS FOR ESTIMATING DIRECTION OF ARRIVAL." TECHNICAL FIELD [0002] The present disclosure relates generally to electronic devices. More specifically, the present disclosure relates to systems and methods for displaying a user interface. BACKGROUND [0003] In the last several decades, the use of electronic devices has become common. In particular, advances in electronic technology have reduced the cost of increasingly complex and useful electronic devices. Cost reduction and consumer demand have proliferated the use of electronic devices such that they are practically ubiquitous in modern society. As the use of electronic devices has expanded, so has the demand for new and improved features of electronic devices. More specifically, electronic devices that perform functions faster, more efficiently or with higher quality are often sought after. [0004] Some electronic devices (e.g., cellular phones, smart phones, computers, etc.) use audio or speech signals. These electronic devices may code speech signals for storage or transmission. For example, a cellular phone captures a user's voice or speech using a microphone. 
The microphone converts an acoustic signal into an electronic signal. This electronic signal may then be formatted (e.g., coded) for transmission to another device (e.g., cellular phone, smart phone, computer, etc.), for playback or for storage. [0005] Noisy audio signals may pose particular challenges. For example, competing audio signals may reduce the quality of a desired audio signal. As can be observed from this discussion, systems and methods that improve audio signal quality in an electronic device may be beneficial. SUMMARY [0006] A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes displaying at least a target audio signal and an interfering audio signal on the user interface. The target audio signal may include a voice signal. The reference plane may be horizontal. The physical coordinates may be earth coordinates. [0007] The method may include displaying a directionality of at least one of the target audio signal and the interfering audio signal captured by at least one microphone. The method may include displaying at least one icon corresponding to at least one of the target audio signal and the interfering audio signal. The method may include passing the target audio signal. The method may include attenuating the interfering audio signal. The method may include aligning at least a part of the user interface with a reference plane. [0008] Aligning at least a part of the user interface may include mapping a two- dimensional polar plot into a three-dimensional display space. The coordinate system may maintain an orientation independent of electronic device orientation. [0009] The method may include recognizing an audio signature. The method may also include looking up the audio signature in a database. 
The method may additionally include obtaining identification information corresponding to the audio signature. The method may further include displaying the identification information on the user interface. The identification information may be an image of a person corresponding to the audio signature. [0010] An electronic device is also described. The electronic device includes a display. The display presents a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The display displays at least a target audio signal and an interfering audio signal on the user interface. [0011] A computer-program product for displaying a user interface is also described. The computer-program product includes a non-transitory tangible computer-readable medium with instructions. The instructions include code for causing an electronic device to present a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The instructions also include code for causing the electronic device to display at least a target audio signal and an interfering audio signal on the user interface. [0012] An apparatus for displaying a user interface is also described. The apparatus includes means for presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The apparatus also includes means for displaying at least a target audio signal and an interfering audio signal on the user interface. BRIEF DESCRIPTION OF THE DRAWINGS [0013] Figure 1 shows multiple views of a multi-microphone handset; [0014] Figure 2A shows a far-field model of plane wave propagation relative to a microphone pair; [0015] Figure 2B shows multiple microphone pairs in a linear array; [0016] Figure 3A shows plots of unwrapped phase delay vs.
frequency for four different directions of arrival (DOAs); [0017] Figure 3B shows plots of wrapped phase delay vs. frequency for the same four different directions of arrival as depicted in Figure 3A; [0018] Figure 4A shows an example of measured phase delay values and calculated values for two DOA candidates; [0019] Figure 4B shows a linear array of microphones arranged along the top margin of a television screen; [0020] Figure 5A shows an example of calculating DOA differences for a frame; [0021] Figure 5B shows an example of calculating a DOA estimate; [0022] Figure 5C shows an example of identifying a DOA estimate for each frequency; [0023] Figure 6A shows an example of using calculated likelihoods to identify a best microphone pair and best DOA candidate for a given frequency; [0024] Figure 6B shows an example of likelihood calculation; [0025] Figure 7 shows an example of bias removal; [0026] Figure 8 shows another example of bias removal; [0027] Figure 9 shows an example of an anglogram that plots source activity likelihood at the estimated DOA over frame and frequency; [0028] Figure 10A shows an example of a speakerphone application; [0029] Figure 10B shows a mapping of pair-wise DOA estimates to a 360° range in the plane of the microphone array; [0030] Figures 11A-B show an ambiguity in the DOA estimate; [0031] Figure 11C shows a relation between signs of observed DOAs and quadrants of an x-y plane; [0032] Figures 12A-12D show an example in which the source is located above the plane of the microphones; [0033] Figure 13A shows an example of microphone pairs along non-orthogonal axes; [0034] Figure 13B shows an example of use of the array of Figure 13A to obtain a DOA estimate with respect to the orthogonal x and y axes; [0035] Figure 13C illustrates a relation between arrival of parallel wavefronts at microphones of different arrays for examples of two different DOAs; [0036] Figures 14A-14B show examples of pair-wise normalized beamformer/null
beamformers (BFNFs) for a two-pair microphone array; [0037] Figure 15A shows a two-pair microphone array; [0038] Figure 15B shows an example of a pair-wise normalized minimum variance distortionless response (MVDR) BFNF; [0039] Figure 16A shows an example of a pair-wise BFNF for frequencies in which the matrix A^H A is not ill-conditioned; [0040] Figure 16B shows examples of steering vectors; [0041] Figure 17 shows a flowchart of one example of an integrated method of source direction estimation as described herein; [0042] Figures 18-31 show examples of practical results of DOA estimation, source discrimination, and source tracking as described herein; [0043] Figure 32A shows a telephone design, and Figures 32B-32D show use of such a design in various modes with corresponding visualization displays; [0044] Figure 33A shows a flowchart for a method M10 according to a general configuration; [0045] Figure 33B shows an implementation T12 of task T10; [0046] Figure 33C shows an implementation T14 of task T10; [0047] Figure 33D shows a flowchart for an implementation M20 of method M10; [0048] Figure 34A shows a flowchart for an implementation M25 of method M20; [0049] Figure 34B shows a flowchart for an implementation M30 of method M10; [0050] Figure 34C shows a flowchart for an implementation M100 of method M30; [0051] Figure 35A shows a flowchart for an implementation M110 of method M100; [0052] Figure 35B shows a block diagram of an apparatus A5 according to a general configuration; [0053] Figure 35C shows a block diagram of an implementation A10 of apparatus A5; [0054] Figure 35D shows a block diagram of an implementation A15 of apparatus A10; [0055] Figure 36A shows a block diagram of an apparatus MF5 according to a general configuration; [0056] Figure 36B shows a block diagram of an implementation MF10 of apparatus MF5; [0057] Figure 36C shows a block diagram of an implementation MF15 of apparatus MF10; [0058] Figure 37A illustrates a use of a device to represent
a three-dimensional direction of arrival in a plane of the device; [0059] Figure 37B illustrates an intersection of the cones of confusion that represent respective responses of microphone arrays having non-orthogonal axes to a point source positioned outside the plane of the axes; [0060] Figure 37C illustrates a line of intersection of the cones of Figure 37B; [0061] Figure 38A shows a block diagram of an audio preprocessing stage; [0062] Figure 38B shows a block diagram of a three-channel implementation of an audio preprocessing stage; [0063] Figure 39A shows a block diagram of an implementation of an apparatus that includes means for indicating a direction of arrival; [0064] Figure 39B shows an example of an ambiguity that results from the one-dimensionality of a DOA estimate from a linear array; [0065] Figure 39C illustrates one example of a cone of confusion; [0066] Figure 40 shows an example of source confusion in a speakerphone application in which three sources are located in different respective directions relative to a device having a linear microphone array; [0067] Figure 41A shows a 2-D microphone array that includes two microphone pairs having orthogonal axes; [0068] Figure 41B shows a flowchart of a method according to a general configuration that includes tasks; [0069] Figure 41C shows an example of a DOA estimate shown on a display; [0070] Figure 42A shows one example of correspondences between the signs of 1-D estimates and corresponding quadrants of the plane defined by array axes; [0071] Figure 42B shows another example of correspondences between the signs of 1-D estimates and corresponding quadrants of the plane defined by array axes; [0072] Figure 42C shows a correspondence between the four values of the tuple (sign(θx), sign(θy)) and the quadrants of the plane; [0073] Figure 42D shows a 360-degree display according to an alternate mapping; [0074] Figure 43A shows an example that is similar to Figure 41A but depicts a more general case in
which the source is located above the x-y plane; [0075] Figure 43B shows another example of a 2-D microphone array whose axes define an x-y plane and a source that is located above the x-y plane; [0076] Figure 43C shows an example of such a general case in which a point source is elevated above the plane defined by the array axes; [0077] Figures 44A-44D show a derivation of a conversion of (θx, θy) into an angle in the array plane; [0078] Figure 44E illustrates one example of a projection p and an angle of elevation; [0079] Figure 45A shows a plot obtained by applying an alternate mapping; [0080] Figure 45B shows an example of intersecting cones of confusion associated with responses of linear microphone arrays having non-orthogonal axes x and r to a common point source; [0081] Figure 45C shows the lines of intersection of cones; [0082] Figure 46A shows an example of a microphone array; [0083] Figure 46B shows an example of obtaining a combined directional estimate in the x-y plane with respect to orthogonal axes x and y with observations (θx, θr) from an array as shown in Figure 46A; [0084] Figure 46C illustrates one example of a projection; [0085] Figure 46D illustrates one example of determining a value from the dimensions of a projection vector; [0086] Figure 46E illustrates another example of determining a value from the dimensions of a projection vector; [0087] Figure 47A shows a flowchart of a method according to another general configuration that includes instances of tasks; [0088] Figure 47B shows a flowchart of an implementation of a task that includes subtasks; [0089] Figure 47C illustrates one example of an apparatus with components for performing functions corresponding to Figure 47A; [0090] Figure 47D illustrates one example of an apparatus including means for performing functions corresponding to Figure 47A; [0091] Figure 48A shows a flowchart of one implementation of a method that includes a task; [0092] Figure 48B shows a flowchart for an
implementation of another method; [0093] Figure 49A shows a flowchart of another implementation of a method; [0094] Figure 49B illustrates one example of an indication of an estimated angle of elevation relative to a display plane; [0095] Figure 49C shows a flowchart of such an implementation of another method that includes a task; [0096] Figures 50A and 50B show examples of a display before and after a rotation; [0097] Figures 51A and 51B show other examples of a display before and after a rotation; [0098] Figure 52A shows an example in which a device coordinate system E is aligned with the world coordinate system; [0099] Figure 52B shows an example in which a device is rotated and the matrix F that corresponds to an orientation; [00100] Figure 52C shows a perspective mapping, onto a display plane of a device, of a projection of a DOA onto the world reference plane; [00101] Figure 53A shows an example of a mapped display of the DOA as projected onto the world reference plane; [00102] Figure 53B shows a flowchart of such another implementation of a method; [00103] Figure 53C illustrates examples of interfaces including a linear slider potentiometer, a rocker switch and a wheel or knob; [00104] Figure 54A illustrates one example of a user interface; [00105] Figure 54B illustrates another example of a user interface; [00106] Figure 54C illustrates another example of a user interface; [00107] Figures 55A and 55B show a further example in which an orientation sensor is used to track an orientation of a device; [00108] Figure 56 is a block diagram illustrating one configuration of an electronic device in which systems and methods for mapping a source location may be implemented; [00109] Figure 57 is a flow diagram illustrating one configuration of a method for mapping a source location; [00110] Figure 58 is a block diagram illustrating a more specific configuration of an electronic device in which systems and methods for mapping a source location may be implemented;
[00111] Figure 59 is a flow diagram illustrating a more specific configuration of a method for mapping a source location; [00112] Figure 60 is a flow diagram illustrating one configuration of a method for performing an operation based on the mapping; [00113] Figure 61 is a flow diagram illustrating another configuration of a method for performing an operation based on the mapping; [00114] Figure 62 is a block diagram illustrating one configuration of a user interface in which systems and methods for displaying a user interface on an electronic device may be implemented; [00115] Figure 63 is a flow diagram illustrating one configuration of a method for displaying a user interface on an electronic device; [00116] Figure 64 is a block diagram illustrating one configuration of a user interface in which systems and methods for displaying a user interface on an electronic device may be implemented; [00117] Figure 65 is a flow diagram illustrating a more specific configuration of a method for displaying a user interface on an electronic device; [00118] Figure 66 illustrates examples of the user interface for displaying a directionality of at least one audio signal; [00119] Figure 67 illustrates another example of the user interface for displaying a directionality of at least one audio signal; [00120] Figure 68 illustrates another example of the user interface for displaying a directionality of at least one audio signal; [00121] Figure 69 illustrates another example of the user interface for displaying a directionality of at least one audio signal; [00122] Figure 70 illustrates another example of the user interface for displaying a directionality of at least one audio signal; [00123] Figure 71 illustrates an example of a sector selection feature of the user interface; [00124] Figure 72 illustrates another example of the sector selection feature of the user interface; [00125] Figure 73 illustrates another example of the sector selection feature of the user interface; 
[00126] Figure 74 illustrates more examples of the sector selection feature of the user interface; [00127] Figure 75 illustrates more examples of the sector selection feature of the user interface; [00128] Figure 76 is a flow diagram illustrating one configuration of a method for editing a sector; [00129] Figure 77 illustrates examples of a sector editing feature of the user interface; [00130] Figure 78 illustrates more examples of the sector editing feature of the user interface; [00131] Figure 79 illustrates more examples of the sector editing feature of the user interface; [00132] Figure 80 illustrates more examples of the sector editing feature of the user interface; [00133] Figure 81 illustrates more examples of the sector editing feature of the user interface; [00134] Figure 82 illustrates an example of the user interface with a coordinate system oriented independent of electronic device orientation; [00135] Figure 83 illustrates another example of the user interface with the coordinate system oriented independent of electronic device orientation; [00136] Figure 84 illustrates another example of the user interface with the coordinate system oriented independent of electronic device orientation; [00137] Figure 85 illustrates another example of the user interface with the coordinate system oriented independent of electronic device orientation; [00138] Figure 86 illustrates more examples of the user interface with the coordinate system oriented independent of electronic device orientation; [00139] Figure 87 illustrates another example of the user interface with the coordinate system oriented independent of electronic device orientation; [00140] Figure 88 is a block diagram illustrating another configuration of the user interface in which systems and methods for displaying a user interface on an electronic device may be implemented; [00141] Figure 89 is a flow diagram illustrating another configuration of a method for displaying a user interface on an electronic 
device; [00142] Figure 90 illustrates an example of the user interface coupled to a database; [00143] Figure 91 is a flow diagram illustrating another configuration of a method for displaying a user interface on an electronic device; [00144] Figure 92 is a block diagram illustrating one configuration of a wireless communication device in which systems and methods for mapping a source location may be implemented; [00145] Figure 93 illustrates various components that may be utilized in an electronic device; and [00146] Figure 94 illustrates another example of a user interface. DETAILED DESCRIPTION [00147] The 3rd Generation Partnership Project (3GPP) is a collaboration between groups of telecommunications associations that aims to define a globally applicable 3rd generation (3G) mobile phone specification. 3GPP Long Term Evolution (LTE) is a 3GPP project aimed at improving the Universal Mobile Telecommunications System (UMTS) mobile phone standard. The 3GPP may define specifications for the next generation of mobile networks, mobile systems and mobile devices. [00148] It should be noted that, in some cases, the systems and methods disclosed herein may be described in terms of one or more specifications, such as the 3GPP Release-8 (Rel-8), 3GPP Release-9 (Rel-9), 3GPP Release-10 (Rel-10), LTE, LTE-Advanced (LTE-A), Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Enhanced Data Rates for GSM Evolution (EDGE), Time Division Long-Term Evolution (TD-LTE), Time Division Synchronous Code Division Multiple Access (TD-SCDMA), Frequency-Division Duplexing Long-Term Evolution (FDD-LTE), UMTS, GSM EDGE Radio Access Network (GERAN), Global Positioning System (GPS), etc. However, at least some of the concepts described herein may be applied to other wireless communication systems. For example, the term electronic device may be used to refer to a User Equipment (UE).
Furthermore, the term base station may be used to refer to at least one of the terms Node B, Evolved Node B (eNB), Home Evolved Node B (HeNB), etc. [00149] Unless expressly limited by its context, the term "signal" is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus, or other transmission medium. Unless expressly limited by its context, the term "generating" is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term "calculating" is used herein to indicate any of its ordinary meanings, such as computing, evaluating, estimating and/or selecting from a plurality of values. Unless expressly limited by its context, the term "obtaining" is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device), and/or retrieving (e.g., from an array of storage elements). Unless expressly limited by its context, the term "selecting" is used to indicate any of its ordinary meanings, such as identifying, indicating, applying, and/or using at least one, and fewer than all, of a set of two or more. Unless expressly limited by its context, the term "determining" is used to indicate any of its ordinary meanings, such as deciding, establishing, concluding, calculating, selecting and/or evaluating. Where the term "comprising" is used in the present description and claims, it does not exclude other elements or operations. The term "based on" (as in "A is based on B") is used to indicate any of its ordinary meanings, including the cases (i) "derived from" (e.g., "B is a precursor of A"), (ii) "based on at least" (e.g., "A is based on at least B") and, if appropriate in the particular context, (iii) "equal to" (e.g., "A is equal to B" or "A is the same as B"). 
Similarly, the term "in response to" is used to indicate any of its ordinary meanings, including "in response to at least." Unless otherwise indicated, the terms "at least one of A, B, and C" and "one or more of A, B, and C" indicate "A and/or B and/or C." [00150] References to a "location" of a microphone of a multi-microphone audio sensing device indicate the location of the center of an acoustically sensitive face of the microphone, unless otherwise indicated by the context. The term "channel" is used at times to indicate a signal path and at other times to indicate a signal carried by such a path, according to the particular context. Unless otherwise indicated, the term "series" is used to indicate a sequence of two or more items. The term "logarithm" is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of this disclosure. The term "frequency component" is used to indicate one among a set of frequencies or frequency bands of a signal, such as a sample (or "bin") of a frequency domain representation of the signal (e.g., as produced by a fast Fourier transform) or a subband of the signal (e.g., a Bark scale or mel scale subband). [00151] Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term "configuration" may be used in reference to a method, apparatus and/or system as indicated by its particular context. The terms "method," "process," "procedure," and "technique" are used generically and interchangeably unless otherwise indicated by the particular context. A "task" having multiple subtasks is also a method. 
The terms "apparatus" and "device" are also used generically and interchangeably unless otherwise indicated by the particular context. The terms "element" and "module" are typically used to indicate a portion of a greater configuration. Unless expressly limited by its context, the term "system" is used herein to indicate any of its ordinary meanings, including "a group of elements that interact to serve a common purpose." [00152] Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within the portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion. Unless initially introduced by a definite article, an ordinal term (e.g., "first," "second," "third," etc.) used to modify a claim element does not by itself indicate any priority or order of the claim element with respect to another, but rather merely distinguishes the claim element from another claim element having a same name (but for use of the ordinal term). Unless expressly limited by its context, each of the terms "plurality" and "set" is used herein to indicate an integer quantity that is greater than one. A. Systems, Methods and Apparatus for Estimating Direction of Arrival [00153] A method of processing a multichannel signal includes calculating, for each of a plurality of different frequency components of the multichannel signal, a difference between a phase of the frequency component in each of a first pair of channels of the multichannel signal, to obtain a plurality of phase differences. This method also includes estimating an error, for each of a plurality of candidate directions, between the candidate direction and a vector that is based on the plurality of phase differences. 
This method also includes selecting, from among the plurality of candidate directions, a candidate direction that corresponds to the minimum among the estimated errors. In this method, each of said first pair of channels is based on a signal produced by a corresponding one of a first pair of microphones, and at least one of the different frequency components has a wavelength that is less than twice the distance between the microphones of the first pair. [00154] It may be assumed that in the near-field and far-field regions of an emitted sound field, the wavefronts are spherical and planar, respectively. The near-field may be defined as that region of space that is less than one wavelength away from a sound receiver (e.g., a microphone array). Under this definition, the distance to the boundary of the region varies inversely with frequency. At frequencies of two hundred, seven hundred, and two thousand hertz, for example, the distance to a one-wavelength boundary is about 170, forty-nine, and seventeen centimeters, respectively. It may be useful instead to consider the near-field/far-field boundary to be at a particular distance from the microphone array (e.g., fifty centimeters from a microphone of the array or from the centroid of the array, or one meter or 1.5 meters from a microphone of the array or from the centroid of the array). [00155] Various configurations are now described with reference to the Figures, where like reference numbers may indicate functionally similar elements. The systems and methods as generally described and illustrated in the Figures herein could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of several configurations, as represented in the Figures, is not intended to limit scope, as claimed, but is merely representative of the systems and methods. 
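The one-wavelength near-field boundary figures quoted in paragraph [00154] above follow directly from dividing the speed of sound by frequency. This sketch, which assumes a speed of sound of roughly 343 m/s (a value not stated in the text), reproduces the quoted distances:

```python
# Distance to the one-wavelength near-field boundary for a given frequency.
# Assumes a speed of sound of roughly 343 m/s (dry air near 20 degrees C).

SPEED_OF_SOUND_M_S = 343.0

def near_field_boundary_cm(frequency_hz):
    """Return the one-wavelength boundary distance in centimeters."""
    wavelength_m = SPEED_OF_SOUND_M_S / frequency_hz
    return wavelength_m * 100.0

# The distances quoted above: about 170 cm at 200 Hz, 49 cm at 700 Hz,
# and 17 cm at 2000 Hz.
for f in (200.0, 700.0, 2000.0):
    print(f"{f:6.0f} Hz -> {near_field_boundary_cm(f):6.1f} cm")
```

Since the boundary distance varies inversely with frequency, low-frequency components of a nearby talker may be in the near field while high-frequency components are effectively far field.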
Features and/or elements depicted in a Figure may be combined with at least one feature and/or element depicted in at least one other Figure. [00156] Figure 1 shows an example of a multi-microphone handset H100 (e.g., a multi-microphone device) that includes a first microphone pair MV10-1, MV10-3 whose axis is in a left-right direction of a front face of the device, and a second microphone pair MV10-1, MV10-2 whose axis is in a front-back direction (i.e., orthogonal to the front face). Such an arrangement may be used to determine when a user is speaking at the front face of the device (e.g., in a browse-talk mode). The front-back pair may be used to resolve an ambiguity between front and back directions that the left-right pair typically cannot resolve on its own. In some implementations, the handset H100 may include one or more loudspeakers LS10, LS20L, LS20R, a touchscreen TS10, a lens L10 and/or one or more additional microphones ME10, MR10. [00157] In addition to a handset as shown in Figure 1, other examples of audio sensing devices that may be implemented to include a multi-microphone array and to perform a method as described herein include portable computing devices (e.g., laptop computers, notebook computers, netbook computers, ultra-portable computers, tablet computers, mobile Internet devices, smartbooks, smartphones, etc.), audio- or videoconferencing devices, and display screens (e.g., computer monitors, television sets). [00158] A device as shown in Figure 1 may be configured to determine the direction of arrival (DOA) of a source signal by measuring a difference (e.g., a phase difference) between the microphone channels for each frequency bin to obtain an indication of direction, and averaging the direction indications over all bins to determine whether the estimated direction is consistent over all bins.
The range of frequency bins that may be available for tracking is typically constrained by the spatial aliasing frequency for the microphone pair. This upper limit may be defined as the frequency at which the wavelength of the signal is twice the distance, d, between the microphones. Such an approach may not support accurate tracking of source DOA beyond one meter and typically may support only a low DOA resolution. Moreover, dependence on a front-back pair to resolve ambiguity may be a significant constraint on the microphone placement geometry, as placing the device on a surface may effectively occlude the front or back microphone. Such an approach also typically uses only one fixed pair for tracking. [00159] It may be desirable to provide a generic speakerphone application such that the multi-microphone device may be placed arbitrarily (e.g., on a table for a conference call, on a car seat, etc.) and track and/or enhance the voices of individual speakers. Such an approach may be capable of dealing with an arbitrary target speaker position with respect to an arbitrary orientation of available microphones. It may also be desirable for such an approach to provide instantaneous multi-speaker tracking/separating capability. Unfortunately, the current state of the art is a single-microphone approach. [00160] It may also be desirable to support source tracking in a far-field application, which may be used to provide solutions for tracking sources at large distances and unknown orientations with respect to the multi-microphone device. The multi-microphone device in such an application may include an array mounted on a television or set-top box, which may be used to support telephony. Examples include the array of a Kinect device (Microsoft Corp., Redmond, WA) and arrays from Skype (Microsoft Skype Division) and Samsung Electronics (Seoul, KR).
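The spatial aliasing limit described in paragraph [00158] above, the frequency at which the wavelength equals twice the inter-microphone distance d, is a one-line computation. This sketch assumes a speed of sound of roughly 343 m/s (not specified in the text) and shows the trade-off between narrow and wide pairs:

```python
# Spatial aliasing limit for a microphone pair: the frequency at which
# the signal wavelength equals twice the inter-microphone distance d.
# Assumes a speed of sound of roughly 343 m/s.

SPEED_OF_SOUND_M_S = 343.0

def spatial_aliasing_limit_hz(spacing_m):
    """Highest frequency with an unambiguous phase difference for spacing d."""
    return SPEED_OF_SOUND_M_S / (2.0 * spacing_m)

# A 2 cm pair is unambiguous up to about 8.6 kHz; a 10 cm pair only to
# about 1.7 kHz, but the wider pair gives better phase resolution at
# low frequencies.
for d in (0.02, 0.10):
    print(f"d = {d * 100:4.0f} cm -> limit = {spatial_aliasing_limit_hz(d):7.1f} Hz")
```

This is why an approach that can select among multiple pairs, rather than depending on one fixed pair, can use bins up to the Nyquist frequency and down to lower frequencies.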
In addition to the large source-to-device distance, such applications typically also suffer from a bad signal-to-interference-noise ratio (SINR) and room reverberation. [00161] It is a challenge to provide a method for estimating a three-dimensional direction of arrival (DOA) for each frame of an audio signal for concurrent multiple sound events that is sufficiently robust under background noise and reverberation. Robustness can be obtained by maximizing the number of reliable frequency bins. It may be desirable for such a method to be suitable for arbitrarily shaped microphone array geometry, such that specific constraints on microphone geometry may be avoided. A pair-wise 1-D approach as described herein can be appropriately incorporated into any geometry. [00162] The systems and methods disclosed herein may be implemented for such a generic speakerphone application or far-field application. Such an approach may be implemented to operate without a microphone placement constraint. Such an approach may also be implemented to track sources using available frequency bins up to Nyquist frequency and down to a lower frequency (e.g., by supporting use of a microphone pair having a larger inter-microphone distance). Rather than being limited to a single pair for tracking, such an approach may be implemented to select a best pair among all available pairs. Such an approach may be used to support source tracking even in a far-field scenario, up to a distance of three to five meters or more, and to provide a much higher DOA resolution. Other potential features include obtaining an exact 2-D representation of an active source. For best results, it may be desirable that each source is a sparse broadband audio source, and that each frequency bin is mostly dominated by no more than one source. [00163] Figure 33A shows a flowchart for a method M10 according to a general configuration that includes tasks T10, T20 and T30.
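The three tasks T10, T20 and T30 just named might be sketched as follows for a phase-difference-based variant. The far-field pair geometry, the candidate grid, and the squared-deviation error measure are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0  # assumed speed of sound

def select_doa_candidate(ch1, ch2, sample_rate_hz, spacing_m,
                         candidates_deg=tuple(range(0, 181, 5))):
    """T10: per-bin phase differences; T20: error per candidate; T30: argmin."""
    spectrum1 = np.fft.rfft(ch1)
    spectrum2 = np.fft.rfft(ch2)
    freqs_hz = np.fft.rfftfreq(len(ch1), d=1.0 / sample_rate_hz)
    # T10: observed inter-channel phase difference for each frequency bin.
    observed = np.angle(spectrum2) - np.angle(spectrum1)
    errors = []
    for theta in candidates_deg:
        # Expected far-field phase difference for this candidate direction.
        expected = (2.0 * np.pi * freqs_hz * spacing_m
                    * np.cos(np.radians(theta)) / SPEED_OF_SOUND_M_S)
        # T20: wrap the deviation to (-pi, pi] and accumulate squared error.
        deviation = np.angle(np.exp(1j * (observed - expected)))
        errors.append(np.sum(deviation ** 2))
    # T30: select the candidate direction with the minimum error.
    return candidates_deg[int(np.argmin(errors))]
```

A broadside source (equidistant from both microphones, so with zero inter-channel phase difference) should yield an estimate near 90 degrees under this sketch.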
Task T10 calculates a difference between a pair of channels of a multichannel signal (e.g., in which each channel is based on a signal produced by a corresponding microphone). For each among a plurality K of candidate directions, task T20 calculates a corresponding directional error that is based on the calculated difference. Based on the K directional errors, task T30 selects a candidate direction. [00164] Method M10 may be configured to process the multichannel signal as a series of segments. Typical segment lengths range from about five or ten milliseconds to about forty or fifty milliseconds, and the segments may be overlapping (e.g., with adjacent segments overlapping by 25% or 50%) or non-overlapping. In one particular example, the multichannel signal is divided into a series of non-overlapping segments or "frames," each having a length of ten milliseconds. In another particular example, each frame has a length of twenty milliseconds. A segment as processed by method M10 may also be a segment (i.e., a "subframe") of a larger segment as processed by a different operation, or vice versa. [00165] Examples of differences between the channels include a gain difference or ratio, a time difference of arrival, and a phase difference. For example, task T10 may be implemented to calculate the difference between the channels of a pair as a difference or ratio between corresponding gain values of the channels (e.g., a difference in magnitude or energy). Figure 33B shows such an implementation T12 of task T10. [00166] Task T12 may be implemented to calculate measures of the gain of a segment of the multichannel signal in the time domain (e.g., for each of a plurality of subbands of the signal) or in a frequency domain (e.g., for each of a plurality of frequency components of the signal in a transform domain, such as a fast Fourier transform (FFT), discrete cosine transform (DCT), or modified DCT (MDCT) domain). 
Examples of such gain measures include, without limitation, the following: total magnitude (e.g., sum of absolute values of sample values), average magnitude (e.g., per sample), root mean square (RMS) amplitude, median magnitude, peak magnitude, peak energy, total energy (e.g., sum of squares of sample values), and average energy (e.g., per sample). [00167] In order to obtain accurate results with a gain-difference technique, it may be desirable for the responses of the two microphone channels to be calibrated relative to each other. It may be desirable to apply a low-pass filter to the multichannel signal such that calculation of the gain measure is limited to an audio-frequency component of the multichannel signal. [00168] Task T12 may be implemented to calculate a difference between gains as a difference between corresponding gain measure values for each channel in a logarithmic domain (e.g., values in decibels) or, equivalently, as a ratio between the gain measure values in a linear domain. For a calibrated microphone pair, a gain difference of zero may be taken to indicate that the source is equidistant from each microphone (i.e., located in a broadside direction of the pair), a gain difference with a large positive value may be taken to indicate that the source is closer to one microphone (i.e., located in one endfire direction of the pair), and a gain difference with a large negative value may be taken to indicate that the source is closer to the other microphone (i.e., located in the other endfire direction of the pair). [00169] In another example, task T10 from Figure 33A may be implemented to perform a cross-correlation on the channels to determine the difference (e.g., calculating a time-difference-of-arrival based on a lag between channels of the multichannel signal). 
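A brief sketch of the log-domain gain difference that task T12 might compute. RMS amplitude is used as the gain measure here (any of the measures listed above could be substituted); the signal content and 16 kHz rate are illustrative assumptions, not values taken from this document.

```python
import numpy as np

def gain_difference_db(ch1, ch2, eps=1e-12):
    """Log-domain gain difference between two calibrated channels, using
    RMS amplitude as the gain measure. Values near 0 dB suggest a broadside
    source; large positive or negative values suggest an endfire direction."""
    rms1 = np.sqrt(np.mean(np.square(ch1)))
    rms2 = np.sqrt(np.mean(np.square(ch2)))
    return 20.0 * np.log10((rms1 + eps) / (rms2 + eps))

# Channel 1 at twice the amplitude of channel 2 gives about +6 dB:
t = np.arange(160) / 16000.0            # one 10 ms frame at 16 kHz (assumed)
sig = np.sin(2.0 * np.pi * 440.0 * t)
diff_db = gain_difference_db(2.0 * sig, sig)
```

Equivalently, the same comparison could be made as a ratio of linear-domain gain values.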
[00170] In a further example, task T10 is implemented to calculate the difference between the channels of a pair as a difference between the phase of each channel (e.g., at a particular frequency component of the signal). Figure 33C shows such an implementation T14 of task T10. As discussed below, such calculation may be performed for each among a plurality of frequency components. [00171] For a signal received by a pair of microphones directly from a point source in a particular direction of arrival (DOA) relative to the axis of the microphone pair, the phase delay differs for each frequency component and also depends on the spacing between the microphones. The observed value of the phase delay at a particular frequency component (or "bin") may be calculated as the inverse tangent (also called the arctangent) of the ratio of the imaginary term of the complex FFT coefficient to the real term of the complex FFT coefficient. [00172] As shown in Figure 2A, the phase delay value Δφ for a source S01 with respect to a microphone pair MC10, MC20 at a particular frequency f may be related to source DOA under a far-field (i.e., plane-wave) assumption as Δφ = 2πf(d sin θ)/c, where d denotes the distance between the microphones MC10, MC20 (in meters), θ denotes the angle of arrival (in radians) relative to a direction that is orthogonal to the array axis, f denotes frequency (in Hz), and c denotes the speed of sound (in m/s). As will be described below, the DOA estimation principles described herein may be extended to multiple microphone pairs in a linear array (e.g., as shown in Figure 2B). For the ideal case of a single point source with no reverberation, the ratio of phase delay to frequency, Δφ/f, will have the same value 2π(d sin θ)/c over all frequencies. As discussed in more detail below, the DOA, θ, relative to a microphone pair is a one-dimensional measurement that defines the surface of a cone in space (e.g., such that the axis of the cone is the axis of the array).
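The far-field relation above can be sketched directly; the 4 cm spacing, the 343 m/s speed of sound, and the test frequencies are illustrative assumptions.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in m/s (assumed value)

def far_field_phase_delay(f_hz, d_m, theta_rad, c=C_SOUND):
    """Phase delay between the two microphones of a pair for a far-field
    source: delta_phi = 2*pi*f*(d*sin(theta))/c, with theta measured from
    the broadside (orthogonal-to-axis) direction."""
    return 2.0 * np.pi * f_hz * d_m * np.sin(theta_rad) / c

def observed_phase(fft_coeff):
    """Observed phase of a complex FFT coefficient: the arctangent of the
    ratio of its imaginary term to its real term."""
    return np.arctan2(fft_coeff.imag, fft_coeff.real)

# For an ideal single point source, delta_phi / f is the same at every frequency:
d = 0.04                  # 4 cm microphone spacing (illustrative)
theta = np.deg2rad(30.0)
ratios = [far_field_phase_delay(f, d, theta) / f for f in (500.0, 1000.0, 2000.0)]
```

The constant ratio across frequencies is exactly the property that spatial aliasing (phase wrapping) breaks above the aliasing frequency, as discussed next.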
[00173] Such an approach is typically limited in practice by the spatial aliasing frequency for the microphone pair, which may be defined as the frequency at which the wavelength of the signal is twice the distance d between the microphones. Spatial aliasing causes phase wrapping, which puts an upper limit on the range of frequencies that may be used to provide reliable phase delay measurements for a particular microphone pair. [00174] Figure 3A shows plots of unwrapped phase delay vs. frequency for four different DOAs D10, D20, D30, D40. Figure 3B shows plots of wrapped phase delay vs. frequency for the same DOAs D10, D20, D30, D40, where the initial portion of each plot (i.e., until the first wrapping occurs) is shown in bold. Attempts to extend the useful frequency range of phase delay measurement by unwrapping the measured phase are typically unreliable. [00175] Task T20 may be implemented to calculate the directional error in terms of phase difference. For example, task T20 may be implemented to calculate the directional error at frequency f, for each of an inventory of K DOA candidates, where 1 ≤ k ≤ K, as a squared difference (or an absolute difference) between the observed phase difference and the phase difference corresponding to the DOA candidate. [00176] Instead of phase unwrapping, a proposed approach compares the phase delay as measured (e.g., wrapped) with pre-calculated values of wrapped phase delay for each of an inventory of DOA candidates. Figure 4A shows such an example that includes angle vs. frequency plots of the (noisy) measured phase delay values MPD10 and the phase delay values PD10, PD20 for two DOA candidates of the inventory (solid and dashed lines), where phase is wrapped to the range of pi to minus pi. The DOA candidate that is best matched to the signal as observed may then be determined by calculating a corresponding directional error for each DOA candidate, and identifying the DOA candidate value that corresponds to the minimum among these directional errors. Such a directional error may be calculated, for example, as an error, e_ph_k, between the phase delay values Δφ_k,f for the k-th DOA candidate and the observed phase delay values Δφ_ob,f. In one example, the error e_ph_k is expressed as the sum e_ph_k = Σ_{f∈F} (Δφ_ob,f − Δφ_k,f)² of the squared differences between the observed and candidate phase delay values over a desired range or other set F of frequency components. The phase delay values Δφ_k,f for each DOA candidate θ_k may be calculated before run-time (e.g., during design or manufacture), according to known values of c and d and the desired range of frequency components f, and retrieved from storage during use of the device. Such a pre-calculated inventory may be configured to support a desired angular range and resolution (e.g., a uniform resolution, such as one, two, five, six, ten, or twelve degrees; or a desired nonuniform resolution) and a desired frequency range and resolution (which may also be uniform or non-uniform). [00177] It may be desirable to calculate the directional error (e.g., e_ph_f_k, e_ph_k) across as many frequency bins as possible to increase robustness against noise. For example, it may be desirable for the error calculation to include terms from frequency bins that are beyond the spatial aliasing frequency. In a practical application, the maximum frequency bin may be limited by other factors, which may include available memory, computational complexity, strong reflection by a rigid body (e.g., an object in the environment, a housing of the device) at high frequencies, etc. [00178] A speech signal is typically sparse in the time-frequency domain. If the sources are disjoint in the frequency domain, then two sources can be tracked at the same time. If the sources are disjoint in the time domain, then two sources can be tracked at the same frequency.
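A minimal sketch of the pre-calculated candidate inventory and the per-candidate error sum described above. The microphone spacing, frequency grid, and 5-degree candidate resolution are illustrative assumptions, and the wrapped-difference comparison stands in for the squared-difference sum over the set F.

```python
import numpy as np

C = 343.0  # speed of sound in m/s (assumed)

def candidate_phase_delays(freqs_hz, d_m, candidate_doas_rad, c=C):
    """Pre-computed wrapped phase delays, shape (K, F); these could be
    calculated before run-time and retrieved from storage."""
    raw = 2.0 * np.pi * np.outer(np.sin(candidate_doas_rad), freqs_hz) * d_m / c
    return np.angle(np.exp(1j * raw))  # wrap into (-pi, pi]

def directional_errors(observed_wrapped, candidates_wrapped):
    """e_ph_k: sum over F of squared (wrapped) phase differences, shape (K,)."""
    diff = np.angle(np.exp(1j * (observed_wrapped[None, :] - candidates_wrapped)))
    return np.sum(diff ** 2, axis=1)

freqs = np.linspace(200.0, 3800.0, 50)          # analysis bins (assumed)
doas = np.deg2rad(np.arange(-90, 91, 5))        # 5-degree inventory
cand = candidate_phase_delays(freqs, 0.04, doas)

# Simulate an ideal observation from a source at 25 degrees:
obs = candidate_phase_delays(freqs, 0.04, np.array([np.deg2rad(25.0)]))[0]
errs = directional_errors(obs, cand)
best_doa_deg = np.rad2deg(doas[np.argmin(errs)])
```

Selecting the candidate with the minimum summed error recovers the simulated 25-degree direction.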
It may be desirable for the array to include a number of microphones that is at least equal to the number of different source directions to be distinguished at any one time. The microphones may be omnidirectional (e.g., as may be typical for a cellular telephone or a dedicated conferencing device) or directional (e.g., as may be typical for a device such as a set-top box). [00179] Such multichannel processing is generally applicable, for example, to source tracking for speakerphone applications. Such a technique may be used to calculate a DOA estimate for a frame of the received multichannel signal. Such an approach may calculate, at each frequency bin, the error for each candidate angle with respect to the observed angle, which is indicated by the phase delay. The target angle at that frequency bin is the candidate having the minimum error. In one example, the error is then summed across the frequency bins to obtain a measure of likelihood for the candidate. In another example, one or more of the most frequently occurring target DOA candidates across all frequency bins is identified as the DOA estimate (or estimates) for a given frame. [00180] Such a method may be applied to obtain instantaneous tracking results (e.g., with a delay of less than one frame). The delay is dependent on the FFT size and the degree of overlap. For example, for a 512-point FFT with a 50% overlap and a sampling frequency of 16 kilohertz (kHz), the resulting 256-sample delay corresponds to sixteen milliseconds. Such a method may be used to support differentiation of source directions typically up to a source-array distance of two to three meters, or even up to five meters. [00181] The error may also be considered as a variance (i.e., the degree to which the individual errors deviate from an expected value). Conversion of the time-domain received signal into the frequency domain (e.g., by applying an FFT) has the effect of averaging the spectrum in each bin. 
This averaging is even more obvious if a subband representation is used (e.g., mel scale or Bark scale). Additionally, it may be desirable to perform time-domain smoothing on the DOA estimates (e.g., by applying a recursive smoother, such as a first-order infinite-impulse-response filter). It may be desirable to reduce the computational complexity of the error calculation operation (e.g., by using a search strategy, such as a binary tree, and/or applying known information, such as DOA candidate selections from one or more previous frames). [00182] Even though the directional information may be measured in terms of phase delay, it is typically desired to obtain a result that indicates source DOA. Consequently, it may be desirable to implement task T20 to calculate the directional error at frequency f, for each of an inventory of K DOA candidates, in terms of DOA rather than in terms of phase delay. [00183] An expression of directional error in terms of DOA may be derived by expressing the wrapped phase delay at frequency f (e.g., the observed phase delay Δφ_ob,f) as a function of the DOA, θ, of the signal, such as Ψ_f_wr(θ) = mod(−2πf(d sin θ)/c + π, 2π) − π. We assume that this expression is equivalent to a corresponding expression for unwrapped phase delay as a function of DOA, such as Ψ_f_un(θ) = −2πf(d sin θ)/c, except near discontinuities that are due to phase wrapping. The directional error, e_ph_f_k, may then be expressed in terms of the observed DOA, θ_ob, and candidate DOA, θ_k, as e_ph_f_k = (Ψ_f_wr(θ_ob) − Ψ_f_wr(θ_k))² ≡ (Ψ_f_un(θ_ob) − Ψ_f_un(θ_k))², where the difference between the observed and candidate phase delay at frequency f is expressed in terms of the observed DOA at frequency f, θ_ob,f, and the candidate DOA, θ_k, as Ψ_f_un(θ_ob,f) − Ψ_f_un(θ_k) = (−2πf d/c)(sin θ_ob,f − sin θ_k).
A directional error, e_ph_k, across the set F may then be expressed in terms of the observed DOA, θ_ob, and candidate DOA, θ_k, as e_ph_k = Σ_{f∈F} (Ψ_f_un(θ_ob,f) − Ψ_f_un(θ_k))². [00184] We perform a Taylor series expansion on this result to obtain the following first-order approximation: (2πf d/c)(sin θ_ob,f − sin θ_k) ≈ (θ_ob,f − θ_k)(2πf d/c) cos θ_k, which is used to obtain an expression of the difference between the DOA θ_ob,f observed at frequency f and the DOA candidate θ_k: (θ_ob,f − θ_k) ≈ (Ψ_f_wr(θ_ob,f) − Ψ_f_wr(θ_k)) · c / (−2πf d cos θ_k). This expression may be used (e.g., by task T20), with the assumed equivalence of observed wrapped phase delay to unwrapped phase delay, to express the directional error in terms of DOA (e_DOA_f_k, e_DOA_k) rather than phase delay (e_ph_f_k, e_ph_k), where the values of Ψ_f_wr(θ_ob,f) are given by the observed wrapped phase delay. [00185] To avoid division by zero at the endfire directions (θ = +/− 90°), it may be desirable to implement task T20 to perform such an expansion using a second-order approximation instead, as in the following: (θ_ob,f − θ_k) = −C/B when θ_k = 0 (broadside), and otherwise (θ_ob,f − θ_k) = (−B + √(B² − 4AC))/(2A), where A = (πf d sin θ_k)/c, B = (−2πf d cos θ_k)/c, and C = −(Ψ_f_un(θ_ob,f) − Ψ_f_un(θ_k)). As in the first-order example above, this expression may be used, with the assumed equivalence of observed wrapped phase delay to unwrapped phase delay, to express the directional error in terms of DOA as a function of the observed and candidate wrapped phase delay values. [00186] Figures 5A-5C depict a plurality of frames 502. As shown in Figure 5A, a directional error based on a difference between observed and candidate DOA for a given frame of the received signal may be calculated in such manner (e.g., by task T20) at each of a plurality of frequencies f of the received microphone signals (e.g., for all f ∈ F) and for each of a plurality of DOA candidates θ_k.
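The first- and second-order approximations can be sketched numerically as follows. The spacing, frequency, and angles are illustrative assumptions; in the second-order case the quadratic formula yields two roots, and this sketch keeps the smaller-magnitude one as the physically meaningful DOA difference.

```python
import numpy as np

C = 343.0  # speed of sound in m/s (assumed)

def wrapped_phase(f, d, theta, c=C):
    """Wrapped phase delay Psi_f_wr(theta) for a pair with spacing d."""
    return np.angle(np.exp(-1j * 2.0 * np.pi * f * d * np.sin(theta) / c))

def doa_diff_first_order(f, d, theta_ob, theta_k, c=C):
    """First-order estimate of (theta_ob - theta_k) from wrapped phases;
    breaks down near endfire, where cos(theta_k) approaches zero."""
    dphi = wrapped_phase(f, d, theta_ob, c) - wrapped_phase(f, d, theta_k, c)
    return dphi * c / (-2.0 * np.pi * f * d * np.cos(theta_k))

def doa_diff_second_order(f, d, theta_ob, theta_k, c=C):
    """Second-order estimate via the quadratic formula; uses -C/B at
    broadside, and otherwise keeps the smaller-magnitude root."""
    A = (np.pi * f * d * np.sin(theta_k)) / c
    B = (-2.0 * np.pi * f * d * np.cos(theta_k)) / c
    Cterm = -(wrapped_phase(f, d, theta_ob, c) - wrapped_phase(f, d, theta_k, c))
    if theta_k == 0.0:
        return -Cterm / B
    disc = np.sqrt(B * B - 4.0 * A * Cterm)
    r1 = (-B + disc) / (2.0 * A)
    r2 = (-B - disc) / (2.0 * A)
    return r1 if abs(r1) < abs(r2) else r2

# Observed source at 10 degrees, candidate at 12 degrees (true diff = -2 deg):
d1 = doa_diff_first_order(1000.0, 0.04, np.deg2rad(10.0), np.deg2rad(12.0))
d2 = doa_diff_second_order(1000.0, 0.04, np.deg2rad(10.0), np.deg2rad(12.0))
```

Squaring either estimate gives the per-bin directional error e_DOA_f_k.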
It may be desirable to implement task T20 to perform a temporal smoothing operation on each directional error e according to an expression such as e_s(n) = β e_s(n−1) + (1 − β) e(n) (also known as a first-order IIR or recursive filter), where e_s(n−1) denotes the smoothed directional error for the previous frame, e(n) denotes the current unsmoothed value of the directional error, e_s(n) denotes the current smoothed value of the directional error, and β is a smoothing factor whose value may be selected from the range from zero (no smoothing) to one (no updating). Typical values for smoothing factor β include 0.1, 0.2, 0.25, 0.3, 0.4 and 0.5. It is typical, but not necessary, for such an implementation of task T20 to use the same value of β to smooth directional errors that correspond to different frequency components. Similarly, it is typical, but not necessary, for such an implementation of task T20 to use the same value of β to smooth directional errors that correspond to different candidate directions. As demonstrated in Figure 5B, a DOA estimate for a given frame may be determined by summing the squared differences for each candidate across all frequency bins in the frame to obtain a directional error (e.g., e_ph_k or e_DOA_k) and selecting the DOA candidate having the minimum error. Alternatively, as demonstrated in Figure 5C, such differences may be used to identify the best-matched (i.e., minimum squared difference) DOA candidate at each frequency. A DOA estimate for the frame may then be determined as the most frequent DOA across all frequency bins. [00187] Based on the directional errors, task T30 selects a candidate direction for the frequency component. For example, task T30 may be implemented to select the candidate direction associated with the lowest among the K directional errors produced by task T20.
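The first-order recursive smoother described above might be sketched as follows; the initialization from the first observation and the test values are assumptions.

```python
def smooth_errors(errors, beta=0.25):
    """First-order IIR (recursive) smoothing of a per-frame error sequence:
    e_s(n) = beta * e_s(n-1) + (1 - beta) * e(n).
    beta = 0 gives no smoothing; beta = 1 gives no updating."""
    prev = errors[0]  # initialize from the first observation (an assumption)
    smoothed = []
    for e in errors:
        prev = beta * prev + (1.0 - beta) * e
        smoothed.append(prev)
    return smoothed

# A one-frame spike in the error sequence is damped by the smoother:
out = smooth_errors([1.0, 1.0, 5.0, 1.0], beta=0.5)
```

The same routine could be applied per frequency component and per candidate direction, with the same or different values of β.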
In another example, task T30 is implemented to calculate a likelihood based on each directional error and to select the candidate direction associated with the highest likelihood. [00188] As shown in Figure 6B, an error term 604 may be calculated for each candidate angle 606, i, and each of a set F of frequencies for each frame 608, k. It may be desirable to indicate a likelihood of source activity in terms of a calculated DOA difference or error term 604. One example of such a likelihood L may be expressed, for a particular frame k, frequency f, and angle i, as L(i, f, k) = 1 / |θ_ob,f,k − θ_i|². [00189] For this expression, an extremely good match at a particular frequency may cause a corresponding likelihood to dominate all others. To reduce this susceptibility, it may be desirable to include a regularization term λ, as in the following expression: L(i, f, k) = 1 / (|θ_ob,f,k − θ_i|² + λ). [00190] Speech tends to be sparse in both time and frequency, such that a sum over a set of frequencies F may include results from bins that are dominated by noise. It may be desirable to include a bias term β, as in the following expression: L(i, f, k) = 1 / (|θ_ob,f,k − θ_i|² + λ) − β. The bias term, which may vary over frequency and/or time, may be based on an assumed distribution of the noise (e.g., Gaussian). Additionally or alternatively, the bias term may be based on an initial estimate of the noise (e.g., from a noise-only initial frame). Additionally or alternatively, the bias term may be updated dynamically based on information from noise-only frames, as indicated, for example, by a voice activity detection module. Figures 7 and 8 show examples of plots of likelihood before and after bias removal, respectively. In Figure 7, the frame number 710, an angle of arrival 712 and an amplitude 714 of a signal are illustrated. Similarly, in Figure 8, the frame number 810, an angle of arrival 812 and an amplitude 814 of a signal are illustrated.
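The regularized likelihood and the per-frame selection it supports might be sketched as follows; the λ value, the error matrix, and its shape conventions are illustrative assumptions.

```python
import numpy as np

def likelihood(doa_err_sq, lam=0.01, bias=0.0):
    """L = 1/(err + lam) - bias. The regularization term lam keeps one
    near-perfect bin from dominating all others; the bias term discounts
    noise-dominated bins. Both values here are illustrative assumptions."""
    return 1.0 / (doa_err_sq + lam) - bias

def doa_index_per_frame(err_sq):
    """err_sq has shape (num_candidates, num_freqs) for one frame. Summing
    per-bin likelihoods over the frequency set F and taking the argmax
    gives the per-frame DOA candidate index."""
    return int(np.argmax(np.sum(likelihood(err_sq), axis=1)))

# Candidate 1 matches best in both bins, so it is selected for the frame:
errs = np.array([[0.500, 0.400],
                 [0.001, 0.002],
                 [0.300, 0.900]])
best_idx = doa_index_per_frame(errs)
```

Bins where the error is large contribute likelihood values near zero, so they have little influence on the frame estimate, which is the robustness property described in the next paragraph.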
[00191] The frequency-specific likelihood results may be projected onto a (frame, angle) plane (e.g., as shown in Figure 8) to obtain a DOA estimation per frame, θ_est,k = arg max_i Σ_{f∈F} L(i, f, k), that is robust to noise and reverberation because only target-dominant frequency bins contribute to the estimate. In this summation, terms in which the error is large may have values that approach zero and thus become less significant to the estimate. If a directional source is dominant in some frequency bins, the error value at those frequency bins may be nearer to zero for that angle. Also, if another directional source is dominant in other frequency bins, the error value at the other frequency bins may be nearer to zero for the other angle. [00192] The likelihood results may also be projected onto a (frame, frequency) plane as shown in the bottom panel 918 of Figure 9 to indicate likelihood information per frequency bin, based on directional membership (e.g., for voice activity detection). The bottom panel 918 shows, for each frequency and frame, the corresponding likelihood for the estimated DOA (e.g., L(θ_est,k, f, k)). This likelihood may be used to indicate likelihood of speech activity. Additionally or alternatively, such information may be used, for example, to support time- and/or frequency-selective masking of the received signal by classifying frames and/or frequency components according to their directions of arrival. [00193] An anglogram representation, as shown in the bottom panel 918 of Figure 9, is similar to a spectrogram representation. As shown in the top panel 916 of Figure 9, a spectrogram may be obtained by plotting, at each frame, the magnitude of each frequency component. An anglogram may be obtained by plotting, at each frame, a likelihood of the current DOA candidate at each frequency. [00194] Figure 33D shows a flowchart for an implementation M20 of method M10 that includes tasks T100, T200 and T300.
Such a method may be used, for example, to select a candidate direction of arrival of a source signal, based on information from a pair of channels of a multichannel signal, for each of a plurality F of frequency components of the multichannel signal. For each among the plurality F of frequency components, task T100 calculates a difference between the pair of channels. Task T100 may be implemented, for example, to perform a corresponding instance of task T10 (e.g., task T12 or T14) for each among the plurality F of frequency components. [00195] For each among the plurality F of frequency components, task T200 calculates a plurality of directional errors. Task T200 may be implemented to calculate K directional errors for each frequency component. For example, task T200 may be implemented to perform a corresponding instance of task T20 for each among the plurality F of frequency components. Alternatively, task T200 may be implemented to calculate K directional errors for each among one or more of the frequency components, and to calculate a different number (e.g., more or less than K) directional errors for each among a different one or more among the frequency components. [00196] For each among the plurality F of frequency components, task T300 selects a candidate direction. Task T300 may be implemented to perform a corresponding instance of task T30 for each among the plurality F of frequency components. [00197] The energy spectrum of voiced speech (e.g., vowel sounds) tends to have local peaks at harmonics of the pitch frequency. The energy spectrum of background noise, on the other hand, tends to be relatively unstructured. Consequently, components of the input channels at harmonics of the pitch frequency may be expected to have a higher signal-to-noise ratio (SNR) than other components. It may be desirable to configure method M20 to consider only frequency components that correspond to multiples of an estimated pitch frequency. 
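Restricting the calculation to pitch harmonics as suggested above might be sketched as follows. The helper name, the 4 kHz ceiling, and the nearest-bin rounding are assumptions for illustration, not an API from this document.

```python
def harmonic_bins(pitch_hz, fs_hz, n_fft, max_hz=4000.0):
    """Indices of the FFT bins nearest to multiples of an estimated pitch
    frequency. FFT bin spacing is fs/n_fft; bins above max_hz (an assumed
    ceiling) are excluded."""
    bin_width = fs_hz / n_fft
    bins = []
    harmonic = pitch_hz
    while harmonic <= max_hz:
        bins.append(int(round(harmonic / bin_width)))
        harmonic += pitch_hz
    return bins

# For a 100 Hz pitch at 16 kHz with a 512-point FFT (31.25 Hz bin spacing):
sel = harmonic_bins(100.0, 16000.0, 512)
```

Phase differences (or directional errors) would then be calculated only at the returned bin indices, where the SNR is expected to be highest.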
[00198] Typical pitch frequencies range from about 70 to 100 Hz for a male speaker to about 150 to 200 Hz for a female speaker. The current pitch frequency may be estimated by calculating the pitch period as the distance between adjacent pitch peaks (e.g., in a primary microphone channel). A sample of an input channel may be identified as a pitch peak based on a measure of its energy (e.g., based on a ratio between sample energy and frame average energy) and/or a measure of how well a neighborhood of the sample is correlated with a similar neighborhood of a known pitch peak. A pitch estimation procedure is described, for example, in section 4.6.3 (pp. 4-44 to 4-49) of EVRC (Enhanced Variable Rate Codec) document C.S0014-C, available online at www.3gpp.org. A current estimate of the pitch frequency (e.g., in the form of an estimate of the pitch period or "pitch lag") will typically already be available in applications that include speech encoding and/or decoding (e.g., voice communications using codecs that include pitch estimation, such as code-excited linear prediction (CELP) and prototype waveform interpolation (PWI)). [00199] It may be desirable, for example, to configure task T100 such that at least twenty-five, fifty or seventy-five percent of the calculated channel differences (e.g., phase differences) correspond to multiples of an estimated pitch frequency. The same principle may be applied to other desired harmonic signals as well. In a related method, task T100 is implemented to calculate phase differences for each of the frequency components of at least a subband of the channel pair, and task T200 is implemented to calculate directional errors based on only those phase differences which correspond to multiples of an estimated pitch frequency. [00200] Figure 34A shows a flowchart for an implementation M25 of method M20 that includes task T400. 
Such a method may be used, for example, to indicate a direction of arrival of a source signal, based on information from a pair of channels of a multichannel signal. Based on the F candidate direction selections produced by task T300, task T400 indicates a direction of arrival. For example, task T400 may be implemented to indicate the most frequently selected among the F candidate directions as the direction of arrival. For a case in which the source signals are disjoint in frequency, task T400 may be implemented to indicate more than one direction of arrival (e.g., to indicate a direction for each among more than one source). Method M25 may be iterated over time to indicate one or more directions of arrival for each of a sequence of frames of the multichannel signal. [00201] A microphone pair having a large spacing is typically not suitable for high frequencies, because spatial aliasing begins at a low frequency for such a pair. A DOA estimation approach as described herein, however, allows the use of phase delay measurements beyond the frequency at which phase wrapping begins, and even up to the Nyquist frequency (i.e., half of the sampling rate). By relaxing the spatial aliasing constraint, such an approach enables the use of microphone pairs having larger inter-microphone spacing. As an array with a large inter-microphone distance typically provides better directivity at low frequencies than an array with a small inter-microphone distance, use of a larger array typically extends the range of useful phase delay measurements into lower frequencies as well. [00202] The DOA estimation principles described herein may be extended to multiple microphone pairs MC10a, MC10b, MC10c in a linear array (e.g., as shown in Figure 2B). One example of such an application for a far-field scenario is a linear array of microphones MC10a-e arranged along the margin of a television TV10 or other large-format video display screen (e.g., as shown in Figure 4B).
It may be desirable to configure such an array to have a non-uniform (e.g., logarithmic) spacing between microphones, as in the examples of Figures 2B and 4B. [00203] For a far-field source, the multiple microphone pairs of a linear array will have essentially the same DOA. Accordingly, one option is to estimate the DOA as an average of the DOA estimates from two or more pairs in the array. However, an averaging scheme may be affected by mismatch of even a single one of the pairs, which may reduce DOA estimation accuracy. Alternatively, it may be desirable to select, from among two or more pairs of microphones of the array, the best microphone pair for each frequency (e.g., the pair that gives the minimum error at that frequency), such that different microphone pairs may be selected for different frequency bands. At the spatial aliasing frequency of a microphone pair, the error will be large. Consequently, such an approach will tend to automatically avoid a microphone pair when the frequency is close to its wrapping frequency, thus avoiding the related uncertainty in the DOA estimate. For higher-frequency bins, a pair having a shorter distance between the microphones will typically provide a better estimate and may be automatically favored, while for lower-frequency bins, a pair having a larger distance between the microphones will typically provide a better estimate and may be automatically favored. In the four-microphone example shown in Figure 2B, six different pairs of microphones are possible (i.e., "4 choose 2" = 6). [00204] In one example, the best pair for each axis is selected by calculating, for each frequency f, P×I values, where P is the number of pairs, I is the size of the inventory, and each value e_p,i is the squared absolute difference between the observed angle θ_p,f (for pair p and frequency f) and the candidate angle θ_i. For each frequency f, the pair p that corresponds to the lowest error value e_p,i is selected.
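The P×I search just described might be sketched as follows; the array shapes and error values are illustrative assumptions.

```python
import numpy as np

def select_best_pair(errors):
    """errors has shape (P, I): the squared difference between the observed
    angle for pair p and candidate angle i, at one frequency. The (pair,
    candidate) entry with the minimum error is chosen, so a pair operating
    near its spatial aliasing (wrapping) frequency, where its error is
    large, is avoided automatically."""
    p, i = np.unravel_index(np.argmin(errors), errors.shape)
    return int(p), int(i)

E = np.array([[0.90, 0.40, 0.70],   # pair 0: poor match at this frequency
              [0.50, 0.05, 0.60]])  # pair 1, candidate 1: best match
pair, cand = select_best_pair(E)
```

Repeating this per frequency yields both the selected pair and the best DOA candidate at each bin.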
This error value also indicates the best DOA candidate at frequency f (as shown in Figure 6A). [00205] Figure 34B shows a flowchart for an implementation M30 of method M10 that includes an implementation T150 of task T10 and an implementation T250 of task T20. Method M30 may be used, for example, to indicate a candidate direction for a frequency component of the multichannel signal (e.g., at a particular frame). [00206] For each among a plurality P of pairs of channels of the multichannel signal, task T250 calculates a plurality of directional errors. Task T250 may be implemented to calculate K directional errors for each channel pair. For example, task T250 may be implemented to perform a corresponding instance of task T20 for each among the plurality P of channel pairs. Alternatively, task T250 may be implemented to calculate K directional errors for each among one or more of the channel pairs, and to calculate a different number (e.g., more or fewer than K) of directional errors for each among a different one or more of the channel pairs. [00207] Method M30 also includes a task T35 that selects a candidate direction, based on the pluralities of directional errors. For example, task T35 may be implemented to select the candidate direction that corresponds to the lowest among the directional errors. [00208] Figure 34C shows a flowchart for an implementation M100 of method M30 that includes an implementation T170 of tasks T100 and T150, an implementation T270 of tasks T200 and T250, and an implementation T350 of task T35. Method M100 may be used, for example, to select a candidate direction for each among a plurality F of frequency components of the multichannel signal (e.g., at a particular frame).
[00209] For each among the plurality F of frequency components, task T170 calculates a plurality P of differences, where each among the plurality P of differences corresponds to a different pair of channels of the multichannel signal and is a difference between the channels (e.g., a gain-based or phase-based difference). For each among the plurality F of frequency components, task T270 calculates a plurality of directional errors for each among the plurality P of pairs. For example, task T270 may be implemented to calculate, for each of the frequency components, K directional errors for each of the P pairs, or a total of P×K directional errors for each frequency component. For each among the plurality F of frequency components, and based on the corresponding pluralities of directional errors, task T350 selects a corresponding candidate direction. [00210] Figure 35A shows a flowchart for an implementation M110 of method M100. Implementation M110 may include tasks T170, T270, T350 and T400, which may be examples of corresponding elements described in connection with at least one of Figures 34A and 34C. [00211] Figure 35B shows a block diagram of an apparatus A5 according to a general configuration that includes an error calculator 200 and a selector 300. Error calculator 200 is configured to calculate, for a calculated difference between a pair of channels of a multichannel signal and for each among a plurality K of candidate directions, a corresponding directional error that is based on the calculated difference (e.g., as described herein with reference to implementations of task T20). Selector 300 is configured to select a candidate direction, based on the corresponding directional error (e.g., as described herein with reference to implementations of task T30). [00212] Figure 35C shows a block diagram of an implementation A10 of apparatus A5 that includes a difference calculator 100.
Apparatus A10 may be implemented, for example, to perform an instance of method M10, M20, M30, and/or M100 as described herein. Calculator 100 is configured to calculate a difference (e.g., a gain-based or phase-based difference) between a pair of channels of a multichannel signal (e.g., as described herein with reference to implementations of task T10). Calculator 100 may be implemented, for example, to calculate such a difference for each among a plurality F of frequency components of the multichannel signal. In such case, calculator 100 may also be implemented to apply a subband filter bank to the signal and/or to calculate a frequency transform of each channel (e.g., a fast Fourier transform (FFT) or modified discrete cosine transform (MDCT)) before calculating the difference. [00213] Figure 35D shows a block diagram of an implementation A15 of apparatus A10 that includes an indicator 400. Indicator 400 is configured to indicate a direction of arrival, based on a plurality of candidate direction selections produced by selector 300 (e.g., as described herein with reference to implementations of task T400). Apparatus A15 may be implemented, for example, to perform an instance of method M25 and/or M110 as described herein. [00214] Figure 36A shows a block diagram of an apparatus MF5 according to a general configuration. Apparatus MF5 includes means F20 for calculating, for a calculated difference between a pair of channels of a multichannel signal and for each among a plurality K of candidate directions, a corresponding directional error or fitness measure that is based on the calculated difference (e.g., as described herein with reference to implementations of task T20). Apparatus MF5 also includes means F30 for selecting a candidate direction, based on the corresponding directional error (e.g., as described herein with reference to implementations of task T30).
[00215] Figure 36B shows a block diagram of an implementation MF10 of apparatus MF5 that includes means F10 for calculating a difference (e.g., a gain-based or phase-based difference) between a pair of channels of a multichannel signal (e.g., as described herein with reference to implementations of task T10). Means F10 may be implemented, for example, to calculate such a difference for each among a plurality F of frequency components of the multichannel signal. In such case, means F10 may also be implemented to include means for performing a subband analysis and/or calculating a frequency transform of each channel (e.g., a fast Fourier transform (FFT) or modified discrete cosine transform (MDCT)) before calculating the difference. Apparatus MF10 may be implemented, for example, to perform an instance of method M10, M20, M30, and/or M100 as described herein. [00216] Figure 36C shows a block diagram of an implementation MF15 of apparatus MF10 that includes means F40 for indicating a direction of arrival, based on a plurality of candidate direction selections produced by means F30 (e.g., as described herein with reference to implementations of task T400). Apparatus MF15 may be implemented, for example, to perform an instance of method M25 and/or M110 as described herein. [00217] The signals received by a microphone pair may be processed as described herein to provide an estimated DOA, over a range of up to 180 degrees, with respect to the axis of the microphone pair. The desired angular span and resolution may be arbitrary within that range (e.g., uniform (linear) or non-uniform (nonlinear), limited to selected sectors of interest, etc.). Additionally or alternatively, the desired frequency span and resolution may be arbitrary (e.g., linear, logarithmic, mel-scale, Bark-scale, etc.).
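One sketch of such flexible angular and frequency grids follows. The helper names are illustrative, and the mel mapping uses the common 2595·log10(1 + f/700) formula as one example of a nonuniform frequency scale.

```python
import math

def uniform_inventory(lo_deg=-90.0, hi_deg=90.0, step_deg=5.0):
    # uniform (linear) grid of candidate DOAs over a pair's 180-degree range
    n = int(round((hi_deg - lo_deg) / step_deg)) + 1
    return [lo_deg + i * step_deg for i in range(n)]

def hz_to_mel(f_hz):
    # common mel-scale mapping, one choice of nonuniform frequency resolution
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_grid(f_lo, f_hi, n_bands):
    # band-center frequencies spaced uniformly on the mel scale (nonlinear in Hz)
    m_lo, m_hi = hz_to_mel(f_lo), hz_to_mel(f_hi)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    return [mel_to_hz(m_lo + i * (m_hi - m_lo) / (n_bands - 1))
            for i in range(n_bands)]
```

A Bark-scale or logarithmic grid could be substituted by swapping the mapping function.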
[00218] In the model as shown in Figure 2B, each DOA estimate between 0 and +/- 90 degrees from a microphone pair indicates an angle relative to a plane that is orthogonal to the axis of the pair. Such an estimate describes a cone around the axis of the pair, and the actual direction of the source along the surface of this cone is indeterminate. For example, a DOA estimate from a single microphone pair does not indicate whether the source is in front of or behind (or above or below) the microphone pair. Therefore, while more than two microphones may be used in a linear array to improve DOA estimation performance across a range of frequencies, the range of DOA estimation supported by a linear array is typically limited to 180 degrees. [00219] The DOA estimation principles described herein may also be extended to a two-dimensional (2-D) array of microphones. For example, a 2-D array may be used to extend the range of source DOA estimation up to a full 360° (e.g., providing a similar range as in applications such as radar and biomedical scanning). Such an array may be used in a speakerphone application, for example, to support good performance even for arbitrary placement of the telephone relative to one or more sources. [00220] The multiple microphone pairs of a 2-D array typically will not share the same DOA, even for a far-field point source. For example, source height relative to the plane of the array (e.g., in the z-axis) may play an important role in 2-D tracking. Figure 10A shows an example of a speakerphone application in which the x-y plane as defined by the microphone axes is parallel to a surface (e.g., a tabletop) on which the telephone is placed. In this example, the source 1001 is a person speaking from a location that is along the x axis 1010 but is offset in the direction of the z axis 1014 (e.g., the speaker's mouth is above the tabletop). 
With respect to the x-y plane as defined by the microphone array, the direction of the source 1001 is along the x axis 1010, as shown in Figure 10A. The microphone pair along the y axis 1012 estimates a DOA of the source as zero degrees from the x-z plane. Due to the height of the speaker above the x-y plane, however, the microphone pair along the x axis estimates a DOA of the source as 30° from the x axis 1010 (i.e., 60 degrees from the y-z plane), rather than along the x axis 1010. Figures 11A and 11B show two views of the cone of confusion CY10 associated with this DOA estimate, which causes an ambiguity in the estimated speaker direction with respect to the microphone axis. Figure 37A shows another example of a point source 3720 (i.e., a speaker's mouth) that is elevated above a plane of the device H100 (e.g., a display plane and/or a plane defined by microphone array axes). [00221] An expression such as (tan⁻¹(sin θ₁/sin θ₂), tan⁻¹(sin θ₂/sin θ₁)), where θ₁ and θ₂ are the estimated DOA for pair 1 and 2, respectively, may be used to project all pairs of DOAs to a 360° range in the plane in which the three microphones are located. Such projection may be used to enable tracking directions of active speakers over a 360° range around the microphone array, regardless of height difference. Applying the expression above to project the DOA estimates (0°, 60°) of Figure 10A into the x-y plane produces (0°, 90°), which may be mapped to a combined directional estimate 1022 (e.g., an azimuth) of 270° as shown in Figure 10B. [00222] In a typical use case, the source will be located in a direction that is not projected onto a microphone axis. Figures 12A-12D show such an example in which the source S01 is located above the plane of the microphones MC10, MC20, MC30. In this example, the DOA of the source signal passes through the point (x, y, z) = (5, 2, 5). Figure 12A shows the x-y plane as viewed from the +z direction.
Figures 12B and 12D show the x-z plane as viewed from the direction of microphone MC30, and Figure 12C shows the y-z plane as viewed from the direction of microphone MC10. The shaded area in Figure 12A indicates the cone of confusion CY associated with the DOA θ₁ as observed by the y-axis microphone pair MC20-MC30, and the shaded area in Figure 12B indicates the cone of confusion CX associated with the DOA θ₂ as observed by the x-axis microphone pair MC10-MC20. In Figure 12C, the shaded area indicates cone CY, and the dashed circle indicates the intersection of cone CX with a plane that passes through the source and is orthogonal to the x axis. The two dots on this circle that indicate its intersection with cone CY are the candidate locations of the source. Likewise, in Figure 12D the shaded area indicates cone CX, the dashed circle indicates the intersection of cone CY with a plane that passes through the source and is orthogonal to the y axis, and the two dots on this circle that indicate its intersection with cone CX are the candidate locations of the source. It may be seen that in this 2-D case, an ambiguity remains with respect to whether the source is above or below the x-y plane. [00223] For the example shown in Figures 12A-12D, the DOA observed by the x-axis microphone pair MC10-MC20 is θ₂ = tan⁻¹(−5/√(25 + 4)) = −42.9°, and the DOA observed by the y-axis microphone pair MC20-MC30 is θ₁ = tan⁻¹(−2/√(25 + 25)) = −15.8°. Using the expression above to project these directions into the x-y plane produces the magnitudes (21.8°, 68.2°) of the desired angles relative to the x and y axes, respectively, which corresponds to the given source location (x, y, z) = (5, 2, 5). The signs of the observed angles indicate the x-y quadrant in which the source (e.g., as indicated by the microphones MC10, MC20 and MC30) is located, as shown in Figure 11C. [00224] In fact, a 2-D microphone array provides almost complete 3-D information, with the exception of the up-down confusion.
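The projection expression of [00221] and the worked example of [00223] can be checked numerically; the sketch below also includes the elevation-magnitude estimate cos⁻¹√(sin²θ₁ + sin²θ₂) discussed in [00224]. Function names are illustrative, and angles are in degrees.

```python
import math

def project_pair_doas(theta1_deg, theta2_deg):
    # magnitudes of the in-plane angles relative to the x and y axes:
    # (tan^-1(sin t1 / sin t2), tan^-1(sin t2 / sin t1))
    s1 = abs(math.sin(math.radians(theta1_deg)))
    s2 = abs(math.sin(math.radians(theta2_deg)))
    return (math.degrees(math.atan2(s1, s2)),
            math.degrees(math.atan2(s2, s1)))

def elevation_magnitude(theta1_deg, theta2_deg):
    # |elevation| = arccos sqrt(sin^2 t1 + sin^2 t2)
    s = (math.sin(math.radians(theta1_deg)) ** 2 +
         math.sin(math.radians(theta2_deg)) ** 2)
    return math.degrees(math.acos(math.sqrt(s)))
```

With the observations θ₁ = −15.8° and θ₂ = −42.9° of [00223], the projection yields approximately (21.8°, 68.2°), matching the source location (5, 2, 5); the estimated elevation magnitude is approximately 42.9°.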
For example, the directions of arrival observed by microphone pairs MC10-MC20 and MC20-MC30 may also be used to estimate the magnitude of the angle of elevation of the source relative to the x-y plane. If d denotes the vector from microphone MC20 to the source, then the lengths of the projections of vector d onto the x-axis, the y-axis, and the x-y plane may be expressed as d sin(θ₂), d sin(θ₁) and d√(sin²θ₁ + sin²θ₂), respectively. The magnitude of the angle of elevation may then be estimated as θ̂ₕ = cos⁻¹√(sin²θ₁ + sin²θ₂). [00225] Although the microphone pairs in the particular examples of Figures 10A-10B and 12A-12D have orthogonal axes, it is noted that for microphone pairs having non-orthogonal axes, the expression above may be used to project the DOA estimates to those non-orthogonal axes, and from that point it is straightforward to obtain a representation of the combined directional estimate with respect to orthogonal axes. Figure 37B shows an example of the intersecting cones of confusion C1, C2 associated with the responses of microphone arrays having non-orthogonal axes (as shown) to a common point source. Figure 37C shows one of the lines of intersection L1 of these cones C1, C2, which defines one of two possible directions of the point source with respect to the array axes in three dimensions. [00226] Figure 13A shows an example of microphone array MC10, MC20, MC30 in which the axis 1 of pair MC20, MC30 lies in the x-y plane and is skewed relative to the y axis by a skew angle θ₀. Figure 13B shows an example of obtaining a combined directional estimate in the x-y plane with respect to orthogonal axes x and y with observations (θ₁, θ₂) from an array of microphones MC10, MC20, MC30 as shown in Figure 13A. If d denotes the vector from microphone MC20 to the source, then the lengths of the projections of vector d onto the x-axis and axis 1 may be expressed as d sin(θ₂) and d sin(θ₁), respectively. The vector (x, y) denotes the projection of vector d onto the x-y plane.
The estimated value of x is known, and it remains to estimate the value of y. [00227] The estimation of y may be performed using the projection p₁ = (d sin θ₁ sin θ₀, d sin θ₁ cos θ₀) of vector (x, y) onto axis 1. Observing that the difference between vector (x, y) and vector p₁ is orthogonal to p₁, we calculate y as y = d(sin θ₁ − sin θ₂ sin θ₀)/cos θ₀. The desired angles of arrival in the x-y plane, relative to the orthogonal x and y axes, may then be expressed respectively as (tan⁻¹((sin θ₁ − sin θ₂ sin θ₀)/(sin θ₂ cos θ₀)), tan⁻¹((sin θ₂ cos θ₀)/(sin θ₁ − sin θ₂ sin θ₀))). [00228] Extension of DOA estimation to a 2-D array is typically well-suited to and sufficient for a speakerphone application. However, further extension to an N-dimensional array is also possible and may be performed in a straightforward manner. For tracking applications in which one target is dominant, it may be desirable to select N pairs for representing N dimensions. Once a 2-D result is obtained with a particular microphone pair, another available pair can be utilized to increase degrees of freedom. For example, Figures 12A-12D and 13A, 13B illustrate use of observed DOA estimates from different microphone pairs in the x-y plane to obtain an estimate of the source direction as projected into the x-y plane. In the same manner, observed DOA estimates from an x-axis microphone pair and a z-axis microphone pair (or other pairs in the x-z plane) may be used to obtain an estimate of the source direction as projected into the x-z plane, and likewise for the y-z plane or any other plane that intersects three or more of the microphones. [00229] Estimates of DOA error from different dimensions may be used to obtain a combined likelihood estimate, for example, using an expression such as 1/(max(|θ − θ₀,₁|², |θ − θ₀,₂|²) + λ) or 1/(mean(|θ − θ₀,₁|², |θ − θ₀,₂|²) + λ) at each frequency f, where λ denotes a regularization factor and θ₀,ᵢ denotes the DOA candidate selected for pair i.
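The y estimate of [00227] can be verified against a known in-plane source position. The helper below is illustrative (angles in degrees, axis 1 skewed from the y axis by the given skew angle); it reconstructs (x, y) from the two pair observations.

```python
import math

def recover_xy(theta1_deg, theta2_deg, skew_deg, d=1.0):
    """Recover the in-plane projection (x, y) of the source vector from the
    x-axis pair observation theta2 and the skewed-pair observation theta1,
    using y = d (sin t1 - sin t2 sin t0) / cos t0."""
    s0 = math.sin(math.radians(skew_deg))
    c0 = math.cos(math.radians(skew_deg))
    x = d * math.sin(math.radians(theta2_deg))
    y = d * (math.sin(math.radians(theta1_deg)) -
             math.sin(math.radians(theta2_deg)) * s0) / c0
    return x, y
```

Constructing the two observations from a known (x, y) and applying the formula recovers the original coordinates, which is a quick consistency check of the derivation.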
Use of the maximum among the different errors may be desirable to promote selection of an estimate that is close to the cones of confusion of both observations, in preference to an estimate that is close to only one of the cones of confusion and may thus indicate a false peak. Such a combined result may be used to obtain a (frame, angle) plane, as shown in Figure 8 and described herein, and/or a (frame, frequency) plot, as shown at the bottom of Figure 9 and described herein. [00230] The DOA estimation principles described herein may be used to support selection among multiple speakers. For example, location of multiple sources may be combined with a manual selection of a particular speaker (e.g., push a particular button to select a particular corresponding user) or automatic selection of a particular speaker (e.g., by speaker recognition). In one such application, a telephone is configured to recognize the voice of its owner and to automatically select a direction corresponding to that voice in preference to the directions of other sources. [00231] For a one-dimensional (1-D) array of microphones, a direction of arrival DOA10 for a source may be easily defined in a range of, for example, -90° to 90°. For example, it is easy to obtain a closed-form solution for the direction of arrival DOA10 across a range of angles (e.g., as shown in cases 1 and 2 of Figure 13C) in terms of phase differences among the signals produced by the various microphones of the array. [00232] For an array that includes more than two microphones at arbitrary relative locations (e.g., a non-coaxial array), it may be desirable to use a straightforward extension of one-dimensional principles as described above, e.g. (Θ1, Θ2) in a two-pair case in two dimensions, (Θ1, Θ2, Θ3) in a three-pair case in three dimensions, etc. A key problem is how to apply spatial filtering to such a combination of paired 1-D direction of arrival DOA10 estimates. 
For example, it may be difficult or impractical to obtain a closed-form solution for the direction of arrival DOA10 across a range of angles for a non-coaxial array (e.g., as shown in cases 3 and 4 of Figure 13C) in terms of phase differences among the signals produced by the various microphones of the array. [00233] Figure 14A shows an example of a straightforward one-dimensional (1-D) pairwise beamforming-nullforming (BFNF) BF10 configuration for spatially selective filtering that is based on robust 1-D DOA estimation. In this example, the notation dᵢ,ⱼⁿ denotes microphone pair number i, microphone number j within the pair, and source number n, such that each column of the matrix represents a steering vector for the respective source and microphone pair (the ellipse indicates the steering vector for source 1 and microphone pair 1), and λ denotes a regularization factor. The number of sources is not greater than the number of microphone pairs. Such a configuration avoids a need to use all of the microphones at once to define a DOA. [00234] We may apply a beamformer/null beamformer (BFNF) BF10 as shown in Figure 14A by augmenting the steering vector for each pair. In this figure, Aᴴ denotes the conjugate transpose of A, x denotes the microphone channels and y denotes the spatially filtered channels. Using a pseudo-inverse operation A⁺ = (AᴴA)⁻¹Aᴴ as shown in Figure 14A allows the use of a non-square matrix. For a three-microphone MC10, MC20, MC30 case (i.e., two microphone pairs) as illustrated in Figure 15A, for example, the number of rows is 2 × 2 = 4 instead of 3, such that the additional row makes the matrix non-square. [00235] As the approach shown in Figure 14A is based on robust 1-D DOA estimation, complete knowledge of the microphone geometry is not required, and DOA estimation using all microphones at the same time is also not required. Such an approach is well-suited for use with anglogram-based DOA estimation as described herein, although any other 1-D DOA estimation method can also be used.
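A minimal numeric sketch of the augmented-pair formulation follows, for two pairs and two sources. The steering phase assumes half-wavelength spacing for simplicity (an arbitrary modeling assumption, not the patent's figure), and the pseudo-inverse (AᴴA)⁻¹Aᴴ is written out in pure Python for a 4 × 2 matrix; the function names are illustrative.

```python
import cmath
import math

def steering(theta_deg, n_mics=2):
    # per-pair far-field steering vector; half-wavelength spacing assumed
    phase = math.pi * math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * phase * m) for m in range(n_mics)]

def build_A(doas):
    # doas[n] = (theta for pair 1, theta for pair 2) of source n;
    # rows stack pair 1's two mics then pair 2's two mics (4 x N matrix)
    cols = [steering(t1) + steering(t2) for (t1, t2) in doas]
    return [[cols[n][r] for n in range(len(cols))] for r in range(4)]

def pinv_apply(A, x):
    # y = (A^H A)^{-1} A^H x for a 4x2 complex matrix A (N = 2 sources)
    AH = [[A[r][c].conjugate() for r in range(4)] for c in range(2)]
    G = [[sum(AH[i][r] * A[r][j] for r in range(4)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    AHx = [sum(AH[i][r] * x[r] for r in range(4)) for i in range(2)]
    return [sum(Ginv[i][k] * AHx[k] for k in range(2)) for i in range(2)]
```

When the microphone channels are an exact mixture of two steering columns, the pseudo-inverse recovers the two source signals, illustrating how the non-square augmented matrix is inverted.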
Figure 14B shows an example of the BFNF BF10 as shown in Figure 14A which also includes a normalization N10 (i.e., by the denominator) to prevent an ill-conditioned inversion at the spatial aliasing frequency (i.e., the frequency whose wavelength is twice the distance between the microphones). [00236] Figure 15B shows an example of a pair-wise (PW) normalized MVDR (minimum variance distortionless response) BFNF BF10, in which the manner in which the steering vector (array manifold vector) is obtained differs from the conventional approach. In this case, a common channel is eliminated due to sharing of a microphone between the two pairs (e.g., the shared microphone MC20 in Figure 15A, which belongs to both pairs). The noise coherence matrix Γ may be obtained either by measurement or by theoretical calculation using a sinc function. It is noted that the examples of Figures 14A, 14B, and 15B may be generalized to an arbitrary number of sources N such that N ≤ M, where M is the number of microphones. [00237] Figure 16A shows another example of a BFNF BF10 that may be used if the matrix AᴴA is not ill-conditioned, which may be determined using a condition number or determinant of the matrix. In this example, the notation is as in Figure 14A, and the number of sources N is not greater than the number of microphone pairs M. If the matrix is ill-conditioned, it may be desirable to bypass one microphone signal for that frequency bin for use as the source channel, while continuing to apply the method to spatially filter other frequency bins in which the matrix AᴴA is not ill-conditioned. This option saves computation for calculating a denominator for normalization. The methods in Figures 14A-16A demonstrate BFNF BF10 techniques that may be applied independently at each frequency bin. The steering vectors are constructed using the DOA estimates for each frequency and microphone pair as described herein.
For example, each element of the steering vector for pair p and source n for DOA θᵢ, frequency f, and microphone number m (1 or 2) may be calculated as dₚ,ₘⁿ = exp(−jω(m − 1)lₚ sin(θᵢ)fₛ/c), where lₚ indicates the distance between the microphones of pair p, ω indicates the frequency bin number, fₛ indicates the sampling frequency, and c indicates the speed of sound. Figure 16B shows examples of steering vectors SV10a-b for an array as shown in Figure 15A. [00238] A PWBFNF scheme may be used for suppressing the direct path of interferers up to the available degrees of freedom (instantaneous suppression without a smooth-trajectory assumption, additional noise-suppression gain using directional masking, additional noise-suppression gain using bandwidth extension). Single-channel postprocessing of the quadrant framework may be used for stationary noise and noise-reference handling. [00239] It may be desirable to obtain instantaneous suppression but also to provide minimization of artifacts such as musical noise. It may be desirable to maximally use the available degrees of freedom for BFNF. One DOA may be fixed across all frequencies, or a slightly mismatched alignment across frequencies may be permitted. Only the current frame may be used, or a feed-forward network may be implemented. The BFNF may be set for all frequencies in the range up to the Nyquist rate (e.g., except ill-conditioned frequencies). A natural masking approach may be used (e.g., to obtain a smooth, natural, seamless transition of aggressiveness). Figure 31 shows an example of DOA tracking for a target and a moving interferer for a scenario as shown in Figures 21B and 22. In Figure 31 a fixed source S10 at D is indicated, and a moving source S20 is also indicated. [00240] Figure 17 shows a flowchart for one example of an integrated method 1700 as described herein.
This method includes an inventory matching task T10 for phase delay estimation, an error calculation task T20 to obtain DOA error values, a dimension-matching and/or pair-selection task T30, and a task T40 to map DOA error for the selected DOA candidate to a source activity likelihood estimate. The pair-wise DOA estimation results may also be used to track one or more active speakers, to perform a pair-wise spatial filtering operation, and/or to perform time- and/or frequency-selective masking. The activity likelihood estimation and/or spatial filtering operation may also be used to obtain a noise estimate to support a single-channel noise suppression operation. Figures 18 and 19 show an example of observations obtained using a 2-D microphone arrangement to track movement of a source (e.g., a human speaker) among directions A-B-C-D as shown in Figure 21A. As depicted in Figure 21A, three microphones MC10, MC20, MC30 may be used to record an audio signal. In this example, Figure 18 shows observations A-D by the y-axis pair MC20-MC30, where distance dx is 3.6 centimeters; Figure 19 shows observations A-D by the x-axis pair MC10-MC20, where distance dy is 7.3 centimeters; and the inventory of DOA estimates covers the range of -90 degrees to +90 degrees at a resolution of five degrees. [00241] It may be understood that when the source is in an endfire direction of a microphone pair, elevation of a source above or below the plane of the microphones limits the observed angle. Consequently, when the source is outside the plane of the microphones, it is typical that no real endfire is observed. It may be seen in Figures 18 and 19 that due to elevation of the source with respect to the microphone plane, the observed directions do not reach -90 degrees even as the source passes through the corresponding endfire direction (i.e., direction A for the x-axis pair MC10-MC20, and direction B for the y-axis pair MC20-MC30).
[00242] Figure 20 shows an example in which +/- 90-degree observations A-D from orthogonal axes, as shown in Figures 18 and 19 for a scenario as shown in Figure 21A, are combined to produce DOA estimates in the microphone plane over a range of zero to 360 degrees. In this example, a one-degree resolution is used. Figure 22 shows an example of combined observations A-D using a 2-D microphone arrangement, where distance dx is 3.6 centimeters and distance dy is 7.3 centimeters, to track movement, by microphones MC10, MC20, MC30, of a source (e.g., a human speaker) among directions A-B-C as shown in Figure 21B in the presence of another source (e.g., a stationary human speaker) at direction D. [00243] As described above, a DOA estimate may be calculated based on a sum of likelihoods. When combining observations from different microphone axes (e.g., as shown in Figure 20), it may be desirable to perform the combination for each individual frequency bin before calculating a sum of likelihoods, especially if more than one directional source may be present (e.g., two speakers, or a speaker and an interferer). Assuming that no more than one of the sources is dominant at each frequency bin, calculating a combined observation for each frequency component preserves the distinction between dominance of different sources at different corresponding frequencies. If a summation over frequency bins dominated by different sources is performed on the observations before they are combined, then this distinction may be lost, and the combined observations may indicate spurious peaks at directions which do not correspond to the location of any actual source. For example, summing observations from orthogonal microphone pairs of a first source at 45 degrees and a second source at 225 degrees, and then combining the summed observations, may produce spurious peaks at 135 and 315 degrees in addition to the desired peaks at 45 and 225 degrees.
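A toy numeric check of this point is sketched below. The per-pair error model is hypothetical: each pair is assumed ambiguous between the two azimuth images on its cone of confusion, and the combined likelihood uses a 1/(max-error + λ) form as in the expression above. Combining per bin keeps only the true peaks, while summing each pair's errors over bins first produces the spurious 135° and 315° peaks.

```python
def ang_diff(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

# each pair's cone of confusion maps an in-plane azimuth to two candidate images
X_IMAGES = lambda obs: (obs % 360.0, (360.0 - obs) % 360.0)  # x pair: phi, -phi
Y_IMAGES = lambda obs: (obs % 360.0, (180.0 - obs) % 360.0)  # y pair: phi, 180-phi

def pair_error(theta, obs, images):
    return min(ang_diff(theta, img) ** 2 for img in images(obs))

def peaks(values, grid):
    vmax = max(values)
    return {t for t, v in zip(grid, values) if v > 0.999 * vmax}

LAM = 1e-6                      # regularization term
GRID = list(range(0, 360, 45))  # coarse candidate azimuths
BINS = [45.0, 225.0]            # bin 1 dominated by 45 deg, bin 2 by 225 deg

# combine the two pairs per frequency bin, then sum likelihoods over bins
per_bin = [sum(1.0 / (max(pair_error(t, b, X_IMAGES),
                          pair_error(t, b, Y_IMAGES)) + LAM)
               for b in BINS) for t in GRID]

# sum each pair's errors over bins first, then combine: spurious peaks appear
sum_first = [1.0 / (max(sum(pair_error(t, b, X_IMAGES) for b in BINS),
                        sum(pair_error(t, b, Y_IMAGES) for b in BINS)) + LAM)
             for t in GRID]
```

Under this model the per-bin combination peaks only at 45° and 225°, while the sum-first variant peaks equally at 45°, 135°, 225°, and 315°.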
[00244] Figures 23 and 24 show an example of combined observations for a conference call scenario, as shown in Figure 25, in which the phone is stationary on a table top. In Figure 25 a device may include three microphones MC10, MC20, MC30. In Figure 23, the frame number 2310, an angle of arrival 2312 and an amplitude 2314 of a signal are illustrated. At about frame 5500, speaker 1 stands up, and movement of speaker 1 is evident to about frame 9000. Movement of speaker 3 near frame 9500 is also visible. The rectangle in Figure 24 indicates a target sector selection TSS10, such that frequency components arriving from directions outside this sector may be rejected or otherwise attenuated, or otherwise processed differently from frequency components arriving from directions within the selected sector. In this example, the target sector is the quadrant of 180-270 degrees and is selected by the user from among the four quadrants of the microphone plane. This example also includes acoustic interference from an air conditioning system. [00245] Figures 26 and 27 show an example of combined observations for a dynamic scenario, as shown in Figure 28A. In Figure 28A a device may be positioned between a first speaker S10, a second speaker S20 and a third speaker S30. In Figure 26, the frame number 2610, an angle of arrival 2612 and an amplitude 2614 of a signal are illustrated. In this scenario, speaker 1 picks up the phone at about frame 800 and replaces it on the table top at about frame 2200. Although the angle span is broader when the phone is in this browse-talk position, it may be seen that the spatial response is still centered in a designated DOA. Movement of speaker 2 after about frame 400 is also evident. As in Figure 24, the rectangle in Figure 27 indicates user selection of the quadrant of 180-270 degrees as the target sector TSS10. Figures 29 and 30 show an example of combined observations for a dynamic scenario with road noise, as shown in Figure 28B.
In Figure 28B a phone may receive an audio signal from a speaker S10. In Figure 29, the frame number 2910, an angle of arrival 2912 and an amplitude 2914 of a signal are illustrated. In this scenario, the speaker picks up the phone between about frames 200 and 1000 and again between about frames 1400 and 2100. In this example, the rectangle in Figure 30 indicates user selection of the quadrant of 270-360 degrees as an interference sector IS10. [00246] An anglogram-based technique as described herein may be used to support voice activity detection (VAD), which may be applied for noise suppression in various use cases (e.g., a speakerphone). Such a technique, which may be implemented as a sector-based approach, may include a "vadall" statistic based on a maximum likelihood (likelihood_max) of all sectors. For example, if the maximum is significantly larger than a noise-only threshold, then the value of the vadall statistic is one (otherwise zero). It may be desirable to update the noise-only threshold only during a noise-only period. Such a period may be indicated, for example, by a single-channel VAD (e.g., from a primary microphone channel) and/or a VAD based on detection of speech onsets and/or offsets (e.g., based on a time-derivative of energy for each of a set of frequency components). [00247] Additionally or alternatively, such a technique may include a per-sector "vad[sector]" statistic based on a maximum likelihood of each sector. Such a statistic may be implemented to have a value of one only when the single-channel VAD and the onset-offset VAD are one, vadall is one, and the maximum for the sector is greater than some portion (e.g., 95%) of likelihood_max. This information can be used to select a sector with maximum likelihood. Applicable scenarios include a user-selected target sector with a moving interferer, and a user-selected interference sector with a moving target.
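The vadall and vad[sector] statistics can be sketched as follows. The function names, argument layout, and the 95% portion default are illustrative; the gating logic follows the description above.

```python
def vadall(sector_likelihoods, noise_threshold):
    # 1 if the maximum sector likelihood significantly exceeds the
    # noise-only threshold, else 0
    return 1 if max(sector_likelihoods) > noise_threshold else 0

def vad_sector(sector_likelihoods, noise_threshold,
               single_channel_vad, onset_offset_vad, portion=0.95):
    # per-sector statistic: 1 only when the single-channel VAD, the
    # onset-offset VAD, and vadall all fire, and the sector's likelihood
    # is within `portion` of the overall maximum
    lmax = max(sector_likelihoods)
    gate = (single_channel_vad and onset_offset_vad and
            vadall(sector_likelihoods, noise_threshold))
    return [1 if gate and lk >= portion * lmax else 0
            for lk in sector_likelihoods]
```

A sector whose flag is one can then be selected as the maximum-likelihood sector, e.g. to designate a target or interference sector.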
[00248] It may be desirable to select a tradeoff between instantaneous tracking (PWBFNF performance) and prevention of too-frequent switching of the interference sector. For example, it may be desirable to combine the vadall statistic with one or more other VAD statistics. The vad[sector] may be used to specify the interference sector and/or to trigger updating of a non-stationary noise reference. It may also be desirable to normalize the vadall statistic and/or a vad[sector] statistic using, for example, a minimum-statistics-based normalization technique (e.g., as described in U.S. Pat. Appl. Publ. No. 2012/0130713, published May 24, 2012). [00249] An anglogram-based technique as described herein may be used to support directional masking, which may be applied for noise suppression in various use cases (e.g., a speakerphone). Such a technique may be used to obtain additional noise-suppression gain by using the DOA estimates to control a directional masking technique (e.g., to pass a target quadrant and/or to block an interference quadrant). Such a method may be useful for handling reverberation and may produce an additional 6-12 dB of gain. An interface from the anglogram may be provided for quadrant masking (e.g., by assigning an angle with maximum likelihood per each frequency bin). It may be desirable to control the masking aggressiveness based on target dominancy, as indicated by the anglogram. Such a technique may be designed to obtain a natural masking response (e.g., a smooth natural seamless transition of aggressiveness). [00250] It may be desirable to provide a multi-view graphical user interface (GUI) for source tracking and/or for extension of PW BFNF with directional masking. Various examples are presented herein of three-microphone (two-pair) two-dimensional (e.g., 360°) source tracking and enhancement schemes which may be applied to a desktop hands-free speakerphone use case.
However, it may be desirable to practice a universal method to provide seamless coverage of use cases ranging from the desktop hands-free to handheld hands-free or even to handset use cases. While a three-microphone scheme may be used for a handheld hands-free use case, it may be desirable to also use a fourth microphone (if already present) on the back of the device. For example, it may be desirable for at least four microphones (three microphone pairs) to be available to represent the (x, y, z) dimensions. A design as shown in Figure 1 has this feature, as does the design shown in Figure 32A, with three frontal microphones MC10, MC20, MC30 and a back microphone MC40 (shaded circle). [00251] It may be desirable to provide a visualization of an active source on a display screen of such a device. The extension principles described herein may be applied to obtain a straightforward extension from 2D to 3D by using a front-back microphone pair. To support a multi-view GUI, we can determine the user's holding pattern by utilizing any of a variety of position detection methods, such as an accelerometer, gyrometer, proximity sensor, and/or a variance of likelihood given by the 2D anglogram for each holding pattern. Depending on the current holding pattern, we can switch to two non-coaxial microphone pairs as appropriate to such a holding pattern and can also provide a corresponding 360° 2D representation on the display, if the user wants to see it. [00252] For example, such a method may be implemented to support switching among a range of modes that may include a desktop hands-free (e.g., speakerphone) mode, a portrait browse-talk mode, and a landscape browse-talk mode. Figure 32B shows an example of a desktop hands-free mode with three frontal microphones MC10, MC20, MC30 and a corresponding visualization on a display screen of the device.
Figure 32D shows an example of a handheld hands-free (portrait) mode, with two frontal microphones MC10, MC20 and one back microphone MC40 (shaded circle) being activated, and a corresponding display. Figure 32C shows an example of a handheld hands-free (landscape) mode, with a different pair of frontal microphones MC10, MC20 and one back microphone MC40 (shaded circle) being activated, and a corresponding display. In some configurations, the back microphone MC40 may be located on the back of the device, approximately behind one of the frontal microphones MC10. [00253] It may be desirable to provide an enhancement of a target source. The extension principles described herein may be applied to obtain a straightforward extension from 2D to 3D by also using a front-back microphone pair. Instead of only two DOA estimates (θ1, θ2), we obtain an additional estimate from another dimension for a total of three DOA estimates (θ1, θ2, θ3). In this case, the PWBFNF coefficient matrix as shown in Figures 14A and 14B expands from 4-by-2 to 6-by-2 (with the added microphone pair), and the masking gain function expands from f(θ1)f(θ2) to f(θ1)f(θ2)f(θ3). Using a position-sensitive selection as described above, we can use all three microphone pairs optimally, regardless of the current holding pattern, to obtain a seamless transition among the modes in terms of source enhancement performance. Of course, more than three pairs may be used at one time as well. [00254] Each of the microphones for direction estimation as discussed herein (e.g., with reference to location and tracking of one or more users or other sources) may have a response that is omnidirectional, bidirectional, or unidirectional (e.g., cardioid). The various types of microphones that may be used include (without limitation) piezoelectric microphones, dynamic microphones, and electret microphones.
It is expressly noted that the microphones may be implemented more generally as transducers sensitive to radiations or emissions other than sound. In one such example, the microphone array is implemented to include one or more ultrasonic transducers (e.g., transducers sensitive to acoustic frequencies greater than fifteen, twenty, twenty-five, thirty, forty, or fifty kilohertz or more). [00255] An apparatus as disclosed herein may be implemented as a combination of hardware (e.g., a processor) with software and/or with firmware. Such an apparatus may also include an audio preprocessing stage AP10 as shown in Figure 38A that performs one or more preprocessing operations on signals produced by each of the microphones MC10 and MC20 (e.g., of an implementation of one or more microphone arrays) to produce preprocessed microphone signals (e.g., a corresponding one of a left microphone signal and a right microphone signal) for input to task T10 or difference calculator 100. Such preprocessing operations may include (without limitation) impedance matching, analog-to-digital conversion, gain control, and/or filtering in the analog and/or digital domains. [00256] Figure 38B shows a block diagram of a three-channel implementation AP20 of audio preprocessing stage AP10 that includes analog preprocessing stages P10a, P10b, and P10c. In one example, stages P10a, P10b, and P10c are each configured to perform a high-pass filtering operation (e.g., with a cutoff frequency of 50, 100, or 200 Hz) on the corresponding microphone signal. Typically, stages P10a, P10b, and P10c will be configured to perform the same functions on each signal. [00257] It may be desirable for audio preprocessing stage AP10 to produce each microphone signal as a digital signal, that is to say, as a sequence of samples. Audio preprocessing stage AP20, for example, includes analog-to-digital converters (ADCs) C10a, C10b, and C10c that are each arranged to sample the corresponding analog signal.
Typical sampling rates for acoustic applications include 8 kHz, 12 kHz, 16 kHz, and other frequencies in the range of from about 8 to about 16 kHz, although sampling rates as high as about 44.1, 48, or 192 kHz may also be used. Typically, converters C10a, C10b, and C10c will be configured to sample each signal at the same rate. [00258] In this example, audio preprocessing stage AP20 also includes digital preprocessing stages P20a, P20b, and P20c that are each configured to perform one or more preprocessing operations (e.g., spectral shaping) on the corresponding digitized channel to produce a corresponding one of a left microphone signal AL10, a center microphone signal AC10, and a right microphone signal AR10 for input to task T10 or difference calculator 100. Typically, stages P20a, P20b, and P20c will be configured to perform the same functions on each signal. It is also noted that preprocessing stage AP10 may be configured to produce a different version of a signal from at least one of the microphones (e.g., at a different sampling rate and/or with different spectral shaping) for content use, such as to provide a near-end speech signal in a voice communication (e.g., a telephone call). Although Figures 38A and 38B show two-channel and three-channel implementations, respectively, it will be understood that the same principles may be extended to an arbitrary number of microphones. [00259] Figure 39A shows a block diagram of an implementation MF15 of apparatus MF10 that includes means F40 for indicating a direction of arrival, based on a plurality of candidate direction selections produced by means F30 (e.g., as described herein with reference to implementations of task T400). Apparatus MF15 may be implemented, for example, to perform an instance of method M25 and/or M110 as described herein.
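The high-pass prefiltering and sampling described in paragraphs [00256]-[00257] (e.g., stages P10a-P10c with a 50-200 Hz cutoff) can be sketched as follows. This is an illustrative one-pole digital filter standing in for an analog stage; the function name and filter order are assumptions, not the disclosed implementation.

```python
import math

def one_pole_highpass(x, fs, fc):
    """Apply a simple first-order high-pass filter (cutoff fc Hz) to a
    sequence of samples x taken at sampling rate fs, as an illustrative
    stand-in for a preprocessing stage such as P10a."""
    rc = 1.0 / (2.0 * math.pi * fc)
    alpha = rc / (rc + 1.0 / fs)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

# A DC offset (which a high-pass stage should remove) plus a 1 kHz tone,
# sampled at one of the typical rates noted above.
fs = 16000
t = [n / fs for n in range(1600)]
x = [1.0 + math.sin(2 * math.pi * 1000 * ti) for ti in t]
y = one_pole_highpass(x, fs, fc=100)
# After settling, the DC component is largely removed while the tone passes.
dc_after = sum(y[800:]) / len(y[800:])
```

With a 100 Hz cutoff, a 1 kHz tone passes nearly unattenuated while the DC offset is suppressed, which is the intent of such a prefilter stage.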
[00260] The signals received by a microphone pair or other linear array of microphones may be processed as described herein to provide an estimated DOA that indicates an angle with reference to the axis of the array. As described above (e.g., with reference to methods M20, M25, M100, and M110), more than two microphones may be used in a linear array to improve DOA estimation performance across a range of frequencies. Even in such cases, however, the range of DOA estimation supported by a linear (i.e., one-dimensional) array is typically limited to 180 degrees. [00261] Figure 2B shows a measurement model in which a one-dimensional DOA estimate indicates an angle (in the 180-degree range of +90 degrees to -90 degrees) relative to a plane that is orthogonal to the axis of the array. Although implementations of methods M200 and M300 and task TB200 are described below with reference to a context as shown in Figure 2B, it will be recognized that such implementations are not limited to this context and that corresponding implementations with reference to other contexts (e.g., in which the DOA estimate indicates an angle of 0 to 180 degrees relative to the axis in the direction of microphone MC10 or, alternatively, in the direction away from microphone MC10) are expressly contemplated and hereby disclosed. [00262] The desired angular span may be arbitrary within the 180-degree range. For example, the DOA estimates may be limited to selected sectors of interest within that range. The desired angular resolution may also be arbitrary (e.g., uniformly distributed over the range, or nonuniformly distributed). Additionally or alternatively, the desired frequency span may be arbitrary (e.g., limited to a voice range) and/or the desired frequency resolution may be arbitrary (e.g., linear, logarithmic, mel-scale, Bark-scale, etc.). [00263] Figure 39B shows an example of an ambiguity that results from the one-dimensionality of a DOA estimate from a linear array.
In this example, a DOA estimate from microphone pair MC10, MC20 (e.g., as a candidate direction as produced by selector 300, or a DOA estimate as produced by indicator 400) indicates an angle θ with reference to the array axis. Even if this estimate is very accurate, however, it does not indicate whether the source is located along line d1 or along line d2. [00264] As a consequence of its one-dimensionality, a DOA estimate from a linear microphone array actually describes a right circular conical surface around the array axis in space (assuming that the responses of the microphones are perfectly omnidirectional) rather than any particular direction in space. The actual location of the source on this conical surface (also called a "cone of confusion") is indeterminate. Figure 39C shows one example of such a surface. [00265] Figure 40 shows an example of source confusion in a speakerphone application in which three sources (e.g., mouths of human speakers) are located in different respective directions relative to device D100 (e.g., a smartphone) having a linear microphone array. In this example, the source directions d1, d2, and d3 all happen to lie on a cone of confusion that is defined at microphone MC20 by an angle (θ + 90 degrees) relative to the array axis in the direction of microphone MC10. Because all three source directions have the same angle relative to the array axis, the microphone pair produces the same DOA estimate for each source and fails to distinguish among them. [00266] To provide for an estimate having a higher dimensionality, it may be desirable to extend the DOA estimation principles described herein to a two-dimensional (2-D) array of microphones. Figure 41A shows a 2-D microphone array that includes two microphone pairs having orthogonal axes. In this example, the axis of the first pair MC10, MC20 is the x axis and the axis of the second pair MC20, MC30 is the y axis.
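The source confusion described in paragraphs [00263]-[00265] can be illustrated numerically before turning to the 2-D case: under the far-field model, any source direction on the same cone around the array axis yields the same inter-microphone delay, so a single pair cannot distinguish such directions. The helper below is a hypothetical sketch, not part of the described apparatus.

```python
import math

def pair_delay(source_dir, axis=(1.0, 0.0, 0.0)):
    """Far-field inter-microphone delay (in units of d/c, where d is the
    microphone spacing and c the speed of sound) for a source direction:
    the projection of the unit direction onto the array axis."""
    norm = math.sqrt(sum(s * s for s in source_dir))
    u = [s / norm for s in source_dir]
    return sum(ui * ai for ui, ai in zip(u, axis))

# Three distinct directions, all at 60 degrees from the x axis
# (i.e., on the same cone of confusion around that axis).
d1 = (math.cos(math.radians(60)), math.sin(math.radians(60)), 0.0)
d2 = (math.cos(math.radians(60)), 0.0, math.sin(math.radians(60)))
d3 = (math.cos(math.radians(60)), -math.sin(math.radians(60)), 0.0)
delays = [pair_delay(d) for d in (d1, d2, d3)]
# All three delays are equal, so the pair reports the same DOA estimate.
```

Because the delay depends only on the angle to the axis, the three sources are indistinguishable to this pair, which is exactly the confusion shown in Figure 40.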
An instance of an implementation of method M10 may be performed for the first pair to produce a corresponding 1-D DOA estimate θx, and an instance of an implementation of method M10 may be performed for the second pair to produce a corresponding 1-D DOA estimate θy. For a signal that arrives from a source located in the plane defined by the microphone axes, the cones of confusion described by θx and θy coincide at the direction of arrival d of the signal to indicate a unique direction in the plane. [00267] Figure 41B shows a flowchart of a method M200 according to a general configuration that includes tasks TB100a, TB100b, and TB200. Task TB100a calculates a first DOA estimate for a multichannel signal with respect to an axis of a first linear array of microphones, and task TB100b calculates a second DOA estimate for the multichannel signal with respect to an axis of a second linear array of microphones. Each of tasks TB100a and TB100b may be implemented, for example, as an instance of an implementation of method M10 (e.g., method M20, M30, M100, or M110) as described herein. Based on the first and second DOA estimates, task TB200 calculates a combined DOA estimate. [00268] The range of the combined DOA estimate may be greater than the range of either of the first and second DOA estimates. For example, task TB200 may be implemented to combine 1-D DOA estimates, produced by tasks TB100a and TB100b and having individual ranges of up to 180 degrees, to produce a combined DOA estimate that indicates the DOA as an angle in a range of up to 360 degrees. Task TB200 may be implemented to map 1-D DOA estimates θx, θy to a direction in a larger angular range by applying a mapping, such as

θc = θy, if θx ≥ 0; θc = 180° − θy, otherwise, (1)

to combine one angle with information (e.g., sign information) from the other angle.
For the 1-D estimates (θx, θy) = (45°, 45°) as shown in Figure 41A, for example, TB200 may be implemented to apply such a mapping to obtain a combined estimate θc of 45 degrees relative to the x-axis. For a case in which the range of the DOA estimates is 0 to 180 degrees rather than -90 to +90 degrees, it will be understood that the axial polarity (i.e., positive or negative) condition in expression (1) would be expressed in terms of whether the DOA estimate under test is less than or greater than 90 degrees. [00269] It may be desirable to show the combined DOA estimate θc on a 360-degree-range display. For example, it may be desirable to display the DOA estimate as an angle on a planar polar plot. Planar polar plot display is familiar in applications such as radar and biomedical scanning, for example. Figure 41C shows an example of a DOA estimate shown on such a display. In this example, the direction of the line indicates the DOA estimate and the length of the line indicates the current strength of the component arriving from that direction. As shown in this example, the polar plot may also include one or more concentric circles to indicate intensity of the directional component on a linear or logarithmic (e.g., decibel) scale. For a case in which more than one DOA estimate is available at one time (e.g., for sources that are disjoint in frequency), a corresponding line for each DOA estimate may be displayed. Alternatively, the DOA estimate may be displayed on a rectangular coordinate system (e.g., Cartesian coordinates). [00270] Figures 42A and 42B show correspondences between the signs of the 1-D estimates θx and θy, respectively, and corresponding quadrants of the plane defined by the array axes. Figure 42C shows a correspondence between the four values of the tuple (sign(θx), sign(θy)) and the quadrants of the plane.
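The quadrant mapping of the form shown in expression (1) can be sketched as follows, assuming 1-D estimates in the -90° to +90° range discussed above; the sign of θx selects between θy and 180° − θy to place the combined estimate in the correct half-plane. The function name here is illustrative.

```python
def combine_doa(theta_x_deg, theta_y_deg):
    """Map two 1-D DOA estimates (each in -90..+90 degrees) to a combined
    estimate over a wider range, following the form of expression (1):
    theta_c = theta_y when theta_x >= 0, else 180 - theta_y."""
    if theta_x_deg >= 0:
        return theta_y_deg
    return 180.0 - theta_y_deg

# The example above: (theta_x, theta_y) = (45, 45) combines to 45 degrees
# relative to the x axis; a negative theta_x flips into the opposite
# half-plane, e.g. (-45, 45) maps to 135 degrees.
combined = combine_doa(45.0, 45.0)
```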
Figure 42D shows a 360-degree display according to an alternate mapping (e.g., relative to the y-axis). [00271] It is noted that Figure 41A illustrates a special case in which the source is located in the plane defined by the microphone axes, such that the cones of confusion described by θx and θy indicate a unique direction in this plane. For most practical applications, it may be expected that the cones of confusion of nonlinear microphone pairs of a 2-D array typically will not coincide in a plane defined by the array, even for a far-field point source. For example, source height relative to the plane of the array (e.g., displacement of the source along the z-axis) may play an important role in 2-D tracking. [00272] It may be desirable to produce an accurate 2-D representation of directions of arrival for signals that are received from sources at arbitrary locations in a three-dimensional space. For example, it may be desirable for the combined DOA estimate produced by task TB200 to indicate the DOA of a source signal in a plane that does not include the DOA (e.g., a plane defined by the microphone array or by a display surface of the device). Such indication may be used, for example, to support arbitrary placement of the audio sensing device relative to the source and/or arbitrary relative movement of the device and source (e.g., for speakerphone and/or source tracking applications). [00273] Figure 43A shows an example that is similar to Figure 41A but depicts a more general case in which the source is located above the x-y plane. In such a case, the intersection of the cones of confusion of the arrays indicates two possible directions of arrival: a direction d1 that extends above the x-y plane, and a direction d2 that extends below the x-y plane. In many applications, this ambiguity may be resolved by assuming that direction d1 is correct and ignoring the second direction d2.
For a speakerphone application in which the device is placed on a tabletop, for example, it may be assumed that no sources are located below the device. In any case, the projections of directions d1 and d2 on the x-y plane are the same. [00274] While a mapping of 1-D estimates θx and θy to a range of 360 degrees (e.g., as in expression (1) or (2)) may produce an appropriate DOA indication when the source is located in the microphone plane, it may produce an inaccurate result for the more general case of a source that is not located in that plane. For a case in which θx = θy as shown in Figure 41B, for example, it may be understood that the corresponding direction in the x-y plane is 45 degrees relative to the x axis. Applying the mapping of expression (1) to the values (θx, θy) = (30°, 30°), however, produces a combined estimate θc of 30 degrees relative to the x axis, which does not correspond to the source direction as projected on the plane. [00275] Figure 43B shows another example of a 2-D microphone array whose axes define an x-y plane and a source that is located above the x-y plane (e.g., a speakerphone application in which the speaker's mouth is above the tabletop). With respect to the x-y plane, the source is located along the y axis (e.g., at an angle of 90 degrees relative to the x axis). The x-axis pair MC10, MC20 indicates a DOA of zero degrees relative to the y-z plane (i.e., broadside to the pair axis), which agrees with the source direction as projected onto the x-y plane. Although the source is located directly above the y axis, it is also offset in the direction of the z axis by an elevation angle of 30 degrees. This elevation of the source from the x-y plane causes the y-axis pair MC20, MC30 to indicate a DOA of sixty degrees (i.e., relative to the x-z plane) rather than ninety degrees.
Applying the mapping of expression (1) to the values (θx, θy) = (0°, 60°) produces a combined estimate θc of 60 degrees relative to the x axis, which does not correspond to the source direction as projected on the plane. [00276] In a typical use case, the source will be located in a direction that is neither within a plane defined by the array axes nor directly above an array axis. Figure 43C shows an example of such a general case in which a point source (i.e., a speaker's mouth) is elevated above the plane defined by the array axes. In order to obtain a correct indication in the array plane of a source direction that is outside that plane, it may be desirable to implement task TB200 to convert the 1-D DOA estimates into an angle in the array plane to obtain a corresponding DOA estimate in the plane. [00277] Figures 44A-44D show a derivation of such a conversion of (θx, θy) into an angle in the array plane. In Figures 44A and 44B, the source vector d is projected onto the x axis and onto the y axis, respectively. The lengths of these projections (d sin θx and d sin θy, respectively) are the dimensions of the projection p of source vector d onto the x-y plane, as shown in Figure 44C. These dimensions are sufficient to determine conversions of the DOA estimates (θx, θy) into angles (θ̂x, θ̂y) of p in the x-y plane relative to the y-axis and relative to the x-axis, respectively, as shown in Figure 44D:

(θ̂x, θ̂y) = (tan⁻¹(sin θx / (|sin θy| + ε)), tan⁻¹(sin θy / (|sin θx| + ε))), (3)

where ε is a small value that may be included to avoid a divide-by-zero error. (It is noted with reference to Figures 43B, 43C, 44A-E, and also 46A-E as discussed below, that the relative magnitude of d as shown is only for convenience of illustration, and that the magnitude of d should be large enough relative to the dimensions of the microphone array for the far-field assumption of planar wavefronts to remain valid.)
[00278] Task TB200 may be implemented to convert the DOA estimates according to such an expression into a corresponding angle in the array plane and to apply a mapping (e.g., as in expression (1) or (2)) to the converted angle to obtain a combined DOA estimate θc in that plane. It is noted that such an implementation of task TB200 may omit calculation of θ̂y (alternatively, of θ̂x) as included in expression (3), as the value θc may be determined from θ̂x as combined with sign(θ̂y) = sign(θy) (e.g., as shown in expressions (1) and (2)). For such a case in which the value of |θ̂y| is also desired, it may be calculated as |θ̂y| = 90° − |θ̂x| (and likewise for |θ̂x|). [00279] Figure 43C shows an example in which the DOA of the source signal passes through the point (x, y, z) = (5, 2, 5). In this case, the DOA observed by the x-axis microphone pair MC10-MC20 is θx = tan⁻¹(5/√(25 + 4)) ≈ 42.9°, and the DOA observed by the y-axis microphone pair MC20-MC30 is θy = tan⁻¹(2/√(25 + 25)) ≈ 15.8°. Using expression (3) to convert these angles into corresponding angles in the x-y plane produces the converted DOA estimates (θ̂x, θ̂y) = (21.8°, 68.2°), which correspond to the given source location (x, y) = (5, 2). [00280] Applying expression (3) to the values (θx, θy) = (30°, 30°) as shown in Figure 41B produces the converted estimates (θ̂x, θ̂y) = (45°, 45°), which are mapped by expression (1) to the expected value of 45 degrees relative to the x axis. Applying expression (3) to the values (θx, θy) = (0°, 60°) as shown in Figure 43B produces the converted estimates (θ̂x, θ̂y) = (0°, 90°), which are mapped by expression (1) to the expected value of 90 degrees relative to the x axis. [00281] Task TB200 may be implemented to apply a conversion and mapping as described above to project a DOA, as indicated by any such pair of DOA estimates from a 2-D orthogonal array, onto the plane in which the array is located.
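The worked example of paragraph [00279] can be checked numerically. The sketch below (with hypothetical helper names) computes the pairwise observations for a source direction through (x, y, z) = (5, 2, 5) and converts them into in-plane angles following the form of expression (3); the two converted angles are the angles of the projection (5, 2) from the y axis and from the x axis (about 68.2° and 21.8°, summing to 90°).

```python
import math

def observed_doas(x, y, z):
    """1-D DOA estimates (degrees) for the x-axis and y-axis pairs, per the
    geometry above: each angle is measured from the plane orthogonal to
    the corresponding axis."""
    theta_x = math.degrees(math.atan2(x, math.sqrt(y * y + z * z)))
    theta_y = math.degrees(math.atan2(y, math.sqrt(x * x + z * z)))
    return theta_x, theta_y

def convert_to_plane(theta_x, theta_y, eps=1e-9):
    """Convert (theta_x, theta_y) to in-plane angles per the form of
    expression (3), with eps guarding against division by zero."""
    sx = math.sin(math.radians(theta_x))
    sy = math.sin(math.radians(theta_y))
    a = math.degrees(math.atan2(sx, abs(sy) + eps))  # angle of p from y axis
    b = math.degrees(math.atan2(sy, abs(sx) + eps))  # angle of p from x axis
    return a, b

theta_x, theta_y = observed_doas(5.0, 2.0, 5.0)  # ~42.9 and ~15.8 degrees
a, b = convert_to_plane(theta_x, theta_y)        # ~68.2 and ~21.8 degrees
```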
Such projection may be used to enable tracking directions of active speakers over a 360° range around the microphone array, regardless of height difference. Figure 45A shows a plot obtained by applying an alternate mapping

θc = −θ̂y, if θ̂x < 0; θc = θ̂y + 180°, otherwise, (2)

to the converted estimates (θ̂x, θ̂y) = (0°, 90°) from Figure 43B to obtain a combined directional estimate (e.g., an azimuth) of 270 degrees. In this figure, the labels on the concentric circles indicate relative magnitude in decibels. [00282] Task TB200 may also be implemented to include a validity check on the observed DOA estimates prior to calculation of the combined DOA estimate. It may be desirable, for example, to verify that the value (|θx| + |θy|) is at least equal to 90 degrees (e.g., to verify that the cones of confusion associated with the two observed estimates will intersect along at least one line). [00283] In fact, the information provided by such DOA estimates from a 2-D microphone array is nearly complete in three dimensions, except for the up-down confusion. For example, the directions of arrival observed by microphone pairs MC10-MC20 and MC20-MC30 may also be used to estimate the magnitude of the angle of elevation of the source relative to the x-y plane. If d denotes the vector from microphone MC20 to the source, then the lengths of the projections of vector d onto the x-axis, the y-axis, and the x-y plane may be expressed as d sin(θx), d sin(θy), and d √(sin²(θx) + sin²(θy)), respectively (e.g., as shown in Figures 44A-44E). The magnitude of the angle of elevation may then be estimated as θh = cos⁻¹ √(sin²(θx) + sin²(θy)). [00284] Although the linear microphone arrays in some particular examples have orthogonal axes, it may be desirable to implement method M200 for a more general case in which the axes of the microphone arrays are not orthogonal.
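The elevation estimate of paragraph [00283] can be checked with a short sketch before considering the non-orthogonal case. The function name is illustrative; the computation uses only the projection lengths d sin(θx) and d sin(θy) discussed above.

```python
import math

def elevation_magnitude(theta_x_deg, theta_y_deg):
    """Estimate the magnitude of the elevation angle (degrees) of the
    source relative to the x-y plane from the two 1-D DOA estimates."""
    sx = math.sin(math.radians(theta_x_deg))
    sy = math.sin(math.radians(theta_y_deg))
    # Length of the projection of the unit source vector onto the x-y plane.
    p = math.sqrt(sx * sx + sy * sy)
    return math.degrees(math.acos(min(1.0, p)))

# Source through (5, 2, 5): the true elevation above the x-y plane is
# atan(5 / sqrt(29)), about 42.9 degrees, recovered here from the two
# observed angles of that example.
el = elevation_magnitude(42.9, 15.8)
```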
Figure 45B shows an example of the intersecting cones of confusion associated with the responses of linear microphone arrays having non-orthogonal axes x and r to a common point source. Figure 45C shows the lines of intersection of these cones, which define the two possible directions d1 and d2 of the point source with respect to the array axes in three dimensions. [00285] Figure 46A shows an example of a microphone array MC10-MC20-MC30 in which the axis of pair MC10-MC20 is the x axis, and the axis r of pair MC20-MC30 lies in the x-y plane and is skewed relative to the y axis by a skew angle α. Figure 46B shows an example of obtaining a combined directional estimate in the x-y plane with respect to orthogonal axes x and y with observations (θx, θr) from an array as shown in Figure 46A. If d denotes the vector from microphone MC20 to the source, then the lengths of the projections of vector d onto the x-axis (dx) and onto the axis r (dr) may be expressed as d sin(θx) and d sin(θr), respectively, as shown in Figures 46B and 46C. The vector p = (px, py) denotes the projection of vector d onto the x-y plane. The estimated value of px = d sin θx is known, and it remains to determine the value of py. [00286] We assume that the value of α is in the range (−90°, +90°), as an array having any other value of α may easily be mapped to such a case. The value of py may be determined from the dimensions of the projection vector dr = (d sin θr sin α, d sin θr cos α) as shown in Figures 46D and 46E. Observing that the difference between vector p and vector dr is orthogonal to dr (i.e., that the inner product ⟨(p − dr), dr⟩ is equal to zero), we calculate py as

py = d (sin θr − sin θx sin α) / cos α

(which reduces to py = d sin θr for α = 0). The desired angles of arrival in the x-y plane, relative to the orthogonal x and y axes, may then be expressed respectively as

(θ̂x, θ̂y) = (tan⁻¹(sin θx cos α / (|sin θr − sin θx sin α| + ε)), tan⁻¹((sin θr − sin θx sin α) / (|sin θx| cos α + ε))). (4)

It is noted that expression (3) is a special case of expression (4) in which α = 0. The dimensions (px, py) of projection p may also be used to estimate the angle of elevation θh of the source relative to the x-y plane (e.g., in a similar manner as described above with reference to Figure 44E). [00287] Figure 47A shows a flowchart of a method M300 according to a general configuration that includes instances of tasks TB100a and TB100b. Method M300 also includes an implementation TB300 of task TB200 that calculates a projection of the direction of arrival into a plane that does not include the direction of arrival (e.g., a plane defined by the array axes). In such a manner, a 2-D array may be used to extend the range of source DOA estimation from a linear, 180-degree estimate to a planar, 360-degree estimate. Figure 47C illustrates one example of an apparatus A300 with components (e.g., a first DOA estimator B100a, a second DOA estimator B100b, and a projection calculator B300) for performing functions corresponding to Figure 47A. Figure 47D illustrates one example of an apparatus MF300 including means (e.g., means FB100a for calculating a first DOA estimate with respect to an axis of a first array, means FB100b for calculating a second DOA estimate with respect to an axis of a second array, and means FB300 for calculating a projection of a DOA onto a plane that does not include the DOA) for performing functions corresponding to Figure 47A. [00288] Figure 47B shows a flowchart of an implementation TB302 of task TB300 that includes subtasks TB310 and TB320. Task TB310 converts the first DOA estimate (e.g., θx) to an angle in the projection plane (e.g., θ̂x). For example, task TB310 may perform a conversion as shown in, e.g., expression (3) or (4). Task TB320 combines the converted angle with information (e.g., sign information) from the second DOA estimate to obtain the projection of the direction of arrival.
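The non-orthogonal conversion of expression (4) can be sketched directly. As a sanity check, setting the skew angle α to zero should reduce it to the orthogonal case of expression (3); the function name here is illustrative.

```python
import math

def convert_skewed(theta_x_deg, theta_r_deg, alpha_deg, eps=1e-9):
    """Convert observations (theta_x, theta_r) from a pair on the x axis
    and a pair whose axis r is skewed by alpha from the y axis into
    in-plane angles, following the form of expression (4)."""
    sx = math.sin(math.radians(theta_x_deg))
    sr = math.sin(math.radians(theta_r_deg))
    ca = math.cos(math.radians(alpha_deg))
    sa = math.sin(math.radians(alpha_deg))
    num = sr - sx * sa  # proportional to p_y * cos(alpha) / d
    hat_x = math.degrees(math.atan2(sx * ca, abs(num) + eps))
    hat_y = math.degrees(math.atan2(num, abs(sx) * ca + eps))
    return hat_x, hat_y

# With alpha = 0 this matches the orthogonal conversion: the observations
# (42.9, 15.8) of the earlier example again give roughly (68.2, 21.8).
hx, hy = convert_skewed(42.9, 15.8, 0.0)
```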
For example, task TB320 may perform a mapping according to, e.g., expression (1) or (2). [00289] As described above, extension of source DOA estimation to two dimensions may also include estimation of the angle of elevation of the DOA over a range of 90 degrees (e.g., to provide a measurement range that describes a hemisphere over the array plane). Figure 48A shows a flowchart of such an implementation M320 of method M300 that includes a task TB400. Task TB400 calculates an estimate of the angle of elevation of the DOA with reference to a plane that includes the array axes (e.g., as described herein with reference to Figure 44E). Method M320 may also be implemented to combine the projected DOA estimate with the estimated angle of elevation to produce a three-dimensional vector. [00290] It may be desirable to perform an implementation of method M300 within an audio sensing device that has a 2-D array including two or more linear microphone arrays. Examples of a portable audio sensing device that may be implemented to include such a 2-D array and may be used to perform such a method for audio recording and/or voice communications applications include a telephone handset (e.g., a cellular telephone handset); a wired or wireless headset (e.g., a Bluetooth headset); a handheld audio and/or video recorder; a personal media player configured to record audio and/or video content; a personal digital assistant (PDA) or other handheld computing device; and a notebook computer, laptop computer, netbook computer, tablet computer, or other portable computing device. The class of portable computing devices currently includes devices having names such as laptop computers, notebook computers, netbook computers, ultra-portable computers, tablet computers, mobile Internet devices, smartbooks, and smartphones. 
Such a device may have a top panel that includes a display screen and a bottom panel that may include a keyboard, wherein the two panels may be connected in a clamshell or other hinged relationship. Such a device may be similarly implemented as a tablet computer that includes a touchscreen display on a top surface. [00291] Extension of DOA estimation to a 2-D array (e.g., as described herein with reference to implementations of method M200 and implementations of method M300) is typically well-suited to and sufficient for a speakerphone application. However, further extension of such principles to an N-dimensional array (where N ≥ 2) is also possible and may be performed in a straightforward manner. For example, Figures 41A-46E illustrate use of observed DOA estimates from different microphone pairs in an x-y plane to obtain an estimate of a source direction as projected into the x-y plane. In the same manner, an instance of method M200 or M300 may be implemented to combine observed DOA estimates from an x-axis microphone pair and a z-axis microphone pair (or other pairs in the x-z plane) to obtain an estimate of the source direction as projected into the x-z plane, and likewise for the y-z plane or any other plane that intersects three or more of the microphones. The 2-D projected estimates may then be combined to obtain the estimated DOA in three dimensions. For example, a DOA estimate for a source as projected onto the x-y plane may be combined with a DOA estimate for the source as projected onto the x-z plane to obtain a combined DOA estimate as a vector in (x, y, z) space. [00292] For tracking applications in which one target is dominant, it may be desirable to select N linear microphone arrays (e.g., pairs) for representing N respective dimensions.
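The combination of planar projections described in paragraph [00291] can be sketched as follows. This is an illustrative combination under the assumption that the source lies in the +x half-space; the helper name is hypothetical.

```python
import math

def doa_vector_from_planes(phi_xy_deg, phi_xz_deg):
    """Combine a DOA projected onto the x-y plane (angle from the x axis)
    with a DOA projected onto the x-z plane (angle from the x axis) into
    a unit vector in (x, y, z) space, assuming x > 0."""
    y_over_x = math.tan(math.radians(phi_xy_deg))
    z_over_x = math.tan(math.radians(phi_xz_deg))
    v = (1.0, y_over_x, z_over_x)
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Source through (5, 2, 5): its projections give angles of atan(2/5),
# about 21.8 degrees, in the x-y plane and atan(5/5) = 45 degrees in the
# x-z plane; combining recovers the unit vector along (5, 2, 5).
v = doa_vector_from_planes(21.8, 45.0)
```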
Method M200 or M300 may be implemented to combine a 2-D result, obtained with a particular pair of such linear arrays, with a DOA estimate from each of one or more linear arrays in other planes to provide additional degrees of freedom. [00293] Estimates of DOA error from different dimensions may be used to obtain a combined likelihood estimate, for example, using an expression such as

1 / (max(|θ − θ0,1|², |θ − θ0,2|²) + λ),

where θ0,i denotes the DOA candidate selected for pair i and λ is a small regularization factor. Use of the maximum among the different errors may be desirable to promote selection of an estimate that is close to the cones of confusion of both observations, in preference to an estimate that is close to only one of the cones of confusion and may thus indicate a false peak. Such a combined result may be used to obtain a (frame, angle) plane, as shown in Figure 8 and described herein, and/or a (frame, frequency) plot, as shown at the bottom of Figure 9 and described herein. [00294] Figure 48B shows a flowchart for an implementation M325 of method M320 that includes task TB100c and an implementation TB410 of task TB400. Task TB100c calculates a third estimate of the direction of arrival with respect to an axis of a third microphone array. Task TB410 estimates the angle of elevation based on information from the DOA estimates from tasks TB100a, TB100b, and TB100c. [00295] It is expressly noted that methods M200 and M300 may be implemented such that task TB100a calculates its DOA estimate based on one type of difference between the corresponding microphone channels (e.g., a phase-based difference), and task TB100b (or TB100c) calculates its DOA estimate based on another type of difference between the corresponding microphone channels (e.g., a gain-based difference). In one application of such an example of method M325, an array that defines an x-y plane is expanded to include a front-back pair (e.g., a fourth microphone located at an offset along the z axis with respect to microphone MC10, MC20, or MC30).
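The preference for the maximum of the per-pair errors described in paragraph [00293] can be illustrated with a small sketch; the exact form of the likelihood and the regularization term lam used here are assumptions for illustration, not the disclosed expression.

```python
def combined_likelihood(theta, candidates, lam=1e-6):
    """Combined likelihood for a candidate direction theta given the DOA
    candidates selected per pair: the reciprocal of the largest per-pair
    squared error (plus a small regularizer), so that a direction must be
    close to the cones of confusion of all pairs to score highly."""
    worst = max((theta - c) ** 2 for c in candidates)
    return 1.0 / (worst + lam)

# A direction close to both observations outscores a direction that
# matches only a single pair (a potential false peak).
near_both = combined_likelihood(30.0, [29.0, 31.0])
near_one = combined_likelihood(29.0, [29.0, 45.0])
```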
The DOA estimate produced by task TB100c for this pair is used in task TB400 to resolve the front-back ambiguity in the angle of elevation, such that the method provides a full spherical measurement range (e.g., 360 degrees in any plane). In this case, method M325 may be implemented such that the DOA estimates produced by tasks TB100a and TB100b are based on phase differences, and the DOA estimate produced by task TB100c is based on gain differences. In a particular example (e.g., for tracking of only one source), the DOA estimate produced by task TB100c has two states: a first state indicating that the source is above the plane, and a second state indicating that the source is below the plane. [00296] Figure 49A shows a flowchart of an implementation M330 of method M300. Method M330 includes a task TB500 that displays the calculated projection to a user of the audio sensing device. Task TB500 may be configured, for example, to display the calculated projection on a display screen of the device in the form of a polar plot (e.g., as shown in Figures 41C, 42D, and 45A). Examples of such a display screen, which may be a touchscreen as shown in Figure 1, include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an electrowetting display, an electrophoretic display, and an interferometric modulator display. Such a display may also include an indication of the estimated angle of elevation (e.g., as shown in Figure 49B). [00297] Task TB500 may be implemented to display the projected DOA with respect to a reference direction of the device (e.g., a principal axis of the device). In such case, the direction as indicated will change as the device is rotated relative to a stationary source, even if the position of the source does not change. Figures 50A and 50B show examples of such a display before and after such rotation, respectively.
[00298] Alternatively, it may be desirable to implement task TB500 to display the projected DOA relative to an external reference direction, such that the direction as indicated remains constant as the device is rotated relative to a stationary source. Figures 51A and 51B show examples of such a display before and after such rotation, respectively. [00299] To support such an implementation of task TB500, device D100 may be configured to include an orientation sensor (not shown) that indicates a current spatial orientation of the device with reference to an external reference direction, such as a gravitational axis (e.g., an axis that is normal to the earth's surface) or a magnetic axis (e.g., the earth's magnetic axis). The orientation sensor may include one or more inertial sensors, such as gyroscopes and/or accelerometers. A gyroscope uses principles of angular momentum to detect changes in orientation about an axis or about each of two or three (typically orthogonal) axes (e.g., changes in pitch, roll and/or twist). Examples of gyroscopes, which may be fabricated as micro-electromechanical systems (MEMS) devices, include vibratory gyroscopes. An accelerometer detects acceleration along an axis or along each of two or three (typically orthogonal) axes. An accelerometer may also be fabricated as a MEMS device. It is also possible to combine a gyroscope and an accelerometer into a single sensor. Additionally or alternatively, the orientation sensor may include one or more magnetic field sensors (e.g., magnetometers), which measure magnetic field strength along an axis or along each of two or three (typically orthogonal) axes. In one example, device D100 includes a magnetic field sensor that indicates a current orientation of the device relative to a magnetic axis (e.g., of the earth). In such case, task TB500 may be implemented to display the projected DOA on a grid that is rotated into alignment with that axis (e.g., as a compass).
[00300] Figure 49C shows a flowchart of such an implementation M340 of method M330 that includes a task TB600 and an implementation TB510 of task TB500. Task TB600 determines an orientation of the audio sensing device with reference to an external reference axis (e.g., a gravitational or magnetic axis). Task TB510 displays the calculated projection based on the determined orientation. [00301] Task TB500 may be implemented to display the DOA as the angle projected onto the array plane. For many portable audio sensing devices, the microphones used for DOA estimation will be located at the same surface of the device as the display (e.g., microphones ME10, MV10-1, and MV10-3 in Figure 1) or much closer to that surface than to each other (e.g., microphones ME10, MR10, and MV10-3 in Figure 1). The thickness of a tablet computer or smartphone, for example, is typically small relative to the dimensions of the display surface. In such cases, any error between the DOA as projected onto the array plane and the DOA as projected onto the display plane may be expected to be negligible, and it may be acceptable to configure task TB500 to display the DOA as projected onto the array plane. [00302] For a case in which the display plane differs noticeably from the array plane, task TB500 may be implemented to project the estimated DOA from a plane defined by the axes of the microphone arrays into a plane of a display surface. For example, such an implementation of task TB500 may display a result of applying a projection matrix to the estimated DOA, where the projection matrix describes a projection from the array plane onto a surface plane of the display. Alternatively, task TB300 may be implemented to include such a projection. [00303] As described above, the audio sensing device may include an orientation sensor that indicates a current spatial orientation of the device with reference to an external reference direction.
It may be desirable to combine a DOA estimate as described herein with such orientation information to indicate the DOA estimate with reference to the external reference direction. Figure 53B shows a flowchart of such an implementation M350 of method M300 that includes an instance of task TB600 and an implementation TB310 of task TB300. Method M350 may also be implemented to include an instance of display task TB500 as described herein. [00304] Figure 52A shows an example in which the device coordinate system E is aligned with the world coordinate system. Figure 52A also shows a device orientation matrix F that corresponds to this orientation (e.g., as indicated by the orientation sensor). Figure 52B shows an example in which the device is rotated (e.g., for use in browse-talk mode) and the matrix F (e.g., as indicated by the orientation sensor) that corresponds to this new orientation. [00305] Task TB310 may be implemented to use the device orientation matrix F to project the DOA estimate into any plane that is defined with reference to the world coordinate system. In one such example, the DOA estimate is a vector g in the device coordinate system. In a first operation, vector g is converted into a vector h in the world coordinate system by an inner product with device orientation matrix F. Such a conversion may be performed, for example, according to an expression such as h = (g^T E)^T F. In a second operation, the vector h is projected into a plane P that is defined with reference to the world coordinate system by the projection A (A^T A)^-1 A^T h, where A is a basis matrix of the plane P in the world coordinate system. [00306] In a typical example, the plane P is parallel to the x-y plane of the world coordinate system (i.e., the "world reference plane").
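A minimal sketch of the two operations of task TB310 (rotation into world coordinates, then projection onto plane P) is given below. It assumes the orientation step can be modeled as a single rotation matrix applied to g (a simplification of the h = (g^T E)^T F form in the text) and that the plane-basis vectors are orthonormal, so that (A^T A)^-1 reduces to the identity; the helper names are illustrative:

```python
def mat_vec(M, v):
    """Multiply matrix M (a list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def rotate_to_world(g, F):
    """Convert a device-frame DOA vector g into world coordinates using a
    device orientation matrix F (modeled here as one rotation matrix)."""
    return mat_vec(F, g)

def project_onto_plane(h, basis):
    """Compute A (A^T A)^-1 A^T h for the plane spanned by `basis`.
    Assuming the basis vectors are orthonormal, (A^T A)^-1 is the identity
    and the projection reduces to A A^T h."""
    coords = [sum(a_i * h_i for a_i, h_i in zip(a, h)) for a in basis]  # A^T h
    return [sum(basis[k][i] * coords[k] for k in range(len(basis)))
            for i in range(len(h))]
```

With F equal to the identity (device aligned with the world frame, as in Figure 52A) and P the world x-y plane, the projection simply zeroes the z component of h.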
Figure 52C shows a perspective mapping, onto a display plane of the device, of a projection of a DOA onto the world reference plane as may be performed by task TB500, where the orientation of the display plane relative to the world reference plane is indicated by the device orientation matrix F. Figure 53A shows an example of such a mapped display of the DOA as projected onto the world reference plane. [00307] In another example, task TB310 is configured to project DOA estimate vector g into plane P using a less complex interpolation among component vectors of g that are projected into plane P. In this case, the projected DOA estimate vector P_g may be calculated according to an expression such as P_g = α g_x-y(P) + β g_x-z(P) + γ g_y-z(P), where [e_x e_y e_z] denote the basis vectors of the device coordinate system; g = g_x e_x + g_y e_y + g_z e_z; θ_α, θ_β, θ_γ denote the angles between plane P and the planes spanned by [e_x e_y], [e_x e_z], [e_y e_z], respectively; α, β, γ denote their respective cosines (α^2 + β^2 + γ^2 = 1); and g_x-y(P), g_x-z(P), g_y-z(P) denote the projections into plane P of the component vectors g_x-y = g_x e_x + g_y e_y, g_x-z = g_x e_x + g_z e_z, g_y-z = g_y e_y + g_z e_z, respectively. The plane corresponding to the minimum among θ_α, θ_β, and θ_γ is the plane that is closest to P, and an alternative implementation of task TB310 identifies this minimum and produces the corresponding one of the projected component vectors as an approximation of P_g. [00308] It may be desirable to configure an audio sensing device to discriminate among source signals having different DOAs. For example, it may be desirable to configure the audio sensing device to perform a directionally selective filtering operation on the multichannel signal to pass directional components that arrive from directions within an angular pass range and/or to block or otherwise attenuate directional components that arrive from directions within an angular stop range.
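The less complex interpolation can be sketched as below. For simplicity the sketch assumes plane P is close enough to the coordinate planes that projecting each component vector into P is approximated by the component vector itself; this simplification and the names are assumptions, not from the source:

```python
def interpolated_projection(g, cosines):
    """Approximate P_g = alpha*g_xy + beta*g_xz + gamma*g_yz, where
    alpha, beta, gamma are the cosines of the angles between plane P and
    the x-y, x-z, and y-z coordinate planes (alpha^2 + beta^2 + gamma^2
    = 1 is assumed).  Component-vector projections into P are
    approximated by the component vectors themselves."""
    gx, gy, gz = g
    a, b, c = cosines
    g_xy = (gx, gy, 0.0)   # component vector in the x-y plane
    g_xz = (gx, 0.0, gz)   # component vector in the x-z plane
    g_yz = (0.0, gy, gz)   # component vector in the y-z plane
    return tuple(a * u + b * v + c * w for u, v, w in zip(g_xy, g_xz, g_yz))
```

In the degenerate case where P coincides with the x-y plane (cosines (1, 0, 0)), the result reduces to the x-y component vector alone, matching the minimum-angle alternative described above.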
[00309] It may be desirable to use a display as described herein to support a graphical user interface to enable a user of an audio sensing device to configure a directionally selective processing operation (e.g., a beamforming operation as described herein). Figure 54A shows an example of such a user interface, in which the unshaded portion of the circle indicates a range of directions to be passed and the shaded portion indicates a range of directions to be blocked. The circles indicate points on a touch screen that the user may slide around the periphery of the circle to change the selected range. The touch points may be linked such that moving one causes the other to move by an equal angle in the same angular direction or, alternatively, in the opposite angular direction. Alternatively, the touch points may be independently selectable (e.g., as shown in Figure 54B). It is also possible to provide one or more additional pairs of touch points to support selection of more than one angular range (e.g., as shown in Figure 54C). [00310] As alternatives to touch points as shown in Figures 54A-C, the user interface may include other physical or virtual selection interfaces (e.g., clickable or touchable icons on a screen) to obtain user input for selection of pass/stop band location and/or width. Examples of such interfaces include a linear slider potentiometer, a rocker switch (for binary input to indicate, e.g., up-down, left-right, clockwise/counter-clockwise), and a wheel or knob as shown in Figure 53C. [00311] For use cases in which the audio sensing device is expected to remain stationary during use (e.g., the device is placed on a flat surface for speakerphone use), it may be sufficient to indicate a range of selected directions that is fixed relative to the device. If the orientation of the device relative to a desired source changes during use, however, components arriving from the direction of that source may no longer be admitted. 
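The pass/stop sectors selected through such an interface reduce to simple angular-range tests at processing time. The sketch below is illustrative only (the function names and the 0.1 stop-band gain are assumptions, not from the source); it decides whether a component's DOA falls inside a user-selected pass sector, handling wrap-around at 0/360 degrees:

```python
def in_pass_range(angle, start, end):
    """True if `angle` (degrees) lies in the sector swept counter-clockwise
    from `start` to `end`, with wrap-around at 360 degrees."""
    span = (end - start) % 360.0
    return (angle - start) % 360.0 <= span

def directional_gain(angle, start, end, stop_gain=0.1):
    """Pass components arriving inside the selected sector; attenuate the
    rest (stop_gain is an illustrative attenuation factor)."""
    return 1.0 if in_pass_range(angle, start, end) else stop_gain
```

When the user drags a touch point, only `start` or `end` changes; the wrap-around arithmetic keeps a sector such as 350 to 30 degrees (crossing zero) valid.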
Figures 55A and 55B show a further example in which an orientation sensor is used to track an orientation of the device. In this case, a directional displacement of the device (e.g., as indicated by the orientation sensor) is used to update the directional filtering configuration as selected by the user (and to update the corresponding display) such that the desired directional response may be maintained despite a change in orientation of the device. [00312] It may be desirable for the array to include a number of microphones that is at least equal to the number of different source directions to be distinguished (e.g., the number of beams to be formed) at any one time. The microphones may be omnidirectional (e.g., as may be typical for a cellular telephone or a dedicated conferencing device) or directional (e.g., as may be typical for a device such as a set-top box). [00313] The DOA estimation principles described herein may be used to support selection among multiple speakers. For example, location of multiple sources may be combined with a manual selection of a particular speaker (e.g., push a particular button, or touch a particular screen area, to select a particular corresponding speaker or active source direction) or automatic selection of a particular speaker (e.g., by speaker recognition). In one such application, an audio sensing device (e.g., a telephone) is configured to recognize the voice of its owner and to automatically select a direction corresponding to that voice in preference to the directions of other sources. B. Systems and Methods for Mapping a Source Location [00314] It should be noted that one or more of the functions, apparatuses, methods and/or algorithms described above may be implemented in accordance with the systems and methods disclosed herein. Some configurations of the systems and methods disclosed herein describe multi-modal sensor fusion for seamless audio processing. 
For instance, the systems and methods described herein enable projecting multiple DOA information from 3D sound sources captured by microphones into a physical 2D plane using sensor data and a set of microphones located on a 3D device, where the microphone signals may be selected based on the DOA information retrieved from the microphones that maximize the spatial resolution of sound sources in a 2D physical plane, and where the sensor data provides a reference of the orientation of the 3D device with respect to the physical 2D plane. There are many use cases that may benefit from the fusion of sensors such as an accelerometer, proximity sensor, etc., with multi-microphones. One example (e.g., "use case 1") may include a robust handset intelligent switch (IS). Another example (e.g., "use case 2") may include robust support for various speakerphone holding patterns. Another example (e.g., "use case 3") may include seamless speakerphone-handset holding pattern support. Yet another example (e.g., "use case 4") may include a multi-view visualization of active source and coordination passing. [00315] Some configurations of the systems and methods disclosed herein may include at least one statistical model for discriminating desired use cases with pre-obtainable sensor data, if necessary. Available sensor data may be tracked along with multi-microphone data, and may be utilized for at least one of the use cases. Some configurations of the systems and methods disclosed herein may additionally or alternatively track sensor data along with other sensor data (e.g., camera data) for at least one use case. [00316] Various configurations are now described with reference to the Figures, where like reference numbers may indicate functionally similar elements. The systems and methods as generally described and illustrated in the Figures herein could be arranged and designed in a wide variety of different configurations.
Thus, the following more detailed description of several configurations, as represented in the Figures, is not intended to limit scope, as claimed, but is merely representative of the systems and methods. Features and/or elements depicted in a Figure may be combined with at least one feature and/or element depicted in at least one other Figure. [00317] Figure 56 is a block diagram illustrating one configuration of an electronic device 5602 in which systems and methods for mapping a source location may be implemented. The systems and methods disclosed herein may be applied to a variety of electronic devices 5602. Examples of electronic devices 5602 include cellular phones, smartphones, voice recorders, video cameras, audio players (e.g., Moving Picture Experts Group-1 (MPEG-1) or MPEG-2 Audio Layer 3 (MP3) players), video players, audio recorders, desktop computers, laptop computers, personal digital assistants (PDAs), gaming systems, etc. One kind of electronic device 5602 is a communication device, which may communicate with another device. Examples of communication devices include telephones, laptop computers, desktop computers, cellular phones, smartphones, wireless or wired modems, e-readers, tablet devices, gaming systems, cellular telephone base stations or nodes, access points, wireless gateways and wireless routers, etc. [00318] An electronic device 5602 (e.g., communication device) may operate in accordance with certain industry standards, such as International Telecommunication Union (ITU) standards and/or Institute of Electrical and Electronics Engineers (IEEE) standards (e.g., 802.11 Wireless Fidelity or "Wi-Fi" standards such as 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, etc.).
Other examples of standards that a communication device may comply with include IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access or "WiMAX"), 3GPP, 3GPP LTE, 3rd Generation Partnership Project 2 (3GPP2), GSM and others (where a communication device may be referred to as a User Equipment (UE), NodeB, evolved NodeB (eNB), mobile device, mobile station, subscriber station, remote station, access terminal, mobile terminal, terminal, user terminal and/or subscriber unit, etc., for example). While some of the systems and methods disclosed herein may be described in terms of at least one standard, this should not limit the scope of the disclosure, as the systems and methods may be applicable to many systems and/or standards. [00319] The electronic device 5602 may include at least one sensor 5604, a mapper 5610 and/or an operation block/module 5614. As used herein, the phrase "block/module" indicates that a particular component may be implemented in hardware (e.g., circuitry), software or a combination of both. For example, the operation block/module 5614 may be implemented with hardware components such as circuitry and/or software components such as instructions or code, etc. Additionally, one or more of the components or elements of the electronic device 5602 may be implemented in hardware (e.g., circuitry), software, firmware or any combination thereof. For example, the mapper 5610 may be implemented in circuitry (e.g., in an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) and/or one or more processors, etc.). [00320] The at least one sensor 5604 may collect data relating to the electronic device 5602. The at least one sensor 5604 may be included in and/or coupled to the electronic device 5602. Examples of sensors 5604 include microphones, accelerometers, gyroscopes, compasses, infrared sensors, tilt sensors, global positioning system (GPS) receivers, proximity sensors, cameras, ultrasound sensors, etc.
In some implementations, the at least one sensor 5604 may provide sensor data 5608 to the mapper 5610. Examples of sensor data 5608 include audio signals, accelerometer readings, gyroscope readings, position information, orientation information, location information, proximity information (e.g., whether an object is detected close to the electronic device 5602), images, etc. [00321] In some configurations (described in greater detail below), the mapper 5610 may use the sensor data 5608 to improve audio processing. For example, a user may hold the electronic device 5602 (e.g., a phone) in different orientations for speakerphone usage (e.g., portrait, landscape or even desktop hands-free). Depending on the holding pattern (e.g., the electronic device 5602 orientation), the electronic device 5602 may select appropriate microphone configurations (including a single microphone configuration) to improve spatial audio processing. By adding accelerometer/proximity sensor data 5608, the electronic device 5602 may make the switch seamlessly. [00322] The sensors 5604 (e.g., multiple microphones) may receive one or more audio signals (e.g., a multi-channel audio signal). In some implementations, microphones may be located at various locations of the electronic device 5602, depending on the configuration. For example, microphones may be positioned on the front, sides and/or back of the electronic device 5602 as illustrated above in Figure 1. Additionally or alternatively, microphones may be positioned near the top and/or bottom of the electronic device 5602. In some cases, the microphones may be configured to be disabled (e.g., not receive an audio signal). For example, the electronic device 5602 may include circuitry that disables at least one microphone in some cases. In some implementations, one or more microphones may be disabled based on the electronic device 5602 orientation.
For example, if the electronic device 5602 is in a horizontal face-up orientation on a surface (e.g., a tabletop mode), the electronic device 5602 may disable at least one microphone located on the back of the electronic device 5602. Similarly, if the electronic device 5602 orientation changes (by a large amount for example), the electronic device 5602 may disable at least one microphone. [00323] A few examples of various microphone configurations are given as follows. In one example, the electronic device 5602 may be designed to use a dual-microphone configuration when possible. Unless the user holds the electronic device 5602 (e.g., phone) in such a way that a normal vector to the display is parallel, or nearly parallel with the ground (e.g., the electronic device 5602 appears to be vertically oriented (which can be determined based on sensor data 5608)), the electronic device 5602 may use a dual-microphone configuration in a category A configuration. In some implementations, in the category A configuration, the electronic device 5602 may include a dual microphone configuration where one microphone may be located near the back-top of the electronic device 5602, and the other microphone may be located near the front-bottom of the electronic device 5602. In this configuration, the electronic device 5602 may be capable of discriminating audio signal sources (e.g., determining the direction of arrival of the audio signals) in a plane that contains a line formed by the locations of the microphones. Based on this configuration, the electronic device 5602 may be capable of discriminating audio signal sources in 180 degrees. Accordingly, the direction of arrival of the audio signals that arrive within the 180 degree span may be discriminated based on the two microphones in the category A configuration. For example, an audio signal received from the left, and an audio signal received from the right of the display of the electronic device 5602 may be discerned. 
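For a single microphone pair as in the category A configuration, DOA within the 180-degree span is commonly estimated from the inter-microphone delay via the far-field relation θ = arcsin(c·τ/d). The sketch below is a generic illustration of that relation, not code from the source; the speed-of-sound constant and function name are assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at ~20 C

def doa_from_delay(tau, mic_spacing):
    """Estimate direction of arrival (degrees from broadside) from the
    time difference of arrival `tau` (seconds) between two microphones
    separated by `mic_spacing` (meters).

    A single pair can only resolve angles in [-90, +90] degrees,
    i.e. a 180-degree span, as described in the text."""
    # Clamp to [-1, 1] to guard against delays beyond the physical limit.
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * tau / mic_spacing))
    return math.degrees(math.asin(s))
```

A zero delay corresponds to a broadside source (0 degrees), while the maximum delay d/c corresponds to an endfire source (90 degrees); sources mirrored across the microphone axis produce the same delay, which is exactly the cone-of-confusion ambiguity noted earlier.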
The directionality of one or more audio signals may be determined as described in section A above in some configurations. [00324] In another example, unless the user holds the electronic device 5602 (e.g., phone) in such a way that a normal vector to the display is perpendicular, or nearly perpendicular with the ground (e.g., the electronic device 5602 appears to be horizontally oriented (which can be informed by sensor data 5608)), the electronic device 5602 may use a dual-microphone configuration, with a category B configuration. In this configuration, the electronic device 5602 may include a dual microphone configuration where one microphone may be located near the back-bottom of the electronic device 5602, and the other microphone may be located near the front-bottom of the electronic device 5602. In some implementations, in the category B configuration, one microphone may be located near the back-top of the electronic device 5602, and the other microphone may be located near the front top of the electronic device 5602. [00325] In the category B configuration, audio signals may be discriminated (e.g., the direction of arrival of the audio signals may be determined) in a plane that contains a line formed by the locations of the microphones. Based on this configuration, there may be 180 degree audio source discrimination. Accordingly, the direction of arrival of the audio signals that arrive within the 180 degree span may be discriminated based on the two microphones in the category B configuration. For example, an audio signal received from the top, and an audio signal received from the bottom of the display of the electronic device 5602 may be discerned. However, two audio signals that are on the left or right of the display of the electronic device 5602 may not be discerned. 
It should be noted that if the electronic device 5602 orientation were changed, such that the electronic device 5602 were vertically oriented, instead of horizontally oriented, the audio signals from the left and right of the display of the electronic device may be discerned. For a three-microphone configuration, category C, the electronic device 5602 may use a front-back pair of microphones for the vertical orientations and may use a top-bottom pair of microphones for horizontal orientations. Using a configuration as in category C, the electronic device 5602 may be capable of discriminating audio signal sources (e.g., discriminating the direction of arrival from different audio signals) in 360 degrees. [00326] The mapper 5610 may determine a mapping 5612 of a source location to electronic device 5602 coordinates and from the electronic device 5602 coordinates to physical coordinates (e.g., a two-dimensional plane corresponding to real-world or earth coordinates) based on the sensor data 5608. The mapping 5612 may include data that indicates mappings (e.g., projections) of a source location to electronic device coordinates and/or to physical coordinates. For example, the mapper 5610 may implement at least one algorithm to map the source location to physical coordinates. In some implementations, the physical coordinates may be two-dimensional physical coordinates. For example, the mapper 5610 may use sensor data 5608 from the at least one sensor 5604 (e.g., integrated accelerometer, proximity and microphone data) to determine an electronic device 5602 orientation (e.g., holding pattern) and to direct the electronic device 5602 to perform an operation (e.g., display a source location, switch microphone configurations and/or configure noise suppression settings). [00327] The mapper 5610 may detect change in electronic device 5602 orientation.
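Pair selection for a category C layout can be driven by the accelerometer's gravity vector. The sketch below is a hypothetical heuristic (the axis convention, function name, and 0.7 threshold are assumptions, not from the source): if gravity lies mostly along the device's long (y) axis, the device is held vertically and the front-back pair is used; otherwise the top-bottom pair is used.

```python
import math

def select_mic_pair(accel, threshold=0.7):
    """Choose a microphone pair for a three-microphone (category C)
    layout based on an accelerometer reading (ax, ay, az).

    Hypothetical convention: the device y axis runs top-to-bottom, so
    gravity dominated by |ay| implies a vertical holding pattern."""
    ax, ay, az = accel
    norm = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    if abs(ay) / norm > threshold:
        return "front-back"   # vertical orientation
    return "top-bottom"       # horizontal orientation
```

In practice the reading would be smoothed first, and hysteresis around the threshold would prevent the configuration from chattering near diagonal holding angles.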
In some implementations, electronic device 5602 (e.g., phone) movements may be detected through the sensor 5604 (e.g., an accelerometer and/or a proximity sensor). The mapper 5610 may utilize these movements, and the electronic device 5602 may adjust microphone configurations and/or noise suppression settings based on the extent of rotation. For example, the mapper 5610 may receive sensor data 5608 from the at least one sensor 5604 that indicates that the electronic device 5602 has changed from a horizontal orientation (e.g., a tabletop mode) to a vertical orientation (e.g., a browse-talk mode). In some implementations, the mapper 5610 may indicate that an electronic device 5602 (e.g., a wireless communication device) has changed orientation from a handset mode (e.g., at the side of a user's head) to a browse-talk mode (e.g., in front of a user at eye level). [00328] The electronic device may also include an operation block/module 5614 that performs at least one operation based on the mapping 5612. For example, the operation block/module 5614 may be coupled to the at least one microphone and may switch the microphone configuration based on the mapping 5612. For example, if the mapping 5612 indicates that the electronic device 5602 has changed from a vertical orientation (e.g., a browse-talk mode) to a horizontal face-up orientation on a flat surface (e.g., a tabletop mode), the operation block/module 5614 may disable at least one microphone located on the back of the electronic device. Similarly, as will be described below, the operation block/module 5614 may switch from a multi-microphone configuration to a single-microphone configuration. Other examples of operations include tracking a source in two or three dimensions, projecting a source into a three-dimensional display space and performing non-stationary noise suppression. [00329] Figure 57 is a flow diagram illustrating one configuration of a method 5700 for mapping electronic device 5602 coordinates.
The method 5700 may be performed by the electronic device 5602. The electronic device 5602 may obtain 5702 sensor data 5608. At least one sensor 5604 coupled to the electronic device 5602 may provide sensor data 5608 to the electronic device 5602. Examples of sensor data 5608 include audio signal(s) (from one or more microphones, for example), accelerometer readings, position information, orientation information, location information, proximity information (e.g., whether an object is detected close to the electronic device 5602), images, etc. In some implementations, the electronic device 5602 may obtain 5702 the sensor data 5608 (e.g., the accelerometer x-y-z coordinate) using pre-acquired data for each designated electronic device 5602 orientation (e.g., holding pattern) and corresponding microphone identification. [00330] The electronic device 5602 may map 5704 a source location to electronic device coordinates based on the sensor data. This may be accomplished as described above in connection with one or more of Figures 41-48. For example, the electronic device 5602 may estimate a direction of arrival (DOA) of a source relative to electronic device coordinates based on a multichannel signal (e.g., multiple audio signals from two or more microphones). In some approaches, mapping 5704 the source location to electronic device coordinates may include projecting the direction of arrival onto a plane (e.g., projection plane and/or array plane, etc.) as described above. In some configurations, the electronic device coordinates may be a microphone array plane corresponding to the device. In other configurations, the electronic device coordinates may be another coordinate system corresponding to the electronic device 5602 that a source location (e.g., DOA) may be mapped to (e.g., translated and/or rotated) by the electronic device 5602. 
[00331] The electronic device 5602 may map 5706 the source location from the electronic device coordinates to physical coordinates (e.g., two-dimensional physical coordinates). This may be accomplished as described above in connection with one or more of Figures 49-53. For example, the electronic device may utilize an orientation matrix to project the DOA estimate into a plane that is defined with reference to the world (or earth) coordinate system. [00332] In some configurations, the mapper 5610 included in the electronic device 5602 may implement at least one algorithm to map 5704 the source location to electronic device coordinates and to map 5706 the source location from the electronic device coordinates to physical coordinates. In some configurations, the mapping 5612 may be applied to a "3D audio map." For example, in some configurations, a compass (e.g., a sensor 5604) may provide compass data (e.g., sensor data 5608) to the mapper 5610. In this example, the electronic device 5602 may obtain a sound distribution map in a four-pi direction (e.g., a sphere) translated into physical (e.g., real-world or earth) coordinates. This may allow the electronic device 5602 to describe a three-dimensional audio space. This kind of elevation information may be utilized to reproduce elevated sound via a loudspeaker located in an elevated position (as in a 22.2 surround system, for example). [00333] In some implementations, mapping 5706 the source location from the electronic device coordinates to physical coordinates may include detecting an electronic device 5602 orientation and/or detecting any change in an electronic device 5602 orientation. For example, the mapper 5610 may use sensor data 5608 from the at least one sensor 5604 (e.g., integrated accelerometer, proximity and microphone data) to determine an electronic device 5602 orientation (e.g., holding pattern).
Similarly, the mapper 5610 may receive sensor data 5608 from the at least one sensor 5604 that indicates that the electronic device 5602 has changed from a horizontal orientation (e.g., a tabletop mode) to a vertical orientation (e.g., a browse-talk mode).

[00334] The electronic device 5602 may perform 5708 an operation based on the mapping 5612. For example, the electronic device 5602 may perform 5708 at least one operation based on the electronic device 5602 orientation (e.g., as indicated by the mapping 5612). Similarly, the electronic device 5602 may perform 5708 an operation based on a detected change in the electronic device 5602 orientation (e.g., as indicated by the mapping 5612). Specific examples of operations include switching the electronic device 5602 microphone configuration, tracking an audio source (in two or three dimensions, for instance), mapping a source location from physical coordinates into a three-dimensional display space, non-stationary noise suppression, filtering, displaying images based on audio signals, etc.

[00335] An example of mapping 5706 the source location from the electronic device coordinates to physical coordinates is given as follows. According to this example, the electronic device 5602 (e.g., the mapper 5610) may monitor the sensor data 5608 (e.g., accelerometer coordinate data), smooth the sensor data 5608 (using simple recursive weighting or Kalman smoothing, for example), and the operation block/module 5614 may perform an operation based on the mapping 5612 (e.g., mapping or projecting the audio signal source).

[00336] The electronic device 5602 may obtain a three-dimensional (3D) space defined by x-y-z basis vectors E = (e_x, e_y, e_z) in a coordinate system given by a form factor (e.g., FLUID) (by using a gyro sensor, for example). The electronic device 5602 may also specify the basis vectors E' = (e_x', e_y', e_z') in the physical (e.g., real-world) coordinate system based on the x-y-z position sensor data 5608.
The electronic device 5602 may then obtain A = (e_x'', e_y''), which is a basis vector space for any two-dimensional plane in the coordinate system. Given the search grid g = (x, y, z), the electronic device 5602 may project down to the plane (x'', y'') by taking the first two elements of the projection operation: (x'', y'') = A(A^T A)^(-1) A^T (g^T E)E'.

[00337] For example, assuming that a device (e.g., phone) is held in browse-talk mode, then E = ([1 0 0]^T, [0 1 0]^T, [0 0 1]^T) and E' = ([0 0 1]^T, [0 1 0]^T, [1 0 0]^T). Then, g = [0 0 1]^T in a device (e.g., phone) coordinate system and (g^T E)E' = [1 0 0]^T. In order to project it down to A = ([1 0 0]^T, [0 1 0]^T), which is the real x-y plane (e.g., physical coordinates), A(A^T A)^(-1) A^T (g^T E)E' = [1 0 0]^T. It should be noted that the first two elements [1 0]^T may be taken after the projection operation. Accordingly, g in E may now be projected onto A as [1 0]^T. Thus, [0 0 1]^T with a browse-talk mode in device (e.g., phone) x-y-z geometry corresponds to [1 0]^T for the real-world x-y plane.

[00338] For a less complex approximation of the projection, the electronic device 5602 may apply a simple interpolation scheme among three set-plane representations defined as P(x', y') = αP_x-y(x', y') + βP_x-z(x', y') + γP_y-z(x', y'), where α + β + γ = 1 and each weight is a function of the angle between the real x-y plane and each set plane. Alternatively, the electronic device 5602 may use the representation given by P(x', y') = min(P_x-y(x', y'), P_x-z(x', y'), P_y-z(x', y')). In the example of mapping, a coordinate change portion is illustrated before the projection operation.

[00339] Additionally or alternatively, performing 5708 an operation may include mapping the source location from the physical coordinates into a three-dimensional display space. This may be accomplished as described in connection with one or more of Figures 52-53. Additional examples are provided below.
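Purely as an editorial illustration (not part of the original disclosure), the projection operation of paragraphs [00336]-[00337] might be sketched as follows; the function name, NumPy usage and column-vector layout are assumptions:

```python
import numpy as np

# Sketch of projecting a search-grid direction g (device coordinates)
# onto a plane A in physical coordinates, per (x'', y'') = first two
# elements of A (A^T A)^(-1) A^T (g^T E) E'. Names are illustrative.
def project_to_plane(g, E, E_prime, A):
    v = (g @ E) @ E_prime                          # translate g into physical coordinates
    proj = A @ np.linalg.inv(A.T @ A) @ A.T @ v    # orthogonal projection onto span(A)
    return proj[:2]                                # keep the first two elements (x'', y'')

# Browse-talk example from the text:
E = np.eye(3)                       # device basis ([1 0 0]^T, [0 1 0]^T, [0 0 1]^T)
E_prime = np.array([[0, 0, 1],
                    [0, 1, 0],
                    [1, 0, 0]])     # physical basis ([0 0 1]^T, [0 1 0]^T, [1 0 0]^T)
A = np.array([[1, 0],
              [0, 1],
              [0, 0]])              # real x-y plane, columns [1 0 0]^T and [0 1 0]^T
g = np.array([0, 0, 1])             # search-grid direction in device coordinates

print(project_to_plane(g, E, E_prime, A))  # -> [1. 0.]
```

This reproduces the worked example: [0 0 1]^T in browse-talk device geometry corresponds to [1 0]^T in the real x-y plane.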
For instance, the electronic device 5602 may render a sound source representation corresponding to the source location in a three-dimensional display space. In some configurations, the electronic device 5602 may render a plot (e.g., polar plot, rectangular plot) that includes the sound source representation on a two-dimensional plane corresponding to physical coordinates in the three-dimensional display space, where the plane is rendered based on the device orientation. In this way, performing 5708 the operation may include maintaining a source orientation in the three-dimensional display space regardless of the device orientation (e.g., rotation, tilt, pitch, yaw, roll, etc.). For instance, the plot will be aligned with physical coordinates regardless of how the device is oriented. In other words, the electronic device 5602 may compensate for device orientation changes in order to maintain the orientation of the plot in relation to physical coordinates. In some configurations, displaying the three-dimensional display space may include projecting the three-dimensional display space onto a two-dimensional display (for display on a two-dimensional pixel grid, for example).

[00340] Figure 58 is a block diagram illustrating a more specific configuration of an electronic device 5802 in which systems and methods for mapping electronic device 5802 coordinates may be implemented. The electronic device 5802 may be an example of the electronic device 5602 described in connection with Figure 56. The electronic device 5802 may include at least one sensor 5804, at least one microphone, a mapper 5810 and an operation block/module 5814 that may be examples of corresponding elements described in connection with Figure 56. In some implementations, the at least one sensor 5804 may provide sensor data 5808, which may be an example of the sensor data 5608 described in connection with Figure 56, to the mapper 5810.
[00341] The operation block/module 5814 may receive a reference orientation 5816. In some implementations, the reference orientation 5816 may be stored in memory that is included in and/or coupled to the electronic device 5802. The reference orientation 5816 may indicate a reference electronic device 5802 orientation. For example, the reference orientation 5816 may indicate an optimal electronic device 5802 orientation (e.g., an optimal holding pattern). The optimal electronic device 5802 orientation may correspond to an orientation where a dual microphone configuration may be implemented. For example, the reference orientation 5816 may be the orientation where the electronic device 5802 is positioned between a vertical orientation and a horizontal orientation. In some implementations, electronic device 5802 (e.g., phone) orientations that are horizontal and vertical are non-typical holding patterns (e.g., not optimal electronic device 5802 orientations). These positions (e.g., vertical and/or horizontal) may be identified using sensors 5804 (e.g., accelerometers). In some implementations, the intermediate positions (which may include the reference orientation 5816) may be positions for endfire dual microphone noise suppression. By comparison, the horizontal and/or vertical orientations may be handled by broadside/single microphone noise suppression.

[00342] In some implementations, the operation block/module 5814 may include a three-dimensional source projection block/module 5818, a two-dimensional source tracking block/module 5820, a three-dimensional source tracking block/module 5822, a microphone configuration switch 5824 and/or a non-stationary noise suppression block/module 5826.

[00343] The three-dimensional source tracking block/module 5822 may track an audio signal source in three dimensions.
For example, as the audio signal source moves relative to the electronic device 5802, or as the electronic device 5802 moves relative to the audio signal source, the three-dimensional source tracking block/module 5822 may track the location of the audio signal source relative to the electronic device 5802 in three dimensions. In some implementations, the three-dimensional source tracking block/module 5822 may track an audio signal source based on the mapping 5812. In other words, the three-dimensional source tracking block/module 5822 may determine the location of the audio signal source relative to the electronic device based on the electronic device 5802 orientation as indicated in the mapping 5812. In some implementations, the three-dimensional source projection block/module 5818 may project the source (e.g., the source tracked in three dimensions) into two-dimensional space. For example, the three-dimensional source projection block/module 5818 may use at least one algorithm to project a source tracked in three dimensions to a display in two dimensions.

[00344] In this implementation, the two-dimensional source tracking block/module 5820 may track the source in two dimensions. For example, as the audio signal source moves relative to the electronic device 5802, or as the electronic device 5802 moves relative to the audio signal source, the two-dimensional source tracking block/module 5820 may track the location of the audio signal source relative to the electronic device 5802 in two dimensions. In some implementations, the two-dimensional source tracking block/module 5820 may track an audio signal source based on the mapping 5812. In other words, the two-dimensional source tracking block/module 5820 may determine the location of the audio signal source relative to the electronic device based on the electronic device 5802 orientation as indicated in the mapping 5812.
[00345] The microphone configuration switch 5824 may switch the electronic device 5802 microphone configuration. For example, the microphone configuration switch 5824 may enable/disable at least one of the microphones. In some implementations, the microphone configuration switch 5824 may switch the microphone configuration based on the mapping 5812 and/or the reference orientation 5816. For example, when the mapping 5812 indicates that the electronic device 5802 is horizontal face-up on a flat surface (e.g., a tabletop mode), the microphone configuration switch 5824 may disable at least one microphone located on the back of the electronic device 5802. Similarly, when the mapping 5812 indicates that the electronic device 5802 orientation is different (by a certain amount, for example) from the reference orientation 5816, the microphone configuration switch 5824 may switch from a multi-microphone configuration (e.g., a dual-microphone configuration) to a single microphone configuration.

[00346] Additionally or alternatively, the non-stationary noise suppression block/module 5826 may perform non-stationary noise suppression based on the mapping 5812. In some implementations, the non-stationary noise suppression block/module 5826 may perform the non-stationary noise suppression independent of the electronic device 5802 orientation. For example, non-stationary noise suppression may include spatial processing such as beam-null forming and/or directional masking, which are discussed above.

[00347] Figure 59 is a flow diagram illustrating a more specific configuration of a method 5900 for mapping electronic device 5802 coordinates. The method 5900 may be performed by the electronic device 5802. The electronic device 5802 may obtain 5902 sensor data 5808. In some implementations, this may be done as described in connection with Figure 57.
[00348] The electronic device 5802 may determine 5904 a mapping 5812 of electronic device 5802 coordinates from a multi-microphone configuration to physical coordinates based on the sensor data 5808. In some implementations, this may be done as described in connection with Figure 57.

[00349] The electronic device 5802 may determine 5906 an electronic device orientation based on the mapping 5812. For example, the mapper 5810 may receive sensor data 5808 from a sensor 5804 (e.g., an accelerometer). In this example, the mapper 5810 may use the sensor data 5808 to determine the electronic device 5802 orientation. In some implementations, the electronic device 5802 orientation may be based on a reference plane. For example, the electronic device 5802 may use polar coordinates to define an electronic device 5802 orientation. As will be described below, the electronic device 5802 may perform at least one operation based on the electronic device 5802 orientation.

[00350] In some implementations, the electronic device 5802 may provide a real-time source activity map to the user. In this example, the electronic device 5802 may determine 5906 an electronic device 5802 orientation (e.g., a user's holding pattern) by utilizing a sensor 5804 (e.g., an accelerometer and/or gyroscope). A variance of likelihood (directionality) may be given by a two-dimensional (2D) anglogram (or polar plot) for each electronic device 5802 orientation (e.g., holding pattern). In some cases, the variance may become significantly large (omni-directional) if the electronic device 5802 faces the plane made by two pairs orthogonally.

[00351] In some implementations, the electronic device 5802 may detect 5908 any change in the electronic device 5802 orientation based on the mapping 5812. For example, the mapper 5810 may monitor the electronic device 5802 orientation over time. In this example, the electronic device 5802 may detect 5908 any change in the electronic device 5802 orientation.
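As an editorial illustration (not part of the original disclosure), classifying the device orientation from accelerometer data might be sketched as follows; the tilt-angle thresholds and labels are assumptions:

```python
import math

# Illustrative sketch: derive a tilt angle from accelerometer x-y-z data
# and classify horizontal vs. vertical vs. intermediate orientations.
# The 20/70 degree thresholds are assumptions, not from the disclosure.
def classify_orientation(ax, ay, az):
    tilt = math.degrees(math.atan2(math.hypot(ax, ay), az))
    if tilt < 20.0:
        return "horizontal"      # e.g., tabletop mode
    if tilt > 70.0:
        return "vertical"        # e.g., browse-talk mode
    return "intermediate"        # e.g., may include the reference orientation

print(classify_orientation(0.0, 0.0, 1.0))   # -> horizontal
print(classify_orientation(0.0, 1.0, 0.0))   # -> vertical
print(classify_orientation(0.0, 0.7, 0.7))   # -> intermediate
```

Monitoring this classification over time would then allow the mapper to detect a change in orientation.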
For example, the mapper 5810 may indicate that an electronic device 5802 (e.g., a wireless communication device) has changed orientation from a handset mode (e.g., at the side of a user's head) to a browse-talk mode (e.g., in front of a user at eye level). As will be described below, the electronic device 5802 may perform at least one operation based on any change to the electronic device 5802 orientation.

[00352] Optionally, the electronic device 5802 (e.g., operation block/module 5814) may determine 5910 whether there is a difference between the electronic device 5802 orientation and the reference orientation 5816. For example, the electronic device 5802 may receive a mapping 5812 that indicates the electronic device 5802 orientation. The electronic device 5802 may also receive a reference orientation 5816. If the electronic device 5802 orientation and the reference orientation 5816 are not the same, the electronic device 5802 may determine that there is a difference between the electronic device 5802 orientation and the reference orientation 5816. As will be described below, the electronic device 5802 may perform at least one operation based on the difference between the electronic device 5802 orientation and the reference orientation 5816. In some implementations, determining 5910 whether there is a difference between the electronic device 5802 orientation and the reference orientation 5816 may include determining whether any difference is greater than a threshold amount. In this example, the electronic device 5802 may perform an operation based on the difference when the difference is greater than the threshold amount.

[00353] In some implementations, the electronic device 5802 may switch 5912 a microphone configuration based on the electronic device 5802 orientation.
For example, the electronic device 5802 may select microphone signals based on DOA information that maximize the spatial resolution of one or more sound sources in physical coordinates (e.g., a 2D physical plane). Switching 5912 a microphone configuration may include enabling/disabling microphones that are located at various locations on the electronic device 5802.

[00354] Switching 5912 a microphone configuration may be based on the mapping 5812 and/or the reference orientation 5816. In some configurations, switching 5912 between different microphone configurations may include a certain systematic delay, as in the case of switching 5912 from a dual microphone configuration to a single microphone configuration. For example, the systematic delay may be around three seconds when there is an abrupt change of the electronic device 5802 orientation. By basing the switch 5912 on the mapping 5812 (e.g., and the sensor data 5808), switching 5912 from a dual microphone configuration to a single microphone configuration may be made seamlessly. In some implementations, switching 5912 a microphone configuration based on the mapping 5812 and/or the reference orientation 5816 may include switching 5912 a microphone configuration based on at least one of the electronic device 5802 orientation, any change in the electronic device 5802 orientation and any difference between the electronic device 5802 orientation and the reference orientation 5816.

[00355] A few examples of switching 5912 a microphone configuration are given as follows. In one example, the electronic device 5802 may be in the reference orientation 5816 (e.g., an optimal holding pattern). In this example, the electronic device 5802 may learn the sensor data 5808 (e.g., the accelerometer x-y-z coordinates). This may be based on a simple weighted average (e.g., alpha*history + (1 - alpha)*current) or more sophisticated Kalman smoothing, for example.
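The weighted-average tracking mentioned here might be sketched as follows (an editorial illustration only; the alpha value, list representation and sample readings are assumptions):

```python
# Illustrative sketch of recursive (exponentially weighted) smoothing of
# accelerometer x-y-z readings, per alpha*history + (1 - alpha)*current.
# The alpha value is an assumption, not taken from the disclosure.
def smooth(history, current, alpha=0.9):
    """Element-wise alpha*history + (1 - alpha)*current."""
    return [alpha * h + (1 - alpha) * c for h, c in zip(history, current)]

state = [0.0, 0.0, 1.0]          # tracked accelerometer statistic
for reading in ([0.0, 0.1, 0.9], [0.0, 0.2, 0.8]):
    state = smooth(state, reading)
print(state)
```

Comparing the tracked statistic against the reference orientation (e.g., a vector distance exceeding a threshold) would then drive the configuration switch.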
If the electronic device 5802 determines 5910 that there is a significantly large difference between the tracked accelerometer statistic and the reference orientation 5816, the electronic device 5802 may switch 5912 from a multiple microphone configuration to a single microphone configuration.

[00356] In another example, suppose that a user changes posture (e.g., from sitting on a chair to lying down on a bed). If the user holds the electronic device 5802 (e.g., phone) in an acceptable holding pattern (e.g., the electronic device 5802 is in the reference orientation 5816), the electronic device 5802 may continue to be in a multiple microphone configuration (e.g., a dual microphone configuration) and learn the accelerometer coordinates (e.g., obtain 5902 the sensor data 5808). Furthermore, the electronic device 5802 may detect a user's posture while they are in a phone conversation, for example, by detecting the electronic device 5802 orientation. Suppose that a user does not speak while he/she moves the electronic device 5802 (e.g., phone) away from the mouth. In this case, the electronic device 5802 may switch 5912 from a multiple microphone configuration to a single microphone configuration and the electronic device 5802 may remain in the single microphone configuration. However, as soon as the user speaks while holding the electronic device 5802 in an optimal holding pattern (e.g., in the reference orientation 5816), the electronic device 5802 will switch back to the multiple microphone configuration (e.g., a dual-microphone configuration).

[00357] In another example, the electronic device 5802 may be in a horizontal facedown orientation (e.g., a user lies down on a bed holding the electronic device while the display of the electronic device 5802 is facing downward towards the top of the bed). This electronic device 5802 orientation may be easily detected because the z coordinate is negative, as sensed by the sensor 5804 (e.g., the accelerometer).
Additionally or alternatively, for the user's pose change from sitting to lying on a bed, the electronic device 5802 may also learn the user's pose from frames using phase and level differences. As soon as the user uses the electronic device 5802 in the reference orientation 5816 (e.g., holds the electronic device 5802 in the optimal holding pattern), the electronic device 5802 may perform optimal noise suppression. Sensors 5804 (e.g., integrated accelerometer and microphone data) may then be used in the mapper 5810 to determine the electronic device 5802 orientation (e.g., holding pattern of the electronic device 5802) and the electronic device 5802 may perform an operation (e.g., select the appropriate microphone configuration). More specifically, front and back microphones may be enabled, or front microphones may be enabled while back microphones may be disabled. Either of these configurations may be in effect while the electronic device 5802 is in a horizontal orientation (e.g., speakerphone or tabletop mode).

[00358] In another example, a user may change the electronic device 5802 (e.g., phone) holding pattern (e.g., electronic device 5802 orientation) from handset usage to speakerphone or vice versa. By adding accelerometer/proximity sensor data 5808, the electronic device 5802 may make a microphone configuration switch seamlessly and adjust microphone gain and speaker volume (or switch from the earpiece to a larger loudspeaker). For example, suppose that a user puts the electronic device 5802 (e.g., phone) face down. In some implementations, the electronic device 5802 may also track the sensor 5804 data so that the electronic device 5802 may determine whether the electronic device 5802 (e.g., phone) is facing down or up. If the electronic device 5802 (e.g., phone) is facing down, the electronic device 5802 may provide speakerphone functionality. In some implementations, the electronic device may prioritize the proximity sensor result.
In other words, if the sensor data 5808 indicates that an object (e.g., a hand or a desk) is near the ear, the electronic device may not switch 5912 to speakerphone.

[00359] Optionally, the electronic device 5802 may track 5914 a source in three dimensions based on the mapping 5812. For example, the electronic device 5802 may track an audio signal source in three dimensions as it moves relative to the electronic device 5802. In this example, the electronic device 5802 may project 5916 the source (e.g., source location) into a two-dimensional space. For example, the electronic device 5802 may project 5916 the source that was tracked in three dimensions onto a two-dimensional display in the electronic device 5802. Additionally, the electronic device 5802 may switch 5918 to tracking the source in two dimensions. For example, the electronic device 5802 may track in two dimensions an audio signal source as it moves relative to the electronic device 5802. Depending on an electronic device 5802 orientation, the electronic device 5802 may select corresponding nonlinear pairs of microphones and provide a 360-degree two-dimensional representation with proper two-dimensional projection. For example, the electronic device 5802 may provide a visualization of two-dimensional, 360-degree source activity regardless of electronic device 5802 orientation (e.g., holding patterns such as speakerphone mode, portrait browse-talk mode and landscape browse-talk mode, or any combination in between). The electronic device 5802 may interpolate the visualization to a two-dimensional representation for in between each holding pattern. In fact, the electronic device 5802 may even render a three-dimensional visualization using three sets of two-dimensional representations.

[00360] In some implementations, the electronic device 5802 may perform 5920 non-stationary noise suppression.
Performing 5920 non-stationary noise suppression may suppress a noise audio signal from a target audio signal to improve spatial audio processing. In some implementations, the electronic device 5802 may be moving during the noise suppression. In these implementations, the electronic device 5802 may perform 5920 non-stationary noise suppression independent of the electronic device 5802 orientation. For example, if a user mistakenly rotates a phone but still wants to focus on some target direction, then it may be beneficial to maintain that target direction regardless of the device orientation.

[00361] Figure 60 is a flow diagram illustrating one configuration of a method 6000 for performing 5708 an operation based on the mapping 5812. The method 6000 may be performed by the electronic device 5802. The electronic device 5802 may detect 6002 any change in the sensor data 5808. In some implementations, detecting 6002 any change in the sensor data 5808 may include detecting whether a change in the sensor data 5808 is greater than a certain amount. For example, the electronic device 5802 may detect 6002 whether there is a change in accelerometer data that is greater than a determined threshold amount.

[00362] The electronic device 5802 may determine 6004 if the sensor data 5808 indicates that the electronic device 5802 is in one of a horizontal or vertical position or that the electronic device 5802 is in an intermediate position. For example, the electronic device 5802 may determine whether the sensor data 5808 indicates that the electronic device 5802 is in a tabletop mode (e.g., horizontal face-up on a surface) or a browse-talk mode (e.g., vertical at eye level) or whether the electronic device 5802 is in a position other than vertical or horizontal (e.g., which may include the reference orientation 5816).
[00363] If the electronic device 5802 determines 6004 that the sensor data 5808 indicates that the electronic device 5802 is in an intermediate position, the electronic device 5802 may use 6006 a dual microphone configuration. If the electronic device 5802 was not previously using a dual microphone configuration, using 6006 a dual microphone configuration may include switching to a dual microphone configuration. By comparison, if the electronic device 5802 was previously using a dual microphone configuration, using 6006 a dual microphone configuration may include maintaining a dual microphone configuration.

[00364] If the electronic device 5802 determines 6004 that the sensor data 5808 indicates that the electronic device 5802 is in a horizontal or vertical position, the electronic device 5802 may determine 6008 if a near field phase/gain voice activity detector (VAD) is active. In other words, the electronic device 5802 may determine if the electronic device 5802 is located close to the audio signal source (e.g., a user's mouth). If the electronic device 5802 determines 6008 that a near field phase/gain voice activity detector is active (e.g., the electronic device 5802 is near the user's mouth), the electronic device 5802 may use 6006 a dual microphone configuration.

[00365] If the electronic device 5802 determines 6008 that a near field phase/gain voice activity detector is not active (e.g., the electronic device 5802 is not located close to the audio signal source), the electronic device 5802 may use 6010 a single microphone configuration. If the electronic device 5802 was not previously using a single microphone configuration, using 6010 a single microphone configuration may include switching to a single microphone configuration. By comparison, if the electronic device 5802 was previously using a single microphone configuration, using 6010 a single microphone configuration may include maintaining a single microphone configuration.
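The branching of method 6000 described above might be sketched as follows (an editorial illustration only; the function name and string labels are assumptions):

```python
# Illustrative sketch of the method 6000 decision: intermediate positions
# use a dual microphone configuration; horizontal/vertical positions use
# dual only when a near-field phase/gain VAD is active, else single.
def select_mic_config(position, near_field_vad_active):
    if position == "intermediate":
        return "dual"            # use 6006 a dual microphone configuration
    # horizontal or vertical position:
    if near_field_vad_active:
        return "dual"            # device is near the user's mouth
    return "single"              # use 6010 a single microphone configuration

print(select_mic_config("intermediate", False))  # -> dual
print(select_mic_config("horizontal", True))     # -> dual
print(select_mic_config("vertical", False))      # -> single
```

Returning the same configuration the device was already using corresponds to "maintaining" rather than "switching" in the text.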
In some implementations, using 6010 a single microphone configuration may include using broadside/single microphone noise suppression.

[00366] Figure 61 is a flow diagram illustrating another configuration of a method 6100 for performing 5708 an operation based on the mapping 5812. The method 6100 may be performed by the electronic device 5802. The electronic device 5802 may detect 6102 any change in the sensor data 5808. In some implementations, this may be done as described in connection with Figure 60.

[00367] The electronic device 5802 may determine 6104 if the sensor data 5808 indicates that the electronic device 5802 is in a tabletop position or in an intermediate or vertical position. For example, the electronic device 5802 may determine 6104 if the sensor data 5808 indicates that the electronic device 5802 is horizontal face-up on a surface (e.g., a tabletop position) or whether the electronic device 5802 is vertical (e.g., a browse-talk position) or in a position other than vertical or horizontal (e.g., which may include the reference orientation 5816).

[00368] If the electronic device 5802 determines 6104 that the sensor data 5808 indicates that the electronic device 5802 is in an intermediate position, the electronic device 5802 may use 6106 front and back microphones. In some implementations, using 6106 front and back microphones may include enabling/disabling at least one microphone.

[00369] If the electronic device 5802 determines 6104 that the sensor data 5808 indicates that the electronic device 5802 is in a tabletop position, the electronic device 5802 may determine 6108 if the electronic device 5802 is facing up. In some implementations, the electronic device 5802 may determine 6108 if the electronic device 5802 is facing up based on the sensor data 5808. If the electronic device 5802 determines 6108 that the electronic device 5802 is facing up, the electronic device 5802 may use 6110 front microphones.
For example, the electronic device may use 6110 at least one microphone located on the front of the electronic device 5802. In some implementations, using 6110 front microphones may include enabling/disabling at least one microphone. For example, using 6110 front microphones may include disabling at least one microphone located on the back of the electronic device 5802.

[00370] If the electronic device 5802 determines 6108 that the electronic device 5802 is not facing up (e.g., the electronic device 5802 is facing down), the electronic device 5802 may use 6112 back microphones. For example, the electronic device may use 6112 at least one microphone located on the back of the electronic device 5802. In some implementations, using 6112 back microphones may include enabling/disabling at least one microphone. For example, using 6112 back microphones may include disabling at least one microphone located on the front of the electronic device 5802.

[00371] Figure 62 is a block diagram illustrating one configuration of a user interface 6228 in which systems and methods for displaying a user interface 6228 on an electronic device 6202 may be implemented. In some implementations, the user interface 6228 may be displayed on an electronic device 6202 that may be an example of the electronic device 5602 described in connection with Figure 56. The user interface 6228 may be used in conjunction with and/or independently from the multi-microphone configurations described herein. The user interface 6228 may be presented on a display 6264 (e.g., a screen) of the electronic device 6202. The display 6264 may also present a sector selection feature 6232. In some implementations, the user interface 6228 may provide an editable mode and a fixed mode. In an editable mode, the user interface 6228 may respond to input to manipulate at least one feature (e.g., the sector selection feature) of the user interface 6228.
In a fixed mode, the user interface 6228 may not respond to input to manipulate at least one feature of the user interface 6228.

[00372] The user interface 6228 may include information. For example, the user interface 6228 may include a coordinate system 6230. In some implementations, the coordinate system 6230 may be a reference for audio signal source location. The coordinate system 6230 may correspond to physical coordinates. For example, sensor data 5608 (e.g., accelerometer data, gyro data, compass data, etc.) may be used to map electronic device 6202 coordinates to physical coordinates as described in Figure 57. In some implementations, the coordinate system 6230 may correspond to a physical space independent of earth coordinates.

[00373] The user interface 6228 may display a directionality of audio signals. For example, the user interface 6228 may include audio signal indicators that indicate the direction of the audio signal source. The angle of the audio signal source may also be indicated in the user interface 6228. The audio signal(s) may be a voice signal. In some implementations, the audio signals may be captured by the at least one microphone. In this implementation, the user interface 6228 may be coupled to the at least one microphone. The user interface 6228 may display a 2D anglogram of captured audio signals. In some implementations, the user interface 6228 may display a 2D plot in 3D perspective to convey an alignment of the plot with a plane that is based on physical coordinates in the real world, such as the horizontal plane. In this implementation, the user interface 6228 may display the information independent of the electronic device 6202 orientation.

[00374] In some implementations, the user interface 6228 may display audio signal indicators for different types of audio signals. For example, the user interface 6228 may include an anglogram of a voice signal and a noise signal.
In some implementations, the user interface 6228 may include icons corresponding to the audio signals. For example, as will be described below, the display 6264 may include icons corresponding to the type of audio signal that is displayed. Similarly, as will be described below, the user interface 6228 may include icons corresponding to the source of the audio signal. The position of these icons in the polar plot may be smoothed in time. As will be described below, the user interface 6228 may include one or more elements to carry out the functions described herein. For example, the user interface 6228 may include an indicator of a selected sector and/or may display icons for editing a selected sector. [00375] The sector selection feature 6232 may allow selection of at least one sector of the physical coordinate system 6230. The sector selection feature 6232 may be implemented by at least one element included in the user interface 6228. For example, the user interface 6228 may include a selected sector indicator that indicates a selected sector. In some implementations, the sector selection feature 6232 may operate based on touch input. For example, the sector selection feature 6232 may allow selection of a sector based on a single touch input (e.g., touching, swiping and/or circling an area of the user interface 6228 corresponding to a sector). In some implementations, the sector selection feature 6232 may allow selection of multiple sectors at the same time. In this example, the sector selection feature 6232 may allow selection of the multiple sectors based on multiple touch inputs. It should be understood that the electronic device 6202 may include circuitry, a processor and/or instructions for producing the user interface 6228. [00376] Figure 63 is a flow diagram illustrating one configuration of a method 6300 for displaying a user interface 6228 on an electronic device 6202. The method 6300 may be performed by the electronic device 6202. 
The electronic device 6202 may obtain 6302 sensor data (e.g., accelerometer data, tilt sensor data, orientation data, etc.) that corresponds to physical coordinates. [00377] The electronic device 6202 may present 6304 the user interface 6228, for example on a display 6264 of the electronic device 6202. In some implementations, the user interface 6228 may include the coordinate system 6230. As described above, the coordinate system 6230 may be a reference for audio signal source location. The coordinate system 6230 may correspond to physical coordinates. For example, sensor data 5608 (e.g., accelerometer data, gyro data, compass data, etc.) may be used to map electronic device 6202 coordinates to physical coordinates as described above. [00378] In some implementations, presenting 6304 the user interface 6228 that may include the coordinate system 6230 may include presenting 6304 the user interface 6228 and the coordinate system 6230 in an orientation that is independent of the electronic device 6202 orientation. In other words, as the electronic device 6202 orientation changes (e.g., the electronic device 6202 rotates), the coordinate system 6230 may maintain orientation. In some implementations, the coordinate system 6230 may correspond to a physical space independent of earth coordinates. [00379] The electronic device 6202 may provide 6306 a sector selection feature 6232 that allows selection of at least one sector of the coordinate system 6230. As described above, the electronic device 6202 may provide 6306 a sector selection feature via the user interface 6228. For example, the user interface 6228 may include at least one element that allows selection of at least one sector of the coordinate system 6230. For example, the user interface 6228 may include an indicator that indicates a selected sector. [00380] The electronic device 6202 may also include a touch sensor that allows touch input selection of the at least one sector. 
For example, the electronic device 6202 may select (and/or edit) one or more sectors and/or one or more audio signal indicators based on one or more touch inputs. Some examples of touch inputs include one or more taps, swipes, patterns (e.g., symbols, shapes, etc.), pinches, spreads, multi-touch rotations, etc. In some configurations, the electronic device 6202 (e.g., user interface 6228) may select a displayed audio signal indicator (and/or sector) when one or more taps, a swipe, a pattern, etc., intersects with the displayed audio signal indicator (and/or sector). Additionally or alternatively, the electronic device 6202 (e.g., user interface 6228) may select a displayed audio signal indicator (and/or sector) when a pattern (e.g., a circular area, rectangular area or area within a pattern), etc., fully or partially surrounds or includes the displayed audio signal indicator (and/or sector). It should be noted that one or more audio signal indicators and/or sectors may be selected at a time. [00381] In some configurations, the electronic device 6202 (e.g., user interface 6228) may edit one or more sectors and/or audio signal indicators based on one or more touch inputs. For example, the user interface 6228 may present one or more options (e.g., one or more buttons, a drop-down menu, etc.) that provide options for editing the audio signal indicator or selected audio signal indicator (e.g., selecting an icon or image for labeling the audio signal indicator, selecting or changing a color, pattern and/or image for the audio signal indicator, setting whether a corresponding audio signal should be filtered (e.g., blocked or passed), zooming in or out on the displayed audio signal indicator, etc.). Additionally or alternatively, the user interface 6228 may present one or more options (e.g., one or more buttons, a drop-down menu, etc.) 
that provide options for editing the sector (e.g., selecting or changing a color, pattern and/or image for the sector, setting whether audio signals in the sector should be filtered (e.g., blocked or passed), zooming in or out on the sector, adjusting sector size (by expanding or contracting the sector, for example), etc.). For instance, a pinch touch input may correspond to reducing or narrowing sector size, while a spread may correspond to enlarging or expanding sector size. [00382] The electronic device 6202 may provide 6308 a sector editing feature that allows editing the at least one sector. For example, the sector editing feature may enable adjusting (e.g., enlarging, reducing, shifting, etc.) the sector as described herein. [00383] In some configurations, the electronic device 6202 (e.g., display 6264) may additionally or alternatively display a target audio signal and an interfering audio signal on the user interface. The electronic device 6202 (e.g., display 6264) may display a directionality of the target audio signal and/or the interfering audio signal captured by one or more microphones. The target audio signal may include a voice signal. [00384] Figure 64 is a block diagram illustrating one configuration of a user interface 6428 in which systems and methods for displaying a user interface 6428 on an electronic device 6402 may be implemented. In some implementations, the user interface 6428 may be included on a display 6464 of an electronic device 6402, which may be examples of corresponding elements described in connection with Figure 62. The electronic device 6402 may include a user interface 6428, at least one microphone 6406, an operation block/module 6414, a display 6464 and/or a sector selection feature 6432 that may be examples of corresponding elements described in one or more of Figures 56 and 62. [00385] In some implementations, the user interface 6428 may present a sector editing feature 6436, and/or a user interface alignment block/module 6440.
The sector editing feature 6436 may allow for editing of at least one sector. For example, the sector editing feature 6436 may allow editing of at least one selected sector of the physical coordinate system 6430. The sector editing feature 6436 may be implemented by at least one element included in the display 6464. For example, the user interface 6428 may include at least one touch point that allows a user to adjust the size of a selected sector. In some implementations, the sector editing feature 6436 may operate based on touch input. For example, the sector editing feature 6436 may allow editing of a selected sector based on a single touch input. In some implementations, the sector editing feature 6436 may allow for at least one of adjusting the size of a sector, adjusting the shape of a sector, adjusting the boundaries of a sector and/or zooming in on the sector. In some implementations, the sector editing feature 6436 may allow editing of multiple sectors at the same time. In this example, the sector editing feature 6436 may allow editing of the multiple sectors based on multiple touch inputs. [00386] As described above, in certain implementations, at least one of the sector selection feature 6432 and the sector editing feature 6436 may operate based on a single touch input or multiple touch inputs. For example, the sector selection feature 6432 may be based on one or more swipe inputs. For instance, the one or more swipe inputs may indicate a circular region. In some configurations, the one or more swipe inputs may be a single swipe. The sector selection feature 6432 may be based on single or multi-touch input. Additionally or alternatively, the electronic device 6402 may adjust a sector based on a single or multi-touch input. [00387] In these examples, the display 6464 may include a touch sensor 6438 that may receive touch input (e.g., a tap, a swipe or circular motion) that selects a sector. 
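One way the touch sensor 6438 might map a tap to a sector is to convert the touch point to a plot angle and test which sector's angular range contains it. This is a hedged sketch; the function names, the screen convention (0 degrees at the top of the plot, increasing clockwise) and the [start, end) sector representation are assumptions for illustration:

```python
import math

def touch_to_angle(x, y, cx, cy):
    """Convert a touch point (x, y) on the display to an angle on the polar
    plot centered at (cx, cy). 0 degrees points up (top of the plot),
    increasing clockwise -- an assumed screen convention. Note that screen
    y coordinates grow downward."""
    ang = math.degrees(math.atan2(x - cx, cy - y))
    return ang % 360.0

def select_sector(angle_deg, sectors):
    """Return the index of the sector whose [start, end) range (in degrees)
    contains the tapped angle, or None if no sector matches. Modular
    arithmetic handles sectors that wrap past 360 degrees."""
    for i, (start, end) in enumerate(sectors):
        span = (end - start) % 360.0
        if (angle_deg - start) % 360.0 < span:
            return i
    return None
```

A swipe or circular motion could reuse the same mapping by testing each sampled touch point against the sector ranges.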
The touch sensor 6438 may also receive touch input that edits a sector, for example, by moving touch points displayed on the display 6464. In some configurations, the touch sensor 6438 may be integrated with the display 6464. In other configurations, the touch sensor 6438 may be implemented separately in the electronic device 6402 or may be coupled to the electronic device 6402. [00388] The user interface alignment block/module 6440 may align all or part of the user interface 6428 with a reference plane. In some implementations, the reference plane may be horizontal (e.g., parallel to ground or a floor). For example, the user interface alignment block/module 6440 may align part of the user interface 6428 that displays the coordinate system 6430. In some implementations, the user interface alignment block/module 6440 may align all or part of the user interface 6428 in real time. [00389] In some configurations, the electronic device 6402 may include at least one image sensor 6434. For example, several image sensors 6434 may be included within an electronic device 6402 (in addition to or alternatively from multiple microphones 6406). The at least one image sensor 6434 may collect data relating to the electronic device 6402 (e.g., image data). For example, a camera (e.g., an image sensor) may generate an image. In some implementations, the at least one image sensor 6434 may provide image data 5608 to the display 6464. [00390] The electronic device 6402 may pass audio signals (e.g., a target audio signal) included within at least one sector. For example, the electronic device 6402 may pass audio signals to an operation block/module 6414. The operation block/module 6414 may pass one or more audio signals indicated within the at least one sector. In some implementations, the operation block/module 6414 may include an attenuator 6442 that attenuates an audio signal.
For example, the operation block/module 6414 (e.g., attenuator 6442) may attenuate (e.g., block, reduce and/or reject) audio signals not included within the at least one selected sector (e.g., interfering audio signal(s)). In some cases, the audio signals may include a voice signal. For instance, the sector selection feature may allow attenuation of undesirable audio signals aside from a user voice signal. [00391] In some configurations, the electronic device (e.g., the display 6464 and/or operation block/module 6414) may indicate image data from the image sensor(s) 6434. In one configuration, the electronic device 6402 (e.g., operation block/module 6414) may pass image data (and filter other image data, for instance) from the at least one image sensor 6434 based on the at least one sector. In other words, at least one of the techniques described herein regarding the user interface 6428 may be applied to image data alternatively from or in addition to audio signals. [00392] Figure 65 is a flow diagram illustrating a more specific configuration of a method 6500 for displaying a user interface 6428 on an electronic device 6402. The method may be performed by the electronic device 6402. The electronic device 6402 may obtain 6502 a coordinate system 6430 that corresponds to physical coordinates. In some implementations, this may be done as described in connection with Figure 63. [00393] The electronic device 6402 may present 6504 a user interface 6428 that includes the coordinate system 6430. In some implementations, this may be done as described in connection with Figure 63. [00394] The electronic device 6402 may display 6506 a directionality of at least one audio signal captured by at least one microphone. In other words, the electronic device 6402 may display the location of an audio signal source relative to the electronic device. The electronic device 6402 may also display the angle of the audio signal source in the display 6464.
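The pass/attenuate behavior of the operation block/module 6414 (e.g., attenuator 6442) described above might be sketched as a per-signal gain, assuming the audio signals have already been spatially separated and tagged with an arrival angle (a simplification of any real beamforming pipeline; the names are hypothetical):

```python
def apply_sector_gains(signals, selected_sectors, atten_gain=0.0):
    """signals: list of (angle_deg, samples) pairs for spatially separated
    audio signals. Pass signals whose angle falls inside any selected
    [start, end) sector; scale the rest by atten_gain (0.0 blocks them)."""
    out = []
    for angle, samples in signals:
        inside = any((angle - s) % 360.0 < (e - s) % 360.0
                     for s, e in selected_sectors)
        gain = 1.0 if inside else atten_gain
        out.append([gain * v for v in samples])
    return out
```

Setting atten_gain to a small nonzero value would correspond to reducing rather than rejecting the interfering audio signal(s).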
As described above, the electronic device 6402 may display a 2D anglogram of captured audio signals. In some implementations, the display 6464 may display a 2D plot in 3D perspective to convey an alignment of the plot with a plane that is based on physical coordinates in the real world, such as the horizontal plane. [00395] The electronic device 6402 may display 6508 an icon corresponding to the at least one audio signal (e.g., corresponding to a wave pattern displayed on the user interface 6428). According to some configurations, the electronic device 6402 (e.g., display 6464) may display 6508 an icon that identifies an audio signal as being a target audio signal (e.g., voice signal). Additionally or alternatively, the electronic device 6402 (e.g., display 6464) may display 6508 an icon (e.g., a different icon) that identifies an audio signal as being noise and/or interference (e.g., an interfering or interference audio signal). [00396] In some implementations, the electronic device 6402 may display 6508 an icon that corresponds to the source of an audio signal. For example, the electronic device 6402 may display 6508 an image icon indicating the source of a voice signal, for example, an image of an individual. The electronic device 6402 may display 6508 multiple icons corresponding to the at least one audio signal. For example, the electronic device may display at least one image icon and/or icons that identify the audio signal as a noise/interference signal or a voice signal. [00397] The electronic device 6402 (e.g., user interface 6428) may align 6510 all or part of the user interface 6428 with a reference plane. For example, the electronic device 6402 may align 6510 the coordinate system 6430 with a reference plane. In some configurations, aligning 6510 all or part of the user interface 6428 may include mapping (e.g., projecting) a two-dimensional plot (e.g., polar plot) into a three-dimensional display space.
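The mapping of the two-dimensional plot into three-dimensional display space might be sketched by foreshortening the plot's depth axis according to device tilt. The convention below (tilt of 0 degrees meaning the device lies flat, theta measured clockwise from the top of the plot) is an assumption for illustration, not the described implementation:

```python
import math

def project_plot_point(r, theta_deg, tilt_deg):
    """Project a point (r, theta) on the 2D polar plot into screen (x, y)
    so the plot appears aligned with the horizontal plane. tilt_deg is the
    device's tilt away from lying flat (0 = flat, 90 = held upright). The
    depth axis is foreshortened by cos(tilt) -- an assumed convention."""
    t = math.radians(theta_deg)
    x = r * math.sin(t)                                     # left/right on screen
    y = r * math.cos(t) * math.cos(math.radians(tilt_deg))  # depth, foreshortened
    return x, y
```

When the device is flat the plot appears as a full circle; as the device tilts upright, the circle flattens toward an ellipse, conveying alignment with the horizontal plane.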
Additionally or alternatively, the electronic device 6402 may align one or more of the sector selection feature 6432 and the sector editing feature 6436 with a reference plane. The reference plane may be horizontal (e.g., correspond to earth coordinates). In some implementations, the part of the user interface 6428 that is aligned with the reference plane may be aligned with the reference plane independent of the electronic device 6402 orientation. In other words, as the electronic device 6402 translates and/or rotates, all or part of the user interface 6428 that is aligned with the reference plane may remain aligned with the reference plane. In some implementations, the electronic device 6402 may align 6510 all or part of the user interface 6428 in real time. [00398] The electronic device 6402 may provide 6512 a sector selection feature 6432 that allows selection of at least one sector of the coordinate system 6430. In some implementations, this may be done as described in connection with Figure 63. [00399] In some implementations, the electronic device 6402 (e.g., user interface 6428 and/or sector selection feature 6432) may pad 6514 a selected sector. For example, the electronic device 6402 may include additional information with the audio signal to improve spatial audio processing. For example, padding may refer to visual feedback provided as highlighted (e.g., bright color) padding for the selected sector. For example, the selected sector 7150 (e.g., the outline of the sector) illustrated in Figure 71 may be highlighted to enable easy identification of the selected sector. [00400] The electronic device 6402 (e.g., the display 6464, the user interface 6428, etc.) may provide 6516 a sector editing feature 6436 that allows editing at least one sector. As described above, the electronic device 6402 may provide 6516 a sector editing feature 6436 via the user interface 6428.
In some implementations, the sector editing feature 6436 may operate based on touch input. For example, the sector editing feature 6436 may allow editing of a selected sector based on a single or multiple touch inputs. For instance, the user interface 6428 may include at least one touch point that allows a user to adjust the size of a selected sector. In this implementation, the electronic device 6402 may provide a touch sensor 6438 that receives touch input that allows editing of the at least one sector. [00401] The electronic device 6402 may provide 6518 a fixed mode and an editable mode. In an editable mode, the user interface 6428 may respond to input to manipulate at least one feature (e.g., sector selection feature 6432) of the user interface 6428. In a fixed mode, the user interface 6428 may not respond to input to manipulate at least one feature of the user interface 6428. In some implementations, the electronic device 6402 may allow selection between a fixed mode and an editable mode. For example, a radio button of the user interface 6428 may allow for selection between an editable mode and a fixed mode. [00402] The electronic device 6402 may pass 6520 audio signals indicated within at least one sector. For example, the electronic device 6402 may pass 6520 audio signals indicated in a selected sector. In some implementations, the electronic device 6402 may attenuate 6522 an audio signal. For example, the electronic device 6402 may attenuate 6522 (e.g., reduce and/or reject) audio signals not included within the at least one selected sector. For example, the audio signals may include a voice signal. In this example, the electronic device 6402 may attenuate 6522 undesirable audio signals aside from a user voice signal. [00403] Figure 66 illustrates examples of the user interface 6628a-b for displaying a directionality of at least one audio signal.
In some implementations, the user interfaces 6628a-b may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 6628a-b may include coordinate systems 6630a-b that may be examples of the coordinate system 6230 described in connection with Figure 62. [00404] In Figure 66, an electronic device 6202 (e.g., phone) may be lying flat. This may occur, for example, in a tabletop mode. In Figure 66, the coordinate systems 6630a-b may include at least one audio signal indicator 6646a-b that may indicate the directionality of at least one audio signal (according to an angle or range of angles, for instance). The at least one audio signal may originate from a person, a speaker, or anything that can create an audio signal. In a first user interface 6628a, a first audio signal indicator 6646a may indicate that a first audio signal is at roughly 180 degrees. By comparison, in a second user interface 6628b, a second audio signal indicator 6646b may indicate that a second audio signal is at roughly 270 degrees. In some implementations, the audio signal indicators 6646a-b may indicate the strength of the audio signal. For example, the audio signal indicators 6646a-b may include a gradient of at least one color that indicates the strength of an audio signal. [00405] The first user interface 6628a provides examples of one or more characteristics that may be included in one or more of the user interfaces described herein. For example, the first user interface 6628a includes a title portion 6601. The title portion 6601 may include a title of the user interface or application that provides the user interface. In the example illustrated in Figure 66, the title is "SFAST." Other titles may be utilized. In general, the title portion 6601 is optional: some configurations of the user interface may not include a title portion. 
Furthermore, it should be noted that the title portion may be located anywhere on the user interface (e.g., top, bottom, center, left, right and/or overlaid, etc.). [00406] In the example illustrated in Figure 66, the first user interface 6628a includes a control portion 6603. The control portion 6603 includes examples of interactive controls. In some configurations, one or more of these interactive controls may be included in a user interface described herein. In general, the control portion 6603 may be optional: some configurations of the user interface may not include a control portion 6603. Furthermore, the control portion may or may not be grouped as illustrated in Figure 66. For example, one or more of the interactive controls may be located in different sections of the user interface (e.g., top, bottom, center, left, right and/or overlaid, etc.). [00407] In the example illustrated in Figure 66, the first user interface 6628a includes an activation/deactivation button 6607, check boxes 6609, a target sector indicator 6611, radio buttons 6613, a smoothing slider 6615, a reset button 6617 and a noise suppression (NS) enable button 6619. It should be noted, however, that the interactive controls may be implemented in a wide variety of configurations. For example, one or more of slider(s), radio button(s), button(s), toggle button(s), check box(es), list(s), dial(s), tab(s), text box(es), drop-down list(s), link(s), image(s), grid(s), table(s), label(s), etc., and/or combinations thereof may be implemented in the user interface to control various functions. [00408] The activation/deactivation button 6607 may generally activate or deactivate functionality related to the first user interface 6628a. 
For example, when an event (e.g., touch event) corresponding to the activation/deactivation button 6607 occurs, the user interface 6628a may enable user interface interactivity and display an audio signal indicator 6646a in the case of activation or may disable user interface interactivity and pause or discontinue displaying the audio signal indicator 6646a in the case of deactivation. [00409] The check boxes 6609 may enable or disable display of a target audio signal and/or an interferer audio signal. For example, the show interferer and show target check boxes enable visual feedback on the detected angle of the detected/computed interferer and target audio signal(s), respectively. For example, the "show interferer" element may be paired with the "show target" element; together, these elements enable visualizing points for target and interference locations in the user interface 6628a. In some configurations, the "show interferer" and "show target" elements may enable/disable display of some actual picture of a target source or interferer source (e.g., their actual face, an icon, etc.) on the angle location detected by the device. [00410] The target sector indicator 6611 may provide an indication of a selected or target sector. In this example, all sectors are indicated as the target sector. Another example is provided in connection with Figure 71 below. [00411] The radio buttons 6613 may enable selection of a fixed or editable sector mode. In the fixed mode, one or more sectors (e.g., selected sectors) may not be adjusted. In the editable mode, one or more sectors (e.g., selected sectors) may be adjusted. [00412] The smoothing slider 6615 may provide selection of a value used to filter the input. For example, a value of 0 indicates that there is no filter, whereas a value of 25 may indicate aggressive filtering. In some configurations, the smoothing slider 6615 stands for an amount of smoothing for displaying the source activity polar plot.
For instance, the amount of smoothing may be based on the value indicated by the smoothing slider 6615, where recursive smoothing is performed (e.g., polar = (1 - alpha)*polar + alpha*polar_current_frame, so a smaller alpha means more smoothing). [00413] The reset button 6617 may enable clearing of one or more current user interface 6628a settings. For example, when a touch event corresponding to the reset button 6617 occurs, the user interface 6628a may clear any sector selections, may clear whether the target and/or interferer audio signals are displayed and/or may reset the smoothing slider to a default value. The noise suppression (NS) enable button 6619 may enable or disable noise suppression processing on the input audio signal(s). For example, an electronic device may enable or disable filtering interfering audio signal(s) based on the noise suppression (NS) enable button 6619. [00414] The user interface 6628a may include a coordinate system portion 6605 (e.g., a plot portion). In some configurations, the coordinate system portion 6605 may occupy the entire user interface 6628a (and/or an entire device display). In other configurations, the coordinate system may occupy a subsection of the user interface 6628a. Although polar coordinate systems are given as examples herein, it should be noted that alternative coordinate systems, such as rectangular coordinate systems, may be included in the user interface 6628a. [00415] Figure 94 illustrates another example of a user interface 9428. In this example, the user interface 9428 includes a rectangular (e.g., Cartesian) coordinate system 9430. One example of an audio signal indicator 9446 is also shown. As described above, the coordinate system 9430 may occupy the entire user interface 9428 (and/or an entire display 9464 included in an electronic device 6202) as illustrated in Figure 94. In other configurations, the coordinate system 9430 may occupy a subsection of the user interface 9428 (and/or display 9464).
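The recursive smoothing described for the smoothing slider 6615 (polar = (1 - alpha)*polar + alpha*polar_current_frame, where a smaller alpha means more smoothing) can be sketched as a per-frame update over the plot's angular bins; the list-based representation is an assumption for illustration:

```python
def smooth_polar(polar_prev, polar_current, alpha):
    """One step of the recursive smoothing used for the source activity
    polar plot: polar = (1 - alpha)*polar + alpha*polar_current_frame.
    A smaller alpha means more smoothing (slower response to new frames)."""
    return [(1.0 - alpha) * p + alpha * c
            for p, c in zip(polar_prev, polar_current)]
```

The slider value would be mapped to alpha, with the slider's extremes corresponding to no filtering (alpha = 1) and aggressive filtering (alpha near 0).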
It should be noted that a rectangular coordinate system may be implemented alternatively from any of the polar coordinate systems described herein. [00416] Figure 67 illustrates another example of the user interface 6728 for displaying a directionality of at least one audio signal. In some implementations, the user interface 6728 may be an example of the user interface 6228 described in connection with Figure 62. The user interface may include a coordinate system 6730 and at least one audio signal indicator 6746a-b that may be examples of corresponding elements described in connection with one or more of Figures 62 and 66. In Figure 67, the user interface 6728 may include multiple audio signal indicators 6746a-b. For example, a first audio signal indicator 6746a may indicate that a first audio signal source 6715a is at approximately 90 degrees, and a second audio signal indicator 6746b may indicate that a second audio signal source 6715b is at approximately 270 degrees. For example, Figure 67 illustrates one example of voice detection to the left and right of an electronic device that includes the user interface 6728. More specifically, the user interface 6728 may indicate voices detected from the left and right of an electronic device. For instance, the user interface 6728 may display multiple (e.g., two) different sources at the same time in different locations. In some configurations, the procedures described in connection with Figure 78 below may enable selecting two sectors corresponding to the audio signal indicators 6746a-b (and to the audio signal sources 6715a-b, for example). [00417] Figure 68 illustrates another example of the user interface 6828 for displaying a directionality of at least one audio signal. In some implementations, the user interface 6828 may be an example of the user interface 6228 described in connection with Figure 62.
The user interface may include a coordinate system 6830, and an audio signal indicator 6846 that may be examples of corresponding elements described in connection with one or more of Figures 62 and 66. Figure 68 illustrates one example of a two-dimensional coordinate system 6830 being projected into three-dimensional display space, where the coordinate system 6830 appears to extend inward into the user interface 6828. For instance, an electronic device 6202 (e.g., phone) may be in the palm of a user's hand. In particular, the electronic device 6202 may be in a horizontal face-up orientation. In this example, a part of the user interface 6828 may be aligned with a horizontal reference plane as described earlier. The audio signal in Figure 68 may originate from a user that is holding the electronic device 6202 in their hands and speaking in front of it (at roughly 180 degrees, for instance). [00418] Figure 69 illustrates another example of the user interface 6928 for displaying a directionality of at least one audio signal. In some implementations, the user interface 6928 may be an example of the user interface 6228 described in connection with Figure 62. The user interface may include a coordinate system 6930 and an audio signal indicator 6946 that may be examples of corresponding elements described in connection with one or more of Figures 62 and 66. In Figure 69, the electronic device 6202 (e.g., phone) may be in the palm of a user's hand. For example, the electronic device 6202 may be in a horizontal face-up orientation. In this example, a part of the user interface 6928 may be aligned with a horizontal reference plane as described earlier. The audio signal in Figure 69 may originate from behind the electronic device 6202 (at roughly 0 degrees, for instance). [00419] Figure 70 illustrates another example of the user interface 7028 for displaying a directionality of at least one audio signal.
In some implementations, the user interface 7028 may be an example of the user interface 6228 described in connection with Figure 62. The user interface may include a coordinate system 7030 and at least one audio signal indicator 7046a-b that may be examples of corresponding elements described in connection with one or more of Figure 62 and Figure 66. In some configurations, the user interface 7028 may include at least one icon 7048a-b corresponding to the type of audio signal indicator 7046a-b that is displayed. For example, the user interface 7028 may display a triangle icon 7048a next to a first audio signal indicator 7046a that corresponds to a target audio signal (e.g., a speaker's or user's voice). Similarly, the user interface 7028 may display a diamond icon 7048b next to a second audio signal indicator 7046b that corresponds to interference (e.g., an interfering audio signal or noise). [00420] Figure 71 illustrates an example of the sector selection feature 6232 of the user interface 7128. In some implementations, the user interface 7128 may be an example of the user interface 6228 described in connection with Figure 62. The user interface 7128 may include a coordinate system 7130 and/or an audio signal indicator 7146 that may be examples of corresponding elements described in connection with one or more of Figures 62 and 66. As described above, the user interface 7128 may include a sector selection feature 6232 that allows selection of at least one sector, by touch input for example. In Figure 71, a selected sector 7150 is indicated by the dashed line. In some implementations, the angle range of a selected sector 7150 may also be displayed (e.g., approximately 225 degrees to approximately 315 degrees as shown in Figure 71). As described earlier, in some implementations, the electronic device 6202 may pass the audio signal (e.g., represented by the audio signal indicator 7146) indicated within the selected sector 7150. 
In this example, the audio signal source is to the side of the phone (at approximately 270 degrees). In some configurations, the other sector(s) outside of the selected sector 7150 may be noise suppressed and/or attenuated. [00421] In the example illustrated in Figure 71, the user interface 7128 includes a target sector indicator. The target sector indicator indicates a selected sector between 225 and 315 degrees in this case. It should be noted that sectors may be indicated with other parameters in other configurations. For instance, the target sector indicator may indicate a selected sector in radians, according to a sector number, etc. [00422] Figure 72 illustrates another example of the sector selection feature 6232 of the user interface 7228. In some implementations, the user interface 7228 may be an example of the user interface 6228 described in connection with Figure 62. The user interface 7228 may include a coordinate system 7230, an audio signal indicator 7246 and at least one selected sector 7250a-b that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. As described above, the sector selection feature 6232 may allow selection of multiple sectors at the same time. In Figure 72, two sectors 7250a-b have been selected (as indicated by the dashed lines, for instance). In this example, the audio signal is at roughly 270 degrees. The other sector(s) outside of the selected sectors 7250a-b may be noise suppressed and/or attenuated. Thus, the systems and methods disclosed herein may enable the selection of two or more sectors 7250 at once. [00423] Figure 73 illustrates another example of the sector selection feature 6232 of the user interface 7328. In some implementations, the user interface 7328 may be an example of the user interface 6228 described in connection with Figure 62.
The user interface 7328 may include a coordinate system 7330, at least one audio signal indicator 7346a-b and at least one selected sector 7350a-b that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. In Figure 73, two sectors 7350a-b have been selected (as indicated by the dashed lines, for instance). In this example, the speaker is to the side of the electronic device 6202. The other sector(s) outside of the selected sectors 7350a-b may be noise suppressed and/or attenuated. [00424] Figure 74 illustrates more examples of the sector selection feature 6232 of the user interfaces 7428a-f. In some implementations, the user interfaces 7428a-f may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 7428a-f may include coordinate systems 7430a-f, at least one audio signal indicator 7446a-f and at least one selected sector 7450a-c that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. In this example, the selected sector(s) 7450a-c may be determined based on the touch input 7452. For instance, the sectors and/or sector angles may be selected based upon finger swipes. For example, a user may input a circular touch input 7452. A selected sector 7450b may then be determined based on the circular touch input 7452. In other words, a user may narrow a sector by drawing the region of interest instead of manually adjusting (based on touch points or "handles," for instance). In some implementations, if multiple sectors are selected based on the touch input 7452, then the "best" sector 7450c may be selected and readjusted to match the region of interest. In some implementations, the term "best" may indicate a sector with the strongest audio signal. This may be one user-friendly way to select and narrow sector(s).
It should be noted that for magnifying or shrinking a sector, multiple fingers (e.g., two or more) can be used at the same time on or above the screen. Other examples of touch input 7452 may include a tap input from a user. In this example, a user may tap a portion of the coordinate system and a sector may be selected that is centered on the tap location (or aligned to a pre-set degree range). In this example, a user may then edit the sector by switching to editable mode and adjusting the touch points, as will be described below. [00425] Figure 75 illustrates more examples of the sector selection feature 6232 of the user interfaces 7528a-f. In some implementations, the user interfaces 7528a-f may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 7528a-f may include coordinate systems 7530a-f, at least one audio signal indicator 7546a-f and at least one selected sector 7550a-c that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. In this example, the selected sector(s) 7550a-c may be determined based on the touch input 7552. For instance, the sectors and/or sector angles may be selected based upon finger swipes. For example, a user may input a swipe touch input 7552. In other words, a user may narrow a sector by drawing the region of interest instead of manually adjusting (based on touch points or "handles," for instance). In this example, sector(s) may be selected and/or adjusted based on just a swipe touch input 7552 (instead of a circular drawing, for instance). A selected sector 7550b may then be determined based on the swipe touch input 7552. In some implementations, if multiple sectors are selected based on the touch input 7552, then the "best" sector 7550c may be selected and readjusted to match the region of interest. In some implementations, the term "best" may indicate a sector with the strongest audio signal.
This may be one user-friendly way to select and narrow sector(s). It should be noted that for magnifying or shrinking a sector, multiple fingers (e.g., two or more) can be used at the same time on or above the screen. It should be noted that a single finger or multiple fingers may be sensed in accordance with any of the sector selection and/or adjustment techniques described herein. [00426] Figure 76 is a flow diagram illustrating one configuration of a method 7600 for editing a sector. The method 7600 may be performed by the electronic device 6202. The electronic device 6202 (e.g., display 6264) may display 7602 at least one point (e.g., touch point) corresponding to at least one sector. In some implementations, the at least one touch point may be implemented by the sector editing feature 6436 to allow editing of at least one sector. For example, the user interface 6228 may include at least one touch point that allows a user to adjust the size (e.g., expand or narrow) of a selected sector. The touch points may be displayed around the borders of the sectors. [00427] The electronic device 6202 (e.g., a touch sensor) may receive 7604 a touch input corresponding to the at least one point (e.g., touch point). For example, the electronic device 6202 may receive a touch input that edits a sector (e.g., adjusts its size and/or shape). For instance, a user may select at least one touch point by touching them. In this example, a user may move touch points displayed on the user interface 6228. In this implementation, receiving 7604 a touch input may include adjusting the touch points based on the touch input. For example, as a user moves the touch points via the touch sensor 6438, the electronic device 6202 may move the touch points accordingly. [00428] The electronic device 6202 (e.g., user interface 6228) may edit 7606 the at least one sector based on the touch input.
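The three steps of method 7600 (display 7602 touch points, receive 7604 a touch input, edit 7606 the sector) can be sketched as follows. The class and field names are hypothetical illustrations, not the document's implementation:

```python
class SectorEditor:
    """Minimal sketch of method 7600: touch points act as handles on a sector's borders."""

    def __init__(self, start_deg, end_deg):
        # Step 7602: one touch point (handle) is displayed per sector border.
        self.touch_points = {"start": start_deg, "end": end_deg}

    def receive_touch(self, point_name, new_angle_deg):
        # Step 7604: a touch input moves one of the displayed touch points.
        self.touch_points[point_name] = new_angle_deg % 360.0

    def sector(self):
        # Step 7606: the sector is re-derived from the adjusted touch points.
        return (self.touch_points["start"], self.touch_points["end"])

editor = SectorEditor(225, 315)
editor.receive_touch("end", 290)  # dragging the end handle narrows the sector
```

As the document notes for Figure 79, such edits could be applied in real time, e.g. during an ongoing call.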
For example, the electronic device 6202 may adjust the size and/or shape of the sector based on the single or multi-touch input. Similarly, the electronic device 6202 may change the position of the sector relative to the coordinate system 6230 based on the touch input. [00429] Figure 77 illustrates examples of a sector editing feature 6436 of the user interfaces 7728a-b. In some implementations, the user interfaces 7728a-b may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 7728a-b may include coordinate systems 7730a-b that may be examples of corresponding elements described in connection with Figure 62. The user interfaces 7728a-b may include at least one touch point 7754a-h. As described above, the touch points 7754a-h may be handles that allow editing of at least one sector. The touch points 7754a-h may be positioned at the apexes of the sectors. In some implementations, sector editing may be done independent of sector selection. Accordingly, a sector that is not selected may be adjusted in some configurations. [00430] In some implementations, the user interfaces 7728a-b may provide an interactive control that enables a fixing mode and an editing mode of the user interfaces 7728a-b. For example, the user interfaces 7728a-b may each include an activation/deactivation button 7756a-b that controls whether the user interface 7728a-b is operable. The activation/deactivation buttons 7756a-b may toggle activated/deactivated states for the user interfaces 7728a-b. While in an editable mode, the user interfaces 7728a-b may display at least one touch point 7754a-f (e.g., handles) corresponding to at least one sector (e.g., the circles at the edges of the sectors). [00431] Figure 78 illustrates more examples of the sector editing feature 6436 of the user interfaces 7828a-c. In some implementations, the user interfaces 7828a-c may be examples of the user interface 6228 described in connection with Figure 62.
The user interfaces 7828a-c may include coordinate systems 7830a-c, at least one audio signal indicator 7846a-b, at least one selected sector 7850a-e and at least one touch point 7854a-l that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. In Figure 78, at least one sector has been selected (as illustrated by the dashed lines, for instance). As depicted in Figure 78, the selected sectors 7850a-e may be narrowed for more precision. For example, a user may use the touch points 7854a-l to adjust (e.g., expand or narrow) the selected sector 7850a-e. The other sector(s) outside of the selected sectors 7850a-e may be noise suppressed and/or attenuated. [00432] Figure 79 illustrates more examples of the sector editing feature 6436 of the user interfaces 7928a-b. In some implementations, the user interfaces 7928a-b may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 7928a-b may include coordinate systems 7930a-b, at least one audio signal indicator 7946a-b, at least one selected sector 7950a-b and at least one touch point 7954a-h that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. In Figure 79, the electronic device 6202 (e.g., phone) may be in the palm of a user's hand. For example, the electronic device 6202 may be tilted upward. In this example, a part of the user interfaces 7928a-b (e.g., the coordinate systems 7930a-b) may be aligned with a horizontal reference plane as described earlier. Accordingly, the coordinate systems 7930a-b appear in a three-dimensional perspective extending into the user interfaces 7928a-b. The audio signal in Figure 79 may originate from a user that is holding the electronic device 6202 in their hands and speaking in front of it (at roughly 180 degrees, for instance). Figure 79 also illustrates that at least one sector can be narrowed or widened in real-time.
For instance, a selected sector 7950a-b may be adjusted during an ongoing conversation or phone call. [00433] Figure 80 illustrates more examples of the sector editing feature 6436 of the user interfaces 8028a-c. In some implementations, the user interfaces 8028a-c may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 8028a-c may include coordinate systems 8030a-c, at least one audio signal indicator 8046a-c, at least one selected sector 8050a-b and at least one touch point 8054a-b that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. The first illustration depicts an audio signal indicator 8046a indicating the presence of an audio signal at approximately 270 degrees. The middle illustration shows a user interface 8028b with a selected sector 8050a. The right illustration depicts one example of editing the selected sector 8050b. In this case, the selected sector 8050b is narrowed. In this example, an electronic device 6202 may pass the audio signals that have a direction of arrival associated with the selected sector 8050b and attenuate other audio signals that have a direction of arrival associated with the outside of the selected sector 8050b. [00434] Figure 81 illustrates more examples of the sector editing feature 6436 of the user interfaces 8128a-d. In some implementations, the user interfaces 8128a-d may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 8128a-d may include coordinate systems 8130a-d, at least one audio signal indicator 8146a-d, at least one selected sector 8150a-d and at least one touch point 8154a-h that may be examples of corresponding elements described in connection with at least one of Figures 62, 66 and 71. The first illustration depicts an audio signal indicator 8146a indicating the presence of an audio signal at approximately 270 degrees.
The second illustration shows a user interface 8128b with a selected sector 8150a. The third illustration shows at least one touch point 8154a-d used for editing a sector. The fourth illustration depicts one example of editing the selected sector 8150d. In this case, the selected sector 8150d is narrowed. In this example, an electronic device 6202 may pass the audio signals that have a direction of arrival associated with the selected sector 8150d (e.g., that may be based on user input) and attenuate other audio signals that have a direction of arrival associated with the outside of the selected sector 8150d. [00435] Figure 82 illustrates an example of the user interface 8228 with a coordinate system 8230 oriented independent of electronic device 6202 orientation. In some implementations, the user interface 8228 may be an example of the user interface 6228 described in connection with Figure 62. The user interface includes a coordinate system 8230, and an audio signal indicator 8246 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. In Figure 82, the electronic device 6202 (e.g., phone) is tilted upward (in the palm of a user's hand, for example). The coordinate system 8230 (e.g., the polar graph) of the user interface 8228 shows or displays the audio signal source location. In this example, a part of the user interface 8228 is aligned with a horizontal reference plane as described earlier. The audio signal in Figure 82 originates from a source 8215 at roughly 180 degrees. As described above, a source 8215 may include a user (that is holding the electronic device 6202 in their hand and speaking in front of it, for example), a speaker, or anything that is capable of generating an audio signal. [00436] Figure 83 illustrates another example of the user interface 8328 with a coordinate system 8330 oriented independent of electronic device 6202 orientation. 
In some implementations, the user interface 8328 may be an example of the user interface 6228 described in connection with Figure 62. The user interface 8328 includes a coordinate system 8330 and an audio signal indicator 8346 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. In Figure 83, the electronic device 6202 (e.g., phone) is in a slanted or tilted orientation (in the palm of a user's hand, for example) increasing in elevation from the bottom of the electronic device 6202 to the top of the electronic device 6202 (towards the sound source 8315). The coordinate system 8330 (e.g., the polar graph) of the user interface 8328 displays the audio signal source location. In this example, a part of the user interface 8328 is aligned with a horizontal reference plane as described earlier. The audio signal in Figure 83 originates from a source 8315 that is toward the back of (or behind) the electronic device 6202 (e.g., the phone). Figure 83 illustrates that the reference plane of the user interface 8328 is aligned with the physical plane (e.g., horizontal) of the 3D world. Note that in Figure 83, the user interface 8328 plane goes into the screen, even though the electronic device 6202 is being held semi-vertically. Thus, even though the electronic device 6202 is at approximately 45 degrees relative to the physical plane of the floor, the user interface 8328 coordinate system 8330 plane is at 0 degrees relative to the physical plane of the floor. For example, the reference plane on the user interface 8328 corresponds to the reference plane in the physical coordinate system. [00437] Figure 84 illustrates another example of the user interface 8428 with a coordinate system 8430 oriented independent of electronic device 6202 orientation. In some implementations, the user interface 8428 may be an example of the user interface 6228 described in connection with Figure 62. 
The user interface 8428 includes a coordinate system 8430 and an audio signal indicator 8446 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. In Figure 84, the electronic device 6202 (e.g., phone) is in a vertical orientation (in the palm of a user's hand, for example). The coordinate system 8430 (e.g., the polar graph) of the user interface 8428 displays the audio signal source location. In this example, a part of the user interface 8428 is aligned with a horizontal reference plane as described earlier. The audio signal in Figure 84 originates from a source 8415 that is toward the back left of (e.g., behind) the electronic device 6202 (e.g., the phone). [00438] Figure 85 illustrates another example of the user interface 8528 with a coordinate system 8530 oriented independent of electronic device 6202 orientation. In some implementations, the user interface 8528 may be an example of the user interface 6228 described in connection with Figure 62. The user interface 8528 includes a coordinate system 8530 and an audio signal indicator 8546 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. In Figure 85, the electronic device 6202 (e.g., phone) is in a horizontal face-up orientation (e.g., a tabletop mode). The coordinate system 8530 (e.g., the polar graph) of the user interface 8528 displays the audio signal source location. The audio signal in Figure 85 may originate from a source 8515 that is toward the top left of the electronic device 6202 (e.g., the phone). In some examples, the audio signal source is tracked. For example, when noise suppression is enabled, the electronic device 6202 may track the loudest speaker or sound source. For instance, the electronic device 6202 (e.g., phone) may track the movements of a loudest speaker while suppressing other sounds (e.g., noise) from other areas (e.g., zones or sectors). 
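The loudest-speaker tracking described above for Figure 85 can be sketched as selecting, per frame, the sector with the highest estimated energy and suppressing the rest. The names below are hypothetical; a practical tracker would also smooth the estimate over time rather than switching on a single frame:

```python
def track_loudest_sector(sector_energies):
    """Given per-sector energy estimates for one frame, return the index of the
    sector to pass (the loudest); other sectors would be noise suppressed.

    sector_energies: list of (sector_index, energy) pairs.
    """
    return max(sector_energies, key=lambda pair: pair[1])[0]

# Example: the speaker in sector 2 is loudest this frame, so sounds from the
# other zones/sectors would be suppressed while sector 2 is passed.
loudest = track_loudest_sector([(0, 0.2), (1, 0.5), (2, 3.1), (3, 0.4)])
```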
[00439] Figure 86 illustrates more examples of the user interfaces 8628a-c with coordinate systems 8630a-c oriented independent of electronic device 6202 orientation. In other words, the coordinate systems 8630a-c and/or the audio signal indicators 8646a-c remain at the same orientation relative to physical space, independent of how the electronic device 6202 is rotated. In some implementations, the user interfaces 8628a-c may be examples of the user interface 6228 described in connection with Figure 62. The user interfaces 8628a-c may include coordinate systems 8630a-c and audio signal indicators 8646a-c that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. Without a compass, the sector selection feature 6232 may not have an association with the physical coordinate system of the real world (e.g., north, south, east, west, etc.). Accordingly, if the electronic device 6202 (e.g., phone) is in a vertical orientation facing the user (e.g., a browse-talk mode), the top of the electronic device 6202 may be designated as "0 degrees," which runs along a vertical axis. When the electronic device 6202 is rotated, for example by 90 degrees in a clockwise direction, "0 degrees" is now located on a horizontal axis. Thus, when a sector is selected, rotation of the electronic device 6202 affects the selected sector. By adding another component that can detect direction, for example, a compass, the sector selection feature 6232 of the user interface 8628a-c can be relative to physical space, and not the phone. In other words, by adding a compass, when the phone is rotated from a vertically upright position to a horizontal position, "0 degrees" still remains on the top side of the phone that is facing the user. For example, in the first image of Figure 86, the user interface 8628a is illustrated without tilt (or with 0 degrees tilt, for instance).
For example, the coordinate system 8630a is aligned with the user interface 8628a and/or the electronic device 6202. By comparison, in the second image of Figure 86, the user interface 8628b and/or electronic device 6202 are tilted to the left. However, the coordinate system 8630b (and mapping between the real world and electronic device 6202) may be maintained. This may be done based on tilt sensor data 5608, for example. In the third image of Figure 86, the user interface 8628c and/or electronic device 6202 are tilted to the right. However, the coordinate system 8630c (and mapping between the real world and electronic device 6202) may be maintained. [00440] It should be noted that as used herein, the term "physical coordinates" may or may not denote geographic coordinates. In some configurations, for example, where the electronic device 6202 does not include a compass, the electronic device 6202 may still map coordinates from a multi-microphone configuration to physical coordinates based on sensor data 5608. In this case, the mapping 5612 may be relative to the electronic device 6202 and may not directly correspond to earth coordinates (e.g., north, south, east, west). Regardless, the electronic device 6202 may be able to discriminate the direction of sounds in physical space relative to the electronic device 6202. In some configurations, however, the electronic device 6202 may include a compass (or other navigational instrument). In this case, the electronic device 6202 may map coordinates from a multi-microphone configuration to physical coordinates that correspond to earth coordinates (e.g., north, south, east, west). Different types of coordinate systems 6230 may be utilized in accordance with the systems and methods disclosed herein. [00441] Figure 87 illustrates another example of the user interface 8728 with a coordinate system 8730 oriented independent of electronic device 6202 orientation. 
In some implementations, the user interface 8728 may be an example of the user interface 6228 described in connection with Figure 62. The user interface 8728 may include a coordinate system 8730 and an audio signal indicator 8746 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. In some implementations, the user interface 8728 also includes a compass 8756 in conjunction with a coordinate system 8730 (as described above). In this implementation, the compass 8756 may detect direction. The compass 8756 portion may display an electronic device 6202 orientation relative to real world coordinates. Via the compass 8756, the sector selection feature 6232 on the user interface 8728 may be relative to physical space, and not the electronic device 6202. In other words, by adding a compass 8756, when the electronic device 6202 is rotated from a vertical position to a horizontal position, "0 degrees" still remains near the top side of the electronic device 6202 that is facing the user. It should be noted that determining physical electronic device 6202 orientation can be done with a compass 8756. However, if a compass 8756 is not present, it may alternatively be determined based on GPS and/or gyro sensors. Accordingly, any sensor 5604 or system that may be used to determine physical orientation of an electronic device 6202 may be used alternatively from or in addition to a compass 8756. Thus, a compass 8756 may be substituted with another sensor 5604 or system in any of the configurations described herein. In short, there are multiple sensors 5604 that can keep the displayed orientation fixed relative to the user. [00442] In the case where a GPS receiver is included in the electronic device 6202, GPS data may be utilized to provide additional functionality (in addition to just being a sensor).
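The orientation compensation described for Figures 86 and 87 amounts to subtracting the device's physical heading (obtained from a compass 8756, gyro, or other sensor 5604) from each source direction before plotting, so that the coordinate system stays fixed relative to physical space as the device rotates. A minimal sketch with hypothetical names:

```python
def to_display_angle(source_deg, device_heading_deg):
    """Map a physical-space direction of arrival to a screen angle that remains
    fixed in physical space regardless of how the device is rotated."""
    return (source_deg - device_heading_deg) % 360.0

# With the device rotated 90 degrees clockwise, a source at physical 0 degrees
# is drawn at 270 degrees on screen, so the indicator still points the same
# way in the room; with no rotation, physical and screen angles coincide.
angle = to_display_angle(0, 90)
```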
In some configurations, for example, the electronic device 6202 (e.g., mobile device) may include GPS functionality with map software. In one approach, the coordinate system 8730 may be aligned such that zero degrees always points down a street, for example. With the compass 8756, for instance, the electronic device 6202 (e.g., the coordinate system 8730) may be oriented according to a physical north and/or south, whereas GPS functionality may be utilized to provide more options. [00443] Figure 88 is a block diagram illustrating another configuration of a user interface 8828 in which systems and methods for displaying a user interface 8828 on an electronic device 8802 may be implemented. The user interface 8828 may be an example of the user interface 6228 described in connection with Figure 62. In some implementations, the user interface 8828 may be presented on a display 8864 of the electronic device 8802 that may be examples of corresponding elements described in connection with Figure 62. The user interface 8828 may include a coordinate system 8830 and/or a sector selection feature 8832 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. The user interface 8828 may be coupled to at least one microphone 8806 and/or an operation block/module 8814 that may be examples of corresponding elements described in connection with at least one of Figures 56 and 66. [00444] In some implementations, the user interface 8828 may be coupled to a database 8858 that may be included and/or coupled to the electronic device 8802. For example, the database 8858 may be stored in memory located on the electronic device 8802. The database 8858 may include one or more audio signatures. For example, the database 8858 may include one or more audio signatures pertaining to one or more audio signal sources (e.g., individual users). The database 8858 may also include information based on the audio signatures. 
For example, the database 8858 may include identification information for the users that correspond to the audio signatures. Identification information may include images of the audio signal source (e.g., an image of a person corresponding to an audio signature) and/or contact information, such as name, email address, phone number, etc. [00445] In some implementations, the user interface 8828 may include an audio signature recognition block/module 8860. The audio signature recognition block/module 8860 may recognize audio signatures received by the at least one microphone 8806. For example, the microphones 8806 may receive an audio signal. The audio signature recognition block/module 8860 may obtain the audio signal and compare it to the audio signatures included in the database 8858. In this example, the audio signature recognition block/module 8860 may obtain the audio signature and/or identification information pertaining to the audio signature from the database 8858 and pass the identification information to the display 8864. [00446] Figure 89 is a flow diagram illustrating another configuration of a method 8900 for displaying a user interface 8828 on an electronic device 8802. The method 8900 may be performed by the electronic device 8802. The electronic device 8802 may obtain 8902 a coordinate system 8830 that corresponds to physical coordinates. In some implementations, this may be done as described in connection with Figure 63. [00447] The electronic device 8802 may present 8904 the user interface 8828 that may include the coordinate system 8830. In some implementations, this may be done as described in connection with Figure 63. [00448] The electronic device 8802 may recognize 8906 an audio signature. An audio signature may be a characterization that corresponds to a particular audio signal source. For example, an individual user may have an audio signature that corresponds to that individual's voice. 
Examples of audio signatures include voice recognition parameters, audio signal components, audio signal samples and/or other information for characterizing an audio signal. In some implementations, the electronic device 8802 may receive an audio signal from at least one microphone 8806. The electronic device 8802 may then recognize 8906 the audio signature, for example, by determining whether the audio signal is from an audio signal source such as an individual user, as compared to a noise signal. This may be done by measuring at least one characteristic of the audio signal (e.g., harmonicity, pitch, etc.). In some implementations, recognizing 8906 an audio signature may include identifying an audio signal as coming from a particular audio source. [00449] The electronic device 8802 may then look up 8908 the audio signature in the database 8858. For example, the electronic device 8802 may look for the audio signature in the database 8858 of audio signatures. The electronic device 8802 may obtain 8910 identification information corresponding to the audio signature. As described above, the database 8858 may include information based on the audio signatures. For example, the database 8858 may include identification information for the users that correspond to the audio signatures. Identification information may include images of the audio signal source (e.g., the user) and/or contact information, such as name, email address, phone number, etc. After obtaining 8910 the identification information (e.g., the image) corresponding to the audio signature, the electronic device 8802 may display 8912 the identification information on the user interface 8828. For example, the electronic device 8802 may display 8912 an image of the user next to the audio signal indicator 6646 on the display 6264. In other implementations, the electronic device 8802 may display 8912 at least one identification information element as part of an identification display.
For example, a portion of the user interface 8828 may include the identification information (e.g., image, name, email address, etc.) pertaining to the audio signature. [00450] The electronic device 8802 may provide 8914 a sector selection feature 6232 that allows selection of at least one sector of the coordinate system 8830. In some implementations, this may be done as described in connection with Figure 63. [00451] Figure 90 illustrates an example of the user interface 9028 coupled to the database 9058. In some implementations, the user interface 9028 may be an example of the user interface 6228 described in connection with Figure 62. The user interface 9028 may include a coordinate system 9030 and an audio signal indicator 9046 that may be examples of corresponding elements described in connection with at least one of Figures 62 and 66. As described above, in some implementations, the user interface 9028 may be coupled to the database 9058 that includes at least one audio signature 9064 and/or identification information 9062a corresponding to the audio signature 9064 that may be examples of corresponding elements described in connection with at least one of Figures 88 and 89. In some configurations, the electronic device 6202 may recognize an audio signature 9064 and look up the audio signature 9064 in the database 9058. The electronic device 6202 may then obtain (e.g., retrieve) the identification information 9062a corresponding to the audio signature 9064 recognized by the electronic device 6202. For example, the electronic device 6202 may obtain a picture of the speaker or person, and display the picture (and other identification information 9062b) of the speaker or person by the audio signal indicator 9046. In this way, a user can easily identify a source of an audio signal. It should be noted that the database 9058 can be local or can be remote (e.g., on a server across a network, such as a LAN or the Internet).
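The look-up and display steps of method 8900 (8908, 8910, 8912) can be sketched as a keyed lookup against the signature database. The entries and matching criterion below are hypothetical illustrations; a real system would compare voice-recognition parameters rather than exact keys, and the database could equally be remote:

```python
# Hypothetical database mapping recognized audio signatures to identification
# information, in the manner of database 8858/9058 (local here for illustration).
SIGNATURE_DB = {
    "sig-alice": {"name": "Alice", "email": "alice@example.com", "image": "alice.png"},
    "sig-bob": {"name": "Bob", "email": "bob@example.com", "image": "bob.png"},
}

def look_up_identification(recognized_signature):
    """Steps 8908-8910: look up a recognized signature and return the
    identification information to display next to the audio signal indicator,
    or None if the signature is not in the database."""
    return SIGNATURE_DB.get(recognized_signature)

info = look_up_identification("sig-alice")  # info would then be displayed (step 8912)
```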
Additionally or alternatively, the electronic device 6202 may send the identification information 9062 to another device. For instance, the electronic device 6202 may send one or more user names (and/or images, identifiers, etc.) to another device (e.g., smartphone, server, network, computer, etc.) that presents the identification information 9062 such that a far-end user is apprised of a current speaker. This may be useful when there are multiple users talking on a speakerphone, for example. [00452] Optionally, in some implementations, the user interface 9028 may display the identification information 9062 separate from the coordinate system 9030. For example, the user interface 9028 may display the identification information 9062c below the coordinate system 9030. [00453] Figure 91 is a flow diagram illustrating another configuration of a method 9100 for displaying a user interface 6428 on an electronic device 6402. The method 9100 may be performed by the electronic device 6402. The electronic device 6402 may obtain 9102 a coordinate system 6430 that corresponds to physical coordinates. In some implementations, this may be done as described in connection with Figure 63. [00454] The electronic device 6402 may present 9104 the user interface 6428 that may include the coordinate system 6430. In some implementations, this may be done as described in connection with Figure 63. [00455] The electronic device 6402 may provide 9106 a sector selection feature 6432 that allows selection of at least one sector of the coordinate system 6430. In some implementations, this may be done as described in connection with Figure 63. [00456] The electronic device 6402 may indicate 9108 image data from at least one sector. As described above, the electronic device 6402 may include at least one image sensor 6434. For example, several image sensors 6434 that collect data relating to the electronic device 6402 may be included on the electronic device 6402. 
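The sector-related steps of method 9100 described above (providing a sector selection feature 9106 and indicating or passing data per sector, 9108 and 9110) can be sketched as follows. The class name, the number of sectors and the angle convention are illustrative assumptions; the disclosed coordinate system may partition physical space differently.

```python
class SectorSelector:
    """Partition a 360-degree coordinate system into equal angular
    sectors and pass data only from the selected sectors."""

    def __init__(self, num_sectors=8):
        self.num_sectors = num_sectors
        self.selected = set()

    def sector_of(self, angle_deg):
        # Map a physical direction (in degrees) onto a sector index.
        width = 360.0 / self.num_sectors
        return int((angle_deg % 360.0) // width)

    def select(self, sector):
        # Corresponds to the user choosing a sector on the interface.
        self.selected.add(sector)

    def passes(self, angle_deg):
        # Audio or image data is passed only when its estimated source
        # direction falls within a selected sector.
        return self.sector_of(angle_deg) in self.selected

selector = SectorSelector(num_sectors=8)  # eight 45-degree sectors
selector.select(2)                        # select the 90-135 degree sector
```

With this sketch, only frames whose estimated direction of arrival lies in a chosen sector (here, sector 2) would be indicated on the display; data arriving from other directions is suppressed.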
More specifically, the at least one image sensor 6434 may collect image data. For example, a camera (e.g., an image sensor 6434) may generate an image. In some implementations, the at least one image sensor 6434 may provide image data to the user interface 6428. In some implementations, the electronic device 6402 may indicate 9108 image data from the at least one image sensor 6434. In other words, the electronic device 6402 may display image data (e.g., still photo or video) from the at least one image sensor 6434 on the display 6464. [00457] In some implementations, the electronic device 6402 may pass 9110 image data based on the at least one sector. For example, the electronic device 6402 may pass 9110 image data indicated in a selected sector. In other words, at least one of the techniques described herein regarding the user interface 6428 may be applied to image data instead of or in addition to audio signals. [00458] Figure 92 is a block diagram illustrating one configuration of a wireless communication device 9266 in which systems and methods for mapping a source location may be implemented. The wireless communication device 9266 illustrated in Figure 92 may be an example of at least one of the electronic devices described herein. The wireless communication device 9266 may include an application processor 9278. The application processor 9278 generally processes instructions (e.g., runs programs) to perform functions on the wireless communication device 9266. The application processor 9278 may be coupled to an audio coder/decoder (codec) 9276. [00459] The audio codec 9276 may be an electronic device (e.g., integrated circuit) used for coding and/or decoding audio signals. The audio codec 9276 may be coupled to at least one speaker 9268, an earpiece 9270, an output jack 9272 and/or at least one microphone 9206. The speakers 9268 may include one or more electro-acoustic transducers that convert electrical or electronic signals into acoustic signals. 
For example, the speakers 9268 may be used to play music or output a speakerphone conversation, etc. The earpiece 9270 may be another speaker or electro-acoustic transducer that can be used to output acoustic signals (e.g., speech signals) to a user. For example, the earpiece 9270 may be used such that only a user may reliably hear the acoustic signal. The output jack 9272 may be used for coupling other devices to the wireless communication device 9266 for outputting audio, such as headphones. The speakers 9268, earpiece 9270 and/or output jack 9272 may generally be used for outputting an audio signal from the audio codec 9276. The at least one microphone 9206 may be an acousto-electric transducer that converts an acoustic signal (such as a user's voice) into electrical or electronic signals that are provided to the audio codec 9276. [00460] A coordinate mapping block/module 9217a may be optionally implemented as part of the audio codec 9276. For example, the coordinate mapping block/module 9217a may be implemented in accordance with one or more of the functions and/or structures described herein. For example, the coordinate mapping block/module 9217a may be implemented in accordance with one or more of the functions and/or structures described in connection with Figures 57, 59, 60 and 61. [00461] Additionally or alternatively, a coordinate mapping block/module 9217b may be implemented in the application processor 9278. For example, the coordinate mapping block/module 9217b may be implemented in accordance with one or more of the functions and/or structures described herein. For example, the coordinate mapping block/module 9217b may be implemented in accordance with one or more of the functions and/or structures described in connection with Figures 57, 59, 60 and 61. [00462] The application processor 9278 may also be coupled to a power management circuit 9280. 
One example of a power management circuit 9280 is a power management integrated circuit (PMIC), which may be used to manage the electrical power consumption of the wireless communication device 9266. The power management circuit 9280 may be coupled to a battery 9282. The battery 9282 may generally provide electrical power to the wireless communication device 9266. For example, the battery 9282 and/or the power management circuit 9280 may be coupled to at least one of the elements included in the wireless communication device 9266. [00463] The application processor 9278 may be coupled to at least one input device 9286 for receiving input. Examples of input devices 9286 include infrared sensors, image sensors, accelerometers, touch sensors, keypads, etc. The input devices 9286 may allow user interaction with the wireless communication device 9266. The application processor 9278 may also be coupled to one or more output devices 9284. Examples of output devices 9284 include printers, projectors, screens, haptic devices, etc. The output devices 9284 may allow the wireless communication device 9266 to produce output that may be experienced by a user. [00464] The application processor 9278 may be coupled to application memory 9288. The application memory 9288 may be any electronic device that is capable of storing electronic information. Examples of application memory 9288 include double data rate synchronous dynamic random access memory (DDRAM), synchronous dynamic random access memory (SDRAM), flash memory, etc. The application memory 9288 may provide storage for the application processor 9278. For instance, the application memory 9288 may store data and/or instructions for the functioning of programs that are run on the application processor 9278. [00465] The application processor 9278 may be coupled to a display controller 9290, which in turn may be coupled to a display 9292. 
The display controller 9290 may be a hardware block that is used to generate images on the display 9292. For example, the display controller 9290 may translate instructions and/or data from the application processor 9278 into images that can be presented on the display 9292. Examples of the display 9292 include liquid crystal display (LCD) panels, light emitting diode (LED) panels, cathode ray tube (CRT) displays, plasma displays, etc. [00466] The application processor 9278 may be coupled to a baseband processor 9294. The baseband processor 9294 generally processes communication signals. For example, the baseband processor 9294 may demodulate and/or decode received signals. Additionally or alternatively, the baseband processor 9294 may encode and/or modulate signals in preparation for transmission. [00467] The baseband processor 9294 may be coupled to baseband memory 9296. The baseband memory 9296 may be any electronic device capable of storing electronic information, such as SDRAM, DDRAM, flash memory, etc. The baseband processor 9294 may read information (e.g., instructions and/or data) from and/or write information to the baseband memory 9296. Additionally or alternatively, the baseband processor 9294 may use instructions and/or data stored in the baseband memory 9296 to perform communication operations. [00468] The baseband processor 9294 may be coupled to a radio frequency (RF) transceiver 9298. The RF transceiver 9298 may be coupled to a power amplifier 9201 and one or more antennas 9203. The RF transceiver 9298 may transmit and/or receive radio frequency signals. For example, the RF transceiver 9298 may transmit an RF signal using a power amplifier 9201 and at least one antenna 9203. The RF transceiver 9298 may also receive RF signals using the one or more antennas 9203. [00469] Figure 93 illustrates various components that may be utilized in an electronic device 9302. 
The illustrated components may be located within the same physical structure or in separate housings or structures. The electronic device 9302 described in connection with Figure 93 may be implemented in accordance with at least one of the electronic devices and the wireless communication device described herein. The electronic device 9302 includes a processor 9311. The processor 9311 may be a general purpose single- or multi-chip microprocessor (e.g., an ARM), a special purpose microprocessor (e.g., a digital signal processor (DSP)), a microcontroller, a programmable gate array, etc. The processor 9311 may be referred to as a central processing unit (CPU). Although just a single processor 9311 is shown in the electronic device 9302 of Figure 93, in an alternative configuration, a combination of processors (e.g., an ARM and DSP) could be used. [00470] The electronic device 9302 also includes memory 9305 in electronic communication with the processor 9311. That is, the processor 9311 can read information from and/or write information to the memory 9305. The memory 9305 may be any electronic component capable of storing electronic information. The memory 9305 may be random access memory (RAM), read-only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with the processor, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), registers, and so forth, including combinations thereof. [00471] Data 9309a and instructions 9307a may be stored in the memory 9305. The instructions 9307a may include at least one program, routine, sub-routine, function, procedure, etc. The instructions 9307a may include a single computer-readable statement or many computer-readable statements. The instructions 9307a may be executable by the processor 9311 to implement at least one of the methods described above. 
Executing the instructions 9307a may involve the use of the data 9309a that is stored in the memory 9305. Figure 93 shows some instructions 9307b and data 9309b being loaded into the processor 9311 (which may come from instructions 9307a and data 9309a). [00472] The electronic device 9302 may also include at least one communication interface 9313 for communicating with other electronic devices. The communication interface 9313 may be based on wired communication technology, wireless communication technology, or both. Examples of different types of communication interfaces 9313 include a serial port, a parallel port, a Universal Serial Bus (USB), an Ethernet adapter, an IEEE 1394 bus interface, a small computer system interface (SCSI) bus interface, an infrared (IR) communication port, a Bluetooth wireless communication adapter, and so forth. [00473] The electronic device 9302 may also include at least one input device 9386 and at least one output device 9384. Examples of different kinds of input devices 9386 include a keyboard, mouse, microphone, remote control device, button, joystick, trackball, touchpad, lightpen, etc. For instance, the electronic device 9302 may include at least one microphone 9306 for capturing acoustic signals. In one configuration, a microphone 9306 may be a transducer that converts acoustic signals (e.g., voice, speech) into electrical or electronic signals. Examples of different kinds of output devices 9384 include a speaker, printer, etc. For instance, the electronic device 9302 may include at least one speaker 9368. In one configuration, a speaker 9368 may be a transducer that converts electrical or electronic signals into acoustic signals. One specific type of output device that may be typically included in an electronic device 9302 is a display device 9392. 
Display devices 9392 used with configurations disclosed herein may utilize any suitable image projection technology, such as a cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), gas plasma, electroluminescence, or the like. A display controller 9390 may also be provided for converting data stored in the memory 9305 into text, graphics, and/or moving images (as appropriate) shown on the display device 9392. [00474] The various components of the electronic device 9302 may be coupled together by at least one bus, which may include a power bus, a control signal bus, a status signal bus, a data bus, etc. For simplicity, the various buses are illustrated in Figure 93 as a bus system 9315. It should be noted that Figure 93 illustrates only one possible configuration of an electronic device 9302. Various other architectures and components may be utilized. [00475] Some Figures illustrating examples of functionality and/or of the user interface as described herein are given hereafter. In some configurations, the functionality and/or user interface may be referred to in connection with the phrase "Sound Focus and Source Tracking," "SoFAST" or "SFAST." [00476] In the above description, reference numbers have sometimes been used in connection with various terms. Where a term is used in connection with a reference number, this may be meant to refer to a specific element that is shown in at least one of the Figures. Where a term is used without a reference number, this may be meant to refer generally to the term without limitation to any particular Figure. [00477] The term "couple" and any variations thereof may indicate a direct or indirect connection between elements. For example, a first element coupled to a second element may be directly connected to the second element, or indirectly connected to the second element through another element. 
[00478] The term "processor" should be interpreted broadly to encompass a general purpose processor, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a controller, a microcontroller, a state machine, and so forth. Under some circumstances, a "processor" may refer to an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), etc. The term "processor" may refer to a combination of processing devices, e.g., a combination of a digital signal processor (DSP) and a microprocessor, a plurality of microprocessors, at least one microprocessor in conjunction with a digital signal processor (DSP) core, or any other such configuration. [00479] The term "memory" should be interpreted broadly to encompass any electronic component capable of storing electronic information. The term memory may refer to various types of processor-readable media such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, etc. Memory is said to be in electronic communication with a processor if the processor can read information from and/or write information to the memory. Memory that is integral to a processor is in electronic communication with the processor. [00480] The terms "instructions" and "code" should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms "instructions" and "code" may refer to one or more programs, routines, sub-routines, functions, procedures, etc. "Instructions" and "code" may comprise a single computer-readable statement or many computer-readable statements. 
[00481] It should be noted that at least one of the features, functions, procedures, components, elements, structures, etc., described in connection with any one of the configurations described herein may be combined with at least one of the functions, procedures, components, elements, structures, etc., described in connection with any of the other configurations described herein, where compatible. In other words, any compatible combination of the functions, procedures, components, elements, etc., described herein may be implemented in accordance with the systems and methods disclosed herein. [00482] The methods and apparatus disclosed herein may be applied generally in any transceiving and/or audio sensing application, especially mobile or otherwise portable instances of such applications. For example, the range of configurations disclosed herein includes communications devices that reside in a wireless telephony communication system configured to employ a code-division multiple-access (CDMA) over-the-air interface. Nevertheless, it would be understood by those skilled in the art that a method and apparatus having features as described herein may reside in any of the various communication systems employing a wide range of technologies known to those of skill in the art, such as systems employing Voice over IP (VoIP) over wired and/or wireless (e.g., CDMA, time division multiple access (TDMA), frequency division multiple access (FDMA), and/or time division synchronous code division multiple access (TDSCDMA)) transmission channels. [00483] It is expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in networks that are packet-switched (for example, wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit-switched. 
It is also expressly contemplated and hereby disclosed that communications devices disclosed herein may be adapted for use in narrowband coding systems (e.g., systems that encode an audio frequency range of about four or five kilohertz) and/or for use in wideband coding systems (e.g., systems that encode audio frequencies greater than five kilohertz), including whole-band wideband coding systems and split-band wideband coding systems. [00484] Examples of codecs that may be used with, or adapted for use with, transmitters and/or receivers of communications devices as described herein include the Enhanced Variable Rate Codec, as described in the Third Generation Partnership Project 2 (3GPP2) document C.S0014-C, v1.0, titled "Enhanced Variable Rate Codec, Speech Service Options 3, 68, and 70 for Wideband Spread Spectrum Digital Systems," February 2007 (available online at www.3gpp.org); the Selectable Mode Vocoder speech codec, as described in the 3GPP2 document C.S0030-0, v3.0, titled "Selectable Mode Vocoder (SMV) Service Option for Wideband Spread Spectrum Communication Systems," January 2004 (available online at www.3gpp.org); the Adaptive Multi-Rate (AMR) speech codec, as described in the document ETSI TS 126 092 V6.0.0 (European Telecommunications Standards Institute (ETSI), Sophia Antipolis Cedex, FR, December 2004); and the AMR Wideband speech codec, as described in the document ETSI TS 126 192 V6.0.0 (ETSI, December 2004). Such a codec may be used, for example, to recover the reproduced audio signal from a received wireless communications signal. [00485] The presentation of the described configurations is provided to enable any person skilled in the art to make or use the methods and other structures disclosed herein. The flowcharts, block diagrams and other structures shown and described herein are examples only, and other variants of these structures are also within the scope of the disclosure. 
Various modifications to these configurations are possible, and the generic principles presented herein may be applied to other configurations as well. Thus, the present disclosure is not intended to be limited to the configurations shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the attached claims as filed, which form a part of the original disclosure. [00486] Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits and symbols that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [00487] Important design requirements for implementation of a configuration as disclosed herein may include minimizing processing delay and/or computational complexity (typically measured in millions of instructions per second or MIPS), especially for computation-intensive applications, such as playback of compressed audio or audiovisual information (e.g., a file or stream encoded according to a compression format, such as one of the examples identified herein) or applications for wideband communications (e.g., voice communications at sampling rates higher than eight kilohertz, such as 12, 16, 32, 44.1, 48, or 192 kHz). [00488] An apparatus as disclosed herein (e.g., any device configured to perform a technique as described herein) may be implemented in any combination of hardware with software, and/or with firmware, that is deemed suitable for the intended application. For example, the elements of such an apparatus may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. 
One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Any two or more, or even all, of these elements may be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). [00489] One or more elements of the various implementations of the apparatus disclosed herein may be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, intellectual property (IP) cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products), and ASICs (application-specific integrated circuits). Any of the various elements of an implementation of an apparatus as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions, also called "processors"), and any two or more, or even all, of these elements may be implemented within the same such computer or computers. [00490] A processor or other means for processing as disclosed herein may be fabricated as one or more electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). 
Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs, FPGAs, ASSPs and ASICs. A processor or other means for processing as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of an implementation of a method as disclosed herein, such as a task relating to another operation of a device or system in which the processor is embedded (e.g., an audio sensing device). It is also possible for part of a method as disclosed herein to be performed by a processor of the audio sensing device and for another part of the method to be performed under the control of one or more other processors. [00491] Those of skill will appreciate that the various illustrative modules, logical blocks, circuits, and tests and other operations described in connection with the configurations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Such modules, logical blocks, circuits, and operations may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC or ASSP, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to produce the configuration as disclosed herein. 
For example, such a configuration may be implemented at least in part as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a general purpose processor or other digital signal processing unit. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A software module may reside in a non-transitory storage medium such as RAM (random-access memory), ROM (read-only memory), nonvolatile RAM (NVRAM) such as flash RAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, or a CD-ROM; or in any other form of storage medium known in the art. An illustrative storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. The term "computer-program product" refers to a computing device or processor in combination with code or instructions (e.g., a "program") that may be executed, processed or computed by the computing device or processor. 
[00492] It is noted that the various methods disclosed herein may be performed by an array of logic elements such as a processor, and that the various elements of an apparatus as described herein may be implemented as modules designed to execute on such an array. As used herein, the term "module" or "sub-module" can refer to any method, apparatus, device, unit or computer-readable data storage medium that includes computer instructions (e.g., logical expressions) in software, hardware or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system and one module or system can be separated into multiple modules or systems to perform the same functions. When implemented in software or other computer-executable instructions, the elements of a process are essentially the code segments to perform the related tasks, such as with routines, programs, objects, components, data structures, and the like. The term "software" should be understood to include source code, assembly language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples. The program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link. [00493] The implementations of methods, schemes, and techniques disclosed herein may also be tangibly embodied (for example, in tangible, computer-readable features of one or more computer-readable storage media as listed herein) as one or more sets of instructions executable by a machine including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). 
The term "computer-readable medium" may include any medium that can store or transfer information, including volatile, nonvolatile, removable, and non-removable storage media. Examples of a computer-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk or any other medium which can be used to store the desired information, a fiber optic medium, a radio frequency (RF) link, or any other medium which can be used to carry the desired information and can be accessed. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc. The code segments may be downloaded via computer networks such as the Internet or an intranet. In any case, the scope of the present disclosure should not be construed as limited by such embodiments. Each of the tasks of the methods described herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. In a typical application of an implementation of a method as disclosed herein, an array of logic elements (e.g., logic gates) is configured to perform one, more than one, or even all of the various tasks of the method. One or more (possibly all) of the tasks may also be implemented as code (e.g., one or more sets of instructions), embodied in a computer program product (e.g., one or more data storage media such as disks, flash or other nonvolatile memory cards, semiconductor memory chips, etc.), that is readable and/or executable by a machine (e.g., a computer) including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The tasks of an implementation of a method as disclosed herein may also be performed by more than one such array or machine. 
In these or other implementations, the tasks may be performed within a device for wireless communications such as a cellular telephone or other device having such communications capability. Such a device may be configured to communicate with circuit-switched and/or packet-switched networks (e.g., using one or more protocols such as VoIP). For example, such a device may include RF circuitry configured to receive and/or transmit encoded frames.
[00494] It is expressly disclosed that the various methods disclosed herein may be performed by a portable communications device such as a handset, headset, or portable digital assistant (PDA), and that the various apparatus described herein may be included within such a device. A typical real-time (e.g., online) application is a telephone conversation conducted using such a mobile device.
[00495] In one or more exemplary embodiments, the operations described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, such operations may be stored on or transmitted over a computer-readable medium as one or more instructions or code. The term "computer-readable media" includes both computer-readable storage media and communication (e.g., transmission) media. By way of example, and not limitation, computer-readable storage media can comprise an array of storage elements, such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, EEPROM, and/or flash RAM), or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; CD-ROM or other optical disk storage; and/or magnetic disk storage or other magnetic storage devices. Such storage media may store information in the form of instructions or data structures that can be accessed by a computer.
Communication media can comprise any medium that can be used to carry desired program code in the form of instructions or data structures and that can be accessed by a computer, including any medium that facilitates transfer of a computer program from one place to another. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and/or microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technology such as infrared, radio, and/or microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray Disc™ (Blu-Ray Disc Association, Universal City, CA), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[00496] An acoustic signal processing apparatus as described herein may be incorporated into an electronic device that accepts speech input in order to control certain operations, or may otherwise benefit from separation of desired noises from background noises, such as communications devices. Many applications may benefit from enhancing or separating clear desired sound from background sounds originating from multiple directions. Such applications may include human-machine interfaces in electronic or computing devices that incorporate capabilities such as voice recognition and detection, speech enhancement and separation, voice-activated control, and the like. It may be desirable to implement such an acoustic signal processing apparatus to be suitable in devices that only provide limited processing capabilities.
[00497] The elements of the various implementations of the modules, elements, and devices described herein may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or gates. One or more elements of the various implementations of the apparatus described herein may also be implemented in whole or in part as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs, ASSPs, and ASICs.
[00498] It is possible for one or more elements of an implementation of an apparatus as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as a task relating to another operation of a device or system in which the apparatus is embedded. It is also possible for one or more elements of an implementation of such an apparatus to have structure in common (e.g., a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times).
[00499] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the systems, methods, and apparatus described herein without departing from the scope of the claims.
[00500] What is claimed is:
Examples described herein provide a central processing unit (CPU) to reserve a region of memory to store both a boot firmware code and a second boot firmware code and to perform the second boot firmware code without reboot. The reserved region of memory can be a region that is not configured for access by an operating system (OS). The reserved region of memory can comprise System Management Random Access Memory (SMRAM). If a first interrupt handler is not overwritten after a second boot firmware code is stored, the CPU can roll back to use of the first interrupt handler.
CLAIMS:
What is claimed is:
1. An apparatus comprising: a central processing unit (CPU) to store both a boot firmware code and a replacement boot firmware code and to perform a portion of the replacement boot firmware code without reboot of the CPU.
2. The apparatus of claim 1, wherein the CPU is to reserve a region of memory for both the boot firmware code and the replacement boot firmware code and wherein the reserved region of memory comprises a region that is not configured for access by an operating system (OS).
3. The apparatus of claim 2, wherein the reserved region of memory comprises System Management Random Access Memory (SMRAM).
4. The apparatus of claim 1, wherein the boot firmware code comprises one or more of: Basic Input/Output System (BIOS), Universal Extensible Firmware Interface (UEFI), a boot loader, or System Management Interrupt (SMI) handler and wherein the replacement boot firmware code comprises one or more of: a BIOS, UEFI, a boot loader, or SMI handler.
5. The apparatus of claim 1, comprising a boot controller to load the replacement boot firmware code from a storage device.
6. The apparatus of claim 5, wherein the storage device is locally or remotely connected with the CPU using one or more of a bus, interface, fabric, or network.
7. The apparatus of claim 1, wherein the CPU is to perform the portion of the replacement boot firmware code based on authentication of the portion of the replacement boot firmware code.
8. The apparatus of claim 1, wherein the replacement boot firmware code comprises a replacement System Management Interrupt (SMI) handler and wherein the CPU is to switch to execution of the replacement SMI handler in response to an SMI or interrupt without reboot of the CPU.
9. The apparatus of claim 1, comprising a boot controller to load the replacement boot firmware code from a storage device and comprising a server, data center, or rack, wherein the server, data center, or rack is to receive the replacement boot firmware code and store the replacement boot firmware code into the storage device.
10. A method comprising: based on execution of a portion of a first version of boot firmware code by a processor, generating a region in memory large enough to store the first version of boot firmware code and a second version of boot firmware code and, based on a detected indication of an update to boot firmware code, storing a portion of the second version of boot firmware code in the region in memory.
11. The method of claim 10, wherein the first version of the boot firmware code comprises one or more of: Basic Input/Output System (BIOS), Universal Extensible Firmware Interface (UEFI), a boot loader, or System Management Interrupt (SMI) handler and wherein the second version of the boot firmware code comprises one or more of: a BIOS, UEFI, a boot loader, or SMI handler.
12. The method of claim 10, wherein the region in memory comprises a region that is not configured for access by an operating system (OS).
13. The method of claim 10, wherein the region in memory comprises System Management Random Access Memory (SMRAM).
14. The method of claim 10, comprising loading the portion of the second version of boot firmware code from one or more of: a locally connected storage device, a network accessible storage device, or a fabric accessible storage device.
15. The method of claim 10, wherein storing the portion of the second version of boot firmware code in memory comprises authenticating the portion of the second version of boot firmware code prior to storing the portion of the second version of boot firmware code in memory.
16. The method of claim 10, wherein the portion of the second version of boot firmware code comprises a replacement System Management Interrupt (SMI) handler and comprising executing the replacement SMI handler in response to an SMI or interrupt without rebooting the processor.
17. At least one non-transitory computer-readable medium, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: allocate a region in memory that is hidden from an operating system (OS), wherein the region is large enough to store at least a new version of boot firmware code; and based on an indication of a new version of a boot firmware code and authentication of the new version of a boot firmware code, copy a portion of the new version of the boot firmware code into the region.
18. The at least one non-transitory computer-readable medium of claim 17, wherein the new version of boot firmware code comprises one or more of: Basic Input/Output System (BIOS), Universal Extensible Firmware Interface (UEFI), a boot loader, or System Management Interrupt (SMI) handler.
19. The at least one non-transitory computer-readable medium of claim 17, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: attempt to validate the new version of the boot firmware code and not permit execution of any portion of the new version of the boot firmware code based on failure to validate the new version of the boot firmware code.
20. The at least one non-transitory computer-readable medium of claim 17, comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: roll back to execution of a prior version of the boot firmware code stored in the region.
UPDATE OF BOOT CODE HANDLERS

CLAIM OF PRIORITY
This application claims priority under 35 U.S.C. § 365(c) to US Application No. 16/790,488, filed February 13, 2020, entitled "UPDATE OF BOOT CODE HANDLERS", which is incorporated herein in its entirety.

DESCRIPTION
In a platform with a central processing unit (CPU), such as a server or personal computer (PC), System Management Mode (SMM) is the most privileged operating mode of a CPU, in which all other running tasks are suspended, including the operating system. SMM involves managing and handling various platform events such as errors and other reliability, availability and serviceability (RAS) events.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts a CPU.
FIGs. 2 and 3 depict example processes.
FIG. 4 depicts a system.
FIG. 5 depicts an example environment.

DETAILED DESCRIPTION
SMM can be used for handling runtime events that involve specific silicon and platform configurations and capabilities (e.g., reliability, availability and serviceability (RAS) events, memory or connection errors, or other failure conditions) in an OS-transparent fashion, which can be platform and silicon specific. SMM can be invoked by a System Management Interrupt (SMI) and exited by a Resume from System Management Mode (RSM) instruction. An SMI can be generated by platform events such as RAS, power management, or thermal events, or via software-triggered SMIs. SMIs are high-priority, non-maskable, broadcast interrupts. On receipt of this interrupt, the CPU(s) in the system save their context and transition to SMM. An SMI handler, copied from BIOS flash to System Management RAM (SMRAM) during boot, executes during SMM.
As a runtime component, an SMI handler can be updated for a variety of reasons such as bug fixes, security patches, feature enhancements and so forth. SMI handler updates are deployed as part of a BIOS update. Updating a BIOS (and the SMI handler) can involve a system reset.
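The SMI entry and RSM exit sequence described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the context structure is an assumption (real hardware saves a much larger state into SMRAM), and clearing the registers stands in for the handler setting up its own environment.

```c
/* Illustrative CPU context; real SMM state save areas hold far more. */
typedef struct { unsigned long ip, sp, flags; } cpu_context;

static cpu_context saved;  /* save area; lives in SMRAM on real hardware */

/* On SMI: save the interrupted context, then hand control to SMM code
 * (represented here by clearing the registers the handler will reuse). */
void smi_entry(cpu_context *cur) {
    saved = *cur;
    cur->ip = 0;
    cur->sp = 0;
}

/* RSM: restore the saved context so the suspended tasks resume. */
void rsm(cpu_context *cur) {
    *cur = saved;
}
```

The key point the sketch captures is that the interrupted context survives SMM untouched: whatever the handler does, RSM returns the CPU to exactly the state it saved on entry.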
In data center and cloud environments, where fleets of hundreds of thousands of server nodes are deployed, resetting them introduces unavailability of the server nodes. This platform reset is very expensive downtime for cloud service providers (CSPs) hosting a variety of critical workloads. In other words, updating a BIOS causes non-monetizable downtime and potentially failure to achieve service level agreements (SLAs) with customers.
In SMM, a CPU saves the context of its executed instructions. The context can be saved into System Management Random Access Memory (SMRAM). The SMI handler then sets up its own environment (e.g., page tables, Interrupt Descriptor Tables (IDTs)) and executes code that is placed by the platform BIOS in an area of SMRAM. In some cases, SMRAM is an area of memory that is hidden from the OS such that the OS cannot access or configure the SMRAM. For example, from outside of SMM, any writes to SMRAM are not performed and reads will return a static value, for example, all Fs, 0s, or -1s. SMRAM is accessible to processors which have entered SMM. During SMM, the CPU can continue to execute an OS or virtualized execution environment. For an example description of SMM, see Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3C: System Programming Guide, Part 3, which is incorporated by reference in its entirety.
Various embodiments provide updating or modifying boot firmware code or a system management mode handler without a platform reset or a reboot of the CPU that is to execute the updated or modified boot firmware code or system management mode handler. Various embodiments securely add an alternative or second SMI handler for use in addition to a first SMI handler, which also remains available for use after receipt of the second SMI handler. The second SMI handler can be stored in a different region of SMRAM than that of the first SMI handler.
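The SMRAM gating behavior described above (writes from outside SMM dropped, reads returning a static value) can be sketched as below. The `in_smm` flag, the storage array, and the choice of all-Fs as the static value are illustrative assumptions; on real hardware this gating is enforced by the memory controller, not software.

```c
#include <stdint.h>

#define SMRAM_SIZE 256
static uint8_t smram[SMRAM_SIZE];  /* hidden backing store */
static int in_smm = 0;             /* whether the CPU has entered SMM */

void enter_smm(void) { in_smm = 1; }
void exit_smm(void)  { in_smm = 0; }

/* Outside SMM, writes to SMRAM are silently dropped. */
void smram_write(uint32_t addr, uint8_t v) {
    if (in_smm && addr < SMRAM_SIZE)
        smram[addr] = v;
}

/* Outside SMM, reads return a static value (all Fs here). */
uint8_t smram_read(uint32_t addr) {
    if (!in_smm || addr >= SMRAM_SIZE)
        return 0xFF;
    return smram[addr];
}
```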
The second SMI handler can be an update as compared to the first SMI handler. Securely adding a second SMI handler can occur by decrypting the second SMI handler using public and private keys at the platform. Boot firmware code can configure or re-configure system software at or after boot.
While in SMM, the processor executes SMI handler code to perform operations such as powering down unused storage or displays, executing proprietary code, or placing the whole system in a suspended state. When the SMI handler has completed its operations, it executes a resume (RSM) instruction. After an alternative or second SMI handler is added, in response to a subsequent SMI, the CPU platform (or its boot core or processor) can load the second SMI handler.
To update an SMI handler, various embodiments can perform one or more of: (1) store the new SMI handler as a UEFI capsule (e.g., part of a boot firmware code update), (2) authenticate the staged image from within the existing SMM handler (e.g., authentication libraries in a current or former SMI handler can authenticate the new SMI handler or boot firmware code), (3) decline or accept the new SMI handler code, or (4) install accepted SMI handler code within an SMM. Note that (3) and (4) can be performed by the current SMI handler and executed during runtime. In various embodiments, the first SMI handler is not overwritten. Various embodiments provide a capability to roll back to use of the first SMI handler. For example, if the second SMI handler has bugs or errors, or for any other reason, after a subsequent SMI, the platform can switch to use of the first SMI handler.
Accordingly, reboot or reset of a CPU is not needed (but can take place) in the event of an update to an SMI handler, because the updated SMI handler is used during a next SMI or interrupt during runtime. Moreover, virtualized execution environments or operating systems need not be rebooted during an addition of an SMI handler or boot firmware code.
Data center owners and cloud service providers can potentially avoid downtime due to updating an SMI handler or boot firmware code. In addition, vendors of CPUs or processors have the capability to avoid or reduce downtime in the event of an update of an SMI handler or boot firmware code.
FIG. 1 depicts an embodiment. Central processing unit (CPU) 102 can include cores 104-0 to 104-n. A core can be an execution core or computational engine that is capable of executing instructions. A core can have access to its own cache and read only memory (ROM), or multiple cores can share a cache or ROM. Cores can be homogeneous and/or heterogeneous devices. Any type of inter-processor communication technique can be used, such as but not limited to messaging, inter-processor interrupts (IPI), inter-processor communications, and so forth. Cores can be connected in any type of manner, such as but not limited to, bus, ring, or mesh.
A core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. CPU 102 causes boot controller 114 to access boot firmware code from storage 120. Boot firmware code can have a header file that identifies a map of what boot code is to be copied by CPU 102. For example, a .h file for a boot firmware code can have a flash image layout map of which segments of the boot firmware code are to be copied. When executed by a processor, boot firmware code can perform hardware initialization during a booting process (e.g., power-on startup) and provide runtime services for operating systems and programs.
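A flash image layout map of the kind such a .h file might contain can be sketched as follows. The segment names, offsets, and sizes are illustrative assumptions, not the contents of any actual boot firmware header.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of a hypothetical flash image layout map: which segments
 * of the boot firmware image exist, and which are copied at boot. */
typedef struct {
    const char *name;   /* segment label, e.g. "SMI_HANDLER" */
    uint32_t    offset; /* byte offset within the flash image */
    uint32_t    size;   /* segment length in bytes */
    uint8_t     copy;   /* nonzero if copied into memory at boot */
} flash_segment_t;

static const flash_segment_t flash_map[] = {
    { "BOOT_BLOCK",  0x000000, 0x10000, 1 },
    { "SMI_HANDLER", 0x010000, 0x20000, 1 },
    { "NV_CONFIG",   0x030000, 0x08000, 0 },  /* stays in flash */
};

/* Total bytes the CPU must copy into memory during boot. */
uint32_t bytes_to_copy(const flash_segment_t *map, size_t n) {
    uint32_t total = 0;
    for (size_t i = 0; i < n; i++)
        if (map[i].copy)
            total += map[i].size;
    return total;
}
```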
Some portions of boot firmware code (e.g., an SMI handler) can configure or re-configure system software at or after boot.
In some embodiments, boot firmware code can be one or more of: Basic Input/Output System (BIOS), Universal Extensible Firmware Interface (UEFI), or a boot loader. The BIOS firmware can be pre-installed on a personal computer's system board or accessible through an SPI interface from a boot storage (e.g., flash memory). In some examples, a BIOS can be stored on a device and accessible from the device by one or more cores or CPUs using an interface such as Serial Peripheral Interface (SPI) or another interface (e.g., PCIe). BIOS can initialize and test the system hardware components and load a boot loader from a memory device, which initializes and executes an operating system. The OS, in some examples, can be Linux®, Windows®, FreeBSD®, Android®, MacOS®, iOS®, or any other operating system. The OS and driver can execute on a CPU sold or designed by Intel®, ARM®, AMD®, Qualcomm®, IBM®, Texas Instruments®, among others.
In some examples, a Universal Extensible Firmware Interface (UEFI) can be used instead of or in addition to a BIOS for booting or restarting cores or processors. UEFI is a specification that defines a software interface between an operating system and platform firmware. UEFI can read entries from disk partitions, booting not just from a disk or storage but from a specific boot loader in a specific location on a specific disk or storage. UEFI can support remote diagnostics and repair of computers, even with no operating system installed. A boot loader can be written for UEFI and can be instructions that boot firmware code executes, and the boot loader is to boot the operating system(s).
A UEFI bootloader can be a bootloader capable of reading from a UEFI-type firmware.
Boot controller 114 can be any type of controller (e.g., microcontroller) capable of managing boot firmware code loading and storage into memory 106. In some examples, boot controller 114 can be coupled to storage 120 using one or more of the following protocols: serial peripheral interface (SPI), enhanced SPI (eSPI), PCIe, or another interface. In some examples, storage 120 can be connected to boot controller 114 using a fabric or network, with content transmitted using packets. Memory 106 can be any type of volatile or non-volatile memory. For example, by execution of a boot firmware code, a portion of the addressable memory of memory 106 can be allocated as SMRAM 108. SMRAM 108 can be an area of memory 106 that is hidden from the OS such that the OS cannot access or configure SMRAM 108. For example, from outside of SMM, any writes to SMRAM 108 are not performed and reads will return a static value, for example, all Fs, 0s, or -1s. SMRAM 108 can be accessible to processors which have entered SMM.
During normal boot, execution of boot firmware code by a core (e.g., a boot strap processor (BSP) core that executes boot firmware on a CPU node) causes installation of at least one SMM handler (e.g., first handler 110) as part of the boot process. In addition, execution of boot firmware code also allocates additional memory space in the SMRAM for accepting at least one more SMM handler or other portion of replacement boot firmware code, shown as reserved region 112.
In some examples, a boot firmware code build process generates an alternative or second SMM handler that is the same as or different from any current or prior SMM handler stored in SMRAM 108. The second SMM handler can be, for example, a UEFI capsule. A UEFI capsule is an industry-standard way of encapsulating a binary image for boot firmware code updates.
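The SMRAM layout with a reserved region for a replacement handler can be sketched as below. The base address and slot size are illustrative assumptions; the only point carried over from the text is that boot code places the first handler and reserves equally usable space for a second.

```c
#include <stdint.h>

/* Illustrative values; real SMRAM placement is platform-specific. */
#define SMRAM_BASE        0x30000000u
#define HANDLER_MAX_SIZE  0x00020000u  /* 128 KiB per handler slot */

typedef struct {
    uint32_t first_handler;   /* slot for the handler installed at boot */
    uint32_t reserved_region; /* slot kept free for a future update */
} smram_layout_t;

/* Plan the layout: first handler at the base, reserved region
 * immediately after it (cf. first handler 110 and reserved region 112). */
smram_layout_t plan_smram_layout(void) {
    smram_layout_t l;
    l.first_handler   = SMRAM_BASE;
    l.reserved_region = SMRAM_BASE + HANDLER_MAX_SIZE;
    return l;
}
```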
But in some embodiments, the UEFI capsule is used to update a runtime component of the boot firmware code. In some cases, modification of boot firmware code in SMRAM 108 can involve use of an alternative or second SMM handler. The UEFI capsule can include updatable binary images with relocatable Portable Executable (PE) file format for executable or dynamic linked library (dll) files based on COFF (Common Object File Format). For example, the UEFI capsule can include executable (*.exe) files. This UEFI capsule can be deployed to a target platform as an SMM image via existing OS-specific techniques (e.g., Windows Update for Azure, or LVFS for Linux).
An Update Tool executed on CPU 102 (e.g., an OS tool (in band) or baseboard management controller (BMC) (out of band)) identifies boot firmware code updates in storage 120. For example, the Update Tool can read a flag stored in memory that indicates whether there is an update to boot firmware code. In some cases, an Update Tool can receive indications that there is an update to boot firmware code. In response to an indication of a second boot firmware code 124 in storage 120, the Update Tool invokes first SMI handler 110 and indicates that a new boot firmware code image is available to authenticate.
Execution of first SMI handler 110 causes authentication of second boot firmware code 124. Secure Sockets Layer (SSL) can be used to authenticate second boot firmware code 124. If second boot firmware code 124 is encapsulated in a UEFI capsule, the signature of the capsule can be validated using public- and private-key decryption. For example, a boot code supplier signs the boot code with a private key, and the corresponding public key is stored in SMRAM 108. The public key can be used to authenticate the signature of the capsule.
If the signature is authenticated, first SMI handler 110 accepts the image and informs the Update Tool that invoked the SMRAM update that second boot firmware code 124 is valid and accepted for use.
In some examples, if authenticated, second boot firmware code 124 is stored in storage 120 and copied to reserved region 112. In some examples, second boot firmware code 124 is stored in reserved region 112 and authenticated from reserved region 112. If second boot firmware code 124 is authenticated, then second boot firmware code 124 can be used. However, if second boot firmware code 124 is not authenticated, then second boot firmware code 124 can be ignored, deleted, or overwritten.
An Update Tool or administrator can trigger or approve a switch to use of second boot firmware code 124. Use of second boot firmware code 124 can occur by changing a pointer to indicate use of second boot firmware code 124 and/or a second SMI handler received with second boot firmware code 124. In some examples, first handler 110 can be overwritten with a second handler. In some examples, roll-back from a second handler to first handler 110 can be performed. If a problem occurs with an alternative or second handler in reserved region 112, the CPU can roll back to or use first handler 110. An Update Tool or administrator can trigger or approve a switch to use of the first handler or another boot firmware code stored in SMRAM 108. For example, a version number of a handler can be specified to identify a handler to use.
In some examples, any core can execute a packet processing process as an application or part of a virtual execution environment.
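The pointer switch and roll-back just described can be sketched as follows. The handler bodies, function names, and dispatch mechanism are illustrative assumptions, not the actual SMM dispatch path; the handlers return a version number only so the switch is observable.

```c
/* Active-handler dispatch via a function pointer. */
typedef int (*smi_handler_fn)(void);

static smi_handler_fn first_handler;
static smi_handler_fn active_handler;

/* Example handler bodies returning a version number for illustration. */
static int handler_v1(void) { return 1; }
static int handler_v2(void) { return 2; }

/* Boot installs the first handler; it stays available for roll-back. */
void install_first(smi_handler_fn h)  { first_handler = h; active_handler = h; }

/* Accept an authenticated replacement without overwriting the first. */
void install_second(smi_handler_fn h) { active_handler = h; }

/* Roll back to the first handler, e.g., if the second has bugs. */
void rollback(void) { active_handler = first_handler; }

/* On the next SMI, the dispatcher calls whichever handler is active. */
int on_smi(void) { return active_handler ? active_handler() : -1; }
```

Because the first handler is never overwritten, roll-back is just restoring the pointer; no reboot is involved in either direction.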
A packet processing process can perform processing of received packets, such as one or more of: determination of whether a packet is valid (e.g., correct Ethernet type, correct checksum, correct IP protocol type, valid layers 4-7 protocol type), determination of packet destination (e.g., next hop, destination queue), match-action activity, IP filter checks, flow table lookup, access control lists (ACL), firewall, match-action operations, outgoing port selection using a forwarding table, packet decryption, packet encryption, denial of service protection, packet counting, billing, traffic management/conditioning, traffic shaping/traffic scheduling, packet marking/remarking, packet inspection of layers 4-7, or traffic load balancing/load distribution. For example, a packet processing process can perform Data Plane Development Kit (DPDK) or OpenDataPlane (ODP) compatible packet processing.
A packet can include a header and payload. A header can include media access control (MAC) source and destination addresses, Ethertype, Internet Protocol (IP) source and destination addresses, IP protocol, Transmission Control Protocol (TCP) port numbers, and virtual local area network (VLAN) or Multi-Protocol Label Switching (MPLS) tags.
A packet processing process can perform packet processing using Network Function Virtualization (NFV), software-defined networking (SDN), virtualized network functions (VNF), Evolved Packet Core (EPC), or 5G network slicing. Some example implementations of NFV are described in European Telecommunications Standards Institute (ETSI) specifications or Open Source NFV Management and Orchestration (MANO) from ETSI's Open Source Mano (OSM) group. A VNF can include a service chain or sequence of virtualized tasks executed on generic configurable hardware, such as firewalls, domain name system (DNS), caching, or network address translation (NAT), and can run in virtual execution environments. VNFs can be linked together as a service chain.
In some examples, EPC is a 3GPP-specified core architecture at least for Long Term Evolution (LTE) access. 5G network slicing can provide for multiplexing of virtualized and independent logical networks on the same physical network infrastructure.
Any core can execute a virtualized execution environment. A virtualized execution environment can include at least a virtual machine or a container. A virtual machine (VM) can be software that runs an operating system and one or more applications. A VM can be defined by a specification, configuration files, a virtual disk file, a non-volatile random access memory (NVRAM) setting file, and a log file, and is backed by the physical resources of a host computing platform. A VM can be an OS or application environment that is installed on software which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the PC client or server's CPU, memory, hard disk, network, and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux® and Windows® Server operating systems on the same underlying physical host.
A container can be a software package of applications, configurations, and dependencies so that applications run reliably from one computing environment to another. Containers can share an operating system installed on the server platform and run as isolated processes. A container can be a software package that contains everything the software needs to run, such as system tools, libraries, and settings. Containers are not installed like traditional software programs, which allows them to be isolated from the other software and the operating system itself.
Isolation can include permitted access to a region of addressable memory or storage by a particular container but not another container. The isolated nature of containers provides several benefits. First, the software in a container will run the same in different environments. For example, a container that includes PHP and MySQL can run identically on both a Linux computer and a Windows® machine. Second, containers provide added security, since the software will not affect the host operating system. While an installed application may alter system settings and modify resources, such as the Windows® registry, a container can only modify settings within the container.
FIG. 2 depicts a process that can be performed by a processor during a boot operation. At 202, in connection with execution of boot firmware code, a region of memory is allocated to include a boot firmware code. For example, the region of memory can be SMRAM. An additional region can be allocated to store at least a second handler and potentially other boot firmware code.
At 204, in connection with execution of boot firmware code, a default SMI handler can be stored into the region of memory allocated to store boot firmware code. For example, a first SMI handler can be stored into SMRAM, with the SMRAM having leftover space for the additional region to store at least a second handler or boot firmware code. The process can continue to boot using the available boot firmware code.
FIG. 3 depicts a process that can be used to provide another SMI handler and/or boot firmware code. Actions 302-306 can be performed by an administrator to deploy a boot firmware code to one or more CPU nodes. At 302, a second SMI handler image is formed. For example, the second SMI handler can be the same as or different from a prior SMI handler that is loaded on a platform with one or more CPU nodes. The second SMI handler can be formed within a capsule such as a UEFI capsule. At 304, the second SMI handler can be deployed.
For example, deployment can involve providing the second SMI handler in an OS-specific or virtual-execution-environment-specific manner. For example, for the Windows operating system, Windows Update (WU) can be used to deploy and authenticate a second SMI handler. For Linux, the Linux Vendor Firmware Service (LVFS) can be used to deploy and authenticate the second SMI handler.

At 306, the second SMI handler can be invoked. For example, when the second SMI handler is formed within a UEFI capsule, the UEFI capsule can be invoked in order to copy a second boot firmware code (e.g., second SMI handler) from boot firmware storage to SMRAM. For example, the instruction Invoke UEFI UpdateCapsule() can be used to copy a second boot firmware code (e.g., second SMI handler) from boot firmware storage to SMRAM.

Actions 308-316 can be performed by one or more CPU platforms. At 308, a check can be made for any boot firmware code update. A current SMI handler can be used to check for any updates to boot firmware code stored in boot storage. For example, a Port 0xB2 write can be used to check for a firmware code stored in boot storage. In this example, a boot firmware code update is stored in boot storage. In some examples, the boot firmware code update includes an SMI handler.

At 310, the boot firmware code newly identified to be stored in boot storage is authenticated to determine if the boot firmware code can be executed. For example, various validation procedures can take place, such as but not limited to use of secure sockets layer (SSL) or Transport Layer Security (TLS) to ensure that the boot firmware code is authentic, or use of public-private key encryption to authenticate the boot firmware code. An Update Tool can identify boot firmware code updates in storage.

At 312, a determination is made as to whether the authentication passed.
If authentication passes, the process continues to 314, but if authentication fails, the process continues to 320.

At 314, the boot firmware code update is made accessible for use. For example, the boot firmware code update can be copied to SMRAM from boot storage.

At 316, the boot firmware code update is invoked for use. For example, a pointer can be updated to point to a memory address of the boot firmware code update in SMRAM to cause loading and use of the boot firmware code update. For example, if the boot firmware code update includes a second SMI handler, the second SMI handler can be used to respond to an SMI or interrupt during runtime.

At 320, the boot firmware code update is rejected. An administrator can be notified, as the attempt to load another boot firmware code may be a malicious attack, so that further attack or impact can be stopped. The boot firmware code that was not authenticated can be deleted or overwritten in some examples.

FIG. 4 depicts a system. Various embodiments can be used by system 400 to update or access another boot firmware code without rebooting or restarting a CPU or processor. System 400 includes processor 410, which provides processing, operation management, and execution of instructions for system 400. Processor 410 can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system 400, or a combination of processors.
Processor 410 controls the overall operation of system 400, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

In one example, system 400 includes interface 412 coupled to processor 410, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem 420 or graphics interface 440, or accelerators 442. Interface 412 represents an interface circuit, which can be a standalone component or integrated onto a processor die. Where present, graphics interface 440 interfaces to graphics components for providing a visual display to a user of system 400. In one example, graphics interface 440 can drive a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra-high definition or UHD), or others. In one example, the display can include a touchscreen display. In one example, graphics interface 440 generates a display based on data stored in memory 430 or based on operations executed by processor 410 or both.

Accelerators 442 can be a programmable or fixed function offload engine that can be accessed or used by a processor 410.
For example, an accelerator among accelerators 442 can provide sequential and speculative decoding operations in a manner described herein, compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services. In some embodiments, in addition or alternatively, an accelerator among accelerators 442 provides field select controller capabilities as described herein. In some cases, accelerators 442 can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU). For example, accelerators 442 can include a single or multi-core processor, graphics processing unit, logical execution unit, single or multi-level cache, functional units usable to independently execute programs or threads, application specific integrated circuits (ASICs), neural network processors (NNPs), programmable control logic, and programmable processing elements such as field programmable gate arrays (FPGAs). Accelerators 442 can provide multiple neural networks, CPUs, processor cores, general purpose graphics processing units, or graphics processing units that can be made available for use by artificial intelligence (AI) or machine learning (ML) models. For example, the AI model can use or include any or a combination of: a reinforcement learning scheme, Q-learning scheme, deep-Q learning, or Asynchronous Advantage Actor-Critic (A3C), combinatorial neural network, recurrent combinatorial neural network, or other AI or ML model. Multiple neural networks, processor cores, or graphics processing units can be made available for use by AI or ML models.

Memory subsystem 420 represents the main memory of system 400 and provides storage for code to be executed by processor 410, or data values to be used in executing a routine.
Memory subsystem 420 can include one or more memory devices 430 such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices. Memory 430 stores and hosts, among other things, operating system (OS) 432 to provide a software platform for execution of instructions in system 400. Additionally, applications 434 can execute on the software platform of OS 432 from memory 430. Applications 434 represent programs that have their own operational logic to perform execution of one or more functions. Processes 436 represent agents or routines that provide auxiliary functions to OS 432 or one or more applications 434 or a combination. OS 432, applications 434, and processes 436 provide software logic to provide functions for system 400. In one example, memory subsystem 420 includes memory controller 422, which is a memory controller to generate and issue commands to memory 430. It will be understood that memory controller 422 could be a physical part of processor 410 or a physical part of interface 412. For example, memory controller 422 can be an integrated memory controller, integrated onto a circuit with processor 410.

While not specifically illustrated, it will be understood that system 400 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination.
Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a Hyper Transport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (Firewire).

In one example, system 400 includes interface 414, which can be coupled to interface 412. In one example, interface 414 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 414. Network interface 450 provides system 400 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 450 can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 450 can transmit data to a device that is in the same data center or rack or a remote device, which can include sending data stored in memory. Network interface 450 can receive data from a remote device, which can include storing received data into memory. Various embodiments can be used in connection with network interface 450, processor 410, and memory subsystem 420.

In one example, system 400 includes one or more input/output (I/O) interface(s) 460. I/O interface 460 can include one or more interface components through which a user interacts with system 400 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 470 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 400.
A dependent connection is one where system 400 provides the software platform or hardware platform or both on which operation executes, and with which a user interacts.

In one example, system 400 includes storage subsystem 480 to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage 480 can overlap with components of memory subsystem 420. Storage subsystem 480 includes storage device(s) 484, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 484 holds code or instructions and data 486 in a persistent state (i.e., the value is retained despite interruption of power to system 400). Storage 484 can be generically considered to be a "memory," although memory 430 is typically the executing or operating memory to provide instructions to processor 410. Whereas storage 484 is nonvolatile, memory 430 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 400). In one example, storage subsystem 480 includes controller 482 to interface with storage 484. In one example, controller 482 is a physical part of interface 414 or processor 410 or can include circuits or logic in both processor 410 and interface 414.

A volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power is interrupted to the device. Dynamic volatile memory can involve refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (Dynamic Random Access Memory), or some variant such as Synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (Double Data Rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on June 27, 2007),
DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4, extended), LPDDR3 (Low Power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LPDDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide Input/Output version 2, JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory, JESD235, originally published by JEDEC in October 2013), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), or others or combinations of memory technologies, and technologies based on derivatives or extensions of such specifications.

A non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable memory device, such as NAND technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND).
An NVM device can also comprise a byte-addressable write-in-place three dimensional cross point memory device, or other byte addressable write-in-place NVM device (also referred to as persistent memory), such as single or multi-level Phase Change Memory (PCM) or phase change memory with a switch (PCMS), NVM devices that use chalcogenide phase change material (for example, chalcogenide glass), resistive memory including metal oxide base, oxygen vacancy base and Conductive Bridge Random Access Memory (CB-RAM), nanowire memory, ferroelectric random access memory (FeRAM, FRAM), magnetoresistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.

A power source (not depicted) provides power to the components of system 400. More specifically, the power source typically interfaces to one or multiple power supplies in system 400 to provide power to the components of system 400. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source. In one example, the power source includes a DC power source, such as an external AC to DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.

In an example, system 400 can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components.
High speed interconnects can be used such as: Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Peripheral Component Interconnect express (PCIe), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, Compute Express Link (CXL), HyperTransport, high-speed fabric, NVLink, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof. Data can be copied or stored to virtualized storage nodes using a protocol such as NVMe over Fabrics (NVMe-oF) or NVMe.

Embodiments herein may be implemented in various types of computing and networking equipment, such as switches, routers, racks, and blade servers such as those employed in a data center and/or server farm environment. The servers used in data centers and server farms comprise arrayed server configurations such as rack-based servers or blade servers. These servers are interconnected in communication via various network provisions, such as partitioning sets of servers into Local Area Networks (LANs) with appropriate switching and routing facilities between the LANs to form a private Intranet. For example, cloud hosting facilities may typically employ large data centers with a multitude of servers.
A blade comprises a separate computing platform that is configured to perform server-type functions, that is, a “server on a card.” Accordingly, a blade includes components common to conventional servers, including a main printed circuit board (main board) providing internal wiring (i.e., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board.

Various embodiments can be used in a base station that supports communications using wired or wireless protocols (e.g., 3GPP Long Term Evolution (LTE) (4G) or 3GPP 5G), on-premises data centers, off-premises data centers, edge network elements, fog network elements, and/or hybrid data centers (e.g., data centers that use virtualization, cloud and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments).

FIG. 5 depicts an environment 500 that includes multiple computing racks 502, one or more including a Top of Rack (ToR) switch 504, a pod manager 506, and a plurality of pooled system drawers. Various embodiments can be used among racks to share content or data or results of processing or storing content. Generally, the pooled system drawers may include pooled compute drawers and pooled storage drawers. Optionally, the pooled system drawers may also include pooled memory drawers and pooled Input/Output (I/O) drawers. In the illustrated embodiment the pooled system drawers include an Intel® XEON® pooled compute drawer 508, an Intel® ATOM™ pooled compute drawer 510, a pooled storage drawer 512, a pooled memory drawer 514, and a pooled I/O drawer 516. Each of the pooled system drawers is connected to ToR switch 504 via a high-speed link 518, such as a 40 Gigabit/second (Gb/s) or 100 Gb/s Ethernet link or a 100+ Gb/s Silicon Photonics (SiPh) optical link, or higher speeds.

Multiple of the computing racks 502 may be interconnected via their ToR switches 504 (e.g., to a pod-level switch or data center switch), as illustrated by connections to a network 520. In some embodiments, groups of computing racks 502 are managed as separate pods via pod manager(s) 506. In one embodiment, a single pod manager is used to manage all of the racks in the pod. Alternatively, distributed pod managers may be used for pod management operations. Environment 500 further includes a management interface 522 that is used to manage various aspects of the environment.
This includes managing rack configuration, with corresponding parameters stored as rack configuration data 524.

In some examples, network interface and other embodiments described herein can be used in connection with a base station (e.g., 3G, 4G, 5G and so forth), macro base station (e.g., 5G networks), picostation (e.g., an IEEE 802.11 compatible access point), or nanostation (e.g., for Point-to-MultiPoint (PtMP) applications).

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.
A processor can be one or a combination of a hardware state machine, digital control logic, central processing unit, or any hardware, firmware and/or software elements.

Some examples may be implemented using or as an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function.
The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

The appearances of the phrase “one example” or “an example” are not necessarily all referring to the same example or embodiment. Any aspect described herein can be combined with any other aspect or similar aspect described herein, regardless of whether the aspects are described with respect to the same figure or element. Division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.

Some examples may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms “connected” and/or “coupled” may indicate that two or more elements are in direct physical or electrical contact with each other.
The term "coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

The terms “first,” “second,” and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term “asserted” used herein with reference to a signal denotes a state of the signal, in which the signal is active, and which can be achieved by applying any logic level, either logic 0 or logic 1, to the signal. The terms “follow” or “after” can refer to immediately following or following after some other event or events. Other sequences of steps may also be performed according to alternative embodiments. Furthermore, additional steps may be added or removed depending on the particular applications. Any combination of changes can be used and one of ordinary skill in the art with the benefit of this disclosure would understand the many variations, modifications, and alternative embodiments thereof.

Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Additionally, conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, should also be understood to mean X, Y, Z, or any combination thereof, including “X, Y, and/or Z.”

Illustrative examples of the devices, systems, and methods disclosed herein are provided below.
An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below.
A system for error correction code (ECC) management of write-once memory (WOM) codes includes, for example, a controller for selecting between one of a WOM (write-once memory) mode and an ECC (error correction code) mode. A codec is arranged to operate in the selected mode. The codec, while operating in the ECC mode, is arranged to identify a bit position of at least one bit error in response to ECC parity bits of a first received data word. The codec, while operating in the WOM mode, is arranged to receive a WOM-encoded word from an addressed location in a WOM device, to receive a second received data word to be encoded and written to the addressed location, and to generate a WOM-encoded word for writing to the addressed location in the WOM device. The WOM-encoded word for writing to the addressed location is optionally ECC encoded.
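The WOM-mode behavior described above (rewriting an addressed location by only setting additional bits of write-once cells) can be illustrated with the classic two-write WOM code of Rivest and Shamir, which stores two data bits twice in three write-once cells. This sketch is illustrative only, not the table-driven codec recited below, and all names in it are hypothetical.

```python
# Classic <2,3> write-once memory (WOM) code (Rivest & Shamir, 1982):
# two data bits can be written twice into three write-once cells, where an
# erased cell (0) may be changed to 1 but never back to 0.

FIRST_GEN = {0: 0b000, 1: 0b001, 2: 0b010, 3: 0b100}   # first-write codewords
DECODE_FIRST = {cw: v for v, cw in FIRST_GEN.items()}

def wom_write(value, current, second_write):
    """Return the new cell state encoding `value`, given cell state `current`."""
    if not second_write:
        return FIRST_GEN[value]
    if DECODE_FIRST.get(current) == value:
        return current                      # same value: write nothing
    new = ~FIRST_GEN[value] & 0b111         # second-generation codeword
    assert current & ~new == 0              # only 0 -> 1 transitions needed
    return new

def wom_decode(cells):
    """Recover the 2-bit value from the 3 cells, whichever generation."""
    if bin(cells).count("1") <= 1:
        return DECODE_FIRST[cells]          # first generation
    return DECODE_FIRST[~cells & 0b111]     # second generation: complement
```

For example, writing the value 1 and then the value 3 changes the cells 000 → 001 → 011: the second write only sets an additional bit, and both states decode correctly.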
1. A circuit, including:
a controller operable to select one of a WOM (write-once memory) mode and an ECC (error correction code) mode; and
a codec responsive to the controller, the codec being operable in the ECC mode to identify a bit position of at least one bit error in response to ECC parity bits of a first received data word, and operable in the WOM mode to receive a WOM-encoded word from an addressed location in a WOM device, to receive a second received data word, and to generate a WOM-encoded word for writing to the addressed location in the WOM device, wherein the generated WOM-encoded word is generated in response to the second received data word and includes information from the second received data word.

2. The circuit of claim 1, wherein the generated WOM-encoded word is written to the addressed location in the WOM device by changing, from a block-initialization state, selected bits of a WOM-encoded word previously written to the addressed location of the WOM device.

3. The circuit of claim 1, wherein the first received data word is the same received data word as the second received data word.

4. The circuit of claim 1, wherein the codec is operable in the ECC mode to generate a syndrome based on the first received data word.

5. The circuit of claim 4, wherein the codec is operable in the ECC mode to generate the syndrome based on a check matrix.

6. The circuit of claim 5, wherein the codec is operable in the ECC mode to address a syndrome table according to the generated syndrome.

7. The circuit of claim 6, wherein the codec is operable in the WOM mode to address a portion of the syndrome table according to the second received data word.

8. The circuit of claim 7, wherein the codec includes a counter operable to generate a first candidate data word for addressing a first portion of the syndrome table.

9. The circuit of claim 8, wherein the codec includes a comparator operable to generate a second candidate data word for addressing a second portion of the syndrome table.

10. The circuit of claim 9, wherein the comparator operable to generate the second candidate data word generates the second candidate data word in response to a comparison of a first delta value with the first candidate data word.

11. The circuit of claim 10, wherein the first delta value is generated in response to comparing the second received data word with a decoded WOM word, wherein the decoded WOM word is decoded in response to the WOM-encoded word received from the addressed location of the WOM device.

12. The circuit of claim 9, wherein a second delta word is generated in response to the output of the first portion of the syndrome table and in response to the output of the second portion of the syndrome table.

13. The circuit of claim 12, wherein the second delta word is written to the addressed location in the WOM device in response to comparing the second delta word with the WOM-encoded word received from the addressed location in the WOM device.

14. A system for error correction, including:
a host processor operable to select one of a WOM (write-once memory) mode and an ECC (error correction code) mode;
a WOM device communicatively coupled to the host processor and operable to be block-initialized according to a block-initialization state; and
a codec responsive to the host processor, the codec being operable in the ECC mode to identify a bit position of at least one bit error in response to ECC parity bits of a first received data word, and operable in the WOM mode to receive a WOM-encoded word from an addressed location in the WOM device, to receive a second received data word, and to generate a WOM-encoded word for writing to the addressed location in the WOM device, wherein the generated WOM-encoded word is generated in response to the second received data word and includes information from the second received data word.

15. The system of claim 14, wherein the codec is operable in the ECC mode to generate a syndrome based on the first received data word and to address a syndrome table based on the generated syndrome.

16. The system of claim 15, wherein the codec is operable in the WOM mode to address a portion of the syndrome table according to the second received data word.

17. The system of claim 16, wherein a second delta word is generated in response to the output of a first individually addressed portion of the syndrome table and in response to the output of a second individually addressed portion of the syndrome table, wherein the first individually addressed portion is addressed separately from the second individually addressed portion, and wherein the second delta word is written to the addressed location in the WOM device in response to comparing the second delta word with the WOM-encoded word received from the addressed location of the WOM device.

18. A method for error correction, including:
selecting one of a WOM (write-once memory) mode and an ECC (error correction code) mode;
when operating in the ECC mode, identifying a bit position of at least one bit error in response to ECC parity bits of a first received data word; and
when operating in the WOM mode, generating a WOM-encoded word for writing to an addressed location in a WOM device, wherein the generated WOM-encoded word is generated in response to a WOM-encoded word received from the addressed location of the WOM device, and the generated WOM-encoded word is generated in response to information from a second received data word.

19. The method of claim 18, wherein a bit position corresponding to at least one bit error of the ECC parity bits of the first received data word is identified in response to an output of a syndrome table.

20. The method of claim 19, wherein the generated WOM-encoded word is generated in response to the output of the syndrome table.
Dual-mode error correction code/write-once memory codec

Background

A computer system includes a processor operable to retrieve, process, and store data in a memory device. Memory devices used in computer systems include different types of memory devices, and different types of memory devices generally have different capabilities and operating characteristics. The type of memory device used in a specific system is selected according to the requirements of the specific application of the computer system. For example, certain system designs require the ability to write data to and read data from non-volatile memory locations. However, due to increased cost and/or reduced performance characteristics, certain memory device solutions (such as electrically erasable read-only memory) are not suitable for certain applications.

Summary of the invention

The problems pointed out above can be solved in a system for dual-mode error correction code (ECC) and write-once memory (WOM) encoding and decoding, which includes, for example, a controller for selecting between a WOM mode and an ECC mode. A codec responsive to the controller is arranged to operate in the selected mode. When the codec operates in the ECC mode, it is arranged to identify the bit position of at least one bit error in response to the ECC parity of a first received data word. When the codec operates in the WOM mode, it is arranged to receive a WOM-encoded word from an addressed location in the WOM device, to receive a second received data word to be encoded and written to the addressed location, and to generate a WOM-encoded word for writing to the addressed location in the WOM device. The WOM-encoded word written to the addressed location is optionally ECC-encoded.

This summary is presented with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
Further, the summary is neither intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to help determine the scope of the claimed subject matter.

Description of the drawings

Fig. 1 shows an exemplary computing system according to an exemplary embodiment of the present disclosure.
Fig. 2 is a block diagram of a processing system including a WOM that manages ECC according to an embodiment of the present disclosure.
Fig. 3 shows symbol-level WOM encoding in an exemplary memory system.
Fig. 4 is a block diagram of a dual-mode ECC/WOM codec operating in ECC mode according to an embodiment of the present disclosure.
Fig. 5 is a data flow diagram of a dual-mode ECC/WOM codec used in ECC mode according to an embodiment of the present disclosure.
Fig. 6 is a block diagram of a dual-mode ECC/WOM codec operating in WOM mode according to an embodiment of the present disclosure.
Fig. 7 is a data flow diagram of a dual-mode ECC/WOM codec used in WOM mode according to an embodiment of the present disclosure.
Fig. 8 is a process flow diagram according to an embodiment of the present disclosure.

Detailed description

The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the disclosed embodiments should not be interpreted as, or otherwise used to, limit the scope of the present disclosure, including the claims. In addition, those skilled in the art will understand that the following description has broad application, and the discussion of any embodiment refers only to an example of that embodiment, and is not intended to imply that the scope of the present disclosure, including the claims, is limited to that embodiment.

Throughout the following description, and in the claims, certain terms are used to indicate specific system components. As understood by those skilled in the art, various names can be used to indicate a component or system.
Therefore, this document does not distinguish between components that differ in name but not in function. Further, a system may be a subsystem of another system. In the following discussion and claims, the terms "including" and "comprising" are used in an open-ended fashion and are therefore interpreted to mean "including, but not limited to...". Moreover, the term "couple" or "couples" (and the like) is intended to describe either an indirect or direct electrical connection. Thus, if a first device couples to a second device, that connection can be through a direct electrical connection or through an indirect electrical connection via other devices and connections. The term "portion" may mean an entire portion or a portion smaller than the entire portion.

Figure 1 shows an exemplary computing system 100 according to certain embodiments of the present disclosure. For example, the computing system 100 is, or is incorporated into, an electronic system 129, such as a computer, an electronic control "box" or display, a communication device (including a transmitter), or any other type of electronic system arranged to generate radio-frequency signals.

In some embodiments, the computing system 100 includes a megacell or system on a chip (SoC), which includes control logic such as a CPU 112 (central processing unit), storage 114 (e.g., random access memory (RAM)), and a power supply 110. The CPU 112 may be, for example, a CISC-type (complex instruction set computer) CPU, a RISC-type (reduced instruction set computer) CPU, an MCU-type (microcontroller unit), or a digital signal processor (DSP).
The storage 114 (which may be, for example, an on-processor cache, an off-processor cache, RAM, flash memory, or disk storage) stores one or more software applications 130 (for example, embedded applications) that, when executed by the CPU 112, perform any suitable function associated with the computing system 100.

The CPU 112 includes memory and logic that store information frequently accessed from the storage 114. The computing system 100 is generally controlled by a user using a UI (user interface) 116, which provides output to and receives input from the user during the execution of the software application 130. The output is provided using the display 118, indicator lights, a speaker, vibration, and the like. The input is received using audio and/or video inputs (e.g., using voice or image recognition) and electrical and/or mechanical devices such as keypads, switches, proximity detectors, gyroscopes, accelerometers, and the like. The CPU 112 is coupled to an I/O (input-output) port 128, which provides an interface configured to receive input from (and/or provide output to) the networked device 131. The networked device 131 can include any device capable of peer-to-peer and/or networked communication with the computing system 100. The computing system 100 can also be coupled to peripherals and/or computing devices, including tangible non-transitory media (such as flash memory) and/or cabled or wireless media. These and other input and output devices are selectively coupled to the computing system 100 by external devices using wireless or wired connections.
The storage 114 can be accessed by, for example, the networked device 131.

The CPU 112 is coupled to an I/O (input-output) port 128, which provides an interface configured to receive input from (and/or provide output to) a peripheral and/or computing device 131, including tangible (e.g., "non-transitory") media (e.g., flash memory) and/or cabled or wireless media (e.g., a Joint Test Action Group (JTAG) interface). These and other input and output devices are selectively coupled to the computing system 100 by external devices using wireless or wired connections. The CPU 112, storage 114, and power supply 110 can be coupled to an external power source (not shown) or to a local power source (e.g., a battery, solar cell, alternator, inductive field, fuel cell, capacitor, and the like).

The computing system 100 includes a memory 138. The memory 138 is suitable for relatively fast memory accesses and is generally formed using solid-state memory devices. Such solid-state memory devices include a WOM (write-once memory) 140 that manages ECC (error correction code). The WOM 140 is, for example, a memory that can generally be written once (or a relatively small number of times) before being discarded or erased.

Write accesses of the WOM 140 that manages ECC are generally faster than an erase cycle (if any) of the WOM 140 that manages ECC, and in one embodiment a write access can change a bit position in the WOM 140 that manages ECC from the erased state to the written state (for example, from "0" to "1"). The erased state generally depends on the selected technology, and thus can be "0" or "1", with the written state generally being the opposite of the erased state. (Some memory devices can store multiple bits of information in a single memory cell.
In this case, a written bit includes one or more bits of information having a state opposite to the erased state.)

The WOM 140 that manages ECC is written using WOM codes for efficiently writing to the WOM, so that the written WOM can be written multiple times without (for example, block) erasure. The WOM 140 that manages ECC can be used to provide cost-effective non-volatile memory (NVM) with limited reprogramming capability and/or an increased number of write/erase cycles (e.g., compared with conventional NVM solutions).

The memory 138 includes an ECC-WOM (dual-mode) codec (encoder/decoder) 142. The codec 142 is operable to encode/decode ECC codes and to encode/decode WOM codes. As discussed below, the codec 142 performs an ECC operation by calculating a syndrome using a syndrome calculation block and searching a syndrome table to locate errors. The codec 142 performs a WOM operation by using the syndrome calculation block as a decoder for the WOM code, and reuses the syndrome table during the WOM encoding process. Accordingly, the codec 142 is operable to decode WOM-encoded, ECC-decoded data words (e.g., received from a uniquely addressed location in the WOM device).

Fig. 2 is a block diagram of a processing system including a WOM that manages ECC according to an embodiment of the present disclosure. In summary, the processing system 200 includes an MCU 204 and a memory controller 210. The MCU 204 and the memory controller 210 are generally arranged on a common substrate 202.
The memory controller 210 is communicatively coupled to the MCU 204 and operatively manages memory accesses for memory devices including (at least) the WOM 140 that manages ECC, as well as a RAM 292, a PROM (programmable read-only memory) 294, and an optional EEPROM (electrically erasable read-only memory) 296 (any of which is optionally formed using a substrate different from the substrate 202).

During operation, the memory accesses provided by the memory controller 210 include write operations and read operations. Generally speaking, data transferred in a write operation is transferred in the top-to-bottom direction as shown in FIG. 2, and data of a read operation is transferred in the bottom-to-top direction as shown in FIG. 2. Accordingly, the main interface 220 is arranged to select (e.g., in response to a system address provided with a memory access command) a memory device to write data to or read data from.

The memory controller 210 includes a codec 142 for encoding and decoding ECC and WOM codes. A write-once memory (WOM) code allows the memory to be written multiple times without erasing, which enhances the programmability of otherwise write-limited devices and/or enhances the write/erase endurance of the WOM 140 that manages ECC. The codec 142 is a low-complexity codec that supports (for example, Hamming-type) ECC and WOM codec functions by sharing components between the ECC manager 250 and the WOM manager 260.

The codec 142 includes an ECC controller such as the ECC manager 250. During a write operation, the ECC manager 250 is operable to apply an error correction code to data for writing to the WOM 140 that manages ECC.
During a read operation, the ECC manager 250 is operable to evaluate the retrieved data and, if necessary, to perform corrective actions in response to the evaluation (for example, correcting the retrieved data using ECC-encoded data read from the WOM 140 that manages ECC via the WOM manager 260).

The codec 142 includes a WOM manager 260. During a write operation, the WOM manager 260 is operable to encode data (for example, data encoded by the ECC manager 250 using ECC encoding) using a WOM code for writing WOM- (and ECC-) encoded data to the WOM 140 that manages ECC. During a read operation, the WOM manager 260 is operable to decode WOM-encoded data from the WOM 140 that manages ECC. After decoding the WOM-encoded data, the decoded data is transferred to the ECC manager 250 to be further decoded according to the ECC encoding used to initially encode the data written to the WOM 140 that manages ECC.

The memory controller 210 includes a memory device interface 270. During a write operation, the memory device interface 270 is operable to write encoded data (e.g., data encoded by the ECC manager 250 using ECC encoding and by the WOM manager 260 using a WOM code) to the WOM 140 that manages ECC. During a read operation, the memory device interface 270 is operable to read encoded data from the WOM 140 that manages ECC. The memory device interface 270 is also operable to perform a block initialization routine on the WOM 140 that manages ECC (for example, block-erasing the WOM 140 that manages ECC so that all addressed memory locations are erased to a logic 0 state). Generally, the block initialization routine requires more time to execute than an individual read or write cycle of the WOM 140 that manages ECC.

In various embodiments, the WOM manager 260 is operable to encode payload data into WOM-encoded data such that the ECC manager 250 encodes the WOM-encoded data.
Similarly, the ECC manager 250 is operable to decode ECC-encoded data retrieved from the memory, after which the WOM manager 260 decodes the WOM-encoded data to retrieve the initially encoded payload data.

WOM encoding can be implemented using n-bit symbols that are written to the WOM memory a limited number of times. For example, Table 1 shows a WOM code for 2-bit symbols that can be written to the WOM twice (for example, before a memory erase is required).

Table 1

  2-bit symbol | 3-bit WOM code (first write) | 3-bit WOM code (second write)
  00           | 000                          | 111
  01           | 100                          | 011
  10           | 010                          | 101
  11           | 001                          | 110

Each row of Table 1 shows a 2-bit symbol WOM-encoded into a 3-bit field. When the WOM-encoded data is written to the WOM memory for the first time, at most one bit is set in the encoded data. When the WOM-encoded data is written to the WOM memory for the second time, at least two of the three bits are set in the encoded data. Accordingly, the WOM manager 260 can determine the number of times a location has been written by reading the data stored in the WOM location (used to store the WOM-encoded symbol), and, for example, without relying on an independent counter for each memory location.

Figure 3 shows symbol-level WOM encoding in an exemplary memory system. In summary, the memory system 300 includes a symbol space 302 and a WOM-code-encoded memory 304. The symbol space 302 includes a 2-bit symbol 310 having, for example, the value "00".

In operation 312, the symbol 310 is encoded according to Table 1. (It is understood that the principles and techniques described herein can be used for n-bit symbols and are not limited to 2-bit symbols.) The WOM-code-encoded memory 304 includes a 3-bit value ("000") 320 for storing the encoded symbol 312 (for example, to simplify the description, the erased bit value of the WOM-code-encoded memory 304 is "0").
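The two-write behavior of Table 1 can be sketched in a few lines of Python (a minimal illustration: the table values come from Table 1, but the function names and the popcount-based generation test are hypothetical, not part of the disclosed circuit):

```python
# WOM code of Table 1: each 2-bit symbol maps to a 3-bit field, with a
# separate mapping for the first and second writes of a location.
FIRST_WRITE  = {0b00: 0b000, 0b01: 0b100, 0b10: 0b010, 0b11: 0b001}
SECOND_WRITE = {0b00: 0b111, 0b01: 0b011, 0b10: 0b101, 0b11: 0b110}

def wom_generation(stored):
    """Infer how many times a location was written from its popcount:
    0 set bits -> never written, 1 set bit -> written once, else twice."""
    ones = bin(stored).count("1")
    return 0 if ones == 0 else (1 if ones == 1 else 2)

def wom_encode(symbol, stored):
    """Pick the first- or second-write mapping based on the stored value."""
    table = FIRST_WRITE if wom_generation(stored) == 0 else SECOND_WRITE
    return table[symbol]

def wom_decode(code):
    """Recover the 2-bit symbol from either generation's 3-bit code."""
    for table in (FIRST_WRITE, SECOND_WRITE):
        for symbol, c in table.items():
            if c == code:
                return symbol
    raise ValueError("invalid WOM codeword")
```

Note that for a second write of a different symbol, the new code only adds set bits relative to the stored code, so no erase is needed between the two writes.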
The memory 304 encoded by the WOM code is prone to bit errors, which may cause data loss.

In operation 322, an error occurs in the least significant bit of the memory location 320. Whether or not error correction codes are used on the encoded data, single or multiple bit errors may occur in the encoded data: the strength of the error correction code determines the degree of error in the encoded data that can be corrected in the decoded data. Accordingly, the 3-bit value 320 erroneously becomes the 3-bit value ("001") 330, which represents a 1-bit error.

In operation 332, the 3-bit value ("001") 330 is read and decoded according to Table 1, so that the 2-bit symbol (representing the decoded 3-bit value 330) has the value "11". The value "11" in the example represents a 2-bit error in the symbol, although only a 1-bit error occurred in the memory 304 encoded by the WOM code.

Fig. 4 is a block diagram of a dual-mode ECC/WOM codec operating in ECC mode according to an embodiment of the present disclosure. The ECC/WOM codec 400 includes a decoder (DEC) 410, an exclusive-OR (XOR) comparator 420, an XOR comparator 430, an m-bit counter 440, syndrome tables 450 and 460 (discussed below, which operate collectively as a single table, or separately as two individually addressable tables), an OR gate 470, a controller 480, and a mode selector 490. The mode selector 490 is operable to control the operation of the ECC/WOM codec 400 according to the selected mode. For example, a processor (such as the CPU 112) is communicatively coupled to the mode selector 490 to place the ECC/WOM codec 400 in the ECC mode or the WOM mode.

Generally, the ECC/WOM codec 400 is operable to encode and decode data words stored in memory. In one embodiment, at most one (correctable) bit error is expected in the ECC/WOM codec 400 (other embodiments may handle more than one correctable error).
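The error-amplification example of Fig. 3 can be reproduced directly from Table 1 (an illustrative sketch; the DECODE map below simply inverts both columns of Table 1):

```python
# Inverse of Table 1: both the first-write and the second-write code of a
# symbol decode back to that symbol.
DECODE = {0b000: 0b00, 0b111: 0b00,
          0b100: 0b01, 0b011: 0b01,
          0b010: 0b10, 0b101: 0b10,
          0b001: 0b11, 0b110: 0b11}

stored = 0b000               # symbol "00" after the first write (value 320)
corrupted = stored ^ 0b001   # operation 322: 1-bit error in the LSB
decoded = DECODE[corrupted]  # operation 332: decodes as "11"
```

A single flipped memory bit thus produces a 2-bit error in the decoded symbol, which is the amplification effect described above.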
The ECC mode of the ECC/WOM codec 400 is described immediately below, and the WOM mode of the ECC/WOM codec 400 is described below with respect to FIGS. 6 and 7.

In the ECC mode, the codec 400 includes a syndrome calculation block that is operable to calculate syndromes in response to received ECC-encoded words (e.g., previously stored in a memory). As discussed below with reference to the following figure, the calculated syndrome is used as an index into the syndrome table 450 to locate errors (if any) in the received ECC-encoded word. For example, in response to detecting an error in the received ECC-encoded word, the signal "err" is asserted. As discussed below with respect to FIG. 5, the codec 400 is operable to determine the bit position of the detected error so that the bit with the detected error can be corrected by toggling its value.

Fig. 5 is a data flow diagram of a dual-mode ECC/WOM codec operating in ECC mode according to an embodiment of the present disclosure. In summary, the data flow diagram 500 shows the matrix operations of the dual-mode ECC/WOM codec 400 operating in a (e.g., Hamming) ECC decoding mode. For each received codeword, a syndrome is calculated in response to the value of the received codeword. Errors in the received codeword are located by indexing the syndrome table in response to the calculated syndrome. The data flow diagram 500 includes a received codeword 502, a check matrix 510, a syndrome matrix 520, a syndrome index 522, and an error vector matrix 530.

The received codeword matrix 502 is a one-dimensional matrix d[1] to d[15] of received codeword bits (for example, a received data vector), in which each bit is prone to bit errors.
Bits d[1] to d[4] are initially encoded as data bits, and bits d[5] to d[15] are initially encoded as parity bits.

The check matrix 510 is a 4×15 matrix with rows 512, 514, 516, and 518, where each row is associated with a respective data bit (for example, bits d[1] to d[4]) of the received codeword. Columns c[1] to c[4] each have a single set bit, which forms an association with a particular data bit of the received codeword matrix 502. Columns c[5] to c[15] each have one or more set bits, which indicate which of the data bits of the received codeword 502 are used to generate the parity bit for each corresponding column c[5] to c[15].

The syndrome matrix 520 is generated by vector-multiplying the received codeword matrix 502 and the check matrix 510. The syndrome matrix 520 indicates, for example, whether the received codeword matrix 502 is error-free (for example, when all bits of the syndrome matrix are 0) and, if not, which column of the received codeword matrix 502 contains the bit error.

When the syndrome matrix 520 includes non-zero bit values, the syndrome matrix 520 is used to address the index 522 to determine which column of the received codeword matrix 502 contains the bit error. For example, the error vector matrix 530 includes (ECC) columns e[1] to e[15]. A non-zero value in each of the columns indicates a bit error of a particular bit within the received codeword matrix 502. Accordingly, the value of the syndrome matrix 520 is used to select a particular row within the error vector matrix 530, where the position of the non-zero bit value in the selected row indicates the particular column in which the detected bit error occurred. Each row of the error vector matrix 530 is a "coset leader", which in coding theory denotes a word with the smallest number of non-zero entries.

Fig. 6 is a block diagram of a dual-mode ECC/WOM codec operating in WOM mode according to an embodiment of the present disclosure.
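The syndrome-lookup decoding of Fig. 5 can be sketched with a standard single-error-correcting Hamming construction (a sketch under assumptions: here column j of the check matrix is simply the 4-bit binary value j, so the syndrome value directly names the erroneous column; the patent's specific matrix 510 and the table layout of 522/530 may differ):

```python
# Illustrative single-error-correcting Hamming(15,11)-style syndrome decode.
# Column j of the check matrix is the 4-bit binary representation of j,
# a textbook construction, not necessarily the patent's matrix 510.
H_COLUMNS = list(range(1, 16))

def syndrome(codeword_bits):
    """codeword_bits[j-1] is bit d[j]; returns the 4-bit syndrome (0 = no error)."""
    s = 0
    for j, bit in zip(H_COLUMNS, codeword_bits):
        if bit:
            s ^= j          # XOR of the check-matrix columns of the set bits
    return s

def correct(codeword_bits):
    """Locate and toggle the erroneous bit, as the syndrome-indexed table does."""
    s = syndrome(codeword_bits)
    if s:                   # non-zero syndrome names the erroneous column
        codeword_bits = list(codeword_bits)
        codeword_bits[s - 1] ^= 1
    return codeword_bits
```

In this construction the syndrome plays the role of the index 522: it selects which of the 15 columns carries the detected bit error.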
Generally speaking, the ECC/WOM codec 400 operates in the WOM mode to encode and decode data words stored in memory according to syndrome calculations, where up to one bit error (correctable by the associated parity bits) is expected.

The ECC/WOM codec 400 operating in the WOM mode slightly modifies (for example, "reuses") the syndrome calculation block used when operating in the ECC mode (which includes, for example, the syndrome tables 450 and 460 in accordance with the Hamming decoding operation). Because a Hamming decoder with code parameter (size) "m" can also encode/decode a WOM code with symbol size mWOM = m - 1, when the size of the data written to and read from the WOM is less than "m", the WOM encoding/decoding process does not require the entire syndrome calculation block. (As discussed below with reference to Figure 7, the symbol for the WOM mode is 3 bits long, while the symbol for the ECC mode is 4 bits long.)

In the WOM mode, the decoder 410 is operable to decode a WOM-encoded word (e.g., "OldC") received from WOM storage and, in response to such decoding, to generate a decoded data word ("OldD"). A new data word ("NewD") is received, which carries information to be written to (for example) the same WOM storage location from which OldC was read. In region 602, a search-based process is initiated to determine candidate data words, each candidate data word being generated in response to the NewD word, where (at least) one of the candidate words is suitable for writing to the same WOM storage location.

The OldD and NewD signals are compared by the comparator 420, which is a bitwise XOR gate operable to generate a first delta signal value (e.g., "delta1") in response.
The m-bit counter 440 is operable to generate various count values that are iteratively processed and, for example, compared with the first delta signal value during the search process so as to determine acceptable candidates for WOM encoding (for example, as described below, after iteratively comparing the various count values with the first delta signal value and processing the comparison results). Each generated count value is generally different from the previous count value, and the count values need not be generated as a sequence of consecutive values.

The count generated by the m-bit counter 440 is also used as the first candidate data word, which provides an index (X1) for addressing the syndrome table 460. The syndrome table 460 is operable, in response to the first candidate data word X1 in the WOM mode, to generate a first WOM-encoded candidate X1C.

The comparator 430 is operable to compare the first delta signal value with the count generated by the m-bit counter 440 to generate a candidate data word X2. The candidate data word X2 serves as an index for addressing the syndrome table 450. The syndrome table 450 is operable to generate a second WOM-encoded candidate X2C in response to the candidate data word X2 in the WOM mode.

The success of each iteration of the candidate search process is evaluated in region 604. For example, the first WOM-encoded candidate X1C and the second WOM-encoded candidate X2C are logically ORed (for example, by the OR gate 470) to generate a second delta signal value ("delta2"). The second delta signal value is logically ANDed (for example, by the controller 480) with OldC (for example, the value of the previously programmed WOM code read from the WOM storage location).
If the result of the operation of the controller 480 contains all 0s, it is determined that a suitable delta candidate for writing to the WOM storage location has been found (for example, see the signal "found_delta"), and the search process need not iterate further. The suitable delta candidate (X1C) is then written to the addressed WOM storage location.

If the result of the operation of the controller 480 includes a non-zero value, it is determined that a suitable delta candidate for writing to the WOM storage location has not been found, and at least one subsequent iteration of the search process is indicated. In each subsequent iteration of the process loop, a different count value is generated (e.g., by incrementing the counter), which is used to generate further candidate values as discussed above.

The mode selector 490 is operable to selectively place the codec 400 in, for example, the ECC mode (e.g., in which ECC encoding and/or ECC decoding functions are performed) or the WOM mode (e.g., in which WOM encoding and/or WOM decoding functions are performed). Portions of the codec 400 (e.g., the decoder 410 and the syndrome table 450) are "reused" (e.g., operable in both modes), which, for example, reduces the design complexity that would otherwise be required to implement a separate ECC decoder. The mode selector 490 can be implemented in hardware, software, or a combination of both, wherein the implementation is formed on a common or separate substrate.

Fig. 7 is a data flow diagram of a dual-mode ECC/WOM codec operating in WOM mode according to an embodiment of the present disclosure. In summary, the data flow diagram 700 shows the matrix operations of a dual-mode ECC/WOM codec operating in WOM mode.

In the WOM mode, a selected part of the check matrix 510 is used. For example, a received WOM-encoded word of 8 bits is received. Accordingly, three rows (for example, rows 512, 514, 516) of columns c[1] to c[8] are selected in the WOM mode.
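The candidate search of regions 602/604 in Fig. 6 can be sketched structurally (a sketch under assumptions: the syndrome-table contents passed in are placeholders, and the value returned for writing here is the second delta word delta2, following claim 13, while the text above also mentions writing X1C):

```python
# Structural sketch of the WOM-mode candidate search loop.
# table_450/table_460 stand in for the syndrome tables; their contents
# below are illustrative, not the patent's actual tables.

def find_delta(old_c, old_d, new_d, table_450, table_460, m):
    delta1 = old_d ^ new_d                 # comparator 420 (bitwise XOR)
    for count in range(1 << m):            # m-bit counter 440
        x1 = count                         # first candidate data word
        x2 = delta1 ^ count                # comparator 430
        x1c = table_460[x1]                # first WOM-encoded candidate
        x2c = table_450[x2]                # second WOM-encoded candidate
        delta2 = x1c | x2c                 # OR gate 470
        if delta2 & old_c == 0:            # controller 480: only 0->1 writes
            return delta2                  # "found_delta" asserted
    return None                            # no candidate; block erase needed
```

The acceptance test `delta2 & old_c == 0` captures the WOM constraint: the candidate may only set bits that are still in the erased state at the addressed location.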
Columns c[1] to c[3] (in the WOM mode) each have a single set bit, which forms an association with a particular data bit of the received codeword matrix. Columns c[4] to c[7] each have one or more set bits, which indicate which of the data bits of the received codeword 502 are used to generate the parity bit for each corresponding column c[4] to c[7].

A syndrome matrix X1 (as discussed above with respect to FIG. 6) is generated by vector-multiplying the received codeword matrix 502 and the WOM-mode-selected portion of the check matrix 510. The syndrome matrix X1 is used to address the first part 702 of the index 522. In a similar manner, the syndrome matrix X2 (also discussed above with respect to FIG. 6) is used to address the second part 704 of the index 522.

In the WOM mode, different sets of selected columns are used to generate the first WOM-encoded candidate X1C and the second WOM-encoded candidate X2C. For example, the first WOM-encoded candidate X1C is associated with the upper 8 rows via column e[1] (shared with ECC-mode column e[2]), column e[2] (shared with ECC-mode column e[3]), column e[3] (shared with ECC-mode column e[4]), column e[4] (shared with ECC-mode column e[6]), column e[5] (shared with ECC-mode column e[7]), column e[6] (shared with ECC-mode column e[12]), and column e[7] (shared with ECC-mode column e[10]).

As another example, the second WOM-encoded candidate X2C is associated with the lower 8 rows via column e[1] (shared with ECC-mode column e[5]), column e[2] (shared with ECC-mode column e[9]), column e[3] (shared with ECC-mode column e[15]), column e[4] (shared with ECC-mode column e[11]), column e[5] (shared with ECC-mode column e[14]), column e[6] (shared with ECC-mode column e[13]), and column e[7] (shared with ECC-mode column e[8]).

Fig. 8 is a process flow diagram according to an embodiment of the present disclosure.
The process flow starts at terminal 802 and proceeds to operation 810. In operation 810, an operating mode is selected; for example, one of a WOM (write-once memory) mode and an ECC (error correction code) mode is selected. Because the WOM-encoded words read from an addressed location in the WOM device may also be ECC-encoded, the modes are selected (e.g., for decoding) in an order compatible with the order of the coding schemes used to initially encode the word. The process flow proceeds to operation 820.

In operation 820, an ECC decoding operation is performed, for example, on a codeword received from the WOM device. The process flow proceeds to operation 830.

In operation 830, a WOM encoding/decoding operation is performed when, for example, a programmed word previously written to an addressed location in the WOM device is to be overwritten. For example, the WOM encoding/decoding operation is performed by reading and decoding the programmed word previously written to the addressed location in the WOM device. The decoded (old) word is used in conjunction with the new word to generate a WOM code suitable for writing to the addressed location in the WOM device. The process flow proceeds to operation 840.

In operation 840, the generated word having the WOM code suitable for writing to the addressed location in the WOM device is stored in the WOM device. Generally, the WOM device is block-initialized (where each WOM bit is set or cleared to the same logic state) and is written by changing bits from the block-initialized state to the written (for example, programmed) state, which is opposite to the block-initialized state. As discussed above, by using WOM encoding, a particular WOM memory location can be overwritten at least once. The process flow proceeds to operation 850.

In operation 850, a word stored in the WOM is retrieved and decoded, and the decoded word is evaluated.
For example, when the WOM-decoded symbol and ECC bits read from the WOM device indicate an error, the ECC controller is operable to perform corrective actions in response to the evaluation (for example, correcting bit errors, generating system interrupts, replacing faulty locations of the WOM device with spare storage locations, and the like). The process flow proceeds to terminal 899, where the process flow terminates. The various embodiments described above are provided by way of example only, and should not be construed as limiting the appended claims. Those skilled in the art will readily recognize that various modifications and changes may be made without following the exemplary embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the following claims.
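The overwrite capability that operations 830 and 840 rely on can be illustrated with the classic Rivest-Shamir write-once memory code, which stores 2 data bits in 3 write-once cells and permits one overwrite. This is a well-known illustrative scheme, not necessarily the code used by the device described above:

```python
# First-generation codewords have weight <= 1; second-generation
# codewords are their complements (weight >= 2). Cells only go 0 -> 1.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
SECOND = {d: tuple(1 - b for b in c) for d, c in FIRST.items()}

def wom_write(cells, data):
    """Write 2 data bits into 3 write-once cells without clearing any bit."""
    for table in (FIRST, SECOND):
        target = table[data]
        if all(c <= t for c, t in zip(cells, target)):
            return target
    raise ValueError("cells exhausted for this data value")

def wom_read(cells):
    """Weight >= 2 means a second-generation write has occurred."""
    table = SECOND if sum(cells) >= 2 else FIRST
    return {c: d for d, c in table.items()}[tuple(cells)]

cells = wom_write((0, 0, 0), (1, 0))   # first write
cells = wom_write(cells, (1, 1))       # overwrite without erasing
print(wom_read(cells))                 # -> (1, 1)
```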
In one embodiment, a floating body field-effect transistor includes a pair of source/drain regions having a floating body channel region received therebetween. The source/drain regions and the floating body channel region are received over an insulator. A gate electrode is proximate the floating body channel region. A gate dielectric is received between the gate electrode and the floating body channel region. The floating body channel region has a semiconductor SixGe(1-x)-comprising region. The floating body channel region has a semiconductor silicon-comprising region received between the semiconductor SixGe(1-x)-comprising region and the gate dielectric. The semiconductor SixGe(1-x)-comprising region has greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising region. Other embodiments are contemplated, including methods of forming floating body field-effect transistors.
CLAIMS: 1. A floating body field-effect transistor comprising: a pair of source/drain regions having a floating body channel region received therebetween, the source/drain regions and the floating body channel region being received over an insulator; a gate electrode proximate the floating body channel region; a gate dielectric received between the gate electrode and the floating body channel region; the floating body channel region comprising a semiconductor SixGe(1-x)-comprising region; and the floating body channel region comprising a semiconductor silicon-comprising region received between the semiconductor SixGe(1-x)-comprising region and the gate dielectric, the semiconductor SixGe(1-x)-comprising region having greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising region. 2. The floating body field-effect transistor of claim 1 wherein the semiconductor silicon-comprising region is void of Ge. 3. The floating body field-effect transistor of claim 1 wherein the semiconductor silicon-comprising region comprises Ge. 4. The floating body field-effect transistor of claim 3 wherein Ge quantity in the semiconductor silicon-comprising region is less than about 10 atomic per cent. 5. The floating body field-effect transistor of claim 4 wherein Ge quantity in the semiconductor silicon-comprising region is less than about 1 atomic per cent. 6. The floating body field-effect transistor of claim 4 wherein Ge quantity in the semiconductor silicon-comprising region is less than about 0.1 atomic per cent. 7. The floating body field-effect transistor of claim 1 wherein x is at least 0.5. 8. The floating body field-effect transistor of claim 7 wherein x is at least 0.7. 9. The floating body field-effect transistor of claim 7 wherein x is no greater than 0.85. 10. The floating body field-effect transistor of claim 7 wherein x is no greater than 0.8. 11. The floating body field-effect transistor of claim 7 wherein x is from 0.7 to 0.85. 12.
The floating body field-effect transistor of claim 1 wherein x is from 0.5 to 0.6, and the semiconductor SixGe(1-x)-comprising region has a thickness of from about 20 Angstroms to about 50 Angstroms. 13. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region has a thickness of at least 20 Angstroms. 14. The floating body field-effect transistor of claim 13 wherein the semiconductor SixGe(1-x)-comprising region has a thickness of from about 100 Angstroms to about 600 Angstroms. 15. The floating body field-effect transistor of claim 14 wherein the transistor is partially depleted in operation, and the semiconductor SixGe(1-x)-comprising region has a thickness of from about 300 Angstroms to about 600 Angstroms. 16. The floating body field-effect transistor of claim 14 wherein the transistor is fully depleted in operation, and the semiconductor SixGe(1-x)-comprising region has a thickness of from about 100 Angstroms to about 300 Angstroms. 17. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region has a thickness which is from about 25% to about 75% of total thickness of the floating body channel region. 18. The floating body field-effect transistor of claim 17 wherein the semiconductor SixGe(1-x)-comprising region has a thickness which is about equal to that of the semiconductor silicon-comprising region. 19. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region has a thickness which is less than that of the semiconductor silicon-comprising region. 20. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region has a thickness which is greater than that of the semiconductor silicon-comprising region. 21.
The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region has a maximum width which is greater than that of the semiconductor silicon-comprising region. 22. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region and the semiconductor silicon-comprising region have the same maximum widths. 23. The floating body field-effect transistor of claim 1 wherein each of the pair of source/drain regions comprises an elevated source/drain portion and a non-elevated source/drain portion. 24. The floating body field-effect transistor of claim 23 wherein the elevated source/drain portion comprises SixGe(1-x)-comprising material. 25. The floating body field-effect transistor of claim 24 wherein Ge quantity in the elevated source/drain portion is greater than any quantity of Ge within the non-elevated source/drain portion. 26. The floating body field-effect transistor of claim 25 wherein the non-elevated source/drain portion comprises a highest dopant concentration region, at least the highest dopant concentration region of the non-elevated source/drain portion being void of Ge. 27. The floating body field-effect transistor of claim 26 wherein at least the highest dopant concentration region of the non-elevated source/drain portion comprises silicon. 28. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region is received directly physically contacting against the insulator. 29. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region is not received directly physically contacting against the insulator. 30.
The floating body field-effect transistor of claim 1 comprising another semiconductor silicon-comprising region received between the semiconductor SixGe(1-x)-comprising region and the insulator, the semiconductor SixGe(1-x)-comprising region having greater quantity of Ge than any quantity of Ge within the another semiconductor silicon-comprising region. 31. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region comprises laterally outermost sidewalls which directly physically contact against the source/drain regions. 32. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region comprises laterally outermost sidewalls which do not directly physically contact against the source/drain regions. 33. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region comprises laterally outermost sidewalls, insulative material being received between at least some of the laterally outermost sidewalls and the source/drain regions. 34. The floating body field-effect transistor of claim 33 wherein the insulative material is received between all of the laterally outermost sidewalls and the source/drain regions. 35. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region is homogenous at least regarding Ge concentration. 36. The floating body field-effect transistor of claim 1 wherein the semiconductor SixGe(1-x)-comprising region is not homogenous at least regarding Ge concentration. 37. The floating body field-effect transistor of claim 36 wherein one portion of the semiconductor SixGe(1-x)-comprising region has a higher concentration of Ge than does another portion of the semiconductor SixGe(1-x)-comprising region, the another portion being received between the one portion and the insulator. 38.
The floating body field-effect transistor of claim 36 wherein one portion of the semiconductor SixGe(1-x)-comprising region has a higher concentration of Ge than does another portion of the semiconductor SixGe(1-x)-comprising region, the one portion being received between the another portion and the insulator. 39. A floating body field-effect transistor comprising: a pair of source/drain regions having a floating body channel region received therebetween, the source/drain regions and the floating body channel region being received over an insulator; a gate electrode proximate the floating body channel region; a gate dielectric received between the gate electrode and the floating body channel region; and each of the pair of source/drain regions comprising an elevated source/drain portion and a non-elevated source/drain portion, the elevated source/drain portion comprising SixGe(1-x), the non-elevated source/drain portion comprising a highest dopant concentration portion comprising silicon, Ge quantity in the elevated source/drain portion being greater than any quantity of Ge within the highest dopant concentration portion of the non-elevated silicon-comprising source/drain portion. 40. The floating body field-effect transistor of claim 39 wherein the highest dopant concentration portion of the non-elevated source/drain portion is void of Ge. 41. The floating body field-effect transistor of claim 39 wherein the highest dopant concentration portion of the non-elevated source/drain portion comprises Ge. 42. The floating body field-effect transistor of claim 41 wherein Ge quantity in the highest dopant concentration portion of the non-elevated source/drain portion is less than about 10 atomic per cent. 43. The floating body field-effect transistor of claim 42 wherein Ge quantity in the highest dopant concentration portion of the non-elevated source/drain portion is less than about 1 atomic per cent. 44.
A floating body field-effect transistor comprising: a pair of source/drain regions having a floating body channel region received therebetween, the source/drain regions and the floating body channel region being received over an insulator; a gate electrode proximate the floating body channel region; a gate dielectric received between the gate electrode and the floating body channel region; and the floating body channel region comprising a semiconductor SixGe(1-x)-comprising region, a semiconductor silicon-comprising region, and an insulating material region received between the semiconductor SixGe(1-x)-comprising region and the semiconductor silicon-comprising region, the semiconductor SixGe(1-x)-comprising region having greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising region. 45. The floating body field-effect transistor of claim 44 wherein the semiconductor silicon-comprising region is void of Ge. 46. The floating body field-effect transistor of claim 44 wherein the semiconductor silicon-comprising region comprises Ge. 47. The floating body field-effect transistor of claim 46 wherein Ge quantity in the semiconductor silicon-comprising region is less than about 10 atomic per cent. 48. The floating body field-effect transistor of claim 47 wherein Ge quantity in the semiconductor silicon-comprising region is less than about 1 atomic per cent. 49.
A floating body field-effect transistor comprising: a pair of source/drain regions having a floating body channel region received therebetween, the source/drain regions and the floating body channel region being received over an insulator; a gate electrode proximate the floating body channel region; a gate dielectric received between the gate electrode and the floating body channel region; the floating body channel region comprising a semiconductor first silicon-comprising region, a semiconductor second silicon-comprising region, and a semiconductor SixGe(1-x)-comprising region; the semiconductor SixGe(1-x)-comprising region being received between the semiconductor first and second silicon-comprising regions; the semiconductor SixGe(1-x)-comprising region having greater quantity of Ge than any quantity of Ge within each of the semiconductor first and second silicon-comprising regions; and the semiconductor first silicon-comprising region being received directly physically contacting against the insulator and comprising laterally outermost sidewalls, an insulative material being received between at least some of the laterally outermost sidewalls and the source/drain regions. 50. The floating body field-effect transistor of claim 49 wherein the semiconductor first and second silicon-comprising regions are void of Ge. 51. The floating body field-effect transistor of claim 49 wherein the semiconductor first and second silicon-comprising regions consist essentially of p-doped silicon. 52. The floating body field-effect transistor of claim 49 wherein the semiconductor second silicon-comprising region and the semiconductor SixGe(1-x)-comprising region comprise laterally outermost sidewalls which directly physically contact against the source/drain regions. 53.
A floating body field-effect transistor comprising: a pair of source/drain regions having a floating body channel region received therebetween, the source/drain regions and the floating body channel region being received over an insulator; a gate electrode proximate the floating body channel region; a gate dielectric received between the gate electrode and the floating body channel region; and the floating body channel region comprising first and second regions, the second region being received elevationally between the gate dielectric and the first region, the first region comprising laterally outermost sidewalls, insulative material being received contacting directly physically against the laterally outermost sidewalls of the first region. 54. The floating body field-effect transistor of claim 53 wherein each of the pair of source/drain regions is formed over the insulative material. 55. The floating body field-effect transistor of claim 53 wherein each of the pair of source/drain regions comprises an elevated source/drain portion and a non-elevated source/drain portion. 56. The floating body field-effect transistor of claim 53 wherein the first region has a thickness which is greater than that of the second region. 57. The floating body field-effect transistor of claim 53 wherein each of the first and second regions is void of Ge. 58. The floating body field-effect transistor of claim 53 wherein at least one of the first and second regions comprises Ge. 59.
A method of forming a floating body field-effect transistor, comprising: forming a semiconductor SixGe(1-x)-comprising layer over an insulator; forming a semiconductor silicon-comprising layer over and in direct physical contact with the semiconductor SixGe(1-x)-comprising layer, the semiconductor SixGe(1-x)-comprising layer having greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising layer; forming a gate dielectric and a gate electrode over the semiconductor silicon-comprising layer; and using the gate dielectric and the gate electrode at least in part as a mask, ion implanting n-type conductivity enhancing impurity into unmasked portions of the semiconductor silicon-comprising layer and the semiconductor SixGe(1-x)-comprising layer to form a pair of highest dopant concentration n-type source/drain regions comprising the semiconductor silicon-comprising layer and the semiconductor SixGe(1-x)-comprising layer, and forming a floating body channel region between the pair, the floating body channel region comprising the semiconductor silicon-comprising layer and the semiconductor SixGe(1-x)-comprising layer. 60. The method of claim 59 comprising epitaxially growing a semiconductor SixGe(1-x)-comprising material outwardly from the silicon-comprising pair of highest dopant concentration n-type source/drain regions to form elevated source/drain portions comprising the semiconductor SixGe(1-x)-comprising material. 61. The method of claim 59 wherein forming the semiconductor SixGe(1-x)-comprising layer forms the semiconductor SixGe(1-x)-comprising layer to be received directly physically contacting against the insulator. 62. The method of claim 59 wherein forming the semiconductor SixGe(1-x)-comprising layer forms the semiconductor SixGe(1-x)-comprising layer to not be received directly physically contacting against the insulator. 63.
The method of claim 59 comprising forming the semiconductor SixGe(1-x)-comprising layer to be homogenous at least regarding Ge concentration. 64. The method of claim 59 comprising forming the semiconductor SixGe(1-x)-comprising layer to not be homogenous at least regarding Ge concentration. 65. The method of claim 64 wherein forming the semiconductor SixGe(1-x)-comprising layer comprises forming one portion of the semiconductor SixGe(1-x)-comprising layer to have a higher concentration of Ge than does another portion of the semiconductor SixGe(1-x)-comprising layer, the another portion being received between the one portion and the insulator. 66. The method of claim 64 wherein forming the semiconductor SixGe(1-x)-comprising layer comprises forming one portion of the semiconductor SixGe(1-x)-comprising layer to have a higher concentration of Ge than does another portion of the semiconductor SixGe(1-x)-comprising layer, the one portion being received between the another portion and the insulator. 67.
A method of forming a floating body field-effect transistor, comprising: forming a semiconductor SixGe(1-x)-comprising layer over an insulator; forming a semiconductor silicon-comprising layer over and in direct physical contact with the semiconductor SixGe(1-x)-comprising layer, the semiconductor SixGe(1-x)-comprising layer having greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising layer; forming a gate dielectric and a gate electrode over the semiconductor silicon-comprising layer; using the gate dielectric and the gate electrode at least in part as a mask, etching into unmasked portions of the semiconductor silicon-comprising layer and the semiconductor SixGe(1-x)-comprising layer to form a floating body channel region comprising the semiconductor silicon-comprising layer and the semiconductor SixGe(1-x)-comprising layer; and after the etching, epitaxially growing semiconductive silicon-comprising material from laterally outermost sidewalls of at least the silicon-comprising layer to form a pair of source/drain regions. 68. The method of claim 67 comprising epitaxially growing semiconductive silicon-comprising material from laterally outermost sidewalls of both the silicon-comprising layer and the SixGe(1-x)-comprising layer to form the pair of source/drain regions. 69. The method of claim 67 comprising epitaxially growing semiconductive silicon-comprising material from laterally outermost sidewalls of only the silicon-comprising layer to form the pair of source/drain regions. 70. The method of claim 67 wherein forming the semiconductor SixGe(1-x)-comprising layer forms the semiconductor SixGe(1-x)-comprising layer to be received directly physically contacting against the insulator. 71. The method of claim 67 wherein forming the semiconductor SixGe(1-x)-comprising layer forms the semiconductor SixGe(1-x)-comprising layer to not be received directly physically contacting against the insulator. 72.
The method of claim 71 comprising forming the semiconductor SixGe(1-x)-comprising layer over another semiconductor silicon-comprising layer, the semiconductor SixGe(1-x)-comprising region having greater quantity of Ge than any quantity of Ge within the another semiconductor silicon-comprising region. 73. The method of claim 72 comprising epitaxially growing semiconductive silicon-comprising material from laterally outermost sidewalls of the another silicon-comprising layer and from the SixGe(1-x)-comprising layer to form the pair of source/drain regions. 74. The method of claim 67 wherein, prior to the etching, forming first sidewall spacers over sidewalls of the gate electrode; the etching comprises first etching through the semiconductor silicon-comprising layer at least to the semiconductor SixGe(1-x)-comprising layer using the gate dielectric, the gate electrode and the first sidewall spacers at least in part as masking; the etching comprises, after the first etching, forming second sidewall spacers over the first sidewall spacers and over sidewalls of the etched-through semiconductor silicon-comprising layer; the etching comprises, after forming the second sidewall spacers, second etching through at least some of the semiconductor SixGe(1-x)-comprising layer using the gate dielectric, the gate electrode, the first sidewall spacers, and the second sidewall spacers at least in part as masking; after the second etching, forming insulative material over sidewalls of the semiconductor SixGe(1-x)-comprising layer; after forming the insulative material, etching the second sidewall spacers to expose sidewalls of the semiconductor silicon-comprising layer; and after etching the second sidewall spacers to expose sidewalls of the semiconductor silicon-comprising layer, conducting said epitaxially growing. 75. The method of claim 67 comprising: after the etching, selectively etching a portion of the semiconductor SixGe(1-x)-comprising layer relative to the semiconductor silicon-comprising layer; forming insulative material in place of the etched portion; and after forming the insulative material, conducting said epitaxially growing. 76. A method of forming a floating body field-effect transistor, comprising: forming a semiconductor SixGe(1-x)-comprising layer over and in direct physical contact with a semiconductor silicon-comprising material that is received over an insulator; forming a semiconductor silicon-comprising layer over and in direct physical contact with the semiconductor SixGe(1-x)-comprising layer, the semiconductor SixGe(1-x)-comprising layer having greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising layer; forming a gate dielectric and a gate electrode over the semiconductor silicon-comprising layer; and using the gate dielectric and the gate electrode at least in part as a mask, forming a pair of source/drain regions and forming a floating body channel region between the pair of source/drain regions; the floating body channel region comprising the semiconductor silicon-comprising layer, the semiconductor SixGe(1-x)-comprising layer, and the semiconductor silicon-comprising material. 77. The method of claim 76 wherein forming the pair of source/drain regions comprises etching the semiconductor silicon-comprising layer, the semiconductor SixGe(1-x)-comprising layer, and the semiconductor silicon-comprising material, and thereafter epitaxially growing silicon-containing material from sidewalls of the floating body channel region. 78.
A method of forming a floating body field-effect transistor, comprising: forming a semiconductor silicon-comprising layer over an insulator; forming a gate dielectric and a gate electrode over the semiconductor silicon-comprising layer; using the gate dielectric and the gate electrode at least in part as a mask, ion implanting n-type conductivity enhancing impurity into unmasked portions of the semiconductor silicon-comprising layer to form a pair of highest dopant concentration n-type source/drain regions comprising the semiconductor silicon-comprising layer, and forming a floating body channel region between the pair comprising the semiconductor silicon-comprising layer; and epitaxially growing a semiconductor SixGe(1-x)-comprising material outwardly from the pair of highest dopant concentration n-type silicon-comprising source/drain regions to form elevated source/drain portions comprising the semiconductor SixGe(1-x)-comprising material. 79. A method of forming a floating body field-effect transistor, comprising: forming a semiconductor first SixGe(1-x)-comprising layer over an insulator; forming a semiconductor second SixGe(1-x)-comprising layer over the semiconductor first SixGe(1-x)-comprising layer, the second SixGe(1-x)-comprising layer having greater Ge quantity than the first SixGe(1-x)-comprising layer; forming a semiconductor silicon-comprising layer over the second SixGe(1-x)-comprising layer, the second SixGe(1-x)-comprising layer having greater quantity of Ge than any quantity of Ge within the semiconductor silicon-comprising layer; forming a gate dielectric and a gate electrode over the semiconductor silicon-comprising layer; using the gate dielectric and the gate electrode at least in part as a mask, etching into unmasked portions of the semiconductor silicon-comprising layer, the second SixGe(1-x)-comprising layer, and the first SixGe(1-x)-comprising layer to form a floating body channel region comprising at least the semiconductor silicon-comprising layer and the first SixGe(1-x)-comprising layer; replacing at least some of the second SixGe(1-x)-comprising layer with insulative material; and after the replacing, epitaxially growing semiconductive silicon-comprising material from laterally outermost sidewalls of at least the silicon-comprising layer and the first SixGe(1-x)-comprising layer to form a pair of source/drain regions. 80. A method of forming a floating body field-effect transistor, comprising: forming a semiconductor SixGe(1-x)-comprising layer and a semiconductor silicon-comprising layer over an insulator; forming a gate dielectric and a gate electrode over the semiconductor silicon-comprising layer; using the gate dielectric and the gate electrode at least in part as a mask, etching into unmasked portions of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer to form a floating body channel region comprising the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer; after the etching, forming insulative material over outermost lateral sidewalls of only one of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer of the floating body channel region and not over the other of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer of the floating body channel region; and after forming the insulative material, epitaxially growing semiconductive silicon-comprising material from outermost lateral sidewalls of the other of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer of the floating body channel region to form a pair of source/drain regions. 81.
A method of forming a floating body field-effect transistor, comprising: forming a semiconductive material first region over an insulator, insulative material being received contacting directly physically against laterally outermost sidewalls of the first region; forming a semiconductive material second region over and in direct physical contact with the semiconductive material first region and over the insulative material; forming a gate dielectric and a gate electrode over the semiconductive material second region; using the gate dielectric and the gate electrode at least in part as a mask, etching into unmasked portions of the semiconductive material second region to the insulative material to form a floating body channel region comprising the semiconductive material first region and the semiconductive material second region; and after the etching, epitaxially growing semiconductive material from laterally outermost sidewalls of at least the semiconductive material second region to form a pair of source/drain regions. 82. The method of claim 81 wherein forming the semiconductive material first region comprises: depositing a silicon-comprising material over the insulator; etching trenches into the silicon-comprising material to the insulator; and filling the trenches with the insulative material. 83. The method of claim 81 wherein forming the semiconductive material second region comprises epitaxial growth. 84. The method of claim 83 comprising polishing the epitaxially grown semiconductive material second region prior to forming the gate dielectric thereover. 85. The method of claim 81 comprising forming anisotropically etched sidewall spacers over laterally outermost sidewalls of the gate electrode, and using said spacers at least in part as said mask during said etching. 86. The method of claim 81 wherein the pair of source/drain regions is epitaxially grown over the insulative material. 87.
The method of claim 86 wherein the pair of source/drain regions is formed in direct physical contact with the insulative material. 88. The method of claim 81 wherein the pair of source/drain regions is formed to comprise an elevated source/drain portion and a non-elevated source/drain portion. 89. The method of claim 81 comprising forming the first region to have a thickness which is greater than that of the second region. 90. The method of claim 81 wherein each of the first and second regions is formed to be void of Ge. 91. The method of claim 81 wherein at least one of the first and second regions is formed to comprise Ge.
DESCRIPTION FLOATING BODY FIELD-EFFECT TRANSISTORS, AND METHODS OF FORMING FLOATING BODY FIELD-EFFECT TRANSISTORS TECHNICAL FIELD Embodiments disclosed herein pertain to floating body field-effect transistors, and to methods of forming floating body field-effect transistors. BACKGROUND One type of dynamic random access memory (DRAM) includes individual memory cells that include a field-effect transistor and a storage capacitor. As the size of integrated circuitry shrinks, the size of the capacitor also shrinks. Generally, as the storage capacitor shrinks, the quantity of charge and the time for which the charge can be retained decrease as well. Consequently, maintaining an acceptable level of performance of this type of DRAM structure becomes more difficult as the capacitor size decreases. Using capacitor dielectrics having high dielectric constants and increasing capacitor plate surface area through surface roughening, greater vertical dimensions, and other various capacitor shapes have been the conventional approaches to maintaining sufficiently high capacitance. Another type of DRAM cell uses a structure which is void of a discrete storage capacitor. An example of a capacitor-less DRAM consists essentially of only a single-transistor (1T) memory cell. Such DRAM cells use a semiconductor-on-insulator (SOI) structure for storing positive electrical charge in the form of "holes". The stored positive charge reduces the transistor threshold voltage (Vt), which is the voltage applied to the gate at which the channel region between the pair of source/drain regions becomes conductive. Accordingly, binary data states are represented in a 1T memory cell based on whether the transistor is switched "on" or remains "off" in response to a voltage applied to its gate during a memory read operation.
Various SOI 1T DRAM cell structures have been developed based on metal-oxide-semiconductor (MOS) field-effect transistor (FET) devices using a floating SOI channel body in which the holes accumulate. Accordingly, the source/drain regions are n-type, and the channel region is lightly doped p-type. These types of 1T DRAM cells are generally referred to as floating body cells (FBCs) due to the use of a floating SOI body. As accumulated holes lower the voltage at which the channel becomes conductive, a conductive channel is formed in the same floating SOI body in which the holes accumulate upon appropriate voltage application to the gate of the FET device. A data "1" is written by creating holes (for example, by impact ionization), pushing the body potential up to a high level. Conversely, a data "0" is written by extracting holes from the body, which pulls the body potential down to a low level. By grounding the bit line and applying negative voltage to the word line, the body potential level, whether high or low, is held for a certain time. The data can be distinguished using MOSFET current modulated by body potential. The floating SOI channel body can be designed for use as partially depleted semiconductor-on-insulator (PDSOI) or fully depleted semiconductor-on-insulator (FDSOI), which refers to the extent of the formation of the conductive channel within the thickness of the floating SOI body. In the case of FDSOI operation, negative substrate (plate) bias is applied so that the back surface of the semiconductor film accumulates holes. In the case of a partially depleted floating body cell (PDFBC), a neutral volume region exists. Accordingly, the neutral volume region is used in the case of a PDFBC, and a bottom "plane" is used in the case of a fully depleted floating body cell (FDFBC), for the respective hole storage regions representing data states by potential level.
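The write/read scheme described above can be summarized in a toy behavioral model. This is a sketch only: the class name, the voltage values, and the fixed Vt shift per stored-hole state are illustrative assumptions for exposition, not device physics from this disclosure.

```python
class FloatingBodyCell:
    """Toy behavioral model of a capacitor-less 1T floating body cell.

    Stored holes lower the threshold voltage (Vt); a read applies a gate
    voltage chosen between the two effective thresholds, so only a cell
    holding a "1" (excess holes) conducts.  All numbers are illustrative.
    """

    VT_BASE = 0.7    # assumed Vt with no excess holes stored (V)
    VT_SHIFT = 0.3   # assumed Vt reduction when excess holes are stored (V)
    V_READ = 0.55    # read gate voltage placed between the two states (V)

    def __init__(self):
        self.has_holes = False  # body starts without excess holes ("0")

    def write_1(self):
        # e.g. impact ionization stores holes, raising body potential
        self.has_holes = True

    def write_0(self):
        # holes are extracted, pulling body potential back down
        self.has_holes = False

    def read(self):
        # the cell conducts only if the read voltage exceeds its current Vt
        vt = self.VT_BASE - (self.VT_SHIFT if self.has_holes else 0.0)
        return 1 if self.V_READ > vt else 0


cell = FloatingBodyCell()
cell.write_1()
print(cell.read())  # stored holes lower Vt below the read voltage
cell.write_0()
print(cell.read())  # without holes, Vt exceeds the read voltage
```

Note that the read leaves the stored state untouched in this model, mirroring the mostly non-destructive read the specification describes; in a real cell the "1" state still leaks holes and requires periodic refresh.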
Regardless, writing a "1" to a floating body cell is achieved by voltage application in which excessive holes are stored in the floating body channel region of the FET. Conversely, application of different voltage potentials to the various FET components removes holes from the floating body channel region, thereby writing a "0". A mostly non-destructive read, or data-state determination, of the FET is typically conducted utilizing a different set of voltage parameters, particularly in which the voltage of the source/drain region functioning as a drain is provided at a lower voltage than that at which it is provided during either a writing-"1" operation or a writing-"0" operation. A written "1" requires refresh because holes are lost by injection into the source/drain across the forward-biased junction. Accordingly, any structure which facilitates quantity of hole storage and minimizes hole loss by any mechanism would be an improvement in the context of floating body field-effect transistors. Floating body field-effect transistors might also be used in other than DRAM, or in other than memory circuitry.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a diagrammatic sectional view of a semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 2 is a view of the Fig. 1 substrate at a processing step subsequent to that shown in Fig. 1. Fig. 3 is a view of the Fig. 2 substrate at a processing step subsequent to that shown in Fig. 2. Fig. 4 is a view of the Fig. 3 substrate at a processing step subsequent to that shown in Fig. 3. Fig. 5 is a view of the Fig. 4 substrate at a processing step subsequent to that shown in Fig. 4. Fig. 6 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 7 is a view of the Fig. 6 substrate at a processing step subsequent to that shown in Fig. 6. Fig.
8 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 9 is a view of the Fig. 8 substrate at a processing step subsequent to that shown in Fig. 8. Fig. 10 is a view of the Fig. 9 substrate at a processing step subsequent to that shown in Fig. 9. Fig. 11 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 12 is a view of the Fig. 11 substrate at a processing step subsequent to that shown in Fig. 11. Fig. 13 is a view of the Fig. 12 substrate at a processing step subsequent to that shown in Fig. 12. Fig. 14 is a view of the Fig. 13 substrate at a processing step subsequent to that shown in Fig. 13. Fig. 15 is a view of the Fig. 14 substrate at a processing step subsequent to that shown in Fig. 14. Fig. 16 is a view of the Fig. 15 substrate at a processing step subsequent to that shown in Fig. 15. Fig. 17 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 18 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 19 is a view of the Fig. 18 substrate at a processing step subsequent to that shown in Fig. 18. Fig. 20 is a view of the Fig. 19 substrate at a processing step subsequent to that shown in Fig. 19. Fig. 21 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 22 is a view of the Fig. 21 substrate at a processing step subsequent to that shown in Fig. 21. Fig. 23 is a view of the Fig. 22 substrate at a processing step subsequent to that shown in Fig. 22. Fig. 24 is a view of the Fig. 23 substrate at a processing step subsequent to that shown in Fig. 23. Fig.
25 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 26 is a view of the Fig. 25 substrate at a processing step subsequent to that shown in Fig. 25. Fig. 27 is a view of the Fig. 26 substrate at a processing step subsequent to that shown in Fig. 26. Fig. 28 is a view of the Fig. 27 substrate at a processing step subsequent to that shown in Fig. 27. Fig. 29 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 30 is a view of the Fig. 29 substrate at a processing step subsequent to that shown in Fig. 29. Fig. 31 is a view of the Fig. 30 substrate at a processing step subsequent to that shown in Fig. 30. Fig. 32 is a view of the Fig. 31 substrate at a processing step subsequent to that shown in Fig. 31. Fig. 33 is a view of the Fig. 32 substrate at a processing step subsequent to that shown in Fig. 32. Fig. 34 is a view of the Fig. 33 substrate at a processing step subsequent to that shown in Fig. 33. Fig. 35 is a view of the Fig. 34 substrate at a processing step subsequent to that shown in Fig. 34. Fig. 36 is a view of the Fig. 35 substrate at a processing step subsequent to that shown in Fig. 35. Fig. 37 is a diagrammatic sectional view of another semiconductor substrate in process in accordance with an embodiment of the invention. Fig. 38 is a view of the Fig. 37 substrate at a processing step subsequent to that shown in Fig. 37. Fig. 39 is a view of the Fig. 38 substrate at a processing step subsequent to that shown in Fig. 38. Fig. 40 is a view of the Fig. 39 substrate at a processing step subsequent to that shown in Fig. 39. Fig. 41 is a view of the Fig. 40 substrate at a processing step subsequent to that shown in Fig. 40. Fig. 42 is a view of the Fig. 41 substrate at a processing step subsequent to that shown in Fig. 41. Fig. 43 is a view of the Fig.
42 substrate at a processing step subsequent to that shown in Fig. 42.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Embodiments encompass methods of forming floating body field-effect transistors, for example for use as memory cells or in other circuitry, and floating body field-effect transistors independent of method of fabrication, also for example for use as memory cells or in other circuitry. Initial embodiments are described with reference to Figs. 1-5. Referring to Fig. 1, a semiconductor substrate is indicated generally with reference numeral 10. In the context of this document, the term "semiconductor substrate" or "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Substrate 10 is depicted as comprising a semiconductor region 12 having an insulator 14 formed thereover. An example semiconductor material 12 is doped or undoped monocrystalline silicon (including, for example, bulk monocrystalline silicon), and an example insulator 14 is silicon dioxide. By way of example only, a thickness range for insulator 14 is from about 30 Angstroms to about 5,000 Angstroms. Referring to Fig. 2, a semiconductor SixGe(1-x)-comprising layer 16 has been formed over insulator 14. Such might be provided by any existing or yet-to-be-developed manner. Existing examples include, by way of examples only, physical vapor deposition, chemical vapor deposition, atomic layer deposition, and/or epitaxial deposition or lateral overgrowth.
One specific manner of depositing a SixGe(1-x)-comprising layer 16 includes epitaxial growth wherein a suitable seed layer is provided over insulator 14, with SixGe(1-x)-comprising layer 16 being epitaxially grown therefrom by using a silane and a germane as feed gases, with the relative portions thereof determining silicon and germanium concentration within SixGe(1-x)-comprising layer 16. By way of example only, embodiments of the invention include where x is at least 0.5, at least 0.7, no greater than 0.85, no greater than 0.8, and from 0.7 to 0.85. Regardless, Fig. 2 depicts an embodiment wherein semiconductor SixGe(1-x)-comprising layer 16 is formed to be received directly physically contacting against insulator 14. Additional embodiments are contemplated wherein semiconductor SixGe(1-x)-comprising layer 16 is not received directly physically contacting against insulator 14, and including where some of the base of semiconductor SixGe(1-x)-comprising layer 16 contacts insulator 14 and some does not. Further by way of example only, semiconductor SixGe(1-x)-comprising layer 16 might be formed to be homogenous at least regarding Ge concentration, or formed to not be homogenous at least regarding Ge concentration. Further by way of example only, semiconductor SixGe(1-x)-comprising layer 16 might be formed to be entirely homogenous as respects all its components. Referring to Fig. 3, a semiconductor silicon-comprising layer 18 has been formed over and in direct physical contact with semiconductor SixGe(1-x)-comprising layer 16. Semiconductor SixGe(1-x)-comprising layer 16 has a greater quantity of Ge than any quantity of Ge within semiconductor silicon-comprising layer 18. Accordingly, semiconductor silicon-comprising layer 18 may contain some quantity of Ge or may be void of Ge. In the context of this document, "void of Ge" defines no detectable Ge being present within a silicon-comprising layer such as layer 18.
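For a feel of what these x ranges imply, the sketch below converts x into Ge atomic fraction and estimates the depth of the resulting valence-band hole well. The roughly 0.74 eV per unit Ge fraction slope is a commonly cited literature approximation for strained Si/SiGe band alignment, not a figure from this disclosure, and the helper names are ours.

```python
def ge_fraction(x):
    """Ge atomic fraction of a SixGe(1-x) layer with Si fraction x."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie between 0 and 1")
    return 1.0 - x


def valence_band_offset_ev(x, slope_ev=0.74):
    """Rough valence-band offset (eV) of strained SixGe(1-x) relative to Si.

    The linear ~0.74 eV-per-unit-Ge-fraction slope is a literature rule of
    thumb used here only for illustration, not a value from the disclosure.
    """
    return slope_ev * ge_fraction(x)


# The disclosure's example range x = 0.7 to 0.85, i.e. 15-30 at.% Ge:
for x in (0.7, 0.85):
    print(f"x = {x}: Ge fraction {ge_fraction(x):.2f}, "
          f"~{valence_band_offset_ev(x):.2f} eV hole well")
```

Even at the low-Ge end of the range, the estimated well is many times the room-temperature thermal energy (~0.026 eV), which is consistent with the specification's point that the SiGe region can confine excess holes.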
In one embodiment, semiconductor silicon-comprising layer 18 is void of Ge. In one embodiment, semiconductor silicon-comprising layer 18 comprises Ge, but ideally in considerably lower concentration than present in SixGe(1-x)-comprising layer 16. Example embodiments include Ge quantity in semiconductor silicon-comprising layer 18 being less than about 10 atomic percent, less than about 1 atomic percent, less than 0.1 atomic percent, and being void of Ge. Referring to Fig. 4, a gate construction 20 has been formed over semiconductor silicon-comprising layer 18. Such is depicted as comprising a gate dielectric 22 having a conductive gate electrode 24 formed thereover. Gate electrode 24 might comprise one or a combination of conductively doped semiconductive material, elemental metal, alloys of elemental metals, and/or conductive metal compounds. Further by way of example only, an insulative cap over gate electrode 24 (not shown) might be associated with gate construction 20. Gate construction 20 is also depicted as comprising anisotropically etched insulative sidewall spacers 26 formed about sidewalls of gate electrode 24 and gate dielectric 22. By way of example only, LDD, halo, and/or other implants into one or both of semiconductor silicon-comprising layer 18 and SixGe(1-x)-comprising layer 16 might be conducted prior to or after formation of example anisotropically etched insulative sidewall spacers 26. Referring to Fig. 5, a pair of source/drain regions 28 and a floating body channel region 30 therebetween have been formed using gate dielectric 22 and gate electrode 24 at least in part as a mask. In the context of this document, a source/drain region is any source region and/or drain region of a field-effect transistor which will function as one or both of a source and drain during current flow through the channel region of the FET.
Accordingly, a source/drain region in operation might always function as either a source or a drain of a field-effect transistor, or circuitry construction and operation might be provided wherein in some operational regimes a source becomes a drain and a drain becomes a source. In the context of this document, a floating body channel region is that portion of the FET capable of operating as a conductive channel upon suitable application of gate voltage and which includes some portion thereof operable for hole storage in a hole storage region, whether operating in a fully depleted or partially depleted mode. Fig. 5 depicts an embodiment wherein ion implanting of n-type conductivity-enhancing impurity has been conducted into unmasked portions of semiconductor silicon-comprising layer 18 and semiconductor SixGe(1-x)-comprising layer 16 to form a pair of highest dopant concentration n-type source/drain regions 28, using gate dielectric 22 and gate electrode 24 at least in part as a mask during such ion implanting. In the depicted embodiment, insulative sidewall spacers 26 have also effectively been used at least in part as a mask during such implanting. Additional masking might also be used. Regardless, in the Figs. 1-5 embodiments, the pair of highest dopant concentration n-type source/drain regions 28 comprises both semiconductor silicon-comprising layer 18 and SixGe(1-x)-comprising layer 16, which have been suitably highly conductively doped to be capable of functioning as source/drain regions. Floating body channel region 30 comprises semiconductor silicon-comprising layer 18 and SixGe(1-x)-comprising layer 16. Accordingly and by way of example only, Fig. 5 depicts one embodiment floating body field-effect transistor 32. Such comprises a pair of source/drain regions 28 having a floating body channel region 30 received therebetween. Source/drain regions 28 and floating body channel region 30 are received over an insulator 14.
A gate electrode 24 is received proximate floating body channel region 30, with "proximate" in the context of this document requiring being in operable closeness to a floating body channel region to enable operation of the field-effect transistor to selectively cause current flow through some portion of the channel region. A gate dielectric 22 is received between gate electrode 24 and floating body channel region 30. Floating body channel region 30 comprises a semiconductor SixGe(1-x)-comprising region 16 and a semiconductor silicon-comprising region 18 received between semiconductor SixGe(1-x)-comprising region 16 and gate dielectric 22. Semiconductor SixGe(1-x)-comprising region 16 has a greater quantity of Ge than any quantity of Ge within semiconductor silicon-comprising region 18, as explained fully above. In one embodiment, SixGe(1-x)-comprising region 16 has a thickness of at least 20 Angstroms, and in one embodiment has a thickness of from about 100 Angstroms to about 600 Angstroms. In one embodiment wherein the transistor is partially depleted in operation, semiconductor SixGe(1-x)-comprising region 16 has a thickness of from about 300 Angstroms to about 600 Angstroms. In one embodiment wherein the transistor is fully depleted in operation, semiconductor SixGe(1-x)-comprising region 16 has a thickness of from about 100 Angstroms to about 300 Angstroms. In one embodiment where semiconductor SixGe(1-x)-comprising region 16 has a thickness of from about 20 Angstroms to about 50 Angstroms, x is from 0.5 to 0.6. In one embodiment, SixGe(1-x)-comprising region 16 has a thickness which is from about 25% to about 75% of the total thickness of floating body channel region 30. Semiconductor SixGe(1-x)-comprising region 16 may have a thickness which is about equal to, less than, or greater than (as shown) that of semiconductor silicon-comprising region 18.
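The thickness relationships above can be cross-checked with a little arithmetic. The helper below (an illustrative function of our own naming, not part of the disclosure) reports the silicon-region thickness implied when the SiGe region occupies a given fraction of the total channel thickness.

```python
def silicon_region_thickness(sige_thickness_a, sige_fraction):
    """Silicon-region thickness (Angstroms) implied when the SiGe region
    makes up `sige_fraction` of the total floating body channel thickness."""
    if not 0.0 < sige_fraction < 1.0:
        raise ValueError("fraction must be strictly between 0 and 1")
    total = sige_thickness_a / sige_fraction  # total channel thickness
    return total - sige_thickness_a           # remainder is the Si region

# A 300 A SiGe region occupying 25% of the channel implies a 900 A Si
# region; the same SiGe region at 75% leaves only a 100 A Si region.
print(silicon_region_thickness(300, 0.25))
print(silicon_region_thickness(300, 0.75))
```

This shows why the 25%-75% fraction range spans both cases the specification allows: the SiGe region thinner than, about equal to, or thicker than the silicon region.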
Semiconductor SixGe(1-x)-comprising region 16 and semiconductor silicon-comprising region 18 might be provided to have the same maximum widths, or different maximum widths. For example and by way of example only, Fig. 5 depicts semiconductor SixGe(1-x)-comprising region 16 having a greater maximum width 34 than a maximum width 36 of semiconductor silicon-comprising region 18. This size relationship might of course be reversed, or the maximum widths made equal. Without being limited to any advantages or theory of operation, constructions as provided above and in certain embodiments below might enhance floating body field-effect transistor operation. For example, the band gap offset between SixGe(1-x) and silicon (having low or no Ge content) lies in the valence band with type II alignment, thereby forming a SiGe potential well for excessive holes which, as a result of channel hot electron impact ionization, are stored in the bottom SixGe(1-x) potential well. Further, a smaller source/drain junction in a thin SixGe(1-x)-comprising floating body channel might be provided and result in less hole dissipation and longer refresh time than in a floating body channel region the entirety of which is homogenous and predominantly comprises silicon, or in a floating body channel region which is homogenous and predominantly comprises SixGe(1-x). Further, the above attributes are applicable in both partially depleted SOI and fully depleted SOI floating body cells. Further embodiments are next described with reference to Figs. 6 and 7. Like numerals from the above first-described embodiments are utilized where appropriate, with differences being indicated with the suffix "a" or with different numerals. Fig. 6 depicts a semiconductor substrate 10a at a processing step subsequent to that depicted by Fig. 4 and alternate to that depicted by Fig. 5. Fig.
6 depicts etching into unmasked portions of semiconductor silicon-comprising layer 18 and semiconductor SixGe(1-x)-comprising layer 16 using gate dielectric 22 and gate electrode 24 at least in part as a mask for such etching. Such also depicts using anisotropically etched insulative sidewall spacers 26 as masking for such etching, and the forming of floating body channel region 30a, which comprises semiconductor silicon-comprising layer 18 and semiconductor SixGe(1-x)-comprising layer 16. In Fig. 6 and for purposes of the continuing discussion, such can be considered as comprising respective laterally outermost sidewalls 37. After the etching, semiconductive silicon-comprising material is epitaxially grown from laterally outermost sidewalls of at least the silicon-comprising layer to form a pair of source/drain regions. Fig. 7 depicts one example wherein semiconductive silicon-comprising material 39 has been epitaxially grown from laterally outermost sidewalls 37 of both silicon-comprising layer 18 and SixGe(1-x)-comprising layer 16 to form a pair of source/drain regions 38. Semiconductive silicon-comprising material 39 might comprise any of the materials described above with respect to layers 18 and 16, and ideally includes a low Ge quantity or is void of Ge as described above. Regardless, and further in the depicted example Fig. 7 embodiment, the pair of source/drain regions 38 can be considered as respectively comprising an elevated source/drain portion 40 and a non-elevated source/drain portion 42. Further embodiments are next described with reference to Figs. 8-10 with respect to a semiconductor substrate 10b. Like numerals from the first-described embodiment have been utilized where appropriate, with differences being indicated with a suffix "b" or with different numerals. Fig. 8 depicts alternate processing to that depicted at least in Fig. 4. In Fig.
8, another semiconductor silicon-comprising layer 44 has been provided to be received between semiconductor SixGe(1-x)-comprising layer 16 and insulator 14. Semiconductor SixGe(1-x)-comprising layer 16 is provided to have a greater quantity of Ge than any quantity of Ge within such another semiconductor silicon-comprising layer 44. Accordingly, example attributes as respects Ge quantity in layer 44 are the same as those described above with respect to semiconductor silicon-comprising layer 18, although layers 18 and 44 may have different respective Ge quantities, if any. By way of example only, a thickness range for layer 44 is from about 20 Angstroms to about 100 Angstroms. Further and regardless, Fig. 8 depicts an example embodiment wherein semiconductor SixGe(1-x)-comprising region 16 is not received directly physically contacting against insulator 14. Processing may occur subsequent to Fig. 8 in accordance with Fig. 5 and/or Figs. 6-7, or otherwise, in fabrication of a floating body channel region. By way of example only, Fig. 9 depicts processing corresponding to that of Fig. 6 in formation of a floating body channel region 30b. Fig. 10 depicts processing corresponding to that of Fig. 7 in the fabrication of a pair of source/drain regions 38. Further embodiments of the invention are next described with reference to Figs. 11-16. Fig. 11 depicts processing of the Fig. 4 substrate to produce an alternate construction to that depicted by Fig. 6 in conjunction with a semiconductor substrate 10c. Like numerals from the first-described embodiments have been utilized where appropriate, with differences being indicated with the suffix "c" or with different numerals. In the context of Fig. 11, anisotropically etched sidewall spacers 26 might be considered as first sidewall spacers formed over sidewalls of gate electrode 24. Figs.
11-16 depict an embodiment wherein etching occurs into unmasked portions of the semiconductor silicon-comprising layer and the semiconductor SixGe(1-x)-comprising layer to form a floating body channel region comprising such layers, using the gate dielectric and the gate electrode at least in part as a mask. Fig. 11 illustrates first etching having been conducted through semiconductor silicon-comprising layer 18 at least to semiconductor SixGe(1-x)-comprising layer 16 using gate dielectric 22, gate electrode 24, and first sidewall spacers 26 at least in part as masking during such etching. Such might, by way of example only, be conducted as a timed etch, or with an etching chemistry selected to etch silicon-comprising layer 18 selectively relative to SixGe(1-x)-comprising layer 16. Example selective etching chemistries for doing so include plasma using either a SF6, H2, and CF4 mixture or a CF4, CH2F2, N2, and O2 mixture. Referring to Fig. 12, second sidewall spacers 48 have been formed over first sidewall spacers 26 and over sidewalls of etched-through semiconductor silicon-comprising layer 18. An example technique for doing so includes deposition and maskless anisotropic etch. In one embodiment, second sidewall spacers 48 are ideally selectively etchable relative to first sidewall spacers 26. Referring to Fig. 13, a second etching has been conducted, this time through at least some of semiconductor SixGe(1-x)-comprising layer 16 using gate dielectric 22, gate electrode 24, first sidewall spacers 26, and second sidewall spacers 48 at least in part as masking during such etching. Fig. 13 depicts one example embodiment wherein SixGe(1-x)-comprising layer 16 is completely etched through to insulator 14, forming a floating body channel region 30c. Referring to Fig. 14, insulative material 50 has been formed over sidewalls of semiconductor SixGe(1-x)-comprising layer 16.
An example technique for doing so includes exposure to oxidizing conditions effective to thermally oxidize such sidewalls, thereby forming an insulative silicon-germanium oxide material. An example lateral thickness range for insulative material 50 is from about 30 Angstroms to about 300 Angstroms. Referring to Fig. 15, second sidewall spacers 48 (not shown) have been etched to expose sidewalls of semiconductor silicon-comprising layer 18. Where, for example, insulative material 50 comprises a silicon-germanium oxide, first spacers 26 comprise silicon dioxide, and second spacers 48 comprise silicon nitride, an example etching to produce the Fig. 15 construction includes a mixture of H3PO4 and H2O heated to from about 150°C to about 180°C. Referring to Fig. 16, semiconductive silicon-comprising material 39 has been epitaxially grown from laterally outermost sidewalls of only silicon-comprising layer 18 (since sidewalls of semiconductor SixGe(1-x)-comprising layer 16 are covered with insulator 50) to form a pair of source/drain regions 38c. Accordingly and by way of example only, Figs. 5, 7, and 10 depict example embodiments wherein semiconductor SixGe(1-x)-comprising region 16 of a respective floating body channel region comprises laterally outermost sidewalls which directly physically contact against the respective source/drain regions. On the other hand, Fig. 16 depicts an example embodiment wherein semiconductor SixGe(1-x)-comprising region 16 of a floating body channel region comprises laterally outermost sidewalls which do not directly physically contact against the source/drain regions. For example and by way of example only, the Fig. 16 embodiment depicts insulative material 50 being received between at least some of the laterally outermost sidewalls of semiconductor SixGe(1-x)-comprising region 16 and source/drain regions 38c, with Fig.
16 more specifically illustrating insulative material 50 being received between all of the laterally outermost sidewalls of the semiconductor SixGe(1-x)-comprising region 16 and the source/drain regions 38c. Further example embodiments are next described with reference to Fig. 17 with respect to a semiconductor substrate 10d. Like numerals from the above-described embodiments are utilized where appropriate, with differences being indicated with the suffix "d" or with different numerals. Fig. 17 depicts processing subsequent to that depicted by Fig. 5, although such Fig. 17 processing could be conducted subsequent to any of that depicted by Figs. 7, 10, or 16, by way of examples only. In Fig. 17, a semiconductor SixGe(1-x)-comprising material 54 has been epitaxially grown outwardly from the silicon-comprising pair of highest dopant concentration n-type source/drain regions 28 to form elevated source/drain portions 55 comprising semiconductor SixGe(1-x)-comprising material. SixGe(1-x)-comprising material 54 might be the same or different in composition from that of SixGe(1-x)-comprising layer 16 described above. In one embodiment, Ge quantity in elevated source/drain portions 55 is greater than any quantity of Ge within non-elevated source/drain portions 28. In one embodiment, non-elevated source/drain portions 28 are void of Ge. Regardless, in one embodiment, non-elevated source/drain portions 28 comprise silicon. Without being limited by any theory of invention or operation, elevated source/drain portions comprising the stated SixGe(1-x)-comprising material as part of the source/drain regions may help increase the probability of programming by impact ionization, resulting in excessive hole accumulation in a SixGe(1-x)-comprising region of a floating body channel region. Band bending can increase in the overlap region so as to increase tunneling from the valence band via gate-induced drain leakage, and also possibly help excessive hole generation during programming.
Further embodiments of the invention are next described in connection with Figs. 18-20. Like numerals from the above-described embodiments have been utilized where appropriate, with differences being indicated with the suffix "e" or with different numerals. Fig. 18 depicts a semiconductor substrate 10e largely in accordance with the example Fig. 17 processing. However, as with Fig. 17, processing in connection with source/drain fabrication in production of the embodiments of Figs. 7, 10 and/or 16, or otherwise, is also contemplated in the context of Fig. 18. In Fig. 18, a semiconductor silicon-comprising layer 60 has been formed over insulator 14. Composition of layer 60 in certain embodiments is in accordance with composition of layer 18 as described above. Accordingly, such may comprise Ge, or may be void of Ge. Regardless, example gate construction 20 is depicted as being formed thereover. Referring to Fig. 19, n-type conductivity-enhancing impurity has been ion implanted into unmasked portions of semiconductor silicon-comprising layer 60 to form a pair of highest dopant concentration n-type source/drain regions 28e comprising semiconductor silicon-comprising layer 60, using gate dielectric 22 and gate electrode 24 at least in part as a mask during such ion implanting. A floating body channel region 30e is formed between the pair of source/drain regions 28e, and comprises semiconductor silicon-comprising layer 60. Referring to Fig. 20, semiconductor SixGe(1-x)-comprising material 54 has been epitaxially grown outwardly from the pair of highest dopant concentration n-type silicon-comprising source/drain regions 28e to form elevated source/drain portions 55 which comprise semiconductor SixGe(1-x)-comprising material. Accordingly, by way of example only and further independent of method, Fig. 20 depicts an example floating body field-effect transistor 32e comprising a pair of source/drain regions 28e/55 having a floating body channel region 30e received therebetween.
Source/drain regions 28e/55 and floating body channel region 30e are received over an insulator 14. A gate electrode 24 is received proximate floating body channel region 30e, with a gate dielectric 22 being received between gate electrode 24 and floating body channel region 30e. Each of the pair of source/drain regions 28e/55 comprises an elevated source/drain portion 55 and a non-elevated source/drain portion 28e. The elevated source/drain portions comprise SixGe(1-x). Non-elevated source/drain portions 28e comprise highest dopant concentration portions comprising silicon. Ge quantity in elevated source/drain portions 55 is greater than any quantity of Ge within the highest dopant concentration portions of non-elevated silicon-comprising source/drain portions 28e. Further example embodiments are next described in connection with Figs. 21-24 in connection with a semiconductor substrate 10f. Like numerals from the above-described embodiments are utilized where appropriate, with differences being indicated with the suffix "f" or with different numerals. Fig. 21 is similar to the in-process embodiment of Fig. 3, however with a SixGe(1-x)-comprising layer 16f not being homogenous at least regarding Ge concentration. For example, Fig. 21 depicts one portion 62 intended to designate a different Ge concentration from that of another portion 64 of SixGe(1-x)-comprising layer 16f. For example, portion 62 might have a higher Ge concentration than portion 64, or portion 64 might have a higher Ge concentration than portion 62. Further, a gradual or other gradient in Ge concentration across the thickness of SixGe(1-x)-comprising layer 16f may be used. Figs. 22 and 23 illustrate subsequent processing corresponding to that of Figs. 4 and 5, respectively. Alternately, by way of example only, processing in accordance with any of the other above-described embodiments might also be conducted. Fig. 24 depicts processing subsequent to that of Fig.
23, corresponding to the processing depicted by Fig. 17. Alternately, by way of example only, processing in accordance with any of the other above-described embodiments might also be conducted. Without being limited by any theory of invention or operation, in one example embodiment, germanium concentration in portion 64 is provided to be higher than germanium concentration in portion 62. Such might facilitate displacing hole quantity slightly away from insulator 14 to separate such holes from defects inherently occurring at an interface of a semiconductive material such as silicon with insulator 14. Further, a germanium concentration gradient may help control carrier lifetime within the floating body channel for retention improvement. Further example embodiments are next described with reference to Figs. 25-28 in connection with a semiconductor substrate 10g. Like numerals from the above-described embodiments are utilized where appropriate, with differences being indicated with the suffix "g" or with different numerals. Fig. 25 depicts processing of the Fig. 22 substrate alternate to that depicted by Fig. 23. For example, and by way of example only, Fig. 25 depicts previous formation of a semiconductor SixGe(1-x)-comprising layer 64 over and in direct physical contact with a semiconductor silicon-comprising material 62 that is received over an insulator 14. Semiconductor silicon-comprising material 62 in one embodiment comprises Ge, and might be considered as a first SixGe(1-x)-comprising layer. In one embodiment, Ge concentration in semiconductor SixGe(1-x)-comprising layer 64 is of greater concentration than any Ge concentration in silicon-comprising layer 62, and in one embodiment semiconductor SixGe(1-x)-comprising layer 64 might be considered as a second SixGe(1-x)-comprising layer 64. A semiconductor silicon-comprising layer 18 has been formed over and in direct physical contact with semiconductor SixGe(1-x)-comprising layer 64.
The semiconductor SixGe(1-x)-comprising layer 64 has greater quantity of Ge than any quantity of Ge within semiconductor silicon-comprising layer 18. Example gate construction 20 has been formed over semiconductor silicon-comprising layer 18. Fig. 25 also depicts etching having been conducted into unmasked portions of semiconductor silicon-comprising layer 18, second SixGe(1-x)-comprising layer 64, and first SixGe(1-x)-comprising layer 62 to form a floating body channel region 30g comprising at least semiconductor silicon-comprising layer 18 and first SixGe(1-x)-comprising layer 62. Referring to Fig. 26, at least some of second SixGe(1-x)-comprising layer 64 has been etched selectively relative to first SixGe(1-x)-comprising layer 62, thereby leaving the depicted gap. The depicted structure would be supported at opposite ends on portions received into and out of the plane of the page upon which Fig. 26 appears. Example chemistries for etching a higher Ge concentration silicon-germanium-comprising material selectively relative to a lower or no Ge concentration silicon-comprising material include an HF, HNO3, H2O solution or a CH3COOH, H2O2, HF, H2O solution, or CF4, CF2Cl2, and HBr plasmas. Referring to Fig. 27, insulative material 68 has been provided to replace at least some of second SixGe(1-x)-comprising layer 64 (not shown) which was removed. An example technique for doing so comprises thermal oxidation of one or both of materials 18 and 62. An example thickness range for insulative material 68 is from about 20 Angstroms to about 250 Angstroms. Regardless, outer sidewalls of materials 18 and 62 are ultimately outwardly exposed as shown in Fig. 27, with Fig.
28 depicting subsequent epitaxial growth of a semiconductive silicon-comprising material 70 from laterally outermost sidewalls of at least the silicon-comprising layer 18 and first SixGe(1-x)-comprising layer 62 to form a pair of source/drain regions 71. Not being limited by any theory of invention or operation, a thin insulative layer 68 provided as described in the Fig. 28 embodiment might further isolate excessive holes which are stored in a bottom silicon-germanium-comprising buried channel region and reduce dissipation, thereby perhaps enhancing charge retention. Further embodiments are next described in connection with Figs. 29-36 with respect to a semiconductor substrate 10h. Like numerals from the above-described embodiments are utilized where appropriate, with differences being indicated with a suffix "h" or with different numerals. Referring to Fig. 29, a semiconductor first silicon-comprising layer 72 has been formed over insulator 14. Composition and dimensional parameters of first silicon-comprising layer 72 can, by way of example only, be the same as those described above with respect to layer 18 of the first-described embodiment. A semiconductor SixGe(1-x)-comprising layer 74 has been formed over first silicon-comprising layer 72. Composition can, by way of example only, be the same as that described above in connection with SixGe(1-x)-comprising layer 16. An example thickness range for semiconductor SixGe(1-x)-comprising layer 74 is from about 20 Angstroms to about 250 Angstroms. A semiconductor second silicon-comprising layer 76 is formed over semiconductor SixGe(1-x)-comprising layer 74. By way of example only, composition for second silicon-comprising layer 76 may be the same as that described above with respect to layer 18, although layers 72 and 76 of course need not be, but may be, of the same composition. An example thickness range for layer 76 is from 20 Angstroms to 250 Angstroms. Referring to Fig.
30, an example gate construction 20 has been formed over second silicon-comprising layer 76. Referring to Fig. 31, etching has been conducted into unmasked portions of second silicon-comprising layer 76 and semiconductor SixGe(1-x)-comprising layer 74 at least to an outer surface of first silicon-comprising layer 72. Such might be conducted by a timed etch, or an etch at least through semiconductor SixGe(1-x)-comprising layer 74 which is substantially selective to semiconductor first silicon-comprising layer 72. Referring to Fig. 32, second spacers 48h have been formed over first spacers 26 and laterally outermost sidewalls of second silicon-comprising layer 76 and SixGe(1-x)-comprising layer 74. Referring to Fig. 33, etching is continued, this time into first silicon-comprising layer 72, at least using semiconductor SixGe(1-x)-comprising layer 74, second silicon-comprising layer 76, second sidewall spacers 48h, first sidewall spacers 26, gate dielectric 22, and gate electrode 24 as a mask during such etching. As depicted, such etching is in one embodiment completely through first silicon-comprising layer 72 to insulator 14. In one embodiment, such thereby forms a floating body channel region 30h. Referring to Fig. 34, insulative material 50h has been formed over outermost lateral sidewalls of first silicon-comprising layer 72. Referring to Fig. 35, second sidewall spacers 48h (not shown) have been removed to expose outer lateral sidewalls of second silicon-comprising layer 76 and semiconductor SixGe(1-x)-comprising layer 74 of floating body channel region 30h. Referring to Fig. 36, semiconductive silicon-comprising material 39 has been epitaxially grown from outermost lateral sidewalls of second silicon-comprising layer 76 and semiconductor SixGe(1-x)-comprising layer 74 of floating body channel region 30h to form a pair of source/drain regions 38h.
Without being limited by any theory of invention or operation, such might facilitate excess hole storage within the floating body channel region, and reduce excessive hole dissipation to the source/drains, thereby lengthening required refresh time. Regardless, and by way of example only, Fig. 36 depicts an example embodiment floating body field-effect transistor 32h comprising a pair of source/drain regions 38h having a floating body channel region 30h received therebetween. The source/drain regions 38h and floating body channel region 30h are received over an insulator 14. A gate electrode 24 is provided proximate floating body channel region 30h, with a gate dielectric 22 being received between gate electrode 24 and floating body channel region 30h. Floating body channel region 30h comprises a semiconductor first silicon-comprising region 72, a semiconductor second silicon-comprising region 76, and a semiconductor SixGe(1-x)-comprising region 74 received between regions 76 and 72. Semiconductor SixGe(1-x)-comprising region 74 has greater quantity of Ge than any quantity of Ge within each of semiconductor first and second silicon-comprising regions 72, 76, respectively. Semiconductor first silicon-comprising region 72 is received directly physically contacting against insulator 14, and comprises laterally outermost sidewalls. An insulative material 50h is received between at least some of such laterally outermost sidewalls and source/drain regions 38h. In one embodiment, first and second silicon-comprising regions 72, 76, respectively, are void of Ge. In one embodiment, semiconductor first and second silicon-comprising regions 72, 76, respectively, consist essentially of p-doped silicon. In one embodiment, second silicon-comprising region 76 and semiconductor SixGe(1-x)-comprising region 74 comprise laterally outermost sidewalls which directly physically contact against source/drain regions 38h.
In one embodiment, a method of forming a floating body field-effect transistor includes forming a semiconductor SixGe(1-x)-comprising layer and a semiconductor silicon-comprising layer over an insulator. Either might be formed before the other. Regardless, a gate dielectric and a gate electrode are formed over the semiconductor silicon-comprising layer. Using the gate dielectric and the gate electrode at least in part as a mask, etching is conducted into unmasked portions of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer to form a floating body channel region comprising the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer. By way of example only, Figs. 13 and 32/33 depict example such processing. Insulative material is formed over outermost lateral sidewalls of only one of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer of the floating body channel region, and not over the other of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer of the floating body channel region. By way of example only, Figs. 14 and 34 depict such processing. After such formation of insulative material, semiconductive silicon-comprising material is epitaxially grown from outermost lateral sidewalls of the other of the semiconductor SixGe(1-x)-comprising layer and the semiconductor silicon-comprising layer of the floating body channel region to form a pair of source/drain regions. Again by way of example only, Figs. 16 and 36 depict example such processing. Further embodiments are next described in conjunction with Figs. 37-43 with respect to a semiconductor substrate 10m. Like numerals from the above-described embodiments are utilized where appropriate, with differences being indicated with the suffix "m" or with different numerals. Referring to Fig.
37, a semiconductive material first region 80 has been formed over insulator 14. By way of example only, composition for the same might be either of those described above in connection with layers 16 and 18 in the first-described embodiments. Accordingly, and in but one embodiment, first region 80 comprises a silicon-comprising material which has been deposited over insulator 14, and in one embodiment in direct physical contact therewith. Referring to Fig. 38, trenches 81 and 82 have been etched into silicon-comprising material 80 to insulator 14. Referring to Fig. 39, trenches 81 and 82 have been filled with insulative material 84. Example materials 84 include doped or undoped silicon dioxide, and/or silicon nitride. An example manner of forming the construction of Fig. 39 is by deposition of material 84 effective to overfill trenches 81 and 82, followed by chemical mechanical polishing thereof at least to an outer surface of semiconductive material first region 80. For purposes of the continuing discussion, first region 80 can be considered as comprising laterally outermost sidewalls 85 having insulative material 84 received contacting directly physically there-against. Such provides but one example method of forming a semiconductive material first region 80 over an insulator 14, where insulative material 84 is received contacting directly physically against laterally outermost sidewalls 85 of first region 80. Any alternate example manner of forming the same might also be utilized, whether existing or yet-to-be developed. Referring to Fig. 40, a semiconductive material second region 86 has been formed over and in direct physical contact with semiconductive material first region 80 and over insulative material 84. Again, example materials for semiconductive material second region 86 are either of those described above in connection with layers 16 and 18 of the first-described embodiment. Alternate materials are, of course, contemplated.
Regardless, materials 80 and 86 might be of the same composition, or of different compositions. Further, respective materials 80 and 86 might be homogenous or non-homogenous. One manner of forming semiconductive material second region 86 is by epitaxial growth. For example, a seed layer can be deposited at least over insulative material 84, with material 86 being epitaxially grown therefrom and from semiconductive material first region 80. In one embodiment, and after such growth, semiconductive material second region 86 might be polished, for example by chemical mechanical polishing. Regardless, Fig. 40 depicts a gate dielectric 88 as having been formed over semiconductive material second region 86. An example material is thermally grown silicon dioxide. Referring to Fig. 41, a gate construction 89 has been formed. Such is depicted as comprising a gate electrode 90 comprised of conductive layers 91 and 92. By way of example only, conductive layer 92 might comprise conductively doped polysilicon, while conductive layer 91 might comprise one or a combination of a refractory metal and/or a refractory metal silicide. Gate construction 89 is also depicted as comprising anisotropically etched insulative sidewall spacers 93 which have been formed over laterally outermost sidewalls of gate electrode 90. An insulative cap (not shown) might also, of course, be formed. Referring to Fig. 42, etching has been conducted into unmasked portions of semiconductive material second region 86 to insulative material 84 to form a floating body channel region 30m comprising semiconductive material first region 80 and semiconductive material second region 86. Such etching has been conducted using gate dielectric 88 and gate electrode 90 at least in part as a mask for such etching.
In the depicted embodiment, anisotropically etched sidewall spacers 93 have also been used as a mask during such etching, with semiconductive material second region 86 being unmasked first by the etching of gate dielectric 88. For purposes of the continuing discussion, semiconductive material second region 86 can be considered as comprising laterally outermost sidewalls 94. Referring to Fig. 43, semiconductive material has been epitaxially grown from laterally outermost sidewalls 94 of at least semiconductive material second region 86 to form a pair of source/drain regions 96. In one embodiment, and as shown, the pair of source/drain regions 96 are epitaxially grown over insulative material 84, and in one embodiment in direct physical contact therewith. Each source/drain region 96 in one embodiment, and as shown, is formed to comprise an elevated source/drain portion 97 and a non-elevated source/drain portion 98. In one embodiment, the source/drain regions comprise silicon, with example materials being as described above in connection with source/drain regions 38 of the example Fig. 7 embodiment. Fig. 43 also depicts an example floating body field-effect transistor 100 independent of method of fabrication. In one such embodiment, such comprises a pair of source/drain regions 96 having a floating body channel region 30m received therebetween. Source/drain regions 96 and floating body channel region 30m are received over an insulator 14. A gate electrode 90 is received proximate floating body channel region 30m, with a gate dielectric 88 being received between gate electrode 90 and floating body channel region 30m. Such floating body channel region comprises first and second regions 80 and 86, respectively, with second region 86 being received elevationally between gate dielectric 88 and first region 80.
First region 80 comprises laterally outermost sidewalls 85, with insulative material 84 being received contacting directly physically against laterally outermost sidewalls 85 of first region 80. In one embodiment, first region 80 has a thickness which is greater than that of second region 86. In one embodiment, each of first and second regions 80 and 86 is void of Ge. Yet in one embodiment, at least one of first and second regions 80 and 86, respectively, comprises Ge. One or both of regions 80 and 86 might form hole storage volume. In one embodiment, region 80 comprises hole storage volume, and in one embodiment an elevationally inward portion thereof. Region 80 may comprise SixGe(1-x), for example in any of the orientations, positions, and/or concentrations as described above, and with or without other silicon-comprising material as also described above.
Various embodiments are generally directed to an apparatus, method, and other techniques for communicating metric data between a plurality of management controllers for sleds via an out-of-band (OOB) network, the sleds comprising physical resources and the metric data to indicate one or more metrics for the physical resources. Embodiments may also include determining a physical resource of the physical resources to perform a task based at least in part on the one or more metrics, and causing the task to be performed by the physical resource.
1. A system comprising: a pod management controller coupled to sleds via an out-of-band (OOB) network, the pod management controller to: receive metric data from a plurality of management controllers of the sleds via the OOB network, the sleds comprising a plurality of physical resources and the metric data to indicate one or more metrics of the plurality of physical resources; determine a physical resource of the plurality of physical resources to perform a task based on the one or more metrics; and send the task to be performed by the physical resource of one of the sleds.

2. The system of claim 1, the pod management controller to determine the physical resource to perform the task based on the metric data indicating that the physical resource is capable of meeting a requirement of a service level agreement associated with the task.

3. The system of claim 1, wherein the pod management controller is to receive the metric data from the plurality of management controllers of the sleds located within a single rack.

4. The system of claim 1, wherein the pod management controller is to receive the metric data from the plurality of management controllers of the sleds located within two or more racks.

5. The system of claim 1, wherein the pod management controller is to receive the metric data via the OOB network using a representational state transfer (REST) architecture and employing a JavaScript Object Notation (JSON) data format.

6. The system of claim 1, the plurality of physical resources comprising one or more physical memory resources, and wherein the metric data for each of the physical memory resources includes one or more of an indication of whether the physical memory resource is interleaved or non-interleaved, memory throughput, memory input/output operations per second (IOPS) metrics, memory latency, memory size, and memory utilization.

7. The system of claim 1, the plurality of physical resources comprising one or more physical computing resources, and the metric data for each of the physical computing resources comprising one or more of a processor identifier, a processor cache capability, a processor topology, a processor cache topology, a processor-to-processor link access latency, and processor bandwidth information.

8. The system of claim 1, the plurality of physical resources comprising one or more physical storage resources, and the metric data for each of the one or more physical storage resources comprising one or more of storage throughput, storage input/output operations per second (IOPS) metrics, storage latency, storage size, and storage utilization.

9. The system of claim 1, wherein the pod management controller is to receive the metric data via a rack management controller that receives metric data from the sleds of one or more racks.

10. A non-transitory computer-readable storage medium comprising a plurality of instructions that, when executed, cause a processing circuit to: receive, by a pod management controller, metric data from a plurality of management controllers of sleds via an out-of-band (OOB) network, the sleds comprising a plurality of physical resources and the metric data to indicate one or more metrics of the plurality of physical resources; determine, by the pod management controller, a physical resource of the plurality of physical resources to perform a task based at least in part on the one or more metrics; and send, by the pod management controller, the task to be performed by the physical resource of one of the sleds.

11. The computer-readable storage medium of claim 10, comprising a plurality of instructions that, when executed, cause the processing circuit to determine the physical resource to perform the task based on the metric data indicating that the physical resource is capable of meeting a requirement of a service level agreement associated with the task.

12. The computer-readable storage medium of claim 10, comprising a plurality of instructions that, when executed, cause the processing circuit to receive the metric data from the plurality of management controllers of the sleds located within a single rack.

13. The computer-readable storage medium of claim 10, comprising a plurality of instructions that, when executed, cause the processing circuit to receive the metric data from the plurality of management controllers of the sleds located within two or more racks.

14. The computer-readable storage medium of claim 10, comprising a plurality of instructions that, when executed, cause the processing circuit to utilize a representational state transfer (REST) architecture and employ a JavaScript Object Notation (JSON) data format to receive the metric data via the OOB network.

15. The computer-readable storage medium of claim 10, the plurality of physical resources comprising one or more physical memory resources, and wherein the metric data for each of the physical memory resources includes one or more of an indication of whether the physical memory resource is interleaved or non-interleaved, memory throughput, memory input/output operations per second (IOPS) metrics, memory latency, memory size, and memory utilization.

16. The computer-readable storage medium of claim 10, the plurality of physical resources comprising one or more physical computing resources, and the metric data for each of the physical computing resources comprising one or more of a processor identifier, a processor cache capability, a processor topology, a processor cache topology, a processor-to-processor link access latency, and processor bandwidth information.

17. The computer-readable storage medium of claim 10, the plurality of physical resources comprising one or more physical storage resources, and the metric data for each of the one or more physical storage resources comprising one or more of storage throughput, storage input/output operations per second (IOPS) metrics, storage latency, storage size, and storage utilization.

18. The computer-readable storage medium of claim 10, comprising a plurality of instructions that, when executed, cause the processing circuit to receive the metric data via a rack management controller that receives metric data from the sleds of one or more racks.

19. An apparatus comprising: a management controller for a sled coupled to a pod management controller via an out-of-band (OOB) link, the management controller to: determine metric data for one or more physical resources of the sled, the metric data to indicate one or more metrics of the one or more physical resources; send the metric data to the pod management controller via the OOB link; and receive a task to be processed by at least one of the one or more physical resources of the sled.

20. The apparatus of claim 19, the management controller to send the metric data via the OOB link using a representational state transfer (REST) architecture and employing a JavaScript Object Notation (JSON) data format.

21. The apparatus of claim 19, the physical resources comprising one or more physical memory resources, and the metric data for each of the physical memory resources comprising one or more of an indication of whether the physical memory resource is interleaved or non-interleaved, memory throughput, memory input/output operations per second (IOPS) metrics, memory latency, memory size, and memory utilization.

22. The apparatus of claim 19, the physical resources comprising one or more physical computing resources, and the metric data for each of the physical computing resources comprising one or more of a processor identifier, a processor cache capability, a processor topology, a processor cache topology, a processor-to-processor link access latency, and processor bandwidth information.

23. The apparatus of claim 19, the physical resources comprising one or more physical storage resources, and the metric data for each of the one or more physical storage resources comprising one or more of storage throughput, storage input/output operations per second (IOPS) metrics, storage latency, storage size, and storage utilization.

24. The apparatus of claim 19, the management controller to send the metric data to the pod management controller via a rack management controller.

25. The apparatus of claim 19, the physical resources comprising one or more of physical memory resources, physical computing resources, physical storage resources, and physical accelerator resources.
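The selection behavior recited in claims 1 and 2 can be sketched in code: a pod management controller scans the metric data reported by sled management controllers and picks a physical resource whose metrics satisfy the service-level requirements of a task. This is a minimal, hypothetical sketch; the field names (`utilization`, `latency_ms`) and SLA keys are illustrative assumptions, not from the patent.

```python
def select_resource(metric_data, sla):
    """Return the id of the first resource whose metrics satisfy the SLA.

    metric_data maps a resource identifier to its reported metrics;
    sla holds the task's service-level requirements. Both shapes are
    illustrative assumptions for this sketch.
    """
    for resource_id, metrics in metric_data.items():
        if (metrics["utilization"] <= sla["max_utilization"]
                and metrics["latency_ms"] <= sla["max_latency_ms"]):
            return resource_id
    return None  # no resource currently satisfies the SLA


# Example: two memory resources on two sleds; only the second meets the SLA.
metrics = {
    "sled0/mem0": {"utilization": 0.92, "latency_ms": 0.4},
    "sled1/mem0": {"utilization": 0.35, "latency_ms": 0.2},
}
sla = {"max_utilization": 0.8, "max_latency_ms": 0.5}
chosen = select_resource(metrics, sla)  # → "sled1/mem0"
```

The task would then be dispatched to the sled owning the chosen resource, per claim 1's final step.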
Technique for determining and processing metric data for physical resources

Related cases

The present application claims the benefit of and priority to: previously filed U.S. Patent Application Serial No. 15/396,173; U.S. Provisional Patent Application Serial No. 62/427,268, entitled "Framework and Technologies for Pools of Configurable Computing Resources", filed on November 29, 2016; U.S. Provisional Patent Application Serial No. 62/376,859, entitled "Scalable System Framework Prime (SSFP) Omnibus Provisional II", filed on August 18, 2016; and U.S. Provisional Patent Application Serial No. 62/365,969, entitled "Framework and Techniques for Pools of Configurable Computing Resources", filed on July 22, 2016; the entire disclosures of which are incorporated herein by reference in their entirety.

Technical field

Embodiments described herein generally relate to determining and communicating metrics for physical resources in a data center environment.

Background

A computing data center can include one or more computing systems, the one or more computing systems including a plurality of computing nodes, which can include various computing structures (e.g., servers or sleds) and can be physically located in multiple racks. A sled can include multiple physical resources that are interconnected via one or more computing structures and buses. In addition, a sled can be interconnected with other sleds via a network connection. In general, a computing data center can include management entities to distribute workloads among computing structures located within a rack.
However, these computing structures currently fail to provide detailed system information, including performance-related information, to the management entity to enable the management entity to make intelligent decisions when provisioning workloads.

Drawings

The embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.

Figure 1 illustrates an example of a data center.
Figure 2 illustrates an example of a rack.
Figure 3 illustrates an example of a data center.
Figure 4 illustrates an example of a data center.
Figure 5 illustrates an example of a switching infrastructure.
Figure 6 illustrates an example of a data center.
Figure 7 illustrates an example of a sled.
Figure 8 illustrates an example of a data center.
Figure 9 illustrates an example of a data center.
Figure 10 illustrates an example of a sled.
Figure 11 illustrates an example of a data center.
Figure 12 illustrates an example of a data center.
Figure 13 illustrates an example of a data center.
Figure 14 illustrates an example of a data center.
Figure 15 illustrates an example of a sled.
Figure 16 illustrates an example of a data center.
Figure 17 illustrates an example of a first logic flow diagram.
Figure 18 illustrates an example of a second logic flow diagram.

Detailed description

Various embodiments may generally involve determining metric data for a plurality of physical resources in a data center environment and providing the metric data such that a management controller can make intelligent decisions when assigning workloads to physical resources. As mentioned previously, current computing architectures fail to provide the management controller with detailed system information, including performance metrics for the computing structures, to enable the management controller to make intelligent decisions when provisioning workloads.
Accordingly, the embodiments discussed herein relate to solving these and other problems. For example, embodiments discussed herein can include circuitry for determining metric data for one or more physical resources of a sled, the metric data to indicate one or more metrics of the one or more physical resources. The circuitry may also send the metric data to a pod management controller via an out-of-band (OOB) link, which may include a physical link or a virtual link, and which may be secure. The metric data can include performance metrics and additional information such that the pod management controller can make intelligent decisions when assigning workloads to be processed by the physical resources. Further, the pod management controller can receive metric data from a plurality of sleds via the OOB network, determine a physical resource of the physical resources to perform a task based at least in part on the one or more metrics, and cause the task to be carried out. The embodiments are not limited in this manner, and these and other details will become apparent in the following discussion. Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives consistent with the claimed subject matter. Figure 1 illustrates a conceptual overview of a data center 100, which may generally represent a data center or other type of computing network in or for which one or more techniques described herein may be implemented, in accordance with various embodiments. As shown in Figure 1, data center 100 can generally include a plurality of racks, each of which can house computing devices including a corresponding set of physical resources.
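The metric-reporting path above (a sled management controller serializing its metrics as JSON for a REST-style transfer over the OOB link) can be sketched as follows. The endpoint path, payload shape, and metric field names are assumptions chosen to mirror the memory metrics listed in the claims; they are not specified by the patent.

```python
import json


def build_metric_report(sled_id, memory_metrics):
    """Build a hypothetical REST request carrying a sled's memory metrics.

    Returns the request line and a JSON body; an actual management
    controller would transmit this over the secure OOB link.
    """
    payload = {
        "sled": sled_id,
        # Example memory metrics: interleaving, throughput, IOPS,
        # latency, size, and utilization, as enumerated in the claims.
        "memory": memory_metrics,
    }
    return "POST /pod-manager/metrics", json.dumps(payload)


method_path, body = build_metric_report(
    "rack1/sled3",
    {
        "interleaved": True,
        "throughput_gbs": 12.5,
        "iops": 150000,
        "latency_ns": 85,
        "size_gb": 256,
        "utilization": 0.35,
    },
)
```

A pod management controller receiving this body would parse it with `json.loads` and fold the metrics into its view of the pod's physical resources.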
In the specific non-limiting example depicted in FIG. 1, data center 100 includes four racks 102A-102D that house computing devices comprising respective sets of physical resources (PCRs) 105A-105D. According to this example, a collective set of physical resources 106 of data center 100 includes the various sets of physical resources 105A-105D distributed among racks 102A-102D. Physical resources 106 may include multiple types of resources such as, for example, processors, coprocessors, accelerators, field programmable gate arrays (FPGAs), memory, and storage. The embodiments are not limited to these examples.

The illustrative data center 100 differs from typical data centers in many respects. For example, in the illustrative embodiment, the circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where the cooling fans are located. This reduces the length of the path that air must travel across the components on the board. Further, the components on the sleds are spaced farther apart than on typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component being in the airflow path of another component). In the illustrative embodiment, processing components such as processors are located on a top side of a sled, while near memory such as DIMMs is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components can operate at higher frequencies and power levels than in typical systems, thereby increasing performance.
In addition, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds (e.g., processors, accelerators, memory, and data storage drives) are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.

Furthermore, in the illustrative embodiment, data center 100 utilizes a single network architecture ("fabric") that supports multiple other network architectures, including Ethernet and Omni-Path. In the illustrative embodiment, the sleds are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted-pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high-bandwidth, low-latency interconnections and network architecture, data center 100 may pool resources that are physically disaggregated (e.g., memory, accelerators such as graphics accelerators, FPGAs, and ASICs, and data storage drives) and provide them to computing resources (e.g., processors) on an as-needed basis, enabling the computing resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives utilization information for the various resources, predicts resource utilization for different types of workloads based on past resource utilization, and dynamically reallocates the resources based on this information.

The racks 102A, 102B, 102C, 102D of data center 100 may include physical design features that facilitate the automation of various types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically accessed and to accept and house robotically manipulable resource sleds.
Moreover, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a higher current than is typical for power sources. The increased current enables the power sources to provide additional power to the components on each sled, enabling the components to operate at frequencies above typical frequencies.

FIG. 2 illustrates an exemplary logical configuration of a rack 202 of data center 100. As shown in FIG. 2, rack 202 can generally house a plurality of sleds, each of which can include a corresponding set of physical resources. In the specific non-limiting example depicted in FIG. 2, rack 202 houses sleds 204-1 through 204-4, which include respective sets of physical resources 205-1 through 205-4, each of which constitutes a portion of the collective set of physical resources 206 comprised in rack 202. With respect to FIG. 1, if rack 202 represents, for example, rack 102A, then physical resources 206 may correspond to the physical resources 105A comprised in rack 102A. In the context of this example, physical resources 105A may thus be made up of the respective sets of physical resources comprised in sleds 204-1 through 204-4 of rack 202, including physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-4. The embodiments are not limited to this example. Each sled can contain a pool of each of the various types of physical resources (e.g., compute, memory, accelerator, storage). By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of the others and at its own optimized refresh rate.

FIG. 3 illustrates an example of a data center 300, which may generally represent a data center in/for which one or more techniques described herein may be implemented, in accordance with various embodiments. In the specific non-limiting example depicted in FIG.
3, data center 300 includes racks 302-1 through 302-32. In various embodiments, the racks of data center 300 can be arranged in such fashion as to define and/or accommodate various access pathways. For example, as shown in FIG. 3, the racks of data center 300 can be arranged in such fashion as to define and/or accommodate access pathways 311A, 311B, 311C, and 311D. In some embodiments, the presence of such access pathways may generally enable automated maintenance equipment (e.g., robotic maintenance equipment) to physically access the computing equipment housed in the various racks of data center 300 and perform automated maintenance tasks (e.g., replacing a failed sled, upgrading a sled). In various embodiments, the dimensions of access pathways 311A, 311B, 311C, and 311D, the dimensions of racks 302-1 through 302-32, and/or one or more other aspects of the physical layout of data center 300 may be selected to facilitate such automated operations. The embodiments are not limited in this context.

FIG. 4 illustrates an example of a data center 400, which may generally represent a data center in/for which one or more techniques described herein may be implemented, in accordance with various embodiments. As shown in FIG. 4, data center 400 can be characterized by an optical fabric 412. Optical fabric 412 can generally include a combination of optical signaling media (e.g., fiber optic cabling) and optical switching infrastructure via which any particular sled in data center 400 can send signals to, and receive signals from, each of the other sleds in data center 400. The signaling connectivity that optical fabric 412 provides to any given sled may include connectivity both to other sleds in the same rack and to sleds in other racks. In the specific non-limiting example depicted in FIG. 4, data center 400 includes four racks 402A-402D.
Racks 402A-402D house respective pairs of sleds 404A-1 and 404A-2, 404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus, in this example, data center 400 includes a total of eight sleds. Via optical fabric 412, each such sled can possess signaling connectivity with each of the seven other sleds in data center 400. For example, via optical fabric 412, sled 404A-1 in rack 402A can possess signaling connectivity with sled 404A-2 in rack 402A, as well as with the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1, and 404D-2 distributed among the other racks 402B, 402C, and 402D of data center 400. The embodiments are not limited to this example.

FIG. 5 illustrates an overview of a connectivity scheme 500, which may generally be representative of link-layer connectivity that may be established in some embodiments among the various sleds of a data center, such as any of the example data centers 100, 300, and 400 of FIGS. 1, 3, and 4. Connectivity scheme 500 can be implemented using an optical fabric that features a dual-mode optical switching infrastructure 514. Dual-mode optical switching infrastructure 514 can generally include a switching infrastructure that is capable of receiving communications according to multiple link-layer protocols via the same unified set of optical signaling media, and properly switching such communications. In various embodiments, dual-mode optical switching infrastructure 514 can be implemented using one or more dual-mode optical switches 515. In various embodiments, dual-mode optical switches 515 can generally include high-radix switches. In some embodiments, dual-mode optical switches 515 can include multi-layer switches, such as four-layer switches.
In various embodiments, dual-mode optical switches 515 can feature integrated silicon photonics that enable them to switch communications with significantly reduced latency in comparison to conventional switching devices. In embodiments, the dual-mode switch may be a single physical network wire that is capable of carrying Ethernet or Omni-Path communications, which may be auto-detected by the dual-mode optical switch 515 or configured by the pod management controller. This allows the same network to be used for cloud traffic (Ethernet) or high performance computing (HPC), typically Omni-Path or InfiniBand. Moreover, in some instances, the Omni-Path protocol may carry both Omni-Path communications and Ethernet communications. In some embodiments, dual-mode optical switches 515 can constitute leaf switches 530 in a leaf-spine architecture that additionally includes one or more dual-mode optical spine switches 520. Note that in some embodiments, the architecture may not be a leaf-spine architecture, but may be a two-ply switch architecture to connect directly to the sleds.

In various embodiments, the dual-mode switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand) via optical signaling media of an optical fabric. As reflected in FIG. 5, with respect to any particular pair of sleds 504A and 504B possessing optical signaling connectivity to the optical fabric, connectivity scheme 500 may thus provide support for link-layer connectivity via both Ethernet links and HPC links. Thus, both Ethernet and HPC communications can be supported by a single high-bandwidth, low-latency switch fabric. The embodiments are not limited to this example.

FIG.
6 illustrates a general overview of a rack architecture 600 that may be representative of the architecture of any particular one of the racks depicted in FIGS. 1 through 4, according to some embodiments. As reflected in FIG. 6, rack architecture 600 can generally feature a plurality of sled spaces into which sleds can be inserted, each of which can be robotically accessible via a rack access region 601. In the specific non-limiting example depicted in FIG. 6, rack architecture 600 features five sled spaces 603-1 through 603-5. Sled spaces 603-1 through 603-5 feature respective multi-function connector modules (MPCMs) 616-1 through 616-5. In some instances, when a sled is inserted into any given one of sled spaces 603-1 through 603-5, the corresponding MPCM can couple with a counterpart MPCM of the inserted sled. This coupling can provide the inserted sled with connectivity to both the signaling infrastructure and the power infrastructure of the rack in which it is housed.

Included among the types of sleds to be accommodated by rack architecture 600 may be one or more types of sleds that feature expansion capabilities. FIG. 7 illustrates an example of a sled 704 that may be representative of a sled of such a type. As shown in FIG. 7, sled 704 can comprise a set of physical resources 705, as well as an MPCM 716 designed to couple with a counterpart MPCM when sled 704 is inserted into a sled space such as any of sled spaces 603-1 through 603-5 of FIG. 6. Sled 704 can also feature an expansion connector 717. Expansion connector 717 can generally comprise a socket, slot, or other type of connection element that is capable of accepting one or more types of expansion modules, such as an expansion sled 718.
The expansion connector 717 can provide physical resources 705 with access to supplemental computing resources 705B residing on expansion sled 718 by coupling with a counterpart connector on expansion sled 718. The embodiments are not limited in this context.

FIG. 8 illustrates an example of a rack architecture 800 that may be representative of a rack architecture that may be implemented in order to provide support for sleds featuring expansion capabilities, such as sled 704 of FIG. 7. In the specific non-limiting example depicted in FIG. 8, rack architecture 800 includes seven sled spaces 803-1 through 803-7, which feature respective MPCMs 816-1 through 816-7. Sled spaces 803-1 through 803-7 include respective primary regions 803-1A through 803-7A and respective expansion regions 803-1B through 803-7B. With respect to each such sled space, when the corresponding MPCM is coupled with a counterpart MPCM of an inserted sled, the primary region can generally constitute a region of the sled space that physically accommodates the inserted sled. The expansion region can generally constitute a region of the sled space that can physically accommodate an expansion module, such as expansion sled 718 of FIG. 7, in the event that the inserted sled is configured with such a module.

FIG. 9 illustrates an example of a rack 902 that may be representative of a rack implemented according to rack architecture 800 of FIG. 8, according to some embodiments. In the specific non-limiting example depicted in FIG. 9, rack 902 features seven sled spaces 903-1 through 903-7, which include respective primary regions 903-1A through 903-7A and respective expansion regions 903-1B through 903-7B. In various embodiments, temperature control in rack 902 can be implemented using an air cooling system. For example, as reflected in FIG.
9, rack 902 can feature a plurality of fans 919 that are generally arranged to provide air cooling within the various sled spaces 903-1 through 903-7. In some embodiments, the height of a sled space is greater than the conventional "1U" server height. In such embodiments, fans 919 can generally comprise relatively slow, large-diameter cooling fans as compared to fans used in conventional rack configurations. Running larger-diameter cooling fans at lower speeds relative to smaller-diameter cooling fans running at higher speeds can increase fan lifetime while still providing the same amount of cooling. The sleds are physically shallower than conventional rack dimensions. Further, components are arranged on each sled to reduce thermal shadowing (i.e., they are not arranged serially in the direction of airflow). As a result, the wider, shallower sleds allow for an increase in device performance because the devices can be operated at a higher thermal envelope (e.g., 250 W) due to improved cooling (i.e., no thermal shadowing, more space between devices, more room for larger heat sinks, etc.).

MPCMs 916-1 through 916-7 can be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 through 920-7, each of which can draw power from an external power source 921. In various embodiments, external power source 921 can deliver alternating current (AC) power to rack 902, and power modules 920-1 through 920-7 can be configured to convert such AC power to direct current (DC) power to be sourced to the inserted sleds. In some embodiments, for example, power modules 920-1 through 920-7 can be configured to convert 277-volt AC power into 12-volt DC power for provision to the inserted sleds via respective MPCMs 916-1 through 916-7.
The embodiments are not limited to this example.

MPCMs 916-1 through 916-7 can also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as or similar to dual-mode optical switching infrastructure 514 of FIG. 5. In various embodiments, optical connectors contained in MPCMs 916-1 through 916-7 can be designed to couple with counterpart optical connectors contained in the MPCMs of inserted sleds to provide such sleds with optical signaling connectivity to dual-mode optical switching infrastructure 914 via respective lengths of fiber optic cabling 922-1 through 922-7. In some embodiments, each such length of fiber optic cabling can extend from its corresponding MPCM to an optical interconnect loom 923 that is external to the sled spaces of rack 902. In various embodiments, optical interconnect loom 923 can be arranged to pass through a support post or other type of load-bearing element of rack 902. The embodiments are not limited in this context. Because inserted sleds connect to the optical switching infrastructure via MPCMs, the resources typically spent in manually configuring rack cabling to accommodate a newly inserted sled can be saved.

FIG. 10 illustrates an example of a sled 1004 that may be representative of a sled designed for use in conjunction with rack 902 of FIG. 9, according to some embodiments. Sled 1004 can feature an MPCM 1016 that comprises an optical connector 1016A and a power connector 1016B, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of MPCM 1016 into that sled space. Coupling MPCM 1016 with such a counterpart MPCM can cause power connector 1016B to couple with a power connector comprised in the counterpart MPCM.
This can generally enable physical resources 1005 of sled 1004 to source power from an external source, via power connector 1016B and power transmission media 1024 that conductively couples power connector 1016B to physical resources 1005.

Sled 1004 can also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 can generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of FIG. 9. In some embodiments, dual-mode optical network interface circuitry 1026 can be capable of both Ethernet protocol communications and communications according to a second, high-performance protocol. In various embodiments, dual-mode optical network interface circuitry 1026 can include one or more optical transceiver modules 1027, each of which can be capable of transmitting and receiving optical signals over each of one or more optical channels. The embodiments are not limited in this context.

Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack can cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This can generally establish optical connectivity between the fiber optic cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 can communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028.
In addition to the dimensions of the sleds and the arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250 W), as described above with reference to FIG. 9, in some embodiments a sled may include one or more additional features to facilitate air cooling, such as a heat pipe and/or heat sinks arranged to dissipate heat generated by physical resources 1005. It is worthy of note that although the example sled 1004 depicted in FIG. 10 does not feature an expansion connector, any given sled that features the design elements of sled 1004 may also feature an expansion connector according to some embodiments. The embodiments are not limited in this context.

FIG. 11 illustrates an example of a data center 1100, which may generally be representative of a data center in/for which one or more techniques described herein may be implemented, according to various embodiments. As reflected in FIG. 11, a physical infrastructure management framework 1150A can be implemented to facilitate management of a physical infrastructure 1100A of data center 1100. In various embodiments, one function of physical infrastructure management framework 1150A can be to manage automated maintenance functions within data center 1100, such as the use of robotic maintenance equipment to service computing equipment within physical infrastructure 1100A. In some embodiments, physical infrastructure 1100A can feature an advanced telemetry system that performs telemetry reporting that is sufficiently robust to support remote automated management of physical infrastructure 1100A. In various embodiments, telemetry information provided by such an advanced telemetry system can support features such as failure prediction/prevention capabilities and capacity planning capabilities.
In some embodiments, physical infrastructure management framework 1150A can also be configured to manage authentication of physical infrastructure components using hardware attestation techniques. For example, robots can verify the authenticity of components prior to installation by analyzing information collected from radio frequency identification (RFID) tags associated with each component to be installed. The embodiments are not limited in this context.

As shown in FIG. 11, the physical infrastructure 1100A of data center 1100 can comprise an optical fabric 1112, which can include a dual-mode optical switching infrastructure 1114. Optical fabric 1112 and dual-mode optical switching infrastructure 1114 can be the same as or similar to optical fabric 412 of FIG. 4 and dual-mode optical switching infrastructure 514 of FIG. 5, respectively, and can provide high-bandwidth, low-latency, multi-protocol connectivity among the sleds of data center 1100. As discussed above with reference to FIG. 1, in various embodiments, the availability of such connectivity can make it feasible to disaggregate and dynamically pool resources such as accelerators, memory, and storage. In some embodiments, for example, one or more pooled accelerator sleds 1130 can be included among the physical infrastructure 1100A of data center 1100, each of which can comprise a pool of accelerator resources (such as coprocessors and/or FPGAs, for example) that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114.

In another example, in various embodiments, one or more pooled storage sleds 1132 can be included among the physical infrastructure 1100A of data center 1100, each of which can comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114.
In some embodiments, such pooled storage sleds 1132 can comprise pools of solid-state storage devices such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 can be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 can comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250 W or more. In various embodiments, any given high-performance processing sled 1134 can feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 can be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located in the same rack or any other rack in the data center. The remote resources can be located one switch hop away or two switch hops away in the leaf-spine network architecture described above with reference to FIG. 5. The embodiments are not limited in this context.

In various embodiments, one or more layers of abstraction can be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B can be allocated to support the provision of cloud services 1140.
In various embodiments, particular sets of virtual computing resources 1136 can be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 can include, without limitation, software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.

In some embodiments, management of software-defined infrastructure 1100B can be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B can be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing the allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B can use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C can be implemented in order to provide quality of service (QoS) management capabilities for cloud services 1140. The embodiments are not limited in this context.

FIG. 12 illustrates an example of a data center 1200, which may generally be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented, according to various embodiments. As shown in FIG. 12, data center 1200 can be similar to, and incorporate, the features and components discussed previously. For example, data center 1200 can generally include a plurality of racks 1202A through 1202D, where each rack can house computing devices containing corresponding sets of physical resources 1205A-x through 1205D-x, where x can be any positive integer from 1 to 4. Physical resources 1205 can be included within a plurality of sleds 1204A through 1204D.
As mentioned, physical resources 1205 can include multiple types of resources such as, for example, processors, coprocessors, field programmable gate arrays (FPGAs), memory, accelerators, and storage. In embodiments, physical resources 1205 can include physical compute resources, physical memory resources, physical storage resources, and physical accelerator resources.

In embodiments, physical resources 1205 can be pooled within racks and between racks. For example, physical resources 1205A-1 of sled 1204A-1 can be pooled with physical resources 1205A-3 of sled 1204A-3 to provide combined processing capabilities for workloads across sleds within the same rack (e.g., rack 1202A). Similarly, physical resources of one or more racks can be combined with physical resources of one or more other racks to create a pool of physical resources to process workloads. In one example, physical resources 1205A-3 and physical resources 1205B-1, located within rack 1202A and rack 1202B, respectively, can be combined and pooled. Any combination of physical resources 1205 can be pooled to process a workload, and the embodiments are not limited in this manner. Moreover, some embodiments may include more or fewer physical resources 1205, sleds 1204, and/or racks 1202, and the illustrated examples should not be construed in a limiting manner.

In the illustrated example of FIG. 12, data center 1200 can provide management functionality to monitor physical resources 1205 and provide intelligent workload and processing capabilities.
Intelligent workload capabilities may include, but are not limited to, collecting metric data for physical resources 1205, determining that one or more tasks of a workload are to be processed by physical resources 1205, and causing the one or more tasks of the workload to be processed by one or more specific physical resources 1205 based on the metric data and service level agreement requirements.

To perform these capabilities, embodiments include communicating low-level metric data for physical resources 1205 to a pod management controller 1231. Thus, data center 1200 includes a pod management controller 1231 to provide the management functionality. Pod management controller 1231 can be implemented in circuitry and logic and can be part of a pod management system. Pod management controller 1231 can provide a set of application programming interfaces (APIs) to enable workloads operating on sleds 1204 and racks 1202 to utilize the management functionality.

In embodiments, pod management controller 1231 can be coupled with one or more of racks 1202 via one or more Ethernet links that are part of an out-of-band (OOB) network. In one example, the OOB network can be a separate network from the fiber optic network used to communicate data between the sleds. Further, the OOB network can be a private network used to communicate management and control data between sleds 1204, racks 1202, and pod management controller 1231. In some instances, the OOB network may support other protocols and technologies to communicate metric data, such as InfiniBand, Omni-Path, and so forth. These communications can include metric data for physical resources 1205 for use in predicting usage of physical resources 1205 and determining allocations of physical resources 1205 to process current and future workloads.
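The placement decision described above, in which the pod management controller selects a physical resource for a task based at least in part on received metric data and service level agreement requirements, can be sketched in Python as follows. This is a minimal illustration only: the field names and the policy (filter on SLA minima, then prefer the least-utilized qualifying resource) are assumptions, as the embodiments do not prescribe a particular selection algorithm.

```python
def select_resource(metric_reports, sla):
    """Pick a physical resource for a task from OOB metric reports.

    metric_reports: list of dicts, each with 'resource_id', 'type',
                    'utilization' (0.0-1.0), and 'memory_bandwidth_gbps'.
    sla: dict of requirements, e.g. the required resource type and a
         minimum memory bandwidth. Both shapes are illustrative.
    Returns the id of the chosen resource, or None if no resource
    currently satisfies the SLA.
    """
    candidates = [
        m for m in metric_reports
        if m["type"] == sla["type"]
        and m["memory_bandwidth_gbps"] >= sla["min_bandwidth_gbps"]
    ]
    if not candidates:
        return None  # no resource satisfies the SLA; the task must wait
    # Prefer the least-utilized resource among those meeting the SLA.
    return min(candidates, key=lambda m: m["utilization"])["resource_id"]
```

A pod management controller sketched this way would run the selection each time a task arrives, over the most recent metric reports received via the OOB network.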
These and other details will become more apparent in the description below.

FIG. 13 illustrates an example of a data center management architecture 1300, which may be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented, according to various embodiments. The data center management architecture 1300 of FIG. 13 shows a rack 1302 having a plurality of sleds 1304-1 through 1304-n, where n can be any positive integer. Note that FIG. 13 only illustrates a single rack 1302 having sleds 1304-n coupled with a pod management system 1333. However, embodiments are not limited in this manner, as previously discussed in FIG. 12, for example.

Each of the sleds 1304-n includes a number of components, including physical resources 1305, a management controller 1362, and Ethernet (ETH) circuitry 1352. As will be discussed in more detail with respect to FIG. 15, each sled 1304 can also include an MPCM having an ETH connector to couple with a corresponding ETH connector to enable communication via Ethernet links.

In embodiments, ETH circuitry 1352 can enable communication via one or more Ethernet links of the OOB network. In some embodiments, ETH circuitry 1352 may be capable of gigabit communications with other devices, such as pod management system 1333. ETH circuitry 1352 can include a media-independent interface that can use any network signaling medium. For example, the media-independent interface can be a reduced media-independent interface (RMII), a gigabit media-independent interface (GMII), a reduced gigabit media-independent interface (RGMII), a 10-gigabit media-independent interface (XGMII), or a serial gigabit media-independent interface (SGMII).

In embodiments, ETH circuitry 1352 can support communication via one or more architectures and data structures.
For example, the ETH circuitry 1352 can utilize a representational state transfer (REST) architecture and communicate data in a JavaScript Object Notation (JSON) data format. In one example, the ETH circuitry 1352 can support a set of APIs and schemas (such as Redfish®) to enable communication of data between the sleds 1304, the rack 1302, and the pod management controller 1331. In an embodiment, the ETH circuitry 1352 can be coupled with memory, and the memory can store instructions to operate the REST architecture and to communicate data in the JSON data format. For example, the ETH circuitry 1352 can be coupled with 256 megabytes (MB) of error correction code (ECC) memory to run an embedded operating system (e.g., Linux) to provide the ETH interface (Redfish®). Embodiments are not limited in this manner, and other web service architectures can be utilized to communicate metric data consistent with the embodiments discussed herein. In an embodiment, each of the sleds 1304-n may also include a management controller 1362-n to collect and determine metric data for the physical resources 1305-n. The management controller 1362 also provides management functionality, which includes communicating the metric data in a data structure to the pod management controller 1331. In some examples, the management controller 1362 can be part of an Intelligent Platform Management Interface (IPMI) architecture and can be a baseboard management controller (BMC) or a dedicated service processor that monitors the physical state of the physical resources 1305 using sensors and communicates with the physical resources 1305 themselves to determine their operational state. In some examples, the management controller 1362 can be a sled management controller. Embodiments are not limited in this manner.
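As a rough illustration of the REST/JSON pattern described above, the sketch below builds a Redfish-style JSON body for one metric reading. The resource path and property names are illustrative assumptions, not exact Redfish schema definitions.

```python
import json

def build_metric_payload(sled_id, metric_name, value, units):
    """Assemble a Redfish-style JSON body for one metric reading.

    The resource path and property names here are illustrative
    assumptions, not taken verbatim from the Redfish schema.
    """
    return json.dumps({
        "@odata.id": f"/redfish/v1/Chassis/Sled{sled_id}/Thermal",
        "MetricName": metric_name,
        "Reading": value,
        "Units": units,
    })

payload = build_metric_payload(1304, "InletTemp", 34.5, "Celsius")
# A receiver (e.g., the pod management controller) parses the body back.
decoded = json.loads(payload)
```

In practice the payload would be carried as the body of an HTTP request over the OOB Ethernet link; the sketch shows only the serialization step.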
For example, in some embodiments, metric data may also be communicated to the pod management controller 1331 as a new original equipment manufacturer (OEM) record in a System Management BIOS (SMBIOS) record. The management controller 1362 can collect metric data such as temperature, humidity, power supply voltage, fan speed, communication parameters, and operating system functions. If a metric associated with these variables is determined to be out of specification or to fail one or more service level agreement requirements, the management controller 1362 can notify the pod management controller 1331 or send it the metric data. Moreover, in some embodiments, metric data may be reported or communicated to the pod management controller 1331 on a periodic or semi-periodic basis even when the variables are within specification. Embodiments are not limited in this manner. In some embodiments, the management controller 1362 can collect metric data specific to a particular type of physical resource 1305. For example, a physical resource 1305 can be a physical memory resource, and the management controller 1362 can collect metric data such as the number of memory channels, memory bandwidth, memory size, memory type, read/write parameters, memory speed, read/write latency, partition information, high-bandwidth memory size, socket interconnect latency, an interleaved or non-interleaved indication, whether two-level memory (2LM) is utilized, 2LM near and far memory types, 2LM size, 2LM region performance, and so on. Note that embodiments may also include additional levels of memory (e.g., 3LM), where near memory (first tier) may be connected via physical interconnects on the same substrate, and far memory may be connected via optical interconnects and located on another, separate sled. For example, the tiering may include volatile memory (second tier) and 3D XPoint® memory (third tier).
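One minimal way to model the out-of-specification check described above is a threshold comparison per metric; the limit values below are illustrative assumptions, not from any actual specification or SLA.

```python
def out_of_spec(readings, limits):
    """Return the names of metrics whose readings fall outside their
    (low, high) limits; the management controller would notify the pod
    management controller for any such metric. Thresholds here are
    illustrative assumptions."""
    violations = []
    for name, value in readings.items():
        low, high = limits[name]
        if not (low <= value <= high):
            violations.append(name)
    return violations

limits = {"temperature_c": (10, 45), "fan_rpm": (1000, 8000)}
readings = {"temperature_c": 52.0, "fan_rpm": 4200}
bad = out_of_spec(readings, limits)
```

A real management controller would combine such checks with the periodic reporting path, so that in-specification readings are still forwarded on a schedule.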
In some embodiments, the management controller 1362 can provide Advanced Configuration and Power Interface (ACPI) Static Resource Affinity Table (SRAT) information and Heterogeneous Memory Attribute Table (HMAT) information via the OOB network. Typically, these low-level metrics associated with physical memory resources are not provided to a pod management controller. Thus, by providing the low-level metric data to the pod management controller 1331, the pod management controller 1331 can make smarter decisions when scheduling tasks of a workload for processing. In another example, the management controller 1362 can provide metric data for physical compute resources or processors, such as processor identifiers, processor cache capabilities, processor topology, processor cache topology, processor-to-processor link access latency, bandwidth information, performance (speed/throughput) metrics, and so on. In a third example, a physical resource 1305 can be a physical storage resource, and the management controller 1362 can collect metric data such as storage throughput, storage input/output operations per second (IOPS) metrics, storage latency, storage size, and storage utilization. Embodiments are not limited to these examples. For example, a physical resource 1305 can be an accelerator resource, and accelerator-related metrics can be determined for the accelerator, such as the card link width associated with a Peripheral Component Interconnect Express (PCIe) card, solid state drive (SSD), field programmable gate array (FPGA) card, and the like. In an embodiment, the management controller 1362 can utilize any number of methods or techniques to collect and determine metric data for each physical resource 1305. For example, the management controller 1362 can include sensors for detecting the metric data itself. In another example, the management controller 1362 can determine metric data based on sensors and on determinations made by the physical resources 1305 themselves.
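The per-resource-type metric sets above can be sketched as simple records; the fields chosen below are a small illustrative subset of the metrics listed, not a complete or authoritative schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class MemoryMetrics:
    # illustrative subset of the memory metrics described above
    channels: int
    bandwidth_gbps: float
    size_gb: int
    interleaved: bool

@dataclass
class StorageMetrics:
    # illustrative subset of the storage metrics described above
    throughput_mbps: float
    iops: int
    latency_us: float
    utilization_pct: float

mem = MemoryMetrics(channels=6, bandwidth_gbps=120.0, size_gb=384, interleaved=True)
record = asdict(mem)  # dict form, ready to serialize as JSON for the OOB network
```

Typed records like these make it straightforward for the management controller to serialize one uniform snapshot per resource type.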
More specifically, the physical resources 1305 may be able to monitor metrics such as throughput, usage, processing time, read/write speed, and so on, and communicate the metric data to the management controller 1362. The physical resources 1305 may also store metric data about themselves in memory locations, fuse logic, and the like, which may be polled by the management controller 1362 to collect the metric data. For example, a memory resource can store indications of memory size, memory type, memory channels, memory bandwidth, and the like. Processor resources can store similar information, such as processor speed, processor topology, processor type, and the like. Similar metrics may also be polled by the management controller 1362 with respect to storage resources, such as storage size, storage throughput (read/write time), storage type, and the like. Embodiments are not limited in this manner, and the management controller 1362 can collect metric data by other means, including receiving or retrieving information from an operating system. The management controller 1362 can collect and determine metric data on a periodic or semi-periodic basis. In some instances, metric data may be collected on a periodic/semi-periodic basis based on user settings, factory (time of manufacture) settings, service level agreement requirements, and so on. Embodiments are not limited in this manner. The management controller 1362 may be capable of communicating via one or more different interface or bus types to collect metric data. For example, the management controller 1362 can be coupled with one or more physical resources via a Low Pin Count (LPC) bus, a System Management Bus (SMBus), an Inter-Integrated Circuit (I2C) bus, IPMI over SMBus, or a serial port.
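A toy sketch of the polling pattern described above, in which the management controller gathers one snapshot of self-reported metrics per resource. The resource objects and their `report_metrics` method are hypothetical stand-ins, not an actual bus-level interface.

```python
class FakeResource:
    """Hypothetical stand-in for a physical resource that
    self-reports stored metric data when polled."""
    def __init__(self, name, metrics):
        self.name = name
        self._metrics = metrics

    def report_metrics(self):
        return dict(self._metrics)

def poll_resources(resources):
    # The management controller aggregates one snapshot per resource.
    return {r.name: r.report_metrics() for r in resources}

resources = [
    FakeResource("memory-0", {"size_gb": 384, "channels": 6}),
    FakeResource("storage-0", {"iops": 90000, "latency_us": 120.0}),
]
snapshot = poll_resources(resources)
```

In a real system the poll would run on the periodic/semi-periodic schedule described above, over a bus such as SMBus or I2C rather than an in-process method call.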
These interfaces and buses can be used to collect and determine metric data from different physical resources. The management controller 1362 can also be coupled with the ETH circuitry 1352 via one or more buses/interfaces to communicate the metric data to the pod management controller 1331. More specifically, the management controller 1362 can utilize the ETH circuitry 1352 to communicate metric data via the REST architecture and in a JSON data format.

FIG. 14 illustrates an example of a data center management architecture 1400, which may be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented, in accordance with various embodiments. The data center management architecture 1400 of FIG. 14 can be similar to the data center management architecture 1300 shown in FIG. 13; however, the rack 1402 can include a rack management controller 1464 to communicate metric data with the sleds 1404 and the pod management controller 1431. For example, the rack management controller 1464 can collect (or receive) metric data from the sleds 1404 and send the metric data to the pod management system 1433 and the pod management controller 1431. In the data center management architecture 1400, each rack (such as rack 1402) may include a rack management controller 1464 to collect metric data and other information from each sled 1404. In an embodiment, the rack management controller 1464 may include instructions and logic (such as the Intel® Pooled System Management Engine (PSME)) to collect, manage, and communicate metric data for each sled 1404 in the rack 1402. The rack management controller 1464 may also include ETH circuitry 1456, which may be similar to the ETH circuitry 1452 of the sleds 1404. For example, the ETH circuitry 1456 can be an RGMII interface and can utilize a REST architecture to communicate data using a JSON data structure, such as Redfish®.
In some examples, the rack management controller 1464 can operate as a server, including a REST API, to collect metric data from the sleds 1404 and present/send the metric data to the pod management controller 1431. For example, the sleds 1404, the rack management controller 1464, and the pod management controller 1431 can communicate metric data using JSON-RPC as the transport and JSON as the data structure. Moreover, as similarly discussed above, these communications can occur in an OOB network environment to prevent interference with, and bandwidth usage by, other data. Embodiments are not limited in this manner.

FIG. 15 shows an example of a sled 1504, which may be representative of a sled designed for use with the racks discussed herein. In an embodiment, the sled 1504 can be similar to the sled 1004 discussed with respect to FIG. 10 and can have similar components and functionality. The sled 1504 can feature an MPCM 1516 that can include an optical connector 1516A, a power connector 1516B, and an ETH connector 1516C, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of the MPCM 1516 into that sled space. Coupling the MPCM 1516 with such a counterpart MPCM can couple the power connector 1516B with a power connector included in the counterpart MPCM. This may enable the physical resources 1505 of the sled 1504 to draw power from an external source via the power connector 1516B and a power transmission medium 1524 that conductively couples the power connector 1516B to the physical resources 1505. The sled 1504 can also include dual-mode optical network interface circuitry 1526. The dual-mode optical network interface circuitry 1526 can include circuitry capable of communicating over optical signaling media in accordance with each of multiple link-layer protocols supported by a dual-mode optical switching infrastructure, as previously discussed with respect to FIGS. 9 and 10.
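The JSON-RPC transport mentioned above can be illustrated with a minimal request envelope; the method name and parameter keys below are hypothetical, not a documented PSME or Redfish API.

```python
import json

def jsonrpc_request(method, params, req_id):
    """Build a JSON-RPC 2.0 request envelope. The method name and
    parameter keys passed in are illustrative assumptions."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# e.g., the rack management controller asking a sled for its metrics
req = jsonrpc_request("getMetricData", {"sled": "1404-1"}, 7)
envelope = json.loads(req)
```

The `id` field lets the caller match an eventual JSON-RPC response to this request, which matters when several sleds are polled concurrently over the OOB network.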
In some embodiments, the dual-mode optical network interface circuitry 1526 may be capable of both Ethernet protocol communication and communication in accordance with a second, high-performance protocol. In various embodiments, the dual-mode optical network interface circuitry 1526 can include one or more optical transceiver modules 1527, each of which may be capable of transmitting and receiving optical signals over each of one or more optical channels. Embodiments are not limited in this context. Coupling the MPCM 1516 with a counterpart MPCM of a sled space in a given rack can couple the optical connector 1516A with an optical connector included in the counterpart MPCM. This can establish optical connectivity between optical cabling of the sled and the dual-mode optical network interface circuitry 1526, via each of a set of optical channels 1525. The dual-mode optical network interface circuitry 1526 can communicate with the physical resources 1505 of the sled 1504 via an electrical signaling medium 1528. The sled 1504 may also include a management controller 1562, which may be the same as or similar to the management controller 1362 of FIG. 13 and the management controller 1462 of FIG. 14. The management controller 1562 can determine and collect metric data for the physical resources 1505, including but not limited to physical memory resources 1505-1, physical compute resources 1505-2, physical storage resources 1505-3, and physical accelerator resources 1505-4. Embodiments are not limited in this manner. The physical memory resources 1505-1 can be any type of memory, such as any machine-readable or computer-readable medium capable of storing data, including both volatile and non-volatile memory. In some embodiments, a machine-readable or computer-readable medium can include a non-transitory medium.
Moreover, the physical memory resources 1505-1 can include one or more higher-speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (such as ferroelectric polymer memory), ovonic memory, phase-change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, arrays of devices (such as Redundant Array of Independent Disks (RAID) drives), solid-state memory devices (e.g., USB memory, solid-state drives (SSD), 3D XPoint®), and any other type of storage media suitable for storing information. Embodiments are not limited to these examples. The physical compute resources 1505-2 can be any type of circuitry capable of processing information. Moreover, the physical compute resources 1505-2 can be implemented using any processor or logic device. The physical compute resources 1505-2 can be one or more of any type of computational element, such as but not limited to a microprocessor, a processor, a central processing unit, a digital signal processing unit, a dual-core processor, a mobile device processor, a desktop processor, a single-core processor, a system-on-chip (SoC) device, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processor or processing circuitry on a single chip or integrated circuit.
The physical compute resources 1505-2 can be connected to and communicate with the other physical resources 1505 of the computing system via interconnects, such as one or more buses, control lines, and data lines. In an embodiment, the physical storage resources 1505-3 may be any type of storage and may be implemented as non-volatile storage such as, but not limited to, a magnetic disk drive, an optical disk drive, a tape drive, an internal storage device, an attached storage device, flash memory, battery-backed SDRAM (synchronous DRAM), and/or a network-accessible storage device. In an embodiment, the physical storage resources 1505-3 may include, for example, technology to increase storage performance and enhanced protection for valuable digital media when multiple hard drives are included. Further examples of the physical storage resources 1505-3 may include a hard disk, a floppy disk, compact disk read-only memory (CD-ROM), compact disk recordable (CD-R), compact disk rewriteable (CD-RW), an optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of DVD devices, a tape device, a cassette device, or the like. Embodiments are not limited in this context. The physical accelerator resources 1505-4 may be any type of accelerator device designed to increase the processing power of a processor, such as the physical compute resources 1505-2. The physical accelerator resources 1505-4 accelerate transmission or processing beyond processor capabilities. In one example, a physical accelerator resource 1505-4 can be a floating point unit (FPU) that assists with mathematical calculations or increases calculation speed. In another example, a physical accelerator resource 1505-4 may be a graphics processing unit (GPU) for 3D imaging or faster graphic display.
In an embodiment, a physical accelerator resource 1505-4 may be implemented as a field programmable gate array (FPGA); however, embodiments are not limited in this manner. The management controller 1562 can collect metric data for the one or more physical resources 1505 via one or more interconnects 1538 and electrical signaling. An interconnect 1538 can be a Low Pin Count (LPC) bus, a System Management Bus (SMBus), an Inter-Integrated Circuit (I2C) bus, IPMI over SMBus, or a serial port. Embodiments are not limited to these examples. In an embodiment, the management controller 1562 can communicate metric data to a pod management controller. In some examples, the management controller 1562 can communicate the metric data to the pod management controller via a rack management controller. To send the metric data, the management controller 1562 can utilize the ETH circuitry 1552 to send the metric data via the REST architecture and in a JSON data format, as previously discussed. For example, the ETH circuitry 1552 can provide a layered architecture to communicate the metrics in the REST architecture and JSON data format. For example, the management controller 1562 can communicate the metric data via one or more interconnects 1538 as electrical signals. The ETH circuitry 1552 can process the metric data to communicate it in a JSON data format via the REST architecture. For example, the ETH circuitry 1552 can place the metric data in one or more packets having a JSON data format. The ETH circuitry 1552 can communicate the metric data via the ETH connector 1516C, which couples with a counterpart ETH connector in the sled space in conjunction with insertion of the MPCM 1516 into the sled space. In some examples, the ETH connector 1516C can be a modular connector or a uniquely designed connector to couple with the counterpart ETH connector in the sled space. In one example, the ETH connector 1516C can have the same wiring pinout as a registered jack 45 (RJ45) modular connector.
However, the ETH connector 1516C can be designed differently than a standard RJ45 connector so that it can couple with the counterpart ETH connector in the sled space. In some embodiments, the management controller 1562 and the ETH circuitry 1552 can perform a serial-to-local-area-network (LAN) conversion to communicate metric data to the pod management controller. For example, the management controller 1562 can collect metric data via one or more serial links, convert the data for communication into packets in a JSON data format, and communicate the metric data to a pod management controller. Embodiments are not limited to these examples.

FIG. 16 illustrates an example of a data center system 1600, which may be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented, in accordance with various embodiments. As shown in FIG. 16, the data center system 1600 can be similar to, and include, the features and components previously discussed. For example, the data center system 1600 can generally include a number of racks 1602A through 1602D, each of which can house computing equipment including a respective set of physical resources 1605A-x through 1605D-x, where x can be any positive integer from 1 to 4. The physical resources 1605 can be included in a number of sleds 1604A through 1604D. As mentioned, the physical resources 1605 can include multiple types of resources such as, for example, processors, co-processors, field programmable gate arrays (FPGAs), memory, accelerators, and storage. In an embodiment, the sleds 1604 can communicate metric data to the pod management controller 1631, as previously discussed. For example, the sleds 1604 can each include a management controller (not shown) that can collect metric data for the physical resources 1605 and send the metric data to the pod management controller 1631 either directly or via a rack management controller (not shown).
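The serial-to-LAN conversion described above amounts to framing serially collected readings into JSON packets for the Ethernet link; a rough sketch follows, where the packet layout and chunk size are illustrative assumptions.

```python
import json

def frame_serial_readings(readings, max_per_packet=2):
    """Chunk serially collected (name, value) readings into JSON
    'packets' for transmission over the Ethernet link. The packet
    layout and chunk size are illustrative assumptions."""
    packets = []
    for i in range(0, len(readings), max_per_packet):
        chunk = readings[i:i + max_per_packet]
        packets.append(json.dumps({"metrics": dict(chunk)}))
    return packets

# readings as they might arrive, one at a time, over a serial link
readings = [("temp_c", 41.0), ("fan_rpm", 3600), ("volt_v", 11.9)]
packets = frame_serial_readings(readings)
```

Each packet is then a self-contained JSON body that the ETH circuitry can send toward the pod management controller.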
In addition, the metric data can be sent to the pod management controller 1631 via the OOB network over one or more ETH links using the REST architecture and employing the JSON data format. The pod management controller 1631 utilizes the metric data to determine physical resources 1605 to process one or more tasks of a workload. For example, the pod management controller 1631 may implement an orchestration layer, such as OpenStack®, to consume the metric data and allocate physical resources for processing workloads. In an embodiment, the pod management controller 1631 may utilize the metric data in conjunction with service level agreement (SLA) requirements to cause tasks of the workload to be processed by physical resources while maintaining the requirements specified in the SLA. SLAs can be based on a policy-based storage management system to help assess and maintain an adequate level of performance in the data center. An SLA may specify a set of one or more values or metrics associated with one or more particular measurable performance characteristics, and may specify one or more desired or required service levels to be provided to a workload including one or more tasks. Some requirements may include latency, cost, protection against local failures or corruption, geographic dispersion, efficiency, throughput, processing time, and so on. Thus, SLA requirements can be defined with respect to any one or more of these characteristics, as well as other characteristics. By collecting metric data and determining actual performance relative to the SLA, it can be determined whether the data center is performing adequately, and if not, adjustments can be made to the state of the data center system. For example, the pod management controller 1631 can adjust, send, cause, and so forth, which physical resources process particular tasks of the workload to ensure that the requirements of the SLA are met.
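A simplified sketch of the SLA-driven selection logic described above: keep only the resources whose reported metrics satisfy the SLA requirements. The metric names and thresholds are illustrative assumptions, not values from any real SLA.

```python
def meets_sla(metrics, sla):
    """True if a resource's metrics satisfy illustrative SLA limits:
    latency must not exceed the cap, throughput must reach the floor."""
    return (metrics["latency_ms"] <= sla["max_latency_ms"]
            and metrics["throughput_mbps"] >= sla["min_throughput_mbps"])

def select_resources(resources, sla):
    # The pod management controller keeps only the resources able to
    # process the task within the SLA requirements.
    return [rid for rid, m in resources.items() if meets_sla(m, sla)]

resources = {
    "sled1604A/cpu0": {"latency_ms": 2.0, "throughput_mbps": 900},
    "sled1604B/cpu0": {"latency_ms": 9.0, "throughput_mbps": 300},
}
sla = {"max_latency_ms": 5.0, "min_throughput_mbps": 500}
eligible = select_resources(resources, sla)
```

An orchestration layer would then dispatch the workload's tasks only to the eligible resources, and re-evaluate as fresh metric data arrives over the OOB network.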
More specifically, processing cycles on physical compute resources, memory reads/writes of physical memory resources, data storage of physical storage resources, and processing cycles of accelerators may be assigned to workloads based on the SLA requirements and the metric data. In an embodiment, the pod management controller 1631 may determine the SLA requirements based on data stored in a memory or storage device, such as data store 1677. The SLA requirements may be stored in the data store 1677 based on user input or a computer determination specifying the particular SLA requirements of a workload. The pod management controller 1631 can receive, from one or more clients 1679, an indication of a workload to be processed by the data center 1600. The pod management controller 1631 can determine the SLA requirements for the workload based on the data in the data store 1677. For example, the pod management controller 1631 can perform a lookup to find and retrieve the SLA requirements of a workload based on an identifier that identifies the workload. The pod management controller 1631 may utilize the metric data received from the racks 1602 and the SLA requirements of the workload to determine which physical resources 1605 will process one or more tasks of the workload. For example, the pod management controller 1631 can determine which physical resources 1605 are capable of processing one or more tasks of the workload while meeting the SLA requirements of the workload. The pod management controller 1631 may cause the one or more tasks to be processed by the determined physical resources 1605. Note that metric data may be communicated between the racks 1602 and the pod management controller 1631 via the OOB network; however, the one or more tasks may be communicated to the racks 1602 and the specific physical resources 1605 via a different network, such as an optical fabric. Embodiments are not limited in this manner.

FIG. 17 illustrates an embodiment of a logic flow 1700.
Logic flow 1700 may be representative of some or all of the operations executed by one or more embodiments described herein. For example, logic flow 1700 may illustrate operations performed by a pod management controller, as discussed herein. However, embodiments are not limited in this manner, and one or more operations may be performed by other components or systems discussed herein. At block 1702, logic flow 1700 includes determining metric data for one or more physical resources. As previously discussed, a pod management controller can receive metric data from management controllers of sleds and from rack management controllers of racks having one or more sleds. The metric data can be collected and determined by the management controller of a sled having physical resources and provided via an OOB network. At block 1704, logic flow 1700 includes determining one or more tasks of a workload to be processed by the data center. In some instances, the pod management controller can receive the tasks and workloads, or indications of the tasks and workloads. For example, an indication can identify the tasks and workloads. A task can include any type of operation, job, or process that can be performed by a physical resource. For example, tasks include instructions processed by physical compute resources, read/write requests for physical memory resources, read/write requests for physical storage resources, and instructions processed by physical accelerator resources. At block 1706, logic flow 1700 includes determining one or more SLA requirements for the workload. For example, the pod management controller can retrieve SLA requirement data associated with the workload. The SLA requirements can specify one or more requirements for the workload and tasks, such as processing, throughput, IOPS, read/write speed, and the like.
Embodiments are not limited in this manner. At block 1708, logic flow 1700 can determine one or more physical resources to process the one or more tasks of the workload. For example, the pod management controller can determine which physical resources are capable of processing the tasks while meeting the SLA requirements of the tasks. Note that the pod management controller can utilize a single physical resource to perform a task, or pool two or more physical resources to process a task. In an embodiment, the pod management controller may determine which physical resources to use based on metric data indicating that those physical resources can satisfy the SLA requirements. At block 1710, logic flow 1700 includes causing the task to be performed by the one or more physical resources determined to be capable of satisfying the SLA requirements of the task. For example, the pod management controller can communicate information to one or more clients or other systems indicating which physical resources will perform/process the one or more tasks of the workload. Embodiments are not limited in this manner. Although logic flow 1700 illustrates particular operations occurring in a particular order, embodiments are not limited in this manner, and some operations may occur before, after, or concurrently with other operations. Moreover, logic flow 1700 can be repeated any number of times, and embodiments are not limited in this manner.

FIG. 18 illustrates an embodiment of a logic flow 1800. Logic flow 1800 may be representative of some or all of the operations executed by one or more embodiments described herein. For example, logic flow 1800 may illustrate operations performed by a management controller of a sled, as discussed herein. However, embodiments are not limited in this manner, and one or more operations may be performed by other components or systems discussed herein. At block 1802, logic flow 1800 includes determining metric data for one or more physical resources of a sled.
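The blocks of logic flow 1700 above can be strung together in a toy end-to-end sketch; all data shapes, names, and the round-robin dispatch rule are hypothetical, chosen only to make the flow concrete.

```python
def logic_flow_1700(metric_data, workload, sla_store):
    """Toy rendering of blocks 1702-1710: gather metrics, read the
    workload's tasks, look up its SLA, pick eligible resources, and
    dispatch. Everything here is an illustrative assumption."""
    tasks = workload["tasks"]                              # block 1704
    sla = sla_store[workload["id"]]                        # block 1706
    eligible = [rid for rid, m in metric_data.items()      # block 1708
                if m["latency_ms"] <= sla["max_latency_ms"]]
    # block 1710: record which resource each task is dispatched to
    return {task: eligible[i % len(eligible)]
            for i, task in enumerate(tasks)}

metric_data = {"r0": {"latency_ms": 2}, "r1": {"latency_ms": 8}}  # block 1702
workload = {"id": "w1", "tasks": ["t0", "t1"]}
sla_store = {"w1": {"max_latency_ms": 5}}
placement = logic_flow_1700(metric_data, workload, sla_store)
```

With only `r0` satisfying the 5 ms cap, both tasks land on `r0`; loosening the SLA would spread them across both resources.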
For example, the management controller can determine and collect metric data for one or more physical resources, including but not limited to physical memory resources, physical compute resources, physical storage resources, and physical accelerator resources. The management controller can collect the metric data from the physical resources, from the operating system of the sled, and via one or more sensors. Embodiments are not limited in this manner. At block 1804, logic flow 1800 includes communicating the metric data to a pod management controller. In some examples, the management controller can send the metric data directly to the pod management controller via the OOB network, in a JSON data format using the REST architecture. In another example, the management controller can send the metric data to the pod management controller via a rack management controller. The rack management controller can receive metric data from any number of sleds and physical resources of the sleds for communication to the pod management controller. The rack management controller can also send the metrics in the REST architecture and JSON data format. Note that in both examples, the metric data may be sent to the pod management controller via the OOB network such that the metric data does not interfere with processing and data communication on other networks, such as an optical fabric. At block 1806, logic flow 1800 can include receiving a task to be processed by the one or more physical resources of the sled. In some instances, the task may be part of a workload being processed by the data center and may have been sent to the sled having the one or more physical resources based on the metric data. Embodiments are not limited in this manner. The detailed disclosure now turns to providing examples that pertain to further embodiments.
Examples one through twenty-five (1-25) provided below are intended to be exemplary and non-limiting. In a first example, a system, apparatus, device, and so forth can include a pod management controller to receive metric data from a plurality of management controllers of sleds via an out-of-band (OOB) network, the sleds including a plurality of physical resources and the metric data to indicate one or more metrics of the plurality of physical resources, determine, based at least in part on the one or more metrics, a physical resource of the plurality of physical resources to perform a task, and cause the task to be performed by the physical resource. In a second example and in furtherance of the first example, the system, apparatus, device, and so forth includes a pod management controller to determine the physical resource to perform the task based on the metric data indicating that the physical resource is capable of meeting requirements of a service level agreement associated with the task. In a third example and in furtherance of any previous example, the system, apparatus, device, and so forth includes a pod management controller to receive the metric data from a plurality of management controllers of sleds located within a single rack. In a fourth example and in furtherance of any previous example, the system, apparatus, device, and so forth includes logic to receive the metric data from a plurality of management controllers of sleds located within two or more racks. In a fifth example and in furtherance of any previous example, the system, apparatus, device, and so forth includes a pod management controller to receive the metric data via the OOB network utilizing a representational state transfer (REST) architecture and employing a JavaScript Object Notation (JSON) data format. In a sixth example and in furtherance of any previous example, the system, apparatus, device, and so forth includes a pod management
controller for facilitating the inclusion of each of one or more physical memory resources and physical memory resources Metric processing of physical resources of the data, the cabin management controller includes physical resources, the physical resources include one or more physical memory resources, and the metric data for each of the physical memory resources includes an indication of whether the physical memory resources are interleaved or non-interlaced One or more of one or more, memory throughput, memory input/output operations per second (IOPS) metrics, memory latency, memory size, and memory utilization.In a seventh example and in the promotion of any of the previous examples, a system, apparatus, device, etc., includes a cabin management controller for facilitating processing of physical resources, the physical resources including one or more physical computing resources, and The metric data for each of the physical computing resources includes one of a processor identifier, a processor cache capability, a processor topology, a processor cache topology, a processor-to-processor link access latency, and processor bandwidth information. 
Or multiple.In an eighth example and in the promotion of any of the previous examples, the system, apparatus, device, etc., includes a cabin management controller for facilitating processing of physical resources, the physical resources including one or more physical storage resources, and The metric data for each of the one or more physical storage resources includes one or more of storage throughput, storage input/output operations per second (IOPS) metrics, storage latency, storage size, and storage utilization.In a ninth example and in the promotion of any of the previous examples, the system, apparatus, device, etc., includes a cabin management controller for receiving a rack of metric data via a plurality of skateboards from one or more racks The controller is managed to receive metric data.In a tenth example and in the promotion of any of the previous examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable processing circuitry to be An out-of-band (OOB) network receives metric data from a plurality of management controllers of the skateboard (the skateboard includes a plurality of physical resources and metric data for indicating one or more metrics of the plurality of physical resources), based at least in part on one or more A metric determines a physical resource of a plurality of physical resources for performing a task, and causes the task to be executed by the physical resource.In an eleventh example and in the promotion of any of the preceding examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable a processing circuit to A physical resource to perform a task is determined based on metric data indicating that the physical resource is capable of meeting the requirements of the service level agreement associated with the task.In a twelfth example and in the promotion 
of any of the preceding examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable processing circuitry to Metric data is received from multiple management controllers of the skateboard located within a single rack.In a thirteenth example and in the promotion of any of the preceding examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable a processing circuit to Metric data is received from a plurality of management controllers of the skateboards located in two or more racks.In a fourteenth example and in the promotion of any of the previous examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable a processing circuit to The metric data is received via the OOB network using a representative state transition (REST) architecture and using a JavaScript Object Notation (JSON) data format.In a fifteenth example and in the promotion of any of the previous examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable a processing circuit to The metric data is received via a rack management controller that receives metric data from a plurality of skateboards of one or more racks.In a sixteenth example and in the promotion of any of the previous examples, the system, apparatus, device, etc., includes a management controller for determining metric data for one or more physical resources of the skateboard (metric data is used to indicate One or more metrics of one or more physical resources), the metric data is sent to the cabin management controller via an out-of-band (OOB) link, and the task is received for processing by one or more of the physical resources.In a seventeenth example and in the promotion of any of 
the previous examples, the system, apparatus, device, etc., includes a management controller for utilizing a representative state transition (REST) architecture and employing JavaScript object notation (JSON) The data format sends metric data via the OOB link.In an eighteenth example and in the promotion of any of the previous examples, the system, apparatus, device, etc., includes a management controller for transmitting metric data to the cabin management controller via the rack management controller.In a nineteenth example and in the promotion of any of the preceding examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable a processing circuit to Metric data for one or more physical resources of the skateboard is determined, the metric data is used to indicate one or more metrics of one or more physical resources, and the metric data is sent to the cabin management controller via an out of band (OOB) link.In a twentieth example and in the promotion of any of the preceding examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, enable a processing circuit to The metric data is transmitted over a OOB link using a Representational State Transfer (REST) architecture and using a JavaScript Object Notation (JSON) data format.In a twenty-first example and in the promotion of any of the preceding examples, embodiments can include a non-transitory computer readable storage medium including a plurality of instructions that, when executed, cause processing circuitry The metric data can be sent to the cabin management controller via the rack management controller.In a twenty-second example and in the promotion of any of the previous examples, embodiments may include one or more methods for performing any combination of the above examples or other methods/logic flows discussed herein.Some 
embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, the terms "connected" and "coupled" may be used to describe some embodiments in order to indicate that two or more elements are in direct physical or electrical contact with each other. However, the term "coupled" may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other.

It is emphasized that the Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth
are used merely as labels, and are not intended to impose numerical requirements on their objects.

What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
A data storage system having non-volatile media, a buffer memory, a processing device, and a data pre-fetcher. The data pre-fetcher receives commands to be executed in the data storage system, provides the commands as input to a predictive model, and obtains, as output from the predictive model, at least one command identified for pre-fetching. Prior to the command being executed in the data storage system, the data pre-fetcher retrieves, from the non-volatile media, at least a portion of the data to be used in execution of the command, and stores that portion of the data in the buffer memory. The retrieving and storing of the portion of the data can be performed concurrently with the execution of many commands before the execution of the identified command, to reduce the latency impact of the command on other commands that are executed concurrently with it.
CLAIMS

What is claimed is:

1. A data storage system, comprising:
non-volatile media;
a buffer memory;
a processing device coupled to the buffer memory and the non-volatile media; and
a data pre-fetcher configured to:
receive commands to be executed in the data storage system;
provide the commands as input to a predictive model;
identify, using the predictive model and based on the commands, at least one command for pre-fetching; and
prior to the command being executed in the data storage system,
retrieve, from the non-volatile media, at least a portion of data to be used in execution of the command; and
store the portion of data in the buffer memory.

2. The data storage system of claim 1, wherein the data pre-fetcher is configured to use the predictive model periodically.

3. The data storage system of claim 1, wherein the data pre-fetcher is configured to provide a predetermined number of the commands as input to the predictive model during each use of the predictive model.

4. The data storage system of claim 1, wherein the predictive model is trained using a supervised machine learning technique.

5. The data storage system of claim 4, wherein the data pre-fetcher is configured to spread latency impact of the command over more than a threshold number of commands.

6. The data storage system of claim 4, wherein the data pre-fetcher is configured to retrieve the portion of data from the non-volatile media and store the portion of data in the buffer memory during execution of a plurality of commands, using resources that are not required for the execution of the plurality of commands.

7. The data storage system of claim 4, wherein the command is predicted to cause more than a threshold amount of increase in latency in execution of a further command if the portion of data is not available in the buffer memory.

8. The data storage system of claim 4, wherein the command is identified by the predictive model based at least in part on the command being in a predetermined category.

9.
The data storage system of claim 8, wherein commands in the predetermined category have an average execution latency that is longer than a threshold.

10. The data storage system of claim 9, further configured to:
generate latency data of second commands executed in the data storage system;
identify, from the latency data, third commands causing more than a threshold amount of increase in latency in execution of at least one of the second commands; and
train the predictive model using the supervised machine learning technique to reduce differences between the third commands identified using the latency data and commands identified by the predictive model from the second commands.

11. A method, comprising:
receiving, in a controller of a data storage system, commands from a host system for execution in the data storage system;
providing, to a predictive model, the commands as input;
identifying, using the predictive model and based on the commands as input, at least one command for pre-fetching; and
prior to the command being executed in the data storage system,
retrieving, from non-volatile media of the data storage system, at least a portion of data to be used in execution of the command; and
storing the portion of data in buffer memory of the data storage system.

12. The method of claim 11, wherein the predictive model is trained using a supervised machine learning technique.

13. The method of claim 12, further comprising:
generating execution latency data of first commands;
identifying, from the latency data, second commands causing more than a threshold amount of increase in execution latency of at least one of the first commands; and
training the predictive model using the supervised machine learning technique to reduce differences between the second commands identified using the latency data and third commands identified by the predictive model from the first commands.

14.
The method of claim 13, further comprising:
computing averages of execution latency of different types of commands; and
comparing execution latency of the first commands to the averages to identify the at least one of the first commands that has more than the threshold amount of increase in execution latency.

15. The method of claim 14, further comprising:
identifying the second commands in response to a determination that the second commands have a predetermined characteristic and that the second commands have been executed concurrently with the at least one of the first commands.

16. The method of claim 15, wherein the predetermined characteristic includes a predetermined command type, a predetermined command category, or an average execution latency being above a threshold, or any combination thereof.

17. The method of claim 12, further comprising:
spreading latency impact of the command over more than a threshold number of commands.

18. The method of claim 12, further comprising:
retrieving the portion of data from the non-volatile media and storing the portion of data in the buffer memory during execution of a plurality of commands, using resources that are not used for the execution of the plurality of commands.

19. A non-transitory computer storage medium storing instructions which, when executed by a computing system, cause the computing system to perform a method, the method comprising:
receiving latency data of first commands executed in a data storage system;
identifying, from the latency data, second commands causing more than a threshold amount of increase in execution latency of at least one of the first commands; and
training a predictive model using a supervised machine learning technique to reduce differences between the second commands identified using the latency data and third commands identified by the predictive model from the first commands.

20.
The non-transitory computer storage medium of claim 19, storing further instructions which, when executed by a computing system, cause the computing system to perform the method, the method further comprising:
receiving, in a controller of a data storage system, pending commands from a host system for execution in the data storage system;
providing, to the predictive model, the pending commands as input;
identifying, using the predictive model and based on the pending commands as input, at least one fifth command for pre-fetching; and
prior to the fifth command being executed in the data storage system,
retrieving, from non-volatile media of the data storage system, at least a portion of data to be used in execution of the fifth command; and
storing the portion of data in buffer memory of the data storage system.
PREDICTIVE DATA PRE-FETCHING IN A DATA STORAGE DEVICE

RELATED APPLICATION

[0001] The present application claims priority to U.S. Pat. App. Ser. No. 16/384,618, filed Apr. 15, 2019 and entitled "PREDICTIVE DATA PRE-FETCHING IN A DATA STORAGE DEVICE," the entire disclosure of which is hereby incorporated herein by reference.

FIELD OF THE TECHNOLOGY

[0002] At least some embodiments disclosed herein relate to memory systems in general, and more particularly, but not limited to, predictive data pre-fetching in data storage devices.

BACKGROUND

[0003] A memory sub-system can include one or more memory components that store data. A memory sub-system can be a data storage system, such as a solid-state drive (SSD), or a hard disk drive (HDD). A memory sub-system can be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). The memory components can be, for example, non-volatile memory components and volatile memory components. Examples of memory components include memory integrated circuits. Some memory integrated circuits are volatile and require power to maintain stored data. Some memory integrated circuits are non-volatile and can retain stored data even when not powered. Examples of non-volatile memory include flash memory, Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM) and Electronically Erasable Programmable Read-Only Memory (EEPROM) memory, etc. Examples of volatile memory include Dynamic Random-Access Memory (DRAM) and Static Random-Access Memory (SRAM). In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components.

[0004] A computer can include a host system and one or more memory sub-systems attached to the host system.
The host system can have a central processing unit (CPU) in communication with the one or more memory sub-systems to store and/or retrieve data and instructions. Instructions for a computer can include operating systems, device drivers, and application programs. An operating system manages resources in the computer and provides common services for application programs, such as memory allocation and time sharing of the resources. A device driver operates or controls a particular type of device in the computer; and the operating system uses the device driver to offer resources and/or services provided by the type of device. A central processing unit (CPU) of a computer system can run an operating system and device drivers to provide the services and/or resources to application programs. The central processing unit (CPU) can run an application program that uses the services and/or resources. For example, an application program implementing a type of application of computer systems can instruct the central processing unit (CPU) to store data in the memory components of a memory sub-system and retrieve data from the memory components.

[0005] A host system can communicate with a memory sub-system in accordance with a pre-defined communication protocol, such as the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), also known as NVM Express (NVMe), which specifies the logical device interface protocol for accessing non-volatile storage devices via a Peripheral Component Interconnect Express (PCI Express or PCIe) bus. In accordance with the communication protocol, the host system can send commands of different types to the memory sub-system; and the memory sub-system can execute the commands and provide responses to the commands. Some commands instruct the memory sub-system to store data items at addresses specified in the commands, or to retrieve data items from addresses specified in the commands, such as read commands and write commands.
Some commands manage the infrastructure in the memory sub-system and/or administrative tasks, such as commands to manage namespaces, commands to attach namespaces, commands to create input/output submission or completion queues, commands to delete input/output submission or completion queues, commands for firmware management, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.

[0007] FIG. 1 illustrates an example computing system having a memory sub-system in accordance with some embodiments of the present disclosure.

[0008] FIG. 2 illustrates a system configured to train a predictive model to identify commands that can cause increased latency in the execution of other commands.

[0009] FIG. 3 illustrates a system having a predictive model to pre-fetch data of commands from non-volatile media to buffer memory.

[0010] FIG. 4 shows a method to train a predictive model to identify high impact commands.

[0011] FIG. 5 shows a method to pre-fetch data for high impact commands based on the predictions of a predictive model.

[0012] FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

DETAILED DESCRIPTION

[0013] At least some aspects of the present disclosure are directed to predictive pre-fetching of data for commands that can increase execution latency of other commands executed concurrently in a data storage device. For example, a predictive model is configured in a data storage device to identify such commands that can cause significant delays in the execution of other commands. The data used by the identified commands can be pre-fetched from non-volatile storage media of the data storage device to buffer memory of the storage device.
Pre-fetching the data to the buffer memory can reduce, minimize, and/or eliminate the delays caused by the identified commands in the execution of other commands. The predictive model can be established by applying machine learning techniques on a training set of commands, using the execution latency data of the commands in the training set.

[0014] In general, infrastructure commands can be used to manage, configure, administrate, or report on the status of, the infrastructure in a data storage system. Certain infrastructure commands can often cause unexpected increases in latency in the execution of other commands that are not related to such commands. Such infrastructure commands can have high latency. When certain resources in the data storage system are used for the execution of the high latency infrastructure commands, the resources become unavailable for the execution of other commands, causing apparently random delays in the execution of other commands that may use the resources.

[0015] In at least some embodiments disclosed herein, a predictive model is configured to predict the infrastructure commands that are most likely to increase the latency of other commands. The prediction is based on some characteristics of commands that are currently queued for processing in the data storage system. The prediction allows the data storage system to pre-fetch data from non-volatile storage media to buffer memory for the predicted infrastructure commands. After the pre-fetching of the data for the predicted commands, the likelihood of the predicted infrastructure commands using resources during their execution to access the non-volatile storage media, making those resources unavailable for the execution of other commands, is reduced. Therefore, the impact of the execution of the infrastructure commands on other commands can be reduced, minimized, and/or eliminated.

[0016] For example, a supervised machine learning technique can be applied to a group of commands in a training data set.
The training data set can have a mixed set of infrastructure commands of different types and other commands of different types. The training set of commands can represent an example of a workload for a data storage device/system, or a real workload during a period of service. Some parameters of the commands in the training set can be used as input parameters to the predictive model, such as the types of the commands, the regions in the storage system being accessed by the commands, etc. The measured latency in the execution of the commands in the training set can be used to identify infrastructure commands that have high impact on the execution of other commands and infrastructure commands that do not. For example, high impact commands cause more than a threshold amount of increased latency in the execution of other commands; and low impact commands cause no more than the threshold amount of increase in the latency of other commands. The supervised machine learning technique can be used to train the predictive model by adjusting the parameters in the predictive model to minimize the differences between the classification/prediction of the infrastructure commands identified by the predictive model and the classification of the infrastructure commands identified from the latency data in the training data set.

[0017] For example, the predictive model can be trained to classify a sequence of commands. Each infrastructure command in the sequence can be classified as either having or not having the potential for high impact on the other commands in the sequence.

[0018] For example, the predictive model can be trained to predict, for a sequence of commands, the latency increases caused by an infrastructure command in the sequence in the execution of other commands in the sequence.
The predicted increases in execution latency can be compared with a threshold to classify the infrastructure command as either a high impact command or a low impact command.

[0019] For example, the predictive model can be trained to predict, for a sequence of commands, an infrastructure command that will enter the data storage device/system and cause more than a threshold amount of increase in the execution latency of some of the commands in the sequence. The prediction can be made based on the pattern of infrastructure commands and other commands.

[0020] For example, the predictive model can be based on statistical correlation using logistic regression and/or an artificial neural network.

[0021] For example, different training sets can be used for data storage systems having different structures and different configurations.

[0022] A data storage system of a particular design can be initially configured with a predictive model trained according to a typical workload of commands for the design.
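As one concrete illustration of the logistic-regression option mentioned above, the sketch below trains a tiny classifier by plain gradient descent to flag high impact commands; the two features, the labels, and the training data are all synthetic assumptions standing in for the measured latency data described above, not the disclosure's actual model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=3000):
    """Fit logistic-regression weights by per-sample gradient descent
    on the log-loss; `labels` uses 1 for high impact, 0 for low impact."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def is_high_impact(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

# Synthetic training data: feature vector = [is_infrastructure_command,
# fraction_of_media_touched]; label 1 marks commands whose execution
# delayed concurrent commands by more than a threshold.
X = [[1, 0.9], [1, 0.8], [1, 0.1], [0, 0.9], [0, 0.2], [0, 0.1]]
y = [1, 1, 0, 0, 0, 0]
w, b = train(X, y)
```

In a deployed device the features would come from the queued commands (command type, accessed region) and the labels from the measured latency increases, and the fitted weights would play the role of the adjustable model parameters the training step minimizes over.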
Subsequently, the predictive model can be further trained and/or updated for the typical workload of the data storage system in a computer system and/or based on a recent real-time workload of the data storage system.

[0023] Optionally, the data storage system can be further configured to monitor differences between the real-time predictions made using the predictive model and subsequent measurements of increased latency in command executions, to further train the predictive model periodically to adapt its predictive capability in accordance with the real-time workload.

[0024] During the usage of the data storage system that has a predictive model, the incoming commands to be executed by the data storage system can be provided as input to the predictive model to identify a table of commands scheduled/suggested for pre-fetching.

[0025] For example, the predictive model can be used to process a predetermined number of commands pending in one or more queues for execution (e.g., 1000 commands), or once every predetermined time period (e.g., 10 ms). During the use of the predictive model, the commands pending for execution by the data storage system can be fed into the predictive model to identify a table of high impact commands for pre-fetching. The data storage system is configured to pre-fetch the data that is likely to be used by the high impact commands in the table before the actual execution of the high impact commands, such that the impact of the execution of the high impact commands is distributed to a large number of other commands.
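As a sketch of the table-driven pre-fetching just described, the class below scans pending commands with a predictive model and buffers data for the ones flagged as high impact; `read_media`, `predict_high_impact`, the command fields, and the validity flag are all illustrative assumptions, not the disclosure's implementation.

```python
from collections import OrderedDict

class PreFetcher:
    """Minimal sketch of a pre-fetch table keyed by command id.
    `read_media` stands in for the non-volatile media access path and
    `predict_high_impact` for the predictive model; both are hypothetical."""

    def __init__(self, read_media, predict_high_impact):
        self.read_media = read_media
        self.predict = predict_high_impact
        self.buffer = OrderedDict()  # command id -> (data, valid flag)

    def scan(self, pending_commands):
        """Run the predictive model over queued commands and pre-fetch
        data for those flagged as high impact."""
        for cmd in pending_commands:
            if self.predict(cmd) and cmd["id"] not in self.buffer:
                data = self.read_media(cmd["addr"])
                self.buffer[cmd["id"]] = (data, True)

    def take(self, cmd_id):
        """Consume pre-fetched data at execution time, if still valid."""
        data, valid = self.buffer.pop(cmd_id, (None, False))
        return data if valid else None

media = {0x10: b"meta"}
pf = PreFetcher(lambda addr: media[addr], lambda cmd: cmd["type"] == "infra")
pf.scan([{"id": 1, "type": "infra", "addr": 0x10},
         {"id": 2, "type": "read", "addr": 0x20}])
```

Because `scan` runs ahead of execution, the media reads it issues can be scheduled onto resources that are idle during the execution of other commands, which is how the latency impact gets spread out.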
Further, the pre-fetching can be configured to use spare resources that are not used/required for the execution of the other commands, which are executed before the high impact commands; such an arrangement can reduce the overall impact of the high impact commands on the other commands.

[0026] In some instances, the predictive model can predict an infrastructure command before the host system sends the infrastructure command to the data storage system and/or before the infrastructure command is retrieved from a queue for execution. The data storage system can use a flag to indicate whether or not the pre-fetched data for the predicted infrastructure command is valid.

[0027] In general, a memory sub-system can also be referred to as a "memory device". An example of a memory sub-system is a memory module that is connected to a central processing unit (CPU) via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc.

[0028] Another example of a memory sub-system is a data storage device/system that is connected to the central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD).

[0029] In some embodiments, the memory sub-system is a hybrid memory/storage sub-system that provides both memory functions and storage functions. In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.

[0030] FIG.
1 illustrates an example computing system having a memory sub-system (110) in accordance with some embodiments of the present disclosure.

[0031] The memory sub-system (110) can include non-volatile media (109) that includes memory components. In general, memory components can be volatile memory components, non-volatile memory components, or a combination of such. In some embodiments, the memory sub-system (110) is a data storage system. An example of a data storage system is an SSD. In other embodiments, the memory sub-system (110) is a memory module. Examples of a memory module include a DIMM, an NVDIMM, and an NVDIMM-P. In some embodiments, the memory sub-system (110) is a hybrid memory/storage sub-system.

[0032] In general, the computing environment can include a host system (120) that uses the memory sub-system (110). For example, the host system (120) can write data to the memory sub-system (110) and read data from the memory sub-system (110).

[0033] The host system (120) can be part of a computing device, such as a desktop computer, laptop computer, network server, mobile device, or such computing device that includes a memory and a processing device. The host system (120) can include or be coupled to the memory sub-system (110) so that the host system (120) can read data from or write data to the memory sub-system (110). The host system (120) can be coupled to the memory sub-system (110) via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.
Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, etc. The physical host interface can be used to transmit data and/or commands between the host system (120) and the memory sub-system (110). The host system (120) can further utilize an NVM Express (NVMe) interface to access the non-volatile media (109) when the memory sub-system (110) is coupled with the host system (120) by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system (110) and the host system (120). FIG. 1 illustrates a memory sub-system (110) as an example. In general, the host system (120) can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

[0034] The host system (120) includes a processing device (118) and a controller (116). The processing device (118) of the host system (120) can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc. In some instances, the controller (116) can be referred to as a memory controller, a memory management unit, and/or an initiator. In one example, the controller (116) controls the communications over a bus coupled between the host system (120) and the memory sub-system (110).

[0035] In general, the controller (116) can send commands or requests to the memory sub-system (110) for desired access to the non-volatile media (109). The controller (116) can further include interface circuitry to communicate with the memory sub-system (110).
The interface circuitry can convert responses received from the memory sub-system (110) into information for the host system (120).

[0036] The controller (116) of the host system (120) can communicate with the controller (115) of the memory sub-system (110) to perform operations such as reading data, writing data, or erasing data in the non-volatile media (109) and other such operations. In some instances, the controller (116) is integrated within the same package as the processing device (118). In other instances, the controller (116) is separate from the package of the processing device (118). The controller (116) and/or the processing device (118) can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof. The controller (116) and/or the processing device (118) can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

[0037] The non-volatile media (109) can include any combination of the different types of non-volatile memory components. In some instances, volatile memory components can also be used. An example of non-volatile memory components includes a negative-and (NAND) type flash memory. A memory component in the media (109) can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system (120). Although non-volatile memory components such as NAND type flash memory are described, the memory components used in the non-volatile media (109) can be based on any other type of memory. Further, a volatile memory can be used.
In some embodiments, the memory components in the media (109) can include, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, ferroelectric transistor random-access memory (FeTRAM), ferroelectric RAM (FeRAM), conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), nanowire-based non-volatile memory, memory that incorporates memristor technology, or a cross-point array of non-volatile memory cells, or any combination thereof. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components in the media (109) can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data.

[0038] The controller (115) of the memory sub-system (110) can communicate with the memory components in the media (109) to perform operations such as reading data, writing data, or erasing data at the memory components and other such operations (e.g., in response to commands scheduled on a command bus by the controller (116)). The controller (115) can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof.
The controller (115) can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller (115) can include a processing device (117) (e.g., a processor) configured to execute instructions stored in the buffer memory (119). In the illustrated example, the buffer memory (119) of the controller (115) includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system (110), including handling communications between the memory sub-system (110) and the host system (120). In some embodiments, the controller (115) can include memory registers storing memory pointers, fetched data, etc. The controller (115) can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system (110) in FIG. 1 has been illustrated as including the controller (115), in another embodiment of the present disclosure, a memory sub-system (110) may not include a controller (115), and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).

[0039] In general, the controller (115) can receive commands or operations from the host system (120) and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components in the media (109). The controller (115) can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components in the media (109).
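As a concrete illustration of the last of those responsibilities, the translation between a logical block address and a physical block address can be sketched as a mapping table. This is a deliberately minimal sketch with hypothetical names (`TranslationTable`, `l2p`, `next_free`); a real flash translation layer also interacts with wear leveling and garbage collection.

```python
# Minimal sketch of logical-to-physical block address translation.
# All names here are illustrative assumptions, not actual firmware.

class TranslationTable:
    def __init__(self):
        self.l2p = {}        # logical block address -> physical block address
        self.next_free = 0   # naive allocator for free physical blocks

    def write(self, lba):
        """Map (or remap) a logical block to a fresh physical block,
        as with out-of-place writes on NAND media."""
        pba = self.next_free
        self.next_free += 1
        self.l2p[lba] = pba
        return pba

    def read(self, lba):
        """Resolve a logical block address to its current physical block."""
        return self.l2p[lba]

table = TranslationTable()
table.write(lba=7)   # first write of LBA 7 lands in physical block 0
table.write(lba=7)   # a rewrite of LBA 7 is remapped to physical block 1
```

The point of the indirection is that the host keeps addressing the same logical block while the controller is free to move the data among physical blocks.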
The controller (115) can further include host interface circuitry to communicate with the host system (120) via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components in the media (109), as well as convert responses associated with the memory components into information for the host system (120).

[0040] The memory sub-system (110) can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system (110) can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller (115) and decode the address to access the memory components of the media (109).

[0041] The computing system includes a data pre-fetcher (113) in the memory sub-system (110) that can retrieve data from the non-volatile media (109) to the buffer memory (119) for predicted high impact commands. The predicted high impact commands can cause more than a threshold amount of increase in execution latency of other commands when the data is not pre-fetched to the buffer memory (119) before the execution of the high impact commands.

[0042] In some embodiments, the controller (115) in the memory sub-system (110) includes at least a portion of the data pre-fetcher (113). In other embodiments, or in combination, the controller (116) and/or the processing device (118) in the host system (120) includes at least a portion of the data pre-fetcher (113). For example, the controller (115), the controller (116), and/or the processing device (118) can include logic circuitry implementing the data pre-fetcher (113). For example, the controller (115), or the processing device (118) (processor) of the host system (120), can be configured to execute instructions stored in memory for performing the operations of the data pre-fetcher (113) described herein.
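The behavior described above, in which the pre-fetcher inspects queued commands, asks the predictive model which are likely to be high impact, and stages their data in the buffer memory ahead of execution, can be sketched as follows. This is a minimal Python illustration: the class, its methods, and the dict-based stand-ins for the media (109) and buffer memory (119) are assumptions for exposition, not the actual controller firmware.

```python
# Illustrative sketch of a data pre-fetcher; class, method, and field names
# are assumptions for exposition, not actual firmware interfaces.
from dataclasses import dataclass

@dataclass
class Command:
    cmd_id: int
    kind: str        # e.g., "read", "write", "infrastructure"
    address: int
    length: int

class DataPreFetcher:
    def __init__(self, predict_high_impact, media, buffer_memory):
        self.predict_high_impact = predict_high_impact  # trained model stand-in
        self.media = media            # dict standing in for non-volatile media
        self.buffer = buffer_memory   # dict standing in for buffer memory
        self.valid = {}               # flag per command: is pre-fetched data valid?

    def observe(self, queued_commands):
        """Pre-fetch data for predicted high impact commands before they
        are retrieved from the queue for execution."""
        for cmd in queued_commands:
            if self.predict_high_impact(cmd):
                data = [self.media.get(cmd.address + i) for i in range(cmd.length)]
                self.buffer[cmd.cmd_id] = data
                self.valid[cmd.cmd_id] = True  # mark the staged data as valid

# Tiny demo: treat infrastructure commands as the predicted high impact ones.
media = {addr: addr * 2 for addr in range(16)}
prefetcher = DataPreFetcher(lambda c: c.kind == "infrastructure", media, {})
queue = [Command(1, "read", 0, 2), Command(2, "infrastructure", 4, 3)]
prefetcher.observe(queue)
# Only command 2 is staged in the buffer, with its valid flag set.
```

The `valid` flag mirrors the flag mentioned in paragraph [0026]: it would be cleared if the underlying media region changed between the pre-fetch and the command's execution.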
In some embodiments, the data pre-fetcher (113) is implemented in an integrated circuit chip disposed in the memory sub-system (110). In other embodiments, the data pre-fetcher (113) is part of an operating system of the host system (120), a device driver, or an application.

[0043] The memory sub-system (110) can have a queue (123) for commands of one category, and another queue (125) for commands of another category. For example, the queue (123) can be configured for typical input/output commands, such as read commands and write commands. The queue (125) can be configured for infrastructure commands that are not typical input/output commands. Some of the infrastructure commands can be high impact commands that cause more than a threshold amount of latency increase in the execution of certain commands in the queue (123). The memory sub-system (110) can include one or more completion queues (121) for reporting, to the host system (120), the results of the execution of commands in the command queues (123 and 125). In some implementations, one or more queues can be created in response to commands from the host system (120). Thus, the memory sub-system (110) in general is not limited to the particular number of queues illustrated in FIG. 1.

[0044] The data pre-fetcher (113) is configured to predict/classify some of the commands of the category in the queue (125) as high impact commands. Before a high impact command is retrieved from the command queue (125) for execution, the data pre-fetcher (113) is configured to load data that may be used by the high impact command from the non-volatile media (109) to the buffer memory (119). The loading of the data in preparation for the execution of the high impact command can be performed using resources that are not used in the execution of commands from the queue (123), to improve resource utilization and reduce the overall impact of the high impact command.
Alternatively, or in combination, the loading of the data in preparation for the execution of the high impact command can be performed to spread its impact among the execution of more commands from the queue (123), such that its impact is not concentrated on one or more commands that are executed concurrently with the execution of the high impact command.

[0045] FIG. 1 illustrates an example where high impact commands are known to be in a specific queue (e.g., 125). In other implementations, different categories of commands can be mixed in a same queue. For example, an infrastructure command can be placed in the same queue as non-infrastructure commands in some systems; and the techniques of the present disclosure can also be used to predict the high impact commands and pre-fetch data to the buffer memory for the high impact commands. Thus, the application of the techniques of the present disclosure is not limited to a specific command queue structure.

[0046] FIG. 2 illustrates a system configured to train a predictive model (131) to identify commands that can cause increased latency in the execution of other commands.

[0047] For example, the predictive model (131) of FIG. 2 can be configured in the data pre-fetcher (113) in a memory sub-system (110) of FIG. 1.

[0048] In FIG. 2, a training set of commands (137) is used to capture the patterns of latency impacts of different types of commands on each other. The training set of commands (137) can be an example of commands representing a typical workload for a memory sub-system (110), or the actual workload of a memory sub-system (110) during a particular period of usage in a computer system of FIG. 1.

[0049] During the execution of the commands in the training set in the memory sub-system (110) (e.g., without using the data pre-fetcher (113)), the execution latency data (139) of the commands in the training set is measured.
The execution latency data (139) can be used to identify high impact commands (135) that cause increased latency.

[0050] For example, the average execution latency of commands of a specific type can be computed from the execution latency data (139). For each respective command in the training set, the increased latency for the execution of the respective command can be computed from the difference between the actual execution latency of the command and the average execution latency of commands that are of the same type as the command. When the latency increase is above a threshold, the command is considered to have been highly impacted. Within the time window of the execution of a command that has been highly impacted in latency, other commands executed in the time window and/or concurrently with the execution of the command can be examined to identify a high impact command that causes the high impact. For example, an infrastructure command executed in the time window can be identified as the source of the high impact; and thus, the infrastructure command can be identified as a high impact command. For example, a command of a particular category and executed in the time window can be identified as the source of the high impact; and thus, the command can be identified as a high impact command. For example, a command of a type with an average execution latency above a threshold and executed in the time window can be identified as the source of the high impact; and thus, the command can be identified as a high impact command.

[0051] In FIG. 2, the predictive model (131) is configured to identify high impact commands (141) that are predicted to cause increased latency from the training set of commands. The predictive model (131) computes the predictions (141) based on parameters of the commands in the training set and/or the order in which the commands appear in the training set.
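The labeling procedure of paragraph [0050] can be written out as a short routine: compute the per-type average latency, flag commands whose latency exceeds their type's average by more than a threshold, and attribute the impact to a suspect command (here, hypothetically, an infrastructure command) whose execution window overlaps. The record format and all names below are illustrative assumptions, not the patent's data model.

```python
# Illustrative labeling of high impact commands from execution latency data.
from collections import defaultdict

def label_high_impact(records, threshold, suspect_kinds=("infrastructure",)):
    """records: dicts with 'cmd_id', 'kind', 'latency', 'start', 'end'.
    Returns the set of cmd_ids identified as high impact commands."""
    by_kind = defaultdict(list)
    for r in records:
        by_kind[r["kind"]].append(r["latency"])
    avg = {k: sum(v) / len(v) for k, v in by_kind.items()}

    high_impact = set()
    for r in records:
        if r["latency"] - avg[r["kind"]] > threshold:   # an impacted command
            for other in records:                       # examine its time window
                overlaps = other["start"] < r["end"] and r["start"] < other["end"]
                if other is not r and other["kind"] in suspect_kinds and overlaps:
                    high_impact.add(other["cmd_id"])
    return high_impact

records = [
    {"cmd_id": 1, "kind": "read", "latency": 10, "start": 0, "end": 10},
    {"cmd_id": 2, "kind": "read", "latency": 30, "start": 5, "end": 35},
    {"cmd_id": 3, "kind": "infrastructure", "latency": 50, "start": 4, "end": 54},
]
flagged = label_high_impact(records, threshold=5)  # the infrastructure command
```

In the demo, the second read exceeds the 20-unit average read latency by 10, so the overlapping infrastructure command is labeled as the high impact command.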
The parameters can include the types of the commands in the training set and/or the address areas/regions accessed by the commands. Supervised machine learning (133) is applied to the predictive model (131) to reduce or minimize the differences between the high impact commands (135) identified from the execution latency data (139) and the high impact commands (141) predicted by the predictive model (131).

[0052] After the training of the predictive model (131) using a technique of supervised machine learning (133), the predictive model (131) can be used in a data pre-fetcher (113) of a memory sub-system (110) of FIG. 1 and/or a system as illustrated in FIG. 3.

[0053] FIG. 3 illustrates a system having a predictive model (131) to pre-fetch data of commands from non-volatile media (109) to buffer memory (119). For example, the system of FIG. 3 can be the memory sub-system (110) of FIG. 1.

[0054] In FIG. 3, commands in one or more queues (e.g., 123 and/or 125) are provided as inputs to the predictive model (131) to generate predictions of high impact commands (141) that can cause increased latency. A data pre-fetcher (113) is configured to retrieve data from the non-volatile media (109) to the buffer memory (119) prior to the actual execution of the high impact commands (141) predicted by the predictive model (131).

[0055] Typically, accessing the non-volatile media (109) for an amount of data takes longer than accessing the buffer memory (119). Further, the system can have fewer resources for accessing the non-volatile media (109) for concurrently executing multiple commands than for accessing the buffer memory (119). Thus, when the data to be used by a high impact command is pre-fetched into the buffer memory (119), its impact on the concurrent execution of other commands can be reduced.

[0056] FIG.
4 shows a method to train a predictive model to identify commands that have a high probability of causing significant delay in the execution of other commands. For example, the method of FIG. 4 can be implemented in a computer system of FIG. 1 using the technique discussed in connection with FIG. 2.

[0057] At block 151, first commands (e.g., 137) are executed in a data storage system.

[0058] The first commands can be a sample of commands that are typical in data storage systems having the same or similar structure as the data storage system. Optionally, the first commands can be the real-life workload of the data storage system in a period of time.

[0059] At block 153, the data storage system (or a host connected to the data storage system) measures the execution latency of the first commands. For example, the execution latency of a command can be measured as the time duration between the command being retrieved from a queue for execution and the completion of execution of the command in the data storage system. A typical command retrieves data from an address specified in the command, or writes data at an address specified in the command.

[0060] At block 155, a computing device is used to identify second commands (e.g., 135) that cause more than a threshold amount of increase in execution latency in some of the first commands. The computing device can be a computer that is separate from the data storage system and/or the host system of the data storage system, the host system of the data storage system, or the controller of the data storage system.

[0061] For example, the second commands can be identified by computing the average latency for different command types, identifying impacted commands that have execution latency exceeding the averages of their respective command types by more than a threshold amount, and identifying the second commands that have been executed concurrently with the impacted commands and that have a predetermined characteristic.
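The pipeline of blocks 151, 153, and 155 (execute, measure, label) supplies the targets for the supervised machine learning (133) described in connection with FIG. 2. A toy version of fitting a predictive model to those labels, using only the command type as a feature, might look like the following; a real model could also use addresses and command ordering, and every name here is an illustrative assumption.

```python
# Toy supervised fit: learn, per command type, whether commands of that
# type were labeled high impact in the training set. Purely illustrative.

def train_predictive_model(commands, high_impact_ids, min_rate=0.5):
    """commands: (cmd_id, kind) pairs from the training set;
    high_impact_ids: labels derived from the measured latency data.
    Returns a predictor: kind -> True if predicted high impact."""
    counts, hits = {}, {}
    for cmd_id, kind in commands:
        counts[kind] = counts.get(kind, 0) + 1
        if cmd_id in high_impact_ids:
            hits[kind] = hits.get(kind, 0) + 1
    # Flag a type when at least min_rate of its commands were labeled.
    flagged = {k for k in counts if hits.get(k, 0) / counts[k] >= min_rate}
    return lambda kind: kind in flagged

training_set = [(1, "read"), (2, "read"),
                (3, "infrastructure"), (4, "infrastructure")]
predict = train_predictive_model(training_set, high_impact_ids={3, 4})
```

Minimizing the mismatch between predictions and labels, as blocks 157 and 159 describe, reduces to choosing the flagged set (or, in a richer model, the parameters) that best reproduces the labels.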
For example, the predetermined characteristic can be a pre-defined command category (e.g., infrastructure commands), commands of a type having an average latency that is above a threshold, and/or other attributes.

[0062] At block 157, the computing device identifies third commands (e.g., 141) using a predictive model (131) based on the first commands.

[0063] At block 159, the computing device applies supervised machine learning (133) to the predictive model (131) to reduce differences between the second commands (e.g., 135) and the third commands (141).

[0064] FIG. 5 shows a method to pre-fetch data for high impact commands based on the predictions of a predictive model (e.g., 131), which can be trained using the method of FIG. 4.

[0065] For example, the method of FIG. 5 can be implemented in a computer system of FIG. 1 using the technique discussed in connection with FIG. 3.

[0066] At block 171, a data pre-fetcher (113) of a data storage system (e.g., 110) receives identification of commands that are queued for execution in the data storage system.

[0067] At block 173, the data pre-fetcher (113) provides the commands as input to the predictive model (131).

[0068] At block 175, the data pre-fetcher (113) identifies, using the predictive model (131) and based on the commands as input, at least one command for pre-fetching.

[0069] Prior to the command being retrieved from a queue for execution in the data storage system, the data pre-fetcher (113) retrieves at least a portion of the data to be used in execution of the command at block 177 and stores the retrieved portion of the data in a buffer memory (119) of the data storage system at block 179.

[0070] Concurrently, a controller (115) of the data storage system retrieves some of the queued commands at block 181 and executes the retrieved commands at block 183.

[0071] Preferably, the retrieving (177) and storing (179) of the portion of data for the pre-fetched command are performed using resources that are not required/used in
the concurrent execution (183) of the commands; such an arrangement reduces the overall impact of the command on other commands as a whole. Alternatively, or in combination, the impact of the retrieving (177) and storing (179) of the portion of data for the pre-fetched command is distributed among the execution (183) of many commands, such that the impact on each individual command is reduced and small.

[0072] Subsequently, the controller (115) of the data storage system retrieves the command from a queue at block 185 and executes the command using at least the portion of data in the buffer memory at block 187.

[0073] Since at least the portion of data is in the buffer memory, the execution of the command has less impact on the execution latency of other commands that are executed concurrently with the execution of the command.

[0074] Optionally, the data pre-fetcher (113) can include the supervised machine learning (133) functionality illustrated in FIG. 2 and/or discussed in connection with FIG. 4. For example, the data pre-fetcher (113) can measure the execution latency (139) of commands, identify commands (135) causing increased latency, and use the supervised machine learning (133) to minimize the number of commands that are predicted to not cause increased latency (141) but are found to have caused increased latency (135) based on the measured execution latency data (139).

[0075] In some implementations, a communication channel between the processing device (118) and a memory sub-system includes a computer network, such as a local area network, a wireless local area network, a wireless personal area network, a cellular communications network, a broadband high-speed always-connected wireless communication connection (e.g., a current or future generation of mobile network link); and the processing device (118) and the memory sub-system can be configured to communicate with each other using data storage management and usage commands similar to those in the NVMe protocol.

[0076] A memory
sub-system in general can have non-volatile storage media. Examples of non-volatile storage media include memory cells formed in an integrated circuit and magnetic material coated on rigid disks. Non-volatile storage media can maintain the data/information stored therein without consuming power. Memory cells can be implemented using various memory/storage technologies, such as NAND logic gates, NOR logic gates, phase-change memory (PCM), magnetic memory (MRAM), resistive random-access memory, cross point storage and memory devices (e.g., 3D XPoint memory). A cross point memory device uses transistor-less memory elements, each of which has a memory cell and a selector that are stacked together as a column. Memory element columns are connected via two perpendicular layers of wires, where one layer is above the memory element columns and the other layer is below the memory element columns. Each memory element can be individually selected at a cross point of one wire on each of the two layers. Cross point memory devices are fast and non-volatile and can be used as a unified memory pool for processing and storage.

[0077] The controller (e.g., 115) of a memory sub-system (e.g., 110) can run firmware to perform operations responsive to the communications from the processing device (118). Firmware in general is a type of computer program that provides control, monitoring, and data manipulation of engineered computing devices.

[0078] Some embodiments involving the operation of the controller (115) and/or the data pre-fetcher (113) can be implemented using computer instructions executed by the controller (115), such as the firmware of the controller (115). In some instances, hardware circuits can be used to implement at least some of the functions.
The firmware can be initially stored in the non-volatile storage media, or another non-volatile device, and loaded into the volatile DRAM and/or the in-processor cache memory for execution by the controller (115).

[0079] A non-transitory computer storage medium can be used to store instructions of the firmware of a memory sub-system (e.g., 110). When the instructions are executed by the controller (115) and/or the processing device (117), the instructions cause the controller (115) and/or the processing device (117) to perform a method discussed above.

[0080] FIG. 6 illustrates an example machine of a computer system (200) within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system (200) can correspond to a host system (e.g., the host system (120) of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system (110) of FIG. 1), or can be used to perform the operations of a data pre-fetcher (113) (e.g., to execute instructions to perform operations corresponding to the data pre-fetcher (113) described with reference to FIGS. 1-5). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

[0081] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

[0082] The example computer system (200) includes a processing device (202), a main memory (204) (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), etc.), and a data storage system (218), which communicate with each other via a bus (230) (which can include multiple buses).

[0083] Processing device (202) represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device (202) can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device (202) is configured to execute instructions (226) for performing the operations and steps discussed herein. The computer system (200) can further include a network interface device (208) to communicate over the network (220).

[0084] The data storage system (218) can include a machine-readable storage medium (224) (also known as a computer-readable medium) on which is stored one or more sets of instructions (226) or software embodying any one or more of the methodologies or functions described herein.
The instructions (226) can also reside, completely or at least partially, within the main memory (204) and/or within the processing device (202) during execution thereof by the computer system (200), the main memory (204) and the processing device (202) also constituting machine-readable storage media. The machine-readable storage medium (224), data storage system (218), and/or main memory (204) can correspond to the memory sub-system (110) of FIG. 1.

[0085] In one embodiment, the instructions (226) include instructions to implement functionality corresponding to a data pre-fetcher (113) (e.g., the data pre-fetcher (113) described with reference to FIGS. 1-5). While the machine-readable storage medium (224) is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

[0086] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

[0087] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

[0088] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

[0089] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method.
The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

[0090] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.

[0091] In this description, various functions and operations are described as being performed by or caused by computer instructions to simplify description. However, those skilled in the art will recognize what is meant by such expressions is that the functions result from execution of the computer instructions by one or more controllers or processors, such as a microprocessor. Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as using an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions.
Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

[0092] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Examples may include techniques to schedule a workload to one or more computing resources of a data center. A class is determined for the workload based on a workload type or profile for the workload. Predicted operating values for at least one of the one or more computing resources are determined based on the class, and the predicted operating values are used as inputs in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources. The workload is then scheduled to the at least one or more computing resources based on the evaluation.
CLAIMS:

1. An apparatus comprising:
circuitry;
a distribution logic for execution by the circuitry to receive an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center;
a classify logic for execution by the circuitry to determine a class for the workload based on a workload type or profile for the workload;
a model logic for execution by the circuitry to determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources; and
a schedule logic for execution by the circuitry to schedule the workload to the at least one of the one or more computing resources based on the evaluation.

2. The apparatus of claim 1, comprising the model logic to input the predicted operating values in at least one scoring model to evaluate the workload comprises the model logic to:
determine separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes;
multiply the separate weight factors for individual attributes with corresponding predicted operating values;
sum products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value; and
evaluate the workload based on the scoring model predicted value.

3. The apparatus of claim 2, comprising the separate weight factors normalized to [0, 1].

4. The apparatus of claim 2, comprising the plurality of attributes including a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

5.
The apparatus of claim 4, comprising a first predicted operating value corresponding to the thermal attribute includes a predicted operating temperature for at least one of the one or more computing resources, a second predicted operating value corresponding to the performance attribute includes a predicted cache miss rate for at least one of the one or more computing resources, a third predicted operating value corresponding to the power attribute includes a predicted power utilization rate for at least one of the one or more computing resources and a fourth predicted operating value corresponding to the reliability attribute includes a failure probability for at least one of the one or more computing resources.

6. The apparatus of claim 5, comprising the model logic to apply one or more constraining rules to the scoring model predicted value to cause the scoring model predicted value to be set to a value of 0 if at least one of the one or more constraining rules are met, the one or more constraining rules including:
the predicted operating temperature exceeding a throttle temperature threshold for at least one of the one or more computing resources;
the predicted cache miss rate exceeding a cache miss rate threshold;
the predicted power utilization rate exceeding a power utilization rate threshold; or
the failure probability exceeding a failure probability threshold.

7. The apparatus of claim 1, the one or more computing resources comprising one or more of a processor, a memory device, a storage device, a power module, a cooling module, a network input/output device, a network switch or a virtual machine.

8.
The apparatus of claim 1, comprising:
a collect logic for execution by the circuitry to gather training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads;
a store logic for execution by the circuitry to store the gathered training operating information; and
the classify logic to use the stored and gathered training operating information to:
cluster group the training operating information via k-means clustering;
learn a classifier based on the cluster group of the training operating information via a support vector machine (SVM); and
assign a regression fit function to classified training operating information classified by the classifier based on a least squares approach.

9. The apparatus of claim 8, the model logic to determine the one or more predicted operating values based on the class comprises the model logic to use the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

10. The apparatus of claim 8, the workload type or profile comprising one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

11.
The apparatus of claim 8, the training operating information comprising an inlet or an outlet temperature for a platform or rack housing the one or more computing resources, a power consumption for a platform or rack housing the one or more computing resources, processor cache miss information, network data throughput latency information, memory access latency information, throttling activation information for the one or more computing resources, margin to a peak operating temperature threshold for the one or more computing resources, or a volumetric airflow for a platform and/or rack housing the one or more computing resources.

12. A method comprising:
receiving, at a processor circuit, an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center;
determining a class for the workload based on a workload type or profile for the workload;
determining predicted operating values for at least one of the one or more computing resources based on the class and inputting the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources; and
scheduling the workload to the at least one of the one or more computing resources based on the evaluation.

13.
The method of claim 12, inputting the predicted operating values in at least one scoring model to evaluate the workload comprises:
determining separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes;
multiplying the separate weight factors for individual attributes with corresponding predicted operating values;
summing products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value; and
evaluating the workload based on the scoring model predicted value.

14. The method of claim 13, comprising the separate weight factors normalized to [0, 1].

15. The method of claim 13, comprising the plurality of attributes including a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

16. The method of claim 12, determining the one or more predicted operating values based on the class includes:
gathering training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads;
cluster grouping the training operating information using k-means clustering;
learning a classifier based on the cluster grouping of the training operating information using a support vector machine (SVM);
assigning a regression fit function to classified training operating information classified by the classifier based on a least squares approach; and
using the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

17.
The method of claim 16, the workload type or profile comprising one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

18. At least one machine readable medium comprising a plurality of instructions that in response to being executed by a system causes the system to carry out a method according to any one of claims 12 to 17.

19. At least one machine readable medium comprising a plurality of instructions that in response to being executed by a system causes the system to:
receive an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center;
determine a class for the workload based on a workload type or profile for the workload;
determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources; and
schedule the workload to the at least one of the one or more computing resources based on the evaluation.

20.
The at least one machine readable medium of claim 19, comprising the instructions to cause the system to input the predicted operating values in at least one scoring model to evaluate the workload further comprises the instructions to cause the system to:
determine separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes;
multiply the separate weight factors for individual attributes with corresponding predicted operating values;
sum products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value; and
evaluate the workload based on the scoring model predicted value.

21. The at least one machine readable medium of claim 20, comprising the separate weight factors normalized to [0, 1].

22. The at least one machine readable medium of claim 20, comprising the plurality of attributes including a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

23.
The at least one machine readable medium of claim 19, the instructions to cause the system to determine the one or more predicted operating values based on the class includes the instructions to further cause the system to:
gather training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads;
cluster group the training operating information using k-means clustering;
learn a classifier based on the cluster grouping of the training operating information using a support vector machine (SVM);
assign a regression fit function to classified training operating information classified by the classifier based on a least squares approach; and
use the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

24. The at least one machine readable medium of claim 23, the workload type or profile comprising one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

25.
The at least one machine readable medium of claim 23, the training operating information comprising an inlet or an outlet temperature for a platform or rack housing the one or more computing resources, a power consumption for a platform or rack housing the one or more computing resources, processor cache miss information, network data throughput latency information, memory access latency information, throttling activation information for the one or more computing resources, margin to a peak operating temperature threshold for the one or more computing resources, or a volumetric airflow for a platform and/or rack housing the one or more computing resources.
COMPUTING RESOURCES WORKLOAD SCHEDULING

RELATED CASE

This application claims the benefit of and priority to previously filed United States Patent Application Serial Number 15/089,481, filed April 2, 2016, entitled "Computing Resources Workload Scheduling", and United States Provisional Patent Application Number 62/244,156, filed on October 20, 2015, which is hereby incorporated by reference in its entirety.

BACKGROUND

Technological advancements in networking have enabled the rise in use of pooled and/or configurable computing resources. These pooled and/or configurable computing resources may include physical infrastructure for large data centers that may support cloud computing networks. The physical infrastructure may include one or more computing systems having processors, memory, storage, networking, power, cooling, etc. Management entities of these data centers may provision computing resources to virtual computing entities such as virtual machines (VMs) to allocate portions of pooled and/or configurable computing resources in order to place or compose these VMs to support, implement, execute or run a workload such as certain types of applications. Various types of applications or application workloads may utilize this allocated infrastructure in a shared manner.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example first system.
FIG. 2 illustrates an example board.
FIG. 3 illustrates an example second system.
FIG. 4 illustrates an example first flow.
FIG. 5 illustrates an example second flow.
FIG. 6 illustrates an example block diagram for an apparatus.
FIG. 7 illustrates an example third flow.
FIG. 8 illustrates an example of a storage medium.
FIG. 9 illustrates an example computing platform.

DETAILED DESCRIPTION

Data centers may be generally composed of a large number of racks that may contain numerous types of hardware or configurable computing resources (e.g., storage, central processing units (CPUs), memory, networking, fans/cooling modules, power units, etc.).
The types of hardware or configurable computing resources deployed in data centers may also be referred to as disaggregate physical elements. The size and number of computing resources and the continual disaggregation of these resources presents practically countless combinations of computing resources that can be configured to fulfill workloads. Also, types of workloads may have different characteristics that may require a different mix of computing resources to efficiently fulfill a given type of workload. It is with respect to these and/or other challenges that the examples described herein are needed.

FIG. 1 illustrates an example system 100. As shown in FIG. 1, system 100 includes racks 110-1 to 110-n, where n is any whole integer value greater than 3. In some examples, as shown in FIG. 1, racks 110-2 to 110-n may be arranged to host respective computing resources (C.R.s) 116-2 to 116-n. Also, each shelf or slot of a given rack may include controllers. For example, racks 110-2 to 110-n may include respective controllers 114-2 to 114-n for each shelf or slot. Racks 110-1 to 110-n may also include cooling module(s) 118-1 to 118-n to provide cooling capabilities to their respective racks. In some examples, cooling module(s) 118-1 to 118-n may be considered as a type of computing resource. Racks 110-1 to 110-n may also include respective rack controllers 112-1 to 112-n. As described more below, controllers located at a given rack may be capable of collecting operating information that may be used to build a scoring model, which may then be used to evaluate an impact of a workload to be fulfilled or supported by computing resources for a data center similar to system 100.

In some examples, a data center management system 108 may communicate with rack controllers 112-1 to 112-n and controllers 114-2 to 114-n via a control plane 150.
Data center management system 108 may also manage C.R.s 116-2 to 116-n located within respective racks 110-2 to 110-n via a resource plane 140. This management may include the provisioning of computing resources to fulfill or support a workload and/or scheduling of a workload to computing resources.

In some examples, a process can be employed of utilizing sensor data or operating information produced by each of the server platforms (e.g., located within racks 110-2 to 110-n) to develop or build an optimization indicator (OI) scoring model that uses a multi-objective algorithm framework to optimize or improve resource allocation by loading a given workload in an efficient manner at a data center level through efficient workload scheduling. Applying this framework may result in more effective or optimal operation of a data center by predicting an impact of scheduling a new or different type of workload and using that information to optimally place the workload in the appropriate resource. The framework may solve for an optimal or comparatively best operating condition (by predicting demands) in the presence of competing objectives that may be related to such attributes as thermal attributes, performance attributes, reliability attributes or power attributes. This framework may render a computationally intractable NP-hard optimization problem into an NP-complete problem and may result in a computationally feasible problem.
The resulting solution is optimal for a specified objective that may be arbitrarily defined based on operator preference.

According to some examples, the OI scoring model may provide an indicator to data center management system 108 (e.g., configured as data center orchestration software) for logic and/or features of data center management system 108 to improve workload orchestration within system 100 subject to imposed constraints such as quality of service (QoS), service level agreement (SLA) objectives or reliability, availability and serviceability (RAS) requirements. The framework may enable improved reliability at a system or data center level while supporting a desired level of performance and throughput consistent with the unique preferences of the operator. In some examples, evaluating objectives related to such attributes as thermal, performance, reliability or power via computer intelligence to optimize cloud workload scheduling may help to achieve more efficient data center management, reduce costs of operation and enhance data center performance.

In some examples, a platform and/or rack monitoring agent (like firmware and OS drivers) may be arranged to run inside a baseboard management controller (BMC), Management Engine (ME), operating system (OS) driver or any 3rd party agent that may gather operating information. The gathered operating information for computing resources may include, but is not limited to, inlet/outlet temperatures for a platform and/or rack housing, power consumption for a platform and/or rack housing, fan speed for a cooling module for a platform and/or rack housing (e.g., rack 110-2), derived volumetric airflow for a platform and/or rack or available margin to maximum designed or pre-determined operating specifications for computing resources (e.g., maximum utilization capacities or peak operating temperature thresholds).
Gathered operating information may also include throttling activation information for such computing resources as CPUs, memory, NW I/O devices, NW switches, VMs, power modules or cooling modules. The throttling activation information may indicate if and for how long throttling was activated over a given time period, for example, responsive to utilization capacities or peak operating temperature thresholds being exceeded over the given time period.

In some examples, the gathered operating information may be conveyed by the platform and/or rack monitoring agent outside the platform via use of an interface and associated protocols such as an intelligent platform management interface (IPMI) described in the IPMI Specification v2.0, rev. 1.1, published in March 2013 (hereinafter the "IPMI specification"), and/or other technologies based on derivatives or extensions of the IPMI specification. Examples are not limited to interfaces and associated protocols as described in the IPMI specification; other interfaces and associated protocols conveying operating information outside the platform are contemplated.

According to some examples, controllers 114-2 to 114-n and/or rack controllers 112-1 to 112-n may be capable of supporting a platform and/or rack monitoring agent. For these examples, control plane 150 may be configured to utilize protocols in accordance with the IPMI specification to relay or send operating information to data center management system 108.

In some examples, logic and/or features of data center management system 108 may collect the operating information received or forwarded from controllers 114-2 to 114-n and/or rack controllers 112-1 to 112-n.
The logic and/or features of data center management system 108 may at least temporarily store the collected operating information in a database or central repository (not shown).

In some examples, logic and/or features of data center management system 108 may feed data associated with collected operating information to generate a learned OI scoring model that may then be used to evaluate an impact of a candidate workload to be fulfilled by at least some of the computing resources hosted by racks 110-2 to 110-n (e.g., further provisioned to support multiple VMs). For these examples, machine learning concepts and algorithms may perform OI scoring model learning or training for a given attribute, and then this learned OI scoring model is used for estimating or predicting a demand (e.g., air flow demand) comparatively placed on different computing resources. A learned OI scoring model may be built or learned using clustering and regression learning mechanisms similar to support vector machine (SVM) (a type of machine learning linear classifier) or k-means clustering to narrow down the target computing resources. A learned OI scoring model may include use of regression methods to arrive at predicted operating values for use as inputs to the learned OI scoring model to arrive at an OI scoring model predicted value that may be used to quantify the impact of the candidate workload on a computing resource or a grouping of computing resources.
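The clustering-plus-regression learning just described can be sketched as follows. This is an illustrative sketch only, not the disclosure's implementation: the one-dimensional training data, cluster count, and linear fit are assumptions, and the SVM classification step that would sit between clustering and regression is omitted for brevity.

```python
import random

# Sketch of the model-learning steps described above: cluster training
# operating information with k-means, then fit a least-squares regression
# used to predict operating values.  Data and cluster count are illustrative.

def kmeans_1d(points, k, iters=20, seed=0):
    """Plain 1-D k-means: returns the final cluster centers."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def least_squares_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training data: (utilization, observed temperature) pairs.
xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
ys = [30.0, 32.0, 34.0, 40.0, 42.0, 44.0]
print(sorted(kmeans_1d(xs, k=2)))   # two utilization clusters, near 0.2 and 0.7
a, b = least_squares_fit(xs, ys)
print(a, b)                         # fit close to temp = 20*utilization + 28
```

In the described pipeline, an SVM classifier learned from the cluster groups would assign new workloads to a cluster, and the regression fit function assigned to the matching cluster would then supply the predicted operating values.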
As described more below, the determined OI scoring model predicted value may be based on weighted operating values related to one or more objectives associated with one or more attributes.

According to some examples, based on the constraints being imposed on a system such as system 100 (e.g., QoS requirements, SLA objectives or RAS requirements) and the attributes of the workload to be scheduled, a learned OI scoring model may be able to identify one or more comparatively best target VMs to support placement or scheduling of the candidate workload.

In some examples, candidate workloads may have workload profiles that may require differing types of computing resources to support. For example, a first workload profile may be processing or CPU intensive, a second workload profile may be memory intensive, a third workload profile may be network switch intensive, a fourth workload profile may be storage intensive or a fifth workload profile may have a balanced profile that has relatively equal CPU, memory, network switch and storage intensities.

According to some examples, an OI scoring model may be arranged to consider multiple objectives for utilizing computing resources (e.g., a server or node) in a system such as system 100. The OI scoring model may be used to predict and/or evaluate an overall running status of separate computing resources when supporting a candidate workload.
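The five workload profiles enumerated above suggest a simple classification rule. The following sketch is illustrative only: the normalized intensity inputs and the "balanced" spread threshold are assumptions, not values taken from the disclosure.

```python
# Illustrative classifier: label a workload by its dominant resource
# intensity, or "balanced" when the intensities are roughly equal.
# The 0.1 spread threshold for "balanced" is an assumed value.

def classify_workload(cpu, memory, network_switch, storage, spread=0.1):
    """Return a workload profile label from normalized intensities in [0, 1]."""
    intensities = {"CPU": cpu, "memory": memory,
                   "network switch": network_switch, "storage": storage}
    if max(intensities.values()) - min(intensities.values()) <= spread:
        return "balanced"
    return max(intensities, key=intensities.get) + " intensive"

print(classify_workload(0.9, 0.3, 0.2, 0.2))    # CPU intensive
print(classify_workload(0.5, 0.5, 0.45, 0.5))   # balanced
```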
An OI scoring model may be built or learned such that a comparatively best operating balance is struck between the multiple objectives associated with multiple attributes such as, but not limited to, thermal attributes, performance attributes, power attributes or reliability attributes.

In some examples, machine learning models may model each single attribute and then combine all the attributes together by using either weighted sum algorithms or Pareto model algorithms such as non-dominated sorting genetic algorithm-II (NSGA-II), according to the complexity of the problem, to generate an OI scoring model predicted value based on a learned OI scoring model. The OI scoring model predicted value may be useful in rebalancing workloads between computing resources included in a data center similar to system 100. For example, based on OpenStack VM operations, an OI scoring model predicted value based on a learned OI scoring model may be used to facilitate scheduling a new workload in a grouping or cluster of computing resources. Also, workloads may be moved or migrated from one computing resource (e.g., a first server) to another computing resource (e.g., a second server) to improve workload balance throughout the data center using OI scoring model predicted values based on the learned OI scoring model.

According to some examples, an OI scoring model may be implemented using an example weighted sum model (WSM). The WSM may resolve objectives related to multiple attributes that may include, but are not limited to, thermal attributes, performance attributes, power attributes or reliability attributes.
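Such a weighted sum score can be sketched in a few lines. This is a minimal illustrative sketch: the particular weight values, attribute inputs, and single boolean constraint flag are assumptions rather than anything prescribed by the disclosure.

```python
# Minimal sketch of an OI weighted-sum score.  Attribute values and
# weights are normalized to [0, 1] and the weights sum to 1; the specific
# numbers below are illustrative assumptions.

def oi_score(perf, power, thermal, reliability,
             w_pe=0.4, w_po=0.2, w_t=0.2, w_r=0.2,
             constraint_violated=False):
    """Return an OI scoring model predicted value.

    If any constraining rule is met (e.g., a predicted temperature
    exceeds a throttle threshold), the value is forced to 0.
    """
    assert abs((w_pe + w_po + w_t + w_r) - 1.0) < 1e-9
    if constraint_violated:
        return 0.0
    return w_pe * perf + w_po * power + w_t * thermal + w_r * reliability

# A resource predicted to stay within limits scores on its attributes;
# a resource predicted to break a constraining rule is forced to zero.
print(oi_score(0.9, 0.8, 0.7, 1.0))
print(oi_score(0.9, 0.8, 0.7, 1.0, constraint_violated=True))
```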
Example equation (1) may be used for this WSM:

(1) OI = w_pe*perf + w_po*power + w_t*thermal + w_r*reliability

Where: w_pe, w_po, w_t, w_r are respective OI weight factors for the performance, power, thermal and reliability attributes;
w_pe + w_po + w_t + w_r = 1; and
w_pe, w_po, w_t, w_r >= 0.

For example equation (1), OI weight factors of the four attributes for performance, power, thermal and reliability may be normalized to [0, 1] before a WSM calculation. In some examples, constraining rules may be added to a learned OI scoring model to prevent occurrence of invalid corner cases. For example, an OI scoring model predicted value may be set to a value of 0 if a computing resource such as a CPU hits or exceeds a throttle temperature threshold (thermal attribute), if a computing resource such as a memory module hits its throttle temperature (thermal attribute), if a computing resource such as a hard drive has a temperature larger than 60° Celsius (thermal attribute), a processor cache miss rate exceeds a threshold (performance attribute), network data throughput latencies exceed a threshold (performance attribute), power utilization rates exceed a threshold (power attribute), memory access latencies exceed a threshold (performance attribute) or one or more QoS, RAS or SLA requirements (e.g., failure probabilities, downtime rates, etc.) are predicted not to be met (reliability attributes). Examples are not limited to the above-mentioned constraining rules.

FIG. 2 illustrates an example board 200. As shown in FIG. 2, board 200 may include a controller 230, a resource plane interface 240, a control plane interface 250 and hardware resources 270. In some examples, hardware resources / disaggregate physical elements 270 may be communicatively coupled to resource plane 140 of system 100 through resource plane interface 240.
Also, controller 230 may communicatively couple to control plane 150 of system 100 through control plane interface 250.

According to some examples, hardware resources / disaggregate physical elements 270 may include various types of configurable computing resources or disaggregate physical elements such as, but not limited to, central processing units (CPUs) or processors, storage devices, memory devices, NW I/O devices, NW switches, virtual machines, power modules, cooling modules, fans, etc. Also, controller 230 may include logic and/or features capable of providing operating information associated with hardware resources / disaggregate physical elements 270 via a control plane such as control plane 150 to a data center manager (e.g., data center management system 108). In some examples, controller 230 may be configured or arranged to function as a baseboard management controller (BMC) or manageability engine for hardware resources / disaggregate physical elements 270 to function within a rack such as rack 110-2. Controller 230 may gather or collect operating information to be sent to the data center manager for use in generating one or more learned OI scoring models. The operating information may be sent or forwarded over control plane 150 using protocols in accordance with the IPMI specification.

FIG. 3 illustrates an example system 300. As shown in FIG. 3, in some examples, system 300 may include a controller node 310 coupled in communication with an energy agent on a compute node 320 and a thermal stress indicator core 330. For these examples, controller node 310 may be located with a data center manager (e.g., located with data center management system 108) and energy agent on a compute node 320 may be a controller located at a shelf or slot of a rack (e.g., rack 110-2).
Also, thermal stress indicator core 330 may be implemented by other logic and/or features of the data center manager.

In some examples, thermal stress indicator core 330 may be arranged to utilize a thermal stress indicator prediction model 332 to determine an OI scoring model predicted value for an objective related to thermal attributes. For these examples, the energy agent at compute node 320 may provide operating information to thermal stress indicator core 330 for possible use in generating a learned OI scoring model. A determined OI scoring model predicted value based on the learned OI scoring model may then be sent to a policy engine for energy service 314 at controller node 310 and then used by OpenStack 312 to schedule a workload. As described more below, this process may be used to evaluate an impact of a workload to be fulfilled by computing resources managed by controller node 310 in order to schedule the workload.

In some examples, a learned OI scoring model may be utilized in an OpenStack nova scheduler included in OpenStack 312 located with controller node 310 as shown in FIG. 3. For these examples, the OI scoring model may be learned separately for each computing resource (e.g., a server) in a computing or OpenStack cluster. When a new workload is to be run in the OpenStack cluster, each computing resource (e.g., server) from that cluster may be tested one by one and a prediction made as to what the computing resource's status would be after adding this new workload to the respective computing resource. Each prediction may be facilitated by the learned OI scoring model for the respective computing resource. A computing resource may then be picked based on the computing resource having the highest OI scoring model predicted value. The new workload may then be scheduled to the picked computing resource.

According to some examples, an OI scoring model may be utilized in OpenStack nova live-migration.
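The test-one-by-one selection described above for the nova scheduler might be sketched as follows. The per-resource callables stand in for learned OI scoring models; the names and interface are illustrative assumptions:

```python
def pick_resource(models, workload):
    """Return (name, score) for the resource with the highest OI predicted value.

    `models` maps each computing resource (e.g., a server) to its own learned
    OI scoring model; here each model is any callable that predicts the
    resource's OI score after adding `workload` (an assumed interface).
    """
    best_name, best_score = None, float("-inf")
    for name, model in models.items():
        score = model(workload)  # predicted status after adding the workload
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score


# Hypothetical cluster of three servers with already-learned models.
cluster = {
    "server-a": lambda w: 0.35,
    "server-b": lambda w: 0.82,
    "server-c": lambda w: 0.0,   # e.g., zeroed by a constraining rule
}
```

For this hypothetical cluster, the workload would be scheduled to server-b. The same loop serves live-migration, with a designated target pool of resources in place of the full cluster.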
For these examples, a computing resource may break an OI objective constraint rule. This may lead to a need to cause one or more workloads to be supported by a different computing resource. Computing resources in a designated target pool may be tested one by one with learned OI scoring models. A prediction may be made of what the status of a given computing resource will be after adding to or causing the migrated workload to be supported by the respective computing resource. This prediction may be made by each computing resource's learned OI scoring model. A computing resource from the target pool having the highest OI scoring model predicted value may be picked or selected, and the migrated workload may then be scheduled to the selected computing resource.

FIG. 4 illustrates an example flow 400. In some examples, flow 400 may be implemented by elements of systems 100, 200 or 300 as shown in FIGS. 1-3, although flow 400 is not limited to elements included in these systems. For these examples, flow 400 illustrates how an OI scoring model may be learned or built to generate a prediction value for one or more objectives related to one or more given attributes.

At block 410, a workload type or profile may be determined to evaluate impacts on computing resources. In some examples, a control feature is workload distribution (e.g., different workload types or profiles in OpenStack VM operations). For example, a workload type or profile may include, but is not limited to, whether the workload type or profile is processing or CPU intensive, memory intensive, network switch intensive, storage intensive, a balance of each of these various workload types or profiles or a combination of these various workload types or profiles.

At block 420, capture data may include capturing or collecting training operating information for different workload running cases. The workloads used in training should envelop the workloads run after deployment.
In some examples, capturing training operating information may include collecting operating information from computing resources that may be arranged or capable of being arranged to support the determined workload type or profile. For example, a workload type or profile that is processing or CPU intensive may capture training operating information that includes operating information from processing or computing resources. The operating information for a CPU intensive workload, for example, may include, but is not limited to, data associated with cache misses or the number of CPU instructions implemented over a given period of time.

At block 430, cluster grouping may include using techniques such as k-means clustering to do cluster grouping for the training operating information. For example, use of k-means clustering may include use of a k-means or Lloyd's algorithm to facilitate cluster grouping for the training operating information.

At block 440, tune cluster boundary may include using pre-defined domain knowledge based rules to tune cluster boundaries associated with each cluster grouping determined in block 430. The pre-defined domain knowledge based rules may include, but are not limited to, such rules as a CPU idle mode, a fan speed maximum RPM, a maximum inlet or outlet temperature, etc.

At block 450, learn a classifier may include using a support vector machine (SVM) to learn a classifier based on the cluster grouping determined at block 430. In some examples, via use of the SVM, captured training operating information may be analyzed to establish patterns and classify the training operating information associated with the workload type or profile.

At block 460, regression fit function may include using a least squares method or approach to assign a regression fit function to the classified training operating information. An example regression fit function prototype may be a linear function.
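Blocks 430 through 460 above might be sketched as follows using only numpy: Lloyd's algorithm implements the k-means step, a nearest-centroid rule stands in for the learned SVM classifier, and the linear-prototype regression fit function is obtained by least squares. All data shapes and feature meanings are illustrative assumptions; a production version might instead use scikit-learn's KMeans and SVC:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Block 430: cluster group training operating info via Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest centroid, then recompute centroids.
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def classify(x, centroids):
    """Block 450 stand-in: nearest-centroid classifier (an SVM in the text)."""
    return int(np.argmin(((centroids - np.asarray(x)) ** 2).sum(-1)))

def fit_linear(X, y):
    """Block 460: least-squares fit of the linear prototype y ~ X @ w + b."""
    A = np.c_[X, np.ones(len(X))]          # append a column for the intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                            # weights followed by intercept
```

A regression fit function learned this way for each class could then predict operating values (thermal, performance, power, reliability) for new workloads falling into that class.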
In some examples, the regression fit function may be able to predict operating values related to thermal, performance, power, or reliability attributes that may then be used as inputs to example equation (1) to determine an OI scoring model predicted value. The process then comes to an end.

FIG. 5 illustrates an example flow 500. In some examples, flow 500 may be implemented by elements of systems 100, 200 or 300 as shown in FIGS. 1-3, although flow 500 is not limited to elements included in these systems. For these examples, flow 500 illustrates how to evaluate an impact of a workload to be fulfilled by computing resources included in a system such as system 100, 200 or 300.

At block 510, new workload distribution may cause an evaluation of the impact of the new workload on one or more computing resources (e.g., included in a data center). The new workload, for example, may be associated with a workload type or profile that may be processing or CPU intensive, memory intensive, network switch intensive, storage intensive or a balance of each of these various workload types or profiles.

At block 520, use SVM classifier may include using the SVM as mentioned at block 450 of flow 400 to determine which class the new workload distribution may belong to. In some examples, expected impacts on computing resources based on the new workload's profile or type may be used as one or more input status vectors to the SVM to determine the class.

At block 530, use regression function may include using the regression fit function determined at block 460 for the given class determined in block 520 to calculate the OI scoring model predicted value from an OI scoring model that was learned based on one or more objectives related to one or more attributes.
In some examples, each computing resource or grouping of computing resources that may be subjected to the new workload distribution may have separate OI scoring model predicted values calculated.

According to some examples, a WSM may be used to resolve the one or more objectives related to thermal, performance, power, or reliability attributes while calculating the separate OI scoring model predicted values. Example equation (1) may be used to determine the separate OI scoring model predicted values. Use of example equation (1) may include determining an OI weight factor for each of the thermal, performance, power, or reliability attributes. Also, the regression fit function may be used to predict operating values related to or corresponding to the weighted thermal, performance, power, or reliability attributes. The predicted operating values for thermal, performance, power, or reliability attributes may then be used as inputs to example equation (1) along with respective OI weight factors to determine an OI scoring model predicted value.

At block 540, evaluate value may include evaluating the OI scoring model predicted value calculated for each computing resource or grouping of computing resources that may be subjected to the new workload distribution to determine the impact of the new workload distribution on these computing resources. In some examples, the evaluation may include determining which computing resource or grouping of computing resources has the highest OI scoring model predicted value.

According to some examples, constraining rules may also be applied to each evaluated OI scoring model predicted value to prevent occurrence of invalid corner cases. For example, if use of the regression fit function indicates a predicted value related to a thermal attribute such as a memory module hitting a throttle temperature for a given computing resource or grouping of computing resources, the OI scoring model predicted value may be set to 0.
Setting the value to 0 may eliminate selection of the given computing resource or grouping of computing resources for supporting the new workload.

At block 550, schedule workload may include scheduling the new workload to at least some of the computing resources based on the evaluation of the OI scoring model predicted value. The process then comes to an end.

FIG. 6 illustrates an example block diagram for an apparatus 600. Although apparatus 600 shown in FIG. 6 has a limited number of elements in a certain topology, it may be appreciated that the apparatus 600 may include more or fewer elements in alternate topologies as desired for a given implementation.

The apparatus 600 may be supported by circuitry 620 maintained at a computing device including logic or features to support a manager or controller for configurable computing resources (e.g., for managing a data center). Circuitry 620 may be arranged to execute one or more software or firmware implemented modules, components or logic 622-a. It is worthy to note that "a" and "b" and "c" and similar designators as used herein are intended to be variables representing any positive integer. Thus, for example, if an implementation sets a value for a = 6, then a complete set of software or firmware for modules, components or logic 622-a may include components 622-1, 622-2, 622-3, 622-4, 622-5 or 622-6. The examples presented are not limited in this context and the different variables used throughout may represent the same or different integer values.

According to some examples, circuitry 620 may include a processor, processor circuit or processor circuitry. Circuitry 620 may be part of computing device circuitry that includes processing cores (e.g., used as a central processing unit (CPU)).
The circuitry including one or more processing cores can be any of various commercially available processors, including without limitation AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; Qualcomm® Snapdragon, IBM®, Motorola® DragonBall®, Nvidia® Tegra® and PowerPC® processors; IBM® and Sony® Cell processors; Intel® Celeron®, Core (2) Duo®, Core i3, Core i5, Core i7, Itanium®, Pentium®, Xeon®, Atom® and XScale® processors; and similar processors. Dual microprocessors, multi-core processors, and other multi-processor architectures may also be employed as part of circuitry 620. According to some examples, circuitry 620 may also be an application specific integrated circuit (ASIC) and at least some components, modules or logic 622-a may be implemented as hardware elements of the ASIC.

According to some examples, apparatus 600 may include a collect component 622-1. Collect component 622-1 may be executed by circuitry 620 to collect or receive operating information associated with one or more computing resources of a data center. The operating information may include training operating information and may be collected or received (e.g., from a controller at a rack) via operating information 615. The training operating information may be collected responsive to receiving workload type 605 that may indicate a workload type or profile for which collect component 622-1 is to collect or gather the training operating information from the one or more computing resources of the data center while these computing resources support the indicated workload type or profile.

According to some examples, apparatus 600 may also include a store component 622-2. Store component 622-2 may be executed by circuitry 620 to at least temporarily store the training operating information collected or gathered by collect component 622-1.

In some examples, apparatus 600 may also include a distribution component 622-3.
Distribution component 622-3 may be executed by circuitry 620 to receive an indication of a workload distribution for a workload to be supported by one or more computing resources of the data center. The workload may be either a new workload or a redistributed workload (e.g., migrated from a computing resource). The workload may be associated with a workload type or profile that may be processing or CPU intensive, memory intensive, network switch intensive, storage intensive or a balance of each of these various workload types or profiles.

According to some examples, apparatus 600 may also include a classify component 622-4. Classify component 622-4 may be executed by circuitry 620 to determine a class for the workload based on a workload type or profile for the workload. In some examples, classify component 622-4 may use training operating information collected by collect component 622-1 and stored by store component 622-2 to cluster group the training operating information via k-means clustering, learn a classifier based on the cluster group of the training operating information via a support vector machine (SVM) and assign a regression fit function to classified training operating information classified by the classifier based on a least squares approach.

In some examples, apparatus 600 may also include a model component 622-5. Model component 622-5 may be executed by circuitry 620 to determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model (e.g., a learned OI scoring model) to evaluate the workload being supported by the at least one of the one or more computing resources. According to some examples, objective information 610 may be added to the scoring model as part of the evaluation.
Objective information 610 may be related to one or more attributes such as thermal attributes, performance attributes, power attributes or reliability attributes. In order to balance objectives related to these attributes, model component 622-5 may determine separate weight factors for individual attributes of a plurality of attributes, with at least one predicted operating value from among the predicted operating values corresponding to an individual attribute of the plurality of attributes. Model component 622-5 may then multiply the separate weight factors for individual attributes with corresponding predicted operating values, sum the products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value, and evaluate the workload based on the scoring model predicted value.

In some examples, apparatus 600 may also include a schedule component 622-6. Schedule component 622-6 may be executed by circuitry 620 to schedule the workload to the one or more computing resources based on the evaluation conducted by model component 622-5. Schedule workload 630 may include information directing the computing resources to fulfill and/or support the workload.

Included herein is a set of logic flows representative of example methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein are shown and described as a series of acts, those skilled in the art will understand and appreciate that the methodologies are not limited by the order of acts. Some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram.
Moreover, not all acts illustrated in a methodology may be required for a novel implementation.

A logic flow may be implemented in software, firmware, and/or hardware. In software and firmware embodiments, a logic flow may be implemented by computer executable instructions stored on at least one non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. The embodiments are not limited in this context.

FIG. 7 illustrates an example of a logic flow. As shown in FIG. 7, the logic flow includes a logic flow 700. Logic flow 700 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as apparatus 600. More particularly, logic flow 700 may be implemented by at least distribution component 622-3, classify component 622-4, model component 622-5 or schedule component 622-6.

According to some examples, logic flow 700 at block 702 may receive, at a processor circuit, an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center. For these examples, distribution component 622-3 may receive the indication of the workload distribution.

In some examples, logic flow 700 at block 704 may determine a class for the workload based on a workload type or profile for the workload. For these examples, classify component 622-4 may determine the class for the workload.

According to some examples, logic flow 700 at block 706 may determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources.
For these examples, model component 622-5 may determine the predicted operating values and evaluate the workload.

In some examples, logic flow 700 at block 708 may schedule the workload to the at least one of the one or more computing resources based on the evaluation. For these examples, schedule component 622-6 may schedule the workload.

FIG. 8 illustrates an example of a storage medium 800. Storage medium 800 may comprise an article of manufacture. In some examples, storage medium 800 may include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. Storage medium 800 may store various types of computer executable instructions, such as instructions to implement logic flow 700. Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

FIG. 9 illustrates an example computing platform 900. In some examples, as shown in FIG. 9, computing platform 900 may include a processing component 940, other platform components 950 or a communications interface 960. According to some examples, computing platform 900 may be implemented in a computing device such as a server in a system such as a data center or server farm that supports a manager or controller for managing configurable computing resources as mentioned above.

According to some examples, processing component 940 may execute processing operations or logic for apparatus 600 and/or storage medium 800.
Processing component 940 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given example.

In some examples, other platform components 950 may include common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components (e.g., digital displays), power supplies, and so forth.
Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory), solid state drives (SSD) and any other type of storage media suitable for storing information.

In some examples, communications interface 960 may include logic and/or features to support a communication interface. For these examples, communications interface 960 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants) such as those associated with the PCI Express specification. Network communications may occur via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by the Institute of Electrical and Electronics Engineers (IEEE).
For example, one such Ethernet standard may include IEEE 802.3-2012, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, published in December 2012 (hereinafter "IEEE 802.3"). Network communication may also occur according to one or more OpenFlow specifications such as the OpenFlow Hardware Abstraction API Specification. Network communications may also occur according to the Infiniband Architecture Specification, Volume 1, Release 1.3, published in March 2015 ("the Infiniband Architecture specification").

Computing platform 900 may be part of a computing device that may be, for example, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, or combination thereof. Accordingly, functions and/or specific configurations of computing platform 900 described herein may be included or omitted in various embodiments of computing platform 900, as suitably desired.

The components and features of computing platform 900 may be implemented using any combination of discrete circuitry, ASICs, logic gates and/or single chip architectures. Further, the features of computing platform 900 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as "logic" or "circuit."

It should be appreciated that the exemplary computing platform 900 shown in the block diagram of FIG. 9 may represent one functionally descriptive example of many potential implementations.
Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.

Some examples may include an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example.

Some examples may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

The following examples pertain to additional examples of technologies disclosed herein.

Example 1. An example apparatus may include circuitry. The apparatus may also include a distribution logic for execution by the circuitry to receive an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center. The apparatus may also include a classify logic for execution by the circuitry to determine a class for the workload based on a workload type or profile for the workload.
The apparatus may also include a model logic for execution by the circuitry to determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources. The apparatus may also include a schedule logic for execution by the circuitry to schedule the workload to the at least one of the one or more computing resources based on the evaluation.

Example 2. The apparatus of example 1, the model logic to input the predicted operating values in at least one scoring model to evaluate the workload may include the model logic to determine separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes. The model logic may also multiply the separate weight factors for individual attributes with corresponding predicted operating values. The model logic may also sum products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value. The model logic may also evaluate the workload based on the scoring model predicted value.

Example 3. The apparatus of example 2, the separate weight factors may be normalized to [0, 1].

Example 4. The apparatus of example 2, the plurality of attributes may include a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

Example 5. The apparatus of example 4, a first predicted operating value corresponding to the thermal attribute may include a predicted operating temperature for at least one of the one or more computing resources.
A second predicted operating value corresponding to the performance attribute may include a predicted cache miss rate for at least one of the one or more computing resources. A third predicted operating value corresponding to the power attribute may include a predicted power utilization rate for at least one of the one or more computing resources. A fourth predicted operating value corresponding to the reliability attribute may include a failure probability for at least one of the one or more computing resources.

Example 6. The apparatus of example 5, the model logic may apply one or more constraining rules to the scoring model predicted value to cause the scoring model predicted value to be set to a value of 0 if at least one of the one or more constraining rules is met. For these examples, the one or more constraining rules may include the predicted operating temperature exceeding a throttle temperature threshold for at least one of the one or more computing resources, the predicted cache miss rate exceeding a cache miss rate threshold, the predicted power utilization rate exceeding a power utilization rate threshold or the failure probability exceeding a failure probability threshold.

Example 7. The apparatus of example 1, the one or more computing resources may include one or more of a processor, a memory device, a storage device, a power module, a cooling module, a network input/output device, a network switch or a virtual machine.

Example 8. The apparatus of example 1 may also include a collect logic for execution by the circuitry to gather training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads. The apparatus may also include a store logic for execution by the circuitry to store the gathered training operating information.
For these examples, the classify logic may use the stored and gathered training operating information to cluster group the training operating information via k-means clustering, learn a classifier based on the cluster group of the training operating information via a support vector machine (SVM) and assign a regression fit function to classified training operating information classified by the classifier based on a least squares approach.

Example 9. The apparatus of example 8, the model logic to determine the one or more predicted operating values based on the class may include the model logic to use the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

Example 10. The apparatus of example 8, the workload type or profile may include one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

Example 11. The apparatus of example 8, the training operating information may include an inlet or an outlet temperature for a platform or rack housing the one or more computing resources, a power consumption for a platform or rack housing the one or more computing resources, processor cache miss information, network data throughput latency information, memory access latency information, throttling activation information for the one or more computing resources, margin to a peak operating temperature threshold for the one or more computing resources, or a volumetric airflow for a platform and/or rack housing the one or more computing resources.

Example 12.
The apparatus of example 1 may also include a digital display coupled to the circuitry to present a user interface view.

Example 13. An example method may include receiving, at a processor circuit, an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center. The method may also include determining a class for the workload based on a workload type or profile for the workload. The method may also include determining predicted operating values for at least one of the one or more computing resources based on the class and inputting the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources. The method may also include scheduling the workload to the at least one of the one or more computing resources based on the evaluation.

Example 14. The method of example 13 may include inputting the predicted operating values in at least one scoring model to evaluate the workload based on determining separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes. The method may also include multiplying the separate weight factors for individual attributes with corresponding predicted operating values. The method may also include summing products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value. The method may also include evaluating the workload based on the scoring model predicted value.

Example 15. The method of example 14, the separate weight factors may be normalized to [0, 1].

Example 16.
The method of example 14, the plurality of attributes may include a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

Example 17. The method of example 16, a first predicted operating value corresponding to the thermal attribute includes a predicted operating temperature for at least one of the one or more computing resources, a second predicted operating value corresponding to the performance attribute includes a predicted cache miss rate for at least one of the one or more computing resources, a third predicted operating value corresponding to the power attribute includes a predicted power utilization rate for at least one of the one or more computing resources and a fourth predicted operating value corresponding to the reliability attribute includes a failure probability for at least one of the one or more computing resources.

Example 18. The method of example 17 may also include applying one or more constraining rules to the scoring model predicted value to cause the scoring model predicted value to be set to a value of 0 if at least one of the one or more constraining rules is met. For these examples, the one or more constraining rules may include the predicted operating temperature exceeding a throttle temperature threshold for at least one of the one or more computing resources, the predicted cache miss rate exceeding a cache miss rate threshold, the predicted power utilization rate exceeding a power utilization rate threshold or the failure probability exceeding a failure probability threshold.

Example 19. The method of example 13, the one or more computing resources may include one or more of a processor, a memory device, a storage device, a power module, a cooling module, a network input/output device, a network switch or a virtual machine.

Example 20.
The method of example 13 may also include determining the one or more predicted operating values based on the class by gathering training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads. The method may also include cluster grouping the training operating information using k-means clustering. The method may also include learning a classifier based on the cluster grouping of the training operating information using an SVM. The method may also include assigning a regression fit function to classified training operating information classified by the classifier based on a least squares approach. The method may also include using the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

Example 21. The method of example 20, the workload type or profile may include one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

Example 22.
The method of example 20, the training operating information may include an inlet or an outlet temperature for a platform or rack housing the one or more computing resources, a power consumption for a platform or rack housing the one or more computing resources, processor cache miss information, network data throughput latency information, memory access latency information, throttling activation information for the one or more computing resources, margin to a peak operating temperature threshold for the one or more computing resources, or a volumetric airflow for a platform and/or rack housing the one or more computing resources.

Example 23. An example at least one machine readable medium may include a plurality of instructions that in response to being executed by a system may cause the system to carry out a method according to any one of examples 13 to 22.

Example 24. An example apparatus may include means for performing the methods of any one of examples 13 to 22.

Example 25. An example at least one machine readable medium comprising a plurality of instructions that in response to being executed by a system may cause the system to receive an indication of a workload distribution for a workload to be supported by one or more computing resources of a data center. The instructions may also cause the system to determine a class for the workload based on a workload type or profile for the workload. The instructions may also cause the system to determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources. The instructions may also cause the system to schedule the workload to the at least one of the one or more computing resources based on the evaluation.

Example 26.
The at least one machine readable medium of example 25, the instructions to cause the system to input the predicted operating values in at least one scoring model to evaluate the workload may further include the instructions to cause the system to determine separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes. The instructions may also cause the system to multiply the separate weight factors for individual attributes with corresponding predicted operating values. The instructions may also cause the system to sum products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value. The instructions may also cause the system to evaluate the workload based on the scoring model predicted value.

Example 27. The at least one machine readable medium of example 26, the separate weight factors may be normalized to [0, 1].

Example 28. The at least one machine readable medium of example 26, the plurality of attributes may include a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

Example 29.
The at least one machine readable medium of example 28, a first predicted operating value corresponding to the thermal attribute may include a predicted operating temperature for at least one of the one or more computing resources, a second predicted operating value corresponding to the performance attribute may include a predicted cache miss rate for at least one of the one or more computing resources, a third predicted operating value corresponding to the power attribute may include a predicted power utilization rate for at least one of the one or more computing resources and a fourth predicted operating value corresponding to the reliability attribute may include a failure probability for at least one of the one or more computing resources.

Example 30. The at least one machine readable medium of example 29, the instructions may further cause the system to apply one or more constraining rules to the scoring model predicted value to cause the scoring model predicted value to be set to a value of 0 if at least one of the one or more constraining rules is met. For these examples, the one or more constraining rules may include the predicted operating temperature exceeding a throttle temperature threshold for at least one of the one or more computing resources, the predicted cache miss rate exceeding a cache miss rate threshold, the predicted power utilization rate exceeding a power utilization rate threshold or the failure probability exceeding a failure probability threshold.

Example 31. The at least one machine readable medium of example 25, the one or more computing resources may include one or more of a processor, a memory device, a storage device, a power module, a cooling module, a network input/output device, a network switch or a virtual machine.

Example 32.
The at least one machine readable medium of example 25, the instructions to cause the system to determine the one or more predicted operating values based on the class includes the instructions to further cause the system to gather training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads. The instructions may also cause the system to cluster group the training operating information using k-means clustering. The instructions may also cause the system to learn a classifier based on the cluster grouping of the training operating information using an SVM. The instructions may also cause the system to assign a regression fit function to classified training operating information classified by the classifier based on a least squares approach. The instructions may also cause the system to use the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

Example 33. The at least one machine readable medium of example 32, the workload type or profile may include one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

Example 34.
The at least one machine readable medium of example 32, the training operating information may include an inlet or an outlet temperature for a platform or rack housing the one or more computing resources, a power consumption for a platform or rack housing the one or more computing resources, processor cache miss information, network data throughput latency information, memory access latency information, throttling activation information for the one or more computing resources, margin to a peak operating temperature threshold for the one or more computing resources, or a volumetric airflow for a platform and/or rack housing the one or more computing resources.

Example 35. An apparatus comprising: circuitry communicatively coupled to a data center; a distributor for execution by the circuitry to receive an indication of a workload distribution for a workload to be supported by one or more computing resources of the data center; a classifier for execution by the circuitry to determine a class for the workload based on a workload type or profile for the workload; a modeler for execution by the circuitry to determine predicted operating values for at least one of the one or more computing resources based on the class and input the one or more predicted operating values in at least one scoring model to evaluate the workload being supported by the at least one of the one or more computing resources; and a scheduler for execution by the circuitry to schedule the workload to the at least one of the one or more computing resources based on the evaluation.

Example 36.
The apparatus of claim 35, the modeler to: determine separate weight factors for individual attributes of a plurality of attributes, at least one predicted operating value from among the predicted operating values to correspond to an individual attribute of the plurality of attributes; multiply the separate weight factors for individual attributes with corresponding predicted operating values; sum products of the multiplication of the separate weight factors for individual attributes with corresponding predicted operating values to generate a scoring model predicted value; and evaluate the workload based on the scoring model predicted value.

Example 37. The apparatus of claim 36, wherein the separate weight factors are normalized to [0, 1].

Example 38. The apparatus of claim 36, the plurality of attributes comprising a thermal attribute, a performance attribute, a power attribute or a reliability attribute.

Example 39. The apparatus of claim 38, the predicted operating values comprising a first predicted operating value corresponding to the thermal attribute that includes a predicted operating temperature for at least one of the one or more computing resources, a second predicted operating value corresponding to the performance attribute that includes a predicted cache miss rate for at least one of the one or more computing resources, a third predicted operating value corresponding to the power attribute that includes a predicted power utilization rate for at least one of the one or more computing resources and a fourth predicted operating value corresponding to the reliability attribute that includes a failure probability for at least one of the one or more computing resources.

Example 40.
The apparatus of claim 39, the modeler to apply one or more constraining rules to the scoring model predicted value to cause the scoring model predicted value to be set to a value of 0 if at least one of the one or more constraining rules is met, the one or more constraining rules including: the predicted operating temperature exceeding a throttle temperature threshold for at least one of the one or more computing resources; the predicted cache miss rate exceeding a cache miss rate threshold; the predicted power utilization rate exceeding a power utilization rate threshold; or the failure probability exceeding a failure probability threshold.

Example 41. The apparatus of claim 35, the one or more computing resources comprising one or more of a processor, a memory device, a storage device, a power module, a cooling module, a network input/output device, a network switch or a virtual machine.

Example 42. The apparatus of claim 35, comprising: a collector for execution by the circuitry to gather training operating information for one or more workloads included in the workload type or profile while the one or more computing resources of the data center support the one or more workloads; and a store for execution by the circuitry to store the gathered training operating information, the classifier to use the stored and gathered training operating information to: cluster group the training operating information via k-means clustering; learn a classifier based on the cluster group of the training operating information via a support vector machine (SVM); and assign a regression fit function to classified training operating information classified by the classifier based on a least squares approach.

Example 43.
The apparatus of claim 42, the modeler to determine the one or more predicted operating values based on the class comprises the modeler to use the regression fit function assigned to the classified training operating information to determine the one or more predicted operating values.

Example 44. The apparatus of claim 42, the workload type or profile comprising one of a first workload type or profile that is processing or processor intensive, a second workload type or profile that is memory intensive, a third workload type or profile that is network switch intensive, a fourth workload type or profile that is storage intensive or a fifth workload type or profile that is a balanced workload type or profile that has relatively equal processor, memory, network switch and storage intensities.

Example 45. The apparatus of claim 42, the training operating information comprising an inlet or an outlet temperature for a platform or rack housing the one or more computing resources, a power consumption for a platform or rack housing the one or more computing resources, processor cache miss information, network data throughput latency information, memory access latency information, throttling activation information for the one or more computing resources, margin to a peak operating temperature threshold for the one or more computing resources, or a volumetric airflow for a platform and/or rack housing the one or more computing resources.

Example 46. The apparatus of claim 35, comprising a digital display coupled to the circuitry to present a user interface view.

It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
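The weighted-sum scoring model recited in Examples 2 through 6 (and their method and medium counterparts) can be sketched in a few lines. This is an illustrative reading of the examples, not the patented implementation; the attribute names, weight values, and threshold values below are hypothetical.

```python
def score_workload(predicted, weights, thresholds):
    """Sketch of the scoring model of Examples 2-6 (illustrative, not normative).

    predicted and weights are dicts keyed by attribute (e.g. thermal,
    performance, power, reliability); weights are normalized to [0, 1]
    per Example 3. thresholds holds the constraining-rule limits of
    Example 6: if any predicted value exceeds its limit, the score is
    forced to 0.
    """
    for attr, limit in thresholds.items():
        if predicted[attr] > limit:
            return 0.0  # a constraining rule is met (Example 6)
    # Example 2: multiply each weight factor by its corresponding predicted
    # operating value, then sum the products into the scoring model value.
    return sum(weights[attr] * predicted[attr] for attr in weights)
```

With hypothetical inputs such as a predicted operating temperature of 55 (weight 0.4, throttle threshold 90) and a predicted power utilization rate of 0.6 (weight 0.2, threshold 0.9), the score is 0.4·55 + 0.2·0.6; raising the predicted temperature above 90 would instead zero the score.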
In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
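The training pipeline described in Examples 8, 20 and 32 — cluster-group the training data with k-means, learn a classifier (an SVM in the examples), then assign a least-squares regression fit per class — can be sketched as follows. This is a toy one-dimensional version under stated assumptions: to stay self-contained, the SVM step is replaced by a nearest-centroid rule (a plain substitution, not what the examples recite), and all data values are hypothetical.

```python
import random

def kmeans_1d(values, k=2, iters=25, seed=0):
    """Toy 1-D k-means for the cluster-grouping step of Example 8."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        # Recompute each centroid as the mean of its group (keep it if empty).
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

def classify(value, centroids):
    """Stand-in for the SVM classifier: assign to the nearest cluster centroid."""
    return min(range(len(centroids)), key=lambda i: abs(value - centroids[i]))

def least_squares_fit(xs, ys):
    """Ordinary least-squares line fit, the regression-fit step of Example 8."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope and intercept
```

A per-class fit could then supply predicted operating values as in Example 9; for instance, `least_squares_fit([1, 2, 3], [2.0, 4.0, 6.0])` yields a slope of 2.0 and an intercept of 0.0.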
Embodiments of the present disclosure describe semiconductor equipment devices having a metal workpiece and a diamond-like carbon (DLC) coating disposed on a surface of the metal workpiece, thermal semiconductor test pedestals having a metal plate and a DLC coating disposed on a surface of the metal plate, techniques for fabricating thermal semiconductor test pedestals with DLC coatings, and associated configurations. A thermal semiconductor test pedestal may include a metal plate and a DLC coating disposed on a surface of the metal plate. The metal plate may include a metal block formed of a first metal and a metal coating layer formed of a second metal between the metal block and the DLC coating. An adhesion strength promoter layer may be disposed between the metal coating layer and the DLC coating. Other embodiments may be described and/or claimed.
What is claimed is:
1. A thermal semiconductor test pedestal comprising:
a metal plate; and
a diamond-like carbon (DLC) coating disposed on a surface of the metal plate, wherein a surface of the DLC coating is to contact a semiconductor product during operation of the thermal semiconductor test pedestal.
2. The thermal semiconductor test pedestal of claim 1, wherein the metal plate includes a metal block formed of a first metal and a metal coating layer formed of a second metal disposed between the metal block and the DLC coating.
3. The thermal semiconductor test pedestal of claim 2, further comprising an adhesion strength promoter layer disposed between the metal coating layer and the DLC coating.
4. The thermal semiconductor test pedestal of claim 3, wherein the adhesion strength promoter layer includes at least one of titanium (Ti), tungsten (W), chromium (Cr), or iron (Fe).
5. The thermal semiconductor test pedestal of any one of claims 1-4, wherein the DLC coating has a thickness of greater than or equal to approximately 2 microns and less than or equal to approximately 5 microns.
6. The thermal semiconductor test pedestal of any one of claims 2-4, wherein the metal block is formed of copper (Cu), stainless steel (SS) or aluminum (Al).
7. The thermal semiconductor test pedestal of claim 6, wherein the metal coating layer is formed of nickel (Ni).
8. The thermal semiconductor test pedestal of claim 7, wherein the metal coating layer has a thickness of greater than or equal to approximately 12 microns and less than or equal to approximately 25 microns.
9. The thermal semiconductor test pedestal of any one of claims 1-2, further comprising an adhesion strength promoter layer disposed between the metal plate and the DLC coating, wherein the adhesion strength promoter layer includes a carbide forming material.
10.
A semiconductor equipment device comprising:
a metal workpiece; and
a diamond-like carbon (DLC) coating disposed on a surface of the metal workpiece, wherein a surface of the DLC coating is to contact a semiconductor product during operation of the semiconductor equipment device.
11. The semiconductor equipment device of claim 10, wherein the metal workpiece is a metal plate of a thermal semiconductor test pedestal.
12. The semiconductor equipment device of claim 11, wherein the semiconductor equipment device is a thermal control unit further comprising a temperature regulation stage coupled with the metal plate of the thermal semiconductor test pedestal.
13. The semiconductor equipment device of any one of claims 10-12, wherein the metal workpiece includes a base structure formed of a first metal and a metal coating layer formed of a second metal, wherein the second metal is different than the first metal and the metal coating layer is plated on the base structure.
14. The semiconductor equipment device of claim 13, further comprising an adhesion strength promoter layer disposed between the metal coating layer and the DLC coating.
15. A method of fabricating a thermal semiconductor test pedestal comprising:
providing a metal plate; and
coating the metal plate with a diamond-like carbon (DLC) layer, wherein a surface of the DLC layer is to contact a semiconductor die during operation of the thermal semiconductor test pedestal.
16. The method of claim 15, further comprising depositing an adhesion strength promoter layer on the metal plate before coating the metal plate with the DLC layer.
17. The method of claim 16, wherein the metal plate includes a copper (Cu) block and a metal coating layer disposed between the Cu block and the DLC layer, wherein the metal coating layer is formed of a metal different than Cu.
18. The method of claim 17, wherein the metal coating layer is formed of nickel (Ni).
19.
The method of any one of claims 16-18, wherein the adhesion strength promoter layer includes at least one of titanium (Ti), tungsten (W), chromium (Cr), or iron (Fe).
20. The method of any one of claims 16-18, wherein the adhesion strength promoter layer is deposited using at least one of physical vapor deposition (PVD) or chemical vapor deposition (CVD) and coating the metal plate includes depositing the DLC layer on the adhesion strength promoter layer using at least one of PVD or CVD.
DIAMOND-LIKE CARBON COATED SEMICONDUCTOR EQUIPMENT

RELATED APPLICATION

This application claims priority to U.S. Patent Application 15/165,896, filed May 26, 2016, entitled "DIAMOND-LIKE CARBON COATED SEMICONDUCTOR EQUIPMENT".

Technical Field

The present disclosure relates generally to the field of semiconductor fabrication and test equipment and, more specifically, to semiconductor fabrication and test equipment with improved wear resistance.

Background

Equipment used in semiconductor package assembly and test processes is subject to wear and tear due to the high number of repetitive steps involved in semiconductor manufacturing and testing. Thermal test modules typically use nickel-coated copper pedestals that repeatedly contact bare die, over-molded die, or lidded packages. The role of the pedestal is to provide an efficient heat transfer path from the product to avoid thermal overstress to the silicon. During repeated cycling interactions of the pedestal with semiconductor packages, considerable pedestal wear is observed, resulting in a need for pedestal replacement. During actuation of the test pedestal with a semiconductor product, the presence of hard foreign material (FM) particles from the automated testing (AT) process can embed in the pedestal with the potential to propagate cracks or cosmetic scratches in the product, resulting in yield loss. Additionally, the same pedestal is typically cycled thousands of times, and shear damage may accumulate in the form of pitting and thinning of the plating material on the pedestal. The exposure of a copper layer under nickel plating can lead to copper oxide spallation and introduce undesired FM particles in the process and/or factory. In addition to the introduction of FM particles in the test module, the thermal performance of the pedestal may suffer with plating removal, due to uneven contact with the surface of the semiconductor product and air gap introduction.
Although some ceramic materials have been used for test pedestals, they may be brittle, making them a costly and unviable alternative in some situations due to difficulty of manufacturing and handling.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Figure 1 schematically illustrates a cross-sectional side view of a thermal semiconductor test pedestal that may include a diamond-like carbon coating, in accordance with various embodiments.

Figure 2 schematically illustrates a cross-sectional side view of a semiconductor equipment device that may include a workpiece having a diamond-like carbon coating, in accordance with various embodiments.

Figure 3 schematically illustrates a flow diagram for a process of fabricating a thermal semiconductor test pedestal such as the thermal semiconductor test pedestal of Figure 1, in accordance with various embodiments.

Detailed Description

Embodiments herein may include semiconductor equipment devices having a metal workpiece and a diamond-like carbon (DLC) coating disposed on a surface of the metal workpiece, thermal semiconductor test pedestals having a metal plate and a DLC coating disposed on a surface of the metal plate, techniques for fabricating thermal semiconductor test pedestals with DLC coatings, and associated configurations. A thermal semiconductor test pedestal may include a metal plate and a DLC coating disposed on a surface of the metal plate. The metal plate may include a metal block formed of a first metal and a metal coating layer formed of a second metal disposed between the metal block and the DLC coating.
An adhesion strength promoter layer may be disposed between the metal coating layer and the DLC coating.

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The description may use the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

The term "coupled with," along with its derivatives, may be used herein. "Coupled" may mean one or more of the following.
"Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements indirectly contact each other, but yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.

In various embodiments, the phrase "a first layer formed on a second layer" may mean that the first layer is formed over the second layer, and at least a part of the first layer may be in direct contact (e.g., direct physical and/or electrical contact) or indirect contact (e.g., having one or more other layers between the first layer and the second layer) with at least a part of the second layer.

Figure 1 schematically illustrates a cross-sectional side view of a thermal semiconductor test pedestal 100, in accordance with various embodiments. In some embodiments, the semiconductor test pedestal 100 may include a metal plate 102 and a diamond-like carbon (DLC) coating 104, having a thickness T1, disposed on a surface 106 of the metal plate 102. In some embodiments, the metal plate 102 may include a metal block 108 formed of a first metal and a metal coating layer 110, having a thickness T2, formed of a second metal disposed between the metal block 108 and the DLC coating 104. In various embodiments, the metal coating layer 110 may be plated on or otherwise coupled with the metal block 108. In some embodiments, the metal block 108 may be formed of copper and the metal coating layer 110 may be a nickel or aluminum layer plated on the copper of the metal block 108. In other embodiments, the metal block 108 may be formed of a metal such as aluminum or stainless steel. In some embodiments, the metal plate 102 may include an adhesion strength promoter layer 112 that may be disposed between the metal coating layer 110 and the DLC coating 104.
In various embodiments, the adhesion strength promoter layer 112 may include a carbide forming material such as at least one of titanium (Ti), tungsten (W), chromium (Cr), or iron (Fe).

In various embodiments, the metal block 108 may be a Cu metal block, the metal coating layer 110 may be a Ni coating layer plated on the Cu metal block, and the DLC coated Ni-plated Cu pedestal may have a hardness greater than or equal to approximately 27.5 gigapascals (GPa) as measured by nanoindentation metrology. In some embodiments, this hardness of the DLC coated pedestal may compare favorably with typical hardness values of non-DLC coated pedestals, such as a hardness value of approximately 7.5 GPa for a Ni-plated Cu pedestal that does not include a DLC coating. A typical hardness of a silicon die may be approximately 12 GPa, such that the DLC coated pedestal, having a greater hardness than the silicon die, may undergo less wear than an uncoated pedestal in various embodiments. In some embodiments, DLC coating of the pedestal may allow for less frequent pedestal replacement, cost savings, and/or a lower likelihood of having a piece of FM such as silicon impinged on its surface during a testing process, causing potential damage to a next set of products to be tested.

In some embodiments, the DLC coating 104 may be an amorphous carbon in sp2 and sp3 hybridization. In various embodiments, the thickness T1 of the DLC coating 104 may be less than or equal to approximately 5 micrometers (μm), also referred to as microns herein, and in some embodiments the thickness T1 of the DLC coating 104 may be greater than or equal to approximately 2 microns and less than or equal to approximately 4 microns. In some embodiments, the thickness T1 of the DLC coating 104 may be approximately 2 microns.
In various embodiments, the use of a thin DLC coating 104 may provide increased wear resistance while still having a very small effect on the thermal resistance of the pedestal 100. Thermal resistance scales proportionally with the thickness of the material, which may also be referred to as a bond line thickness (BLT), and scales inversely with the thermal conductivity (k) of the material; this can generally be represented with the equation R = BLT/k, where R is the thermal resistance. In some embodiments, the thermal resistance of the DLC coating 104 may be on the order of approximately 10⁻³ to 10⁻⁴ degrees Celsius square centimeters per Watt (°C·cm²/W), resulting in the added DLC coating 104 having no significant effect on the overall thermal resistance of the pedestal 100. In some embodiments, the thickness T2 of the metal coating layer 110 may be greater than or equal to approximately 12 microns and less than or equal to approximately 25 microns.

In some embodiments, the metal plate 102 may be formed of a material such as stainless steel that does not require an adhesion strength promoter layer, and the DLC coating 104 may be disposed directly on a surface of the stainless steel rather than on an adhesion strength promoter layer. In various embodiments, the adhesion strength promoter layer may not be required because an alloy of the metal forming the metal plate 102 may already include sufficient carbon (C) content, as with stainless steel. In some embodiments, the metal coating layer 110 may not be present and the adhesion strength promoter layer 112 may be disposed on the metal block 108.
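As a rough numerical check of the R = BLT/k relation above: the thermal conductivity value used below is an assumed, illustrative figure for a DLC film (not taken from this disclosure), since real DLC films span a wide range of conductivities.

```python
# Rough check of R = BLT / k for a thin DLC coating.
# The conductivity k is an assumed illustrative value, not from the disclosure.
BLT_cm = 2e-4        # 2 micron coating thickness, expressed in cm
k = 0.10             # assumed conductivity: 10 W/(m*K) = 0.10 W/(cm*K)

R = BLT_cm / k       # thermal resistance in degC*cm^2/W
print(f"R = {R:.0e} degC*cm^2/W")  # on the order of 10^-3, consistent with the text
```

Under this assumed conductivity, a 2 micron coating contributes about 2×10⁻³ °C·cm²/W, within the order-of-magnitude range stated above.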
In various embodiments, the workpiece 202 may come into direct contact with bare die, over-molded die, or lidded packages during repeated cycling interactions with semiconductor products. The workpiece 202 is shown in contact with the die 206, but it should be understood that the die 206 and the substrate 208 are not a part of the semiconductor equipment device 200 and that the workpiece 202 may come into contact with many different dies 206 during operation of the semiconductor equipment device 200.

The die 206 may represent a discrete product made from a semiconductor material (e.g., silicon) using semiconductor fabrication techniques such as thin film deposition, lithography, etching, and the like used in connection with forming complementary metal-oxide-semiconductor (CMOS) devices. In some embodiments, the die 206 may be, include, or be a part of a radio frequency (RF) die. In other embodiments, the die may be, include, or be a part of a processor, memory, system-on-chip (SoC), or application specific integrated circuit (ASIC).

In some embodiments, the workpiece 202 may be a metal workpiece such as the semiconductor test pedestal 100 described with respect to Figure 1. In some embodiments, a metal layer 210 of the workpiece 202 may correspond to the metal plate 102 of the semiconductor test pedestal 100 described with respect to Figure 1. In various embodiments, the DLC coating 204 may be disposed on a surface 212 of the metal layer 210. In some embodiments, the semiconductor equipment device 200 may be a thermal control unit that may include a temperature regulation stage 214 coupled with the metal layer 210, which may be the metal plate 102 of the thermal semiconductor test pedestal 100. In various embodiments, the temperature regulation stage 214 may include a chiller plate and/or a heater plate.
A temperature sensor (not shown) may be coupled with the workpiece 202 such that a controller (not shown) may generate control signals to control the temperature regulation stage 214 in response to a temperature of the workpiece 202 as sensed by the temperature sensor, to keep the workpiece 202 at a predefined temperature or within a predefined temperature range. A positioner 216 may be coupled with the temperature regulation stage 214 and/or the workpiece 202 to position the workpiece 202 into and out of contact with semiconductor products such as the die 206. In some embodiments, the workpiece 202 may be removable from the semiconductor equipment device 200 so that it may be replaced when needed.

In other embodiments, the semiconductor equipment device 200 may be another type of semiconductor fabrication or test equipment, such as a thermal compression bonding (TCB) device where the workpiece 202 may be a TCB head, a saw used in singulation and trimming processes of substrate panels and wafers, or a drill for drilling plated through holes in a substrate, or may include a thermal collateral used in different test processes such as a thermal heat sink. In various embodiments, the DLC coating 204 may coat other portions of the metal layer 210 than those shown (e.g., multiple outer surfaces of a saw blade or outer surfaces of a drill bit). In some embodiments, the temperature regulation stage 214 may not be present, or a different device block may be coupled with the workpiece 202 and/or the positioner 216 that may include one or more elements in accordance with the type of semiconductor equipment device 200 that includes the workpiece 202.

Figure 3 schematically illustrates a flow diagram for a process 300 of fabricating a thermal semiconductor test pedestal such as the thermal semiconductor test pedestal 100 of Figure 1, in accordance with various embodiments. In various embodiments, the process 300 may include providing a metal plate (e.g., metal plate 102 of Fig.
1) at a block 302. The metal plate may include a metal block formed of a material such as stainless steel or copper in various embodiments. In some embodiments, the metal block may include a metal coating layer (e.g., metal coating layer 110 of Fig. 1) that may be formed of a different material than the metal block. In some embodiments, the metal block may be a copper metal block and the metal coating layer may be formed of nickel (Ni) or aluminum (Al). At a decision block 304, the process 300 may include determining whether an adhesion strength promoter layer is required.

If, at the decision block 304, it is determined that an adhesion strength promoter layer is required, the process may proceed to a block 306 and may include depositing an adhesion strength promoter layer (e.g., adhesion strength promoter layer 112 of Fig. 1) at the block 306. In various embodiments, the adhesion strength promoter layer may include at least one of titanium (Ti), tungsten (W), chromium (Cr), or iron (Fe). In some embodiments, the adhesion strength promoter layer may be deposited using at least one of physical vapor deposition (PVD) or chemical vapor deposition (CVD). In some embodiments, the process 300 may then proceed to a block 308 and may include coating the metal plate with a diamond-like carbon (DLC) layer (e.g., DLC coating 104 of Fig. 1 or DLC coating 204 of Fig. 2). In various embodiments, coating the metal plate with the DLC layer may include depositing the DLC layer using at least one of PVD or CVD over the adhesion strength promoter layer.

If, at the decision block 304, it is determined that an adhesion strength promoter layer is not required, such as when the metal plate may be a stainless steel metal plate, the process 300 may proceed to the block 308 and may include depositing a DLC layer at the block 308 directly on the metal plate without an adhesion strength promoter layer.
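The branching of process 300 can be sketched as a minimal Python function. The function name, material strings, and step labels below are hypothetical, chosen only to illustrate the decision at block 304; they are not part of the disclosed process.

```python
def fabricate_pedestal(plate_material: str) -> list:
    """Hypothetical sketch of process 300; names and the material
    check are illustrative, not from the disclosure."""
    steps = ["provide metal plate"]                        # block 302
    if plate_material != "stainless steel":                # decision block 304
        # Promoter layer needed (e.g., Ti, W, Cr, or Fe via PVD/CVD)
        steps.append("deposit adhesion strength promoter layer")  # block 306
    steps.append("deposit DLC coating (PVD/CVD)")          # block 308
    return steps

print(fabricate_pedestal("copper"))           # three steps, promoter included
print(fabricate_pedestal("stainless steel"))  # two steps, promoter skipped
```

The stainless steel branch skips block 306 because, as noted above, the alloy may already include sufficient carbon content for DLC adhesion.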
In various embodiments, the process 300 may include performing other actions at a block 310.

The following paragraphs provide examples of various ones of the embodiments disclosed herein.

Example 1 may include a thermal semiconductor test pedestal comprising: a metal plate; and a diamond-like carbon (DLC) coating disposed on a surface of the metal plate, wherein a surface of the DLC coating is to contact a semiconductor product during operation of the thermal semiconductor test pedestal.

Example 2 may include the subject matter of Example 1, wherein the metal plate includes a metal block formed of a first metal and a metal coating layer formed of a second metal disposed between the metal block and the DLC coating.

Example 3 may include the subject matter of Example 2, further comprising an adhesion strength promoter layer disposed between the metal coating layer and the DLC coating.

Example 4 may include the subject matter of Example 3, wherein the adhesion strength promoter layer includes at least one of titanium (Ti), tungsten (W), chromium (Cr), or iron (Fe).

Example 5 may include the subject matter of any one of Examples 1-4, wherein the DLC coating has a hardness greater than or equal to approximately 27.5 gigapascals (GPa).

Example 6 may include the subject matter of any one of Examples 1-5, wherein the DLC coating is an amorphous carbon in sp2 and sp3 hybridization.

Example 7 may include the subject matter of any one of Examples 1-6, wherein the DLC coating has a thickness of less than or equal to approximately 5 microns.

Example 8 may include the subject matter of any one of Examples 1-7, wherein the DLC coating has a thickness of greater than or equal to approximately 2 microns and less than or equal to approximately 5 microns.

Example 9 may include the subject matter of Example 1, wherein the metal plate is formed of stainless steel.

Example 10 may include the subject matter of any one of Examples 2-4, wherein the metal block is formed of copper (Cu), stainless
steel (SS) or aluminum (Al).

Example 11 may include the subject matter of Example 10, wherein the metal coating layer is formed of nickel (Ni).

Example 12 may include the subject matter of any one of Examples 10-11, wherein the metal coating layer has a thickness of greater than or equal to approximately 12 microns and less than or equal to approximately 25 microns.

Example 13 may include the subject matter of any one of Examples 1-2, further comprising an adhesion strength promoter layer disposed between the metal plate and the DLC coating, wherein the adhesion strength promoter layer includes a carbide forming material.

Example 14 may include a semiconductor equipment device comprising: a metal workpiece; and a diamond-like carbon (DLC) coating disposed on a surface of the metal workpiece, wherein a surface of the DLC coating is to contact a semiconductor product during operation of the semiconductor equipment device.

Example 15 may include the subject matter of Example 14, wherein the metal workpiece is a metal plate of a thermal semiconductor test pedestal.

Example 16 may include the subject matter of Example 15, wherein the semiconductor equipment device is a thermal control unit further comprising a temperature regulation stage coupled with the metal plate of the thermal semiconductor test pedestal.

Example 17 may include the subject matter of Example 14, wherein the metal workpiece is a thermal compression bonding (TCB) head.

Example 18 may include the subject matter of any one of Examples 14-17, wherein the metal workpiece includes a base structure formed of a first metal and a metal coating layer formed of a second metal, wherein the second metal is different than the first metal and the metal coating layer is plated on the base structure.

Example 19 may include the subject matter of Example 18, further comprising an adhesion strength promoter layer disposed between the metal coating layer and the DLC coating.

Example 20 may include a method of fabricating a
thermal semiconductor test pedestal comprising: providing a metal plate; and coating the metal plate with a diamond-like carbon (DLC) layer, wherein a surface of the DLC layer is to contact a semiconductor die during operation of the thermal semiconductor test pedestal.

Example 21 may include the subject matter of Example 20, further comprising depositing an adhesion strength promoter layer on the metal plate before coating the metal plate with the DLC layer.

Example 22 may include the subject matter of Example 21, wherein the metal plate includes a copper (Cu) block and a metal coating layer disposed between the Cu block and the DLC layer, wherein the metal coating layer is formed of a metal different than Cu.

Example 23 may include the subject matter of Example 22, wherein the metal coating layer is formed of nickel (Ni).

Example 24 may include the subject matter of any one of Examples 21-23, wherein the adhesion strength promoter layer includes at least one of titanium (Ti), tungsten (W), chromium (Cr), or iron (Fe).

Example 25 may include the subject matter of any one of Examples 21-24, wherein the adhesion strength promoter layer is deposited using at least one of physical vapor deposition (PVD) or chemical vapor deposition (CVD) and coating the metal plate includes depositing the DLC layer on the adhesion strength promoter layer using at least one of PVD or CVD.
Apparatuses, systems, and techniques to convert between tensor convolution and tensor contraction operations. In at least one embodiment, one or more convolution operations are performed on image data by at least contracting one or more tensors to generate one or more feature maps.
CLAIMS

WHAT IS CLAIMED IS:

1. A processor, comprising: one or more arithmetic logic units (ALUs) to perform one or more convolution operations on image data by at least contracting one or more tensors to generate one or more feature maps.

2. The processor of claim 1, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to generate a first feature map represented by an output tensor, and the one or more ALUs are to: construct a second activation tensor that has a higher number of modes than the first activation tensor; and generate the first feature map by performing a tensor contraction with the second activation tensor and the filter tensor.

3. The processor of claim 2, wherein the one or more ALUs are to construct the second activation tensor based at least in part on: identifying a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor.

4. The processor of claim 3, wherein the one or more ALUs are to construct the second activation tensor such that the first mode and the second mode of the second activation tensor have overlapping strides.

5. The processor of claim 4, wherein the identified mode of the first activation tensor has an identified stride, and the one or more ALUs are to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride.

6. The processor of claim 2, wherein the one or more ALUs are to construct the second activation tensor using data elements of the first activation tensor without adding additional data elements.

7.
A system, comprising: one or more processors to perform a first type of operation on a tensor to generate an output by: changing a representation of the tensor from a first number of dimensions to a second number of dimensions; and performing a second type of operation on the representation of the tensor with the second number of dimensions to generate the output.

8. The system of claim 7, wherein the first type of operation is a convolution, the second type of operation is a tensor contraction, and the second number of dimensions is greater than the first number of dimensions.

9. The system of claim 8, wherein the output is a feature map represented by an output tensor, the tensor is an activation tensor, the convolution is a convolution of the activation tensor and a filter tensor, and the one or more processors are to: identify a dimension of the activation tensor that is not present in the filter tensor and is not present in the output tensor; and replace the identified dimension with a first dimension from the output tensor and a second dimension from the filter tensor in the changed representation of the tensor.

10. The system of claim 9, wherein the first dimension and the second dimension have overlapping strides.

11. The system of claim 8, further comprising a memory, wherein the tensor includes one or more data elements stored in the memory, and the one or more processors are to change the representation of the tensor such that two dimensions of the tensor refer to a common set of data elements included in the one or more data elements.

12. The system of claim 7, wherein the first type of operation is a tensor contraction and the second type of operation is a convolution.

13.
The system of claim 8, further comprising one or more memories to store parameters corresponding to one or more neural networks, wherein the one or more processors are to perform an inferencing operation using the one or more neural networks based, at least in part, on the output of the tensor contraction.

14. A machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least generate one or more feature map outputs of one or more convolution operations on image data by at least contracting one or more tensors.

15. The machine-readable medium of claim 14, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to produce a first feature map represented by an output tensor, and wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to: construct a second activation tensor that has a higher number of modes than the first activation tensor; and perform a tensor contraction with the second activation tensor and the filter tensor to generate the first feature map.

16. The machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to: identify a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor; and replace the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor.

17. The machine-readable medium of claim 16, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to construct the second activation tensor such that the first mode and the second mode of the second activation tensor have overlapping strides.

18.
The machine-readable medium of claim 17, wherein the identified mode of the first activation tensor has an identified stride, and the set of instructions, which if performed by the one or more processors, further cause the one or more processors to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride.

19. The machine-readable medium of claim 15, wherein the first convolution operation is a two-dimensional (2D) convolution operation.

20. The machine-readable medium of claim 15, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to perform an inferencing operation using a neural network based, at least in part, on the first feature map.

21. A vehicle, comprising: a computer vision system that includes one or more processors to identify one or more features of a vehicle operating environment based at least in part on using one or more neural networks to generate one or more outputs of one or more convolution operations on image data by at least contracting one or more tensors to generate one or more feature maps; and one or more of a propulsion system and a directional control system to control one or more movements of the vehicle based at least in part on the identified one or more features.

22. The vehicle of claim 21, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to generate a first feature map represented by an output tensor, and the one or more processors are to: construct a second activation tensor that has a higher number of modes than the first activation tensor; and generate the first feature map by performing a tensor contraction with the second activation tensor and the filter tensor.

23.
The vehicle of claim 22, wherein the one or more processors are to construct the second activation tensor based at least in part on: identifying a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor.

24. The vehicle of claim 23, wherein the one or more processors are to construct the second activation tensor such that the first mode and the second mode of the second activation tensor have overlapping strides.

25. The vehicle of claim 24, wherein the identified mode of the first activation tensor has an identified stride, and the one or more processors are to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride.

26. The vehicle of claim 22, wherein the computer vision system includes a memory, the first activation tensor includes a plurality of data elements stored in the memory, and the one or more processors are to construct the second activation tensor such that two modes of the second activation tensor refer to a common set of data elements included in the plurality of data elements.

27. A method, comprising: identifying a first type of operation with a first tensor to generate an output; and generating the output by: constructing a second tensor based at least in part on changing a number of dimensions of the first tensor from a first number of dimensions to a second number of dimensions; and performing a second type of operation with the second tensor to generate the output.

28. The method of claim 27, wherein the first type of operation is a convolution, the second type of operation is a tensor contraction, and the second number of dimensions is greater than the first number of dimensions.

29.
The method of claim 28, wherein the output is a feature map represented by an output tensor, the first tensor is an activation tensor, the convolution is a convolution of the activation tensor and a filter tensor, and the method further includes: identifying a mode of the activation tensor that is not present in the filter tensor and is not present in the output tensor; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second tensor.

30. The method of claim 29, wherein constructing the second tensor includes constructing the second tensor such that the first mode and the second mode have overlapping strides.

31. The method of claim 28, wherein the convolution is a two-dimensional (2D) convolution.

32. The method of claim 28, further comprising: performing an inferencing operation using a neural network based, at least in part, on the tensor contraction.

33. The method of claim 27, wherein the first type of operation is a tensor contraction and the second type of operation is a convolution.
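The mode-splitting and overlapping-stride construction recited in the claims above can be illustrated with a small NumPy sketch: a 1-D correlation-style convolution is rewritten as a tensor contraction over a zero-copy view of the activation tensor whose two new modes share the original mode's stride. The shapes, names, and the use of NumPy are illustrative only; they are not the claimed implementation.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(8.0)             # activation tensor, one mode of extent W = 8
f = np.array([1.0, 2.0, 3.0])  # filter tensor, one mode of extent R = 3
Q = a.shape[0] - f.shape[0] + 1  # output mode extent, Q = 6

s = a.strides[0]
# Replace mode W with an output mode Q and a filter mode R whose strides
# both equal the original stride s: the windows overlap, and no data
# elements are copied (the view aliases a's buffer).
a2 = as_strided(a, shape=(Q, f.shape[0]), strides=(s, s))

out = np.einsum('qr,r->q', a2, f)  # tensor contraction over the filter mode

# Reference: direct sliding-window (correlation-form) convolution.
ref = np.array([np.dot(a[q:q + f.shape[0]], f) for q in range(Q)])
assert np.allclose(out, ref)
print(out)
```

Writing the result once the windows are materialized as a view, the convolution reduces to an ordinary contraction that generic tensor-contraction kernels can execute, which is the conversion the claims describe.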
PROCESSOR AND SYSTEM TO CONVERT TENSOR OPERATIONS IN MACHINE LEARNING

CROSS REFERENCE TO RELATED APPLICATION

This application claims priority to U.S. Patent Application No. 16/559,544, filed September 3, 2019, entitled “PROCESSOR AND SYSTEM TO CONVERT TENSOR OPERATIONS IN MACHINE LEARNING,” the entire contents of which are incorporated herein by reference in their entirety and for all purposes.

TECHNICAL FIELD

[0001] At least one embodiment pertains to processing resources used to perform and facilitate artificial intelligence. For example, at least one embodiment pertains to processors or computing systems used to train neural networks according to various novel techniques described herein.

BACKGROUND

[0002] Tensor convolution operations are used in many machine learning approaches, such as training and inferencing with deep learning techniques that use convolutional neural networks. These tensor convolution operations can use significant memory, time, or computing resources, and may require specialized tensor convolution libraries to function. Approaches to the use of tensor convolution operations in deep learning techniques can be improved.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] FIG. 1 illustrates a flowchart of a technique of constructing a tensor to generate an output, according to at least one embodiment;

[0004] FIG. 2 illustrates a flowchart of a technique of generating a feature map by a tensor contraction, according to at least one embodiment;

[0005] FIG. 3 illustrates a flowchart of a technique of constructing a tensor, according to at least one embodiment;

[0006] FIG.
4 illustrates a flowchart of a technique of splitting a mode of a tensor, according to at least one embodiment;

[0007] FIG. 5 illustrates a block diagram of memory to store tensor data, according to at least one embodiment;

[0008] FIG. 6A illustrates inference and/or training logic, according to at least one embodiment;

[0009] FIG. 6B illustrates inference and/or training logic, according to at least one embodiment;

[0010] FIG. 7 illustrates training and deployment of a neural network, according to at least one embodiment;

[0011] FIG. 8 illustrates an example data center system, according to at least one embodiment;

[0012] FIG. 9A illustrates an example of an autonomous vehicle, according to at least one embodiment;

[0013] FIG. 9B illustrates an example of camera locations and fields of view for the autonomous vehicle of FIG. 9A, according to at least one embodiment;

[0014] FIG. 9C is a block diagram illustrating an example system architecture for the autonomous vehicle of FIG. 9A, according to at least one embodiment;

[0015] FIG. 9D is a diagram illustrating a system for communication between cloud-based server(s) and the autonomous vehicle of FIG. 9A, according to at least one embodiment;

[0016] FIG. 10 is a block diagram illustrating a computer system, according to at least one embodiment;

[0017] FIG. 11 is a block diagram illustrating a computer system, according to at least one embodiment;

[0018] FIG. 12 illustrates a computer system, according to at least one embodiment;

[0019] FIG. 13 illustrates a computer system, according to at least one embodiment;

[0020] FIG. 14A illustrates a computer system, according to at least one embodiment;

[0021] FIG. 14B illustrates a computer system, according to at least one embodiment;

[0022] FIG. 14C illustrates a computer system, according to at least one embodiment;

[0023] FIG. 14D illustrates a computer system, according to at least one embodiment;

[0024] FIGS. 14E and 14F illustrate a shared programming model, according to at least one
embodiment; [0025] FIG.15 illustrates exemplary integrated circuits and associated graphics processors, according to at least one embodiment; [0026] FIGS.16A-16B illustrate exemplary integrated circuits and associated graphics processors, according to at least one embodiment; [0027] FIGS.17A-17B illustrate additional exemplary graphics processor logic, according to at least one embodiment; [0028] FIG.18 illustrates a computer system, according to at least one embodiment; [0029] FIG.19A illustrates a parallel processor, according to at least one embodiment; [0030] FIG.19B illustrates a partition unit, according to at least one embodiment; [0031] FIG.19C illustrates a processing cluster, according to at least one embodiment; [0032] FIG.19D illustrates a graphics multiprocessor, according to at least one embodiment; [0033] FIG.20 illustrates a multi-graphics processing unit (GPU) system, according to at least one embodiment; [0034] FIG.21 illustrates a graphics processor, according to at least one embodiment; [0035] FIG.22 is a block diagram illustrating a processor micro-architecture for a processor, according to at least one embodiment; [0036] FIG.23 illustrates a deep learning application processor, according to at least one embodiment; [0037] FIG.24 is a block diagram illustrating an example neuromorphic processor, according to at least one embodiment; [0038] FIG.25 illustrates at least portions of a graphics processor, according to one or more embodiments; [0039] FIG.26 illustrates at least portions of a graphics processor, according to one or more embodiments; [0040] FIG.27 illustrates at least portions of a graphics processor, according to one or more embodiments; [0041] FIG.28 is a block diagram of a graphics processing engine of a graphics processor in accordance with at least one embodiment; [0042] FIG.29 is a block diagram of at least portions of a graphics processor core, according to at least one embodiment; [0043] FIGS.30A-30B illustrate thread execution 
logic including an array of processing elements of a graphics processor core, according to at least one embodiment; [0044] FIG.31 illustrates a parallel processing unit (“PPU”), according to at least one embodiment; [0045] FIG.32 illustrates a general processing cluster (“GPC”), according to at least one embodiment; [0046] FIG.33 illustrates a memory partition unit of a parallel processing unit (“PPU”), according to at least one embodiment; and [0047] FIG.34 illustrates a streaming multi-processor, according to at least one embodiment. DETAILED DESCRIPTION [0048] In at least one embodiment, one or more techniques relate to a duality between tensor contractions and tensor convolutions. In at least one embodiment, a technique includes an algorithm that reinterprets any n-mode convolution in terms of a tensor contraction. In at least one embodiment, a technique reinterprets a tensor contraction in terms of a convolution. [0049] FIG.1 illustrates a flowchart of a technique 100 of constructing a tensor to generate an output, according to at least one embodiment. In at least one embodiment, inference and/or training logic 615, described with respect to FIGS.6A and 6B, performs technique 100. In at least one embodiment, arithmetic logic units (ALUs) 610 of inference and/or training logic 615 perform technique 100. In at least one embodiment, inference and/or training logic 615 includes one or more processors to perform technique 100. In at least one embodiment, inference and/or training logic 615 includes a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors of inference and/or training logic 615, cause one or more processors of inference and/or training logic 615 to perform technique 100. In at least one embodiment, set of instructions is provided to ALUs 610 to cause ALUs 610 to perform technique 100. 
In at least one embodiment, computational hardware 602 and/or computational hardware 606, described with respect to FIG. 6B, performs technique 100. In at least one embodiment, a first inference and/or training logic 615 identifies first type of operation at block 102, constructs second tensor at block 104, and causes a second inference and/or training logic 615 to perform second type of operation with second tensor at block 106. In at least one embodiment, second inference and/or training logic 615 is part of a graphics processing unit (GPU). In at least one embodiment, first inference and/or training logic 615 issues instructions to GPU to perform second type of operation with second tensor, and second inference and/or training logic 615, operating on GPU, performs second type of operation in response to issued instructions. [0050] In at least one embodiment, a tensor refers to a dense n-dimensional (or n-mode) array. In at least one embodiment, tensors are a generalization of matrices to higher dimensions; for instance, scalars (e.g., α, β, γ), vectors (e.g., a, b, c), and matrices (e.g., A, B, C) are 0-mode, 1-mode, and 2-mode tensors, respectively. Tensors can be represented by calligraphic capital letters (e.g., A, B, C). For instance, A ∈ ℝe1×e2×…×en can represent an n-mode tensor with ei denoting an extent of an ith mode. A shape of a tensor can be referred to as e1 × e2 × … × en and a size (total number of entries) of a tensor can be referred to as ∏i ei. To simplify notation, symbolic names may be assigned to modes such that Ai1, i2, …, in denotes an n-mode tensor with its modes named i1, i2, …, in. Modes of a tensor can be referred to as dimensions. N-mode tensors can be referred to as mode-n tensors. [0051] A notation A(i1, i2, …, in) can denote a single element of a tensor. 
In at least one embodiment, a location Loc(A(i1, i2, …, in)) of that element relative to a memory location of A is given by: Loc(A(i1, i2, …, in)) = i1 × stride(i1) + i2 × stride(i2) + … + in × stride(in) (1) where stride(il) represents a displacement in physical memory between two logically neighboring elements along a mode il. In at least one embodiment, a column-major matrix Am,n has stride(m) = 1 and stride(n) = m. [0052] In at least one embodiment, technique 100 includes, at a block 102, identifying a first type of operation with a first tensor that, when performed, generates an output. In at least one embodiment, technique 100 includes, at a block 104, constructing a second tensor. In at least one embodiment, constructing second tensor at block 104 is based at least in part on changing a number of dimensions of first tensor from a first number of dimensions to a second number of dimensions, as further described with respect to FIGS. 3-5. In at least one embodiment, constructing second tensor is performed using data elements of first tensor for second tensor, without adding additional physical data elements. In at least one embodiment, constructing second tensor includes adding additional logical data elements to second tensor that refer to already existing physical data elements of first tensor. In at least one embodiment, physical data elements are stored in memory locations and additional logical data elements of second tensor point to memory locations where physical data elements of first tensor are stored. In at least one embodiment, technique 100 includes, at a block 106, performing a second type of operation with second tensor. In at least one embodiment, performing second type of operation with second tensor generates a same output as would have been generated by first type of operation with first tensor. In at least one embodiment, technique 100 is performed in constant time, O(1), with respect to a problem size. 
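Equation (1) can be sketched in NumPy as follows (variable names here are illustrative, not from the specification): the offset of an element is the sum of each index times the stride of its mode, and a column-major matrix exhibits stride(m) = 1 and stride(n) = m.

```python
import numpy as np

# Sketch of Equation (1): offset of element A(i1, ..., in) relative to the
# base of A is i1*stride(i1) + ... + in*stride(in). The helper `loc` is ours.
def loc(indices, strides):
    """Offset, in elements, of a tensor element relative to A's base."""
    return sum(i * s for i, s in zip(indices, strides))

m, n = 4, 3
A = np.zeros((m, n), order="F")  # column-major layout, as in the text
elem_strides = tuple(s // A.itemsize for s in A.strides)

assert elem_strides == (1, m)                      # stride(m)=1, stride(n)=m
assert loc((2, 1), elem_strides) == 2 * 1 + 1 * m  # element A(2, 1)
```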
[0053] In at least one embodiment, first type of operation identified at block 102 is a tensor convolution, second type of operation performed at block 106 is a tensor contraction, and second number of dimensions of second tensor is greater than first number of dimensions of first tensor. In at least one embodiment, output is a feature map represented by an output tensor. In at least one embodiment, first tensor is an activation tensor, and convolution is a convolution of activation tensor and a filter tensor. [0054] In at least one embodiment, a first software library, such as a tensor convolution library, is not available to a system performing technique 100 such that first type of operation cannot be performed directly. In at least one embodiment, a second software library, such as a tensor contraction library, is available to system performing technique 100, and performing second type of operation with second tensor is performed using second software library. In at least one embodiment, tensor convolution library includes at least one of computer code, classes, procedures, scripts, and configuration data to provide at least one tensor convolution function via a tensor convolution library application programming interface (API). In at least one embodiment, tensor contraction library includes at least one of computer code, classes, procedures, scripts, and configuration data to provide at least one tensor contraction function via a tensor contraction library API. In at least one embodiment, performing second type of operation at block 106 is performed based at least in part on calling second software library via an API of second software library. 
In at least one embodiment, a first data structure representing second tensor and a second data structure representing an additional tensor, such as a filter tensor, are passed to second software library with a function call, which causes one or more processors to execute instructions and perform second type of operation with second tensor and additional tensor. In at least one embodiment, first type of operation is tensor convolution, second type of operation is tensor contraction, first software library is tensor convolution library, and second software library is tensor contraction library. In at least one embodiment, first type of operation is tensor contraction, second type of operation is tensor convolution, first software library is tensor contraction library, and second software library is tensor convolution library. [0055] An arbitrary-dimensional tensor contraction can be described in relation to notation for a matrix-matrix multiplication. Where C ∈ ℝM×N, A ∈ ℝM×K, and B ∈ ℝK×N, a matrix-matrix multiplication is expressed as: C(m, n) = Σk A(m, k) · B(k, n) (2) [0056] With this in mind, tensor contractions can be described using similar notation. [0057] A tensor contraction can be described with respect to letting A, B, and C be dA-, dB-, and dC-mode tensors, respectively. An extension to a “contracted tensor product” may be considered, and a tensor contraction may be expressed as: C(pC(m1, …, mα, n1, …, nβ)) = Σk1, …, kγ A(pA(m1, …, mα, k1, …, kγ)) · B(pB(n1, …, nβ, k1, …, kγ)) (3) [0058] where m1, …, mα, n1, …, nβ, and k1, …, kγ respectively represent free modes of A (modes that appear in C and A), free modes of B (modes that appear in C and B), as well as contracted modes (common modes of A and B), with dA = α + γ, dB = β + γ, and dC = α + β. Moreover, pA, pB, and pC are permutations that allow modes to appear in any order. [0059] To simplify notation, “Einstein Notation” may be adopted, where summations over contracted modes are implicit, such that Equation (3) becomes: C(pC(m1, …, mα, n1, …, nβ)) = A(pA(m1, …, mα, k1, …, kγ)) · B(pB(n1, …, nβ, k1, …, kγ)) (4) [0060] In at least one embodiment, technique 100 transforms a classic 2D spatial convolution to a tensor contraction. 
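As a minimal concrete instance of the matrix-multiplication-as-contraction notation above, the following sketch uses NumPy's `einsum`, which evaluates Einstein-notation expressions (sizes chosen arbitrarily for illustration):

```python
import numpy as np

# A matrix-matrix multiply C(m, n) = sum_k A(m, k) * B(k, n) is the simplest
# tensor contraction: k is the contracted mode, m and n are free modes.
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

# Einstein notation: the summation over the contracted mode k is implicit.
C = np.einsum("mk,kn->mn", A, B)

assert np.allclose(C, A @ B)
```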
In at least one embodiment, a two-dimensional convolution of two four-mode tensors A and F can be described as follows: O(n, k, p, q) = Σc,r,s A(n, c, p + r, q + s) · F(k, c, r, s) (5) [0061] where O, A, and F respectively represent four-dimensional output, activation and filter tensors. In at least one embodiment, h- and w-mode of tensor A exhibit a peculiar access pattern which disqualifies (5) from being a tensor contraction. In at least one embodiment, a tensor contraction requires that all modes that either exist in tensor A or F also appear in tensor O. In at least one embodiment, n corresponds to a batch size. In at least one embodiment, k corresponds to an output channel. In at least one embodiment, p corresponds to an output height position. In at least one embodiment, q corresponds to an output width position. In at least one embodiment, r corresponds to a filter height. In at least one embodiment, s corresponds to a filter width. In at least one embodiment, c corresponds to an input channel. In at least one embodiment, h corresponds to an input image height. In at least one embodiment, w corresponds to an input image width. [0062] In at least one embodiment, a four-dimensional activation tensor AN,C,H,W is (logically) reinterpreted as a six-dimensional tensor ÃN,C,P,R,Q,S with overlapping strides. In at least one embodiment, overlapping strides refers to overlapping memory locations for different logical data elements in same tensor. In at least one embodiment, overlapping memory locations for different logical data elements refers to two different logical data elements having physical data stored at a same physical memory address. In at least one embodiment, using such a reinterpretation yields tensor contraction: [0063] On,k,p,q = Ãn,c,p,r,q,s · Fk,c,r,s (6) [0064] In at least one embodiment, with respect to Equation (6), n, p, q, k denote free modes and r, s, c represent contracted modes. In at least one embodiment, performing second type of operation at block 106 includes performing a tensor contraction such as indicated by Equation 6 to generate an output tensor that represents a feature map. 
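The reinterpretation described above can be sketched in NumPy (variable names and sizes are ours; assumes unit stride and no padding): `as_strided` builds the six-mode view whose mode pairs reuse, and therefore overlap, the strides of h and w, and `einsum` performs the contraction of Equation (6).

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Reinterpret activation A[N, C, H, W] as a six-mode view
# Atilde[N, C, P, R, Q, S] with overlapping strides, then contract over c, r, s.
rng = np.random.default_rng(0)
N, C, H, W = 2, 3, 6, 5
K, R, S = 4, 3, 3
P, Q = H - R + 1, W - S + 1

A = rng.random((N, C, H, W))
F = rng.random((K, C, R, S))

sN, sC, sH, sW = A.strides
Atilde = as_strided(A, shape=(N, C, P, R, Q, S),
                    strides=(sN, sC, sH, sH, sW, sW))  # no data is copied

O = np.einsum("ncprqs,kcrs->nkpq", Atilde, F)  # tensor contraction

# Reference: direct 2D convolution (cross-correlation) over sliding windows.
O_ref = np.zeros((N, K, P, Q))
for p in range(P):
    for q in range(Q):
        O_ref[:, :, p, q] = np.tensordot(A[:, :, p:p + R, q:q + S], F,
                                         axes=([1, 2, 3], [1, 2, 3]))
assert np.allclose(O, O_ref)
```

Note that constructing `Atilde` only rewrites shape and stride metadata, consistent with the constant-time, O(1), claim in the text.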
[0065] In at least one embodiment, an arbitrary-dimensional tensor convolution is first type of operation at block 102, and is performed in terms of a tensor contraction at block 106. In at least one embodiment, normal convolutions using cross-correlation are first type of operation at block 102, and are performed in terms of a tensor contraction at block 106. In at least one embodiment, at least one of sub-sampling, dilation, and grouped convolutions are first type of operation at block 102, and are performed in terms of a tensor contraction at block 106. In at least one embodiment, a forward propagation function (Fprop) is first type of operation at block 102, and is performed in terms of a tensor contraction at block 106. In at least one embodiment, a data gradient function (Dgrad) is first type of operation at block 102, and is performed in terms of a tensor contraction at block 106. In at least one embodiment, a weight gradient function (Wgrad) is first type of operation at block 102, and is performed in terms of a tensor contraction at block 106. In at least one embodiment, at least one of first tensor, second tensor, output tensor, and filter tensor are stored in an NHWC (N, height, width, channel) type layout in memory, where N corresponds to batch size, using a generalized row-major memory layout (e.g., stride(C)=1, stride(W)=C, stride(H)=W*C, stride(N)=H*W*C). In at least one embodiment, at least one of first tensor, second tensor, output tensor, and filter tensor are stored in an NCHW (N, channel, height, width) type layout in memory, using a generalized row-major layout. In at least one embodiment, at least one of first tensor, second tensor, output tensor, and filter tensor are stored in an NC/32HW32 type layout in memory, with two groups of 32 channels. 
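The NHWC strides stated above can be checked directly (a sketch with arbitrary sizes): NumPy's default row-major order over an (N, H, W, C) shape yields exactly this generalized row-major layout.

```python
import numpy as np

# Generalized row-major NHWC layout: stride(C)=1, stride(W)=C,
# stride(H)=W*C, stride(N)=H*W*C, in units of elements.
N, H, W, C = 2, 4, 5, 3
t = np.zeros((N, H, W, C))
elem_strides = tuple(s // t.itemsize for s in t.strides)

assert elem_strides == (H * W * C, W * C, C, 1)
```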
In at least one embodiment, at least one of first tensor, second tensor, output tensor, and filter tensor are stored in some other type layout in memory, such as CHWN (channel, height, width, N), NCDHW (N, channel, depth, height, width), or NDHWC (N, depth, height, width, channel). In at least one embodiment, technique 100 is agnostic of memory layout and applies to tensors having any number of dimensions stored in any memory layout. [0066] In at least one embodiment, a tensor contraction is formulated and performed in terms of a generic n-dimensional convolution. In at least one embodiment, first type of operation of block 102 is a tensor contraction, and second type of operation of block 106 is a tensor convolution. In at least one embodiment, first type of operation is a tensor contraction such as Dm1,m2,n = Am1,k,m2 · Bk,n (with summation over contracted mode k implicit), where A is first tensor of block 102, D is a tensor output, and second type of operation is a tensor convolution that generates D by performing second type of operation at block 106 with second tensor constructed at block 104 and B. In at least one embodiment where first type of operation is a tensor contraction and second type of operation is a tensor convolution, second type of operation is performed with respect to computational chemistry data or computational physics data. [0067] In at least one embodiment where first operation of technique 100 is a tensor contraction and second operation of technique 100 is a tensor convolution, supported convolution formats include a convolution of an activation tensor ÃN,C,H,W with a filter tensor FK,C,R,S, yielding ON,K,P,Q. In at least one embodiment, first operation is represented by a concrete tensor contraction, and m1 is mapped to N mode of Ã, k is mapped to C mode of Ã, m2 is mapped to H mode of Ã, and a corresponding filter F dimension, such as R, is set to one, effectively not performing a convolution along that mode, and n is mapped to K mode of F. 
In at least one embodiment, similarly, first operation is represented by a contraction of form Dm,n = Am,k1,k2 · Bk1,k2,n, which is reinterpreted as a convolution by mapping m to N mode of Ã, k1 to C mode of Ã, k2 to H mode of Ã, setting a corresponding filter dimension, such as R, to extent(k2), contracting entire mode, and mapping n to K mode of F. In at least one embodiment, if a generic n-dimensional convolution software library implementation is available that convolves an n-dimensional tensor A with an m-dimensional tensor B to yield a k-dimensional tensor C along x convolved modes, a generic tensor contraction can be reinterpreted in constant time, O(1), in terms of a convolution. [0068] FIG.2 illustrates a flowchart of a technique 200 of generating a feature map by a tensor contraction, according to at least one embodiment. In at least one embodiment, inference and/or training logic 615, described with respect to FIGS.6A and 6B, performs technique 200. In at least one embodiment, arithmetic logic units (ALUs) 610 of inference and/or training logic 615 perform technique 200. In at least one embodiment, inference and/or training logic 615 includes one or more processors to perform technique 200. In at least one embodiment, inference and/or training logic 615 includes a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors of inference and/or training logic 615, cause one or more processors of inference and/or training logic 615 to perform technique 200. In at least one embodiment, set of instructions is provided to ALUs 610 to cause ALUs 610 to perform technique 200. In at least one embodiment, computational hardware 602 and/or computational hardware 606, described with respect to FIG.6B, performs technique 200. In at least one embodiment, computational hardware 602 and/or computational hardware 606, described with respect to FIG.6B, performs technique 100. 
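The reverse mapping, contraction viewed as a convolution, can be sketched as follows (sizes and names are ours): a matrix-style contraction D(m, n) = Σk A(m, k) · B(k, n) is a "convolution" whose filter extent along the contracted mode equals that mode's full extent, so the convolved output extent collapses to one.

```python
import numpy as np

# Contraction as a full-extent "valid" correlation along the contracted mode.
rng = np.random.default_rng(1)
M, K, N = 3, 5, 2
A = rng.random((M, K))
B = rng.random((K, N))

D = np.empty((M, N))
for m in range(M):
    for n in range(N):
        # "valid" correlation with a full-extent filter yields one value,
        # which is exactly sum_k A[m, k] * B[k, n]
        D[m, n] = np.correlate(A[m], B[:, n], mode="valid")[0]

assert np.allclose(D, A @ B)
```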
In at least one embodiment, a first inference and/or training logic 615 identifies convolution operation at block 202, identifies convolved modes of first activation tensor at block 204, constructs second activation tensor at block 206, and causes a second inference and/or training logic 615 to generate feature map at block 208. In at least one embodiment, second inference and/or training logic 615 is part of a GPU. In at least one embodiment, first inference and/or training logic 615 issues instructions to GPU to generate feature map using tensor contraction of second activation tensor and filter tensor, and second inference and/or training logic 615, operating on GPU, generates feature map in response to issued instructions. [0069] In at least one embodiment, technique 200 includes, at a block 202, identifying a convolution operation with a first activation tensor and a filter tensor that generates a feature map. In at least one embodiment, convolution operation is on image data, such as an image file (e.g., bitmap), a frame of a video, or other such image data. In at least one embodiment, technique 200 includes, at a block 204, identifying convolved modes of first activation tensor. In at least one embodiment, technique 200 includes, at a block 206, constructing a second activation tensor. In at least one embodiment, constructing second activation tensor is based at least in part on first activation tensor. In at least one embodiment, second activation tensor has a higher number of modes than first activation tensor. In at least one embodiment, a first mode and a second mode of second activation tensor have overlapping strides. In at least one embodiment, all modes of first activation tensor have non-overlapping strides. In at least one embodiment, strides of first mode and second mode are identical. In at least one embodiment, strides of first mode and second mode are set to a stride of a convolved mode of first activation tensor. 
In at least one embodiment, constructing second activation tensor is performed using data elements of first activation tensor without adding additional data elements. In at least one embodiment, technique 200 includes, at a block 208, generating feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is represented by an output tensor. In at least one embodiment, technique 200 is performed in constant time, O(1), with respect to a problem size. [0070] In at least one embodiment, a 2D convolution implemented as a tensor contraction generates feature map. In at least one embodiment, an image processing system uses feature map to detect features in frames of a video. In at least one embodiment, a 3D convolution implemented as a tensor contraction generates feature map for video analysis. In at least one embodiment, a medical imaging system such as a magnetic resonance imaging (MRI) system or a computed tomography (CT) system generates feature map with a 4D convolution implemented as a tensor contraction. In at least one embodiment, a multispectral imaging system generates feature map with a convolution implemented as a tensor contraction. In at least one embodiment, a system that performs analysis on other types of sensor data such as acoustic sensor data generates feature map with a convolution implemented as a tensor contraction. In at least one embodiment, a natural language processing system generates feature map with a 1D convolution implemented as a contraction. [0071] FIG.3 illustrates a flowchart of a technique 300 of constructing a tensor, according to at least one embodiment. In at least one embodiment, inference and/or training logic 615, described with respect to FIGS.6A and 6B, performs technique 300. In at least one embodiment, arithmetic logic units (ALUs) 610 of inference and/or training logic 615 perform technique 300. 
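The 1D case mentioned in [0070] can be sketched the same way (our sizes; unit stride, no padding): the convolved mode of the activation is split into two logical modes p and r with identical, overlapping strides, which are then contracted against the filter.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# 1D convolution expressed as a contraction over a split, overlapping view.
L, R = 8, 3
P = L - R + 1
a = np.arange(L, dtype=float)
f = np.array([1.0, -2.0, 1.0])

(s,) = a.strides
a2 = as_strided(a, shape=(P, R), strides=(s, s))  # a2[p, r] == a[p + r]
out = np.einsum("pr,r->p", a2, f)                 # contract mode r

assert np.allclose(out, np.correlate(a, f, mode="valid"))
```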
In at least one embodiment, inference and/or training logic 615 includes one or more processors to perform technique 300. In at least one embodiment, inference and/or training logic 615 includes a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors of inference and/or training logic 615, cause one or more processors of inference and/or training logic 615 to perform technique 300. In at least one embodiment, set of instructions is provided to ALUs 610 to cause ALUs 610 to perform technique 300. In at least one embodiment, computational hardware 602 and/or computational hardware 606, described with respect to FIG.6B, performs technique 300. [0072] In at least one embodiment, technique 300 includes identifying, at a block 302, modes of an activation tensor, a filter tensor, and an output tensor. In at least one embodiment, at a decision block 304, it is determined whether a mode of activation tensor is in filter tensor or output tensor. In at least one embodiment, if, at decision block 304, mode is not in filter tensor or output tensor, technique 300 includes splitting mode at a block 306. In at least one embodiment, if, at decision block 304, mode is in filter tensor or mode is in output tensor, technique 300 proceeds to a decision block 308 where it is determined whether activation tensor includes additional modes not already evaluated at decision block 304. In at least one embodiment, modes added at block 306 are not considered to be additional modes of activation tensor not already evaluated at decision block 304 in making determination at decision block 308. In at least one embodiment, technique 300 also proceeds to decision block 308 after splitting mode at block 306. In at least one embodiment, if, at decision block 308, it is determined that activation tensor includes additional modes, technique 300 returns to decision block 304 to evaluate an additional mode. 
In at least one embodiment, if, at decision block 308, it is determined that activation tensor does not include additional modes, technique 300 proceeds to block 310 that includes performing additional actions. In at least one embodiment, performing additional actions includes storing a data structure having identifiers that correspond to modes created at block 306, and that refer to previously stored data of activation tensor identified at block 302. [0073] FIG.4 illustrates a flowchart of a technique 400 of splitting a mode of a tensor, according to at least one embodiment. In at least one embodiment, inference and/or training logic 615, described with respect to FIGS.6A and 6B, performs technique 400. In at least one embodiment, arithmetic logic units (ALUs) 610 of inference and/or training logic 615 perform technique 400. In at least one embodiment, inference and/or training logic 615 includes one or more processors to perform technique 400. In at least one embodiment, inference and/or training logic 615 includes a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors of inference and/or training logic 615, cause one or more processors of inference and/or training logic 615 to perform technique 400. In at least one embodiment, set of instructions is provided to ALUs 610 to cause ALUs 610 to perform technique 400. In at least one embodiment, computational hardware 602 and/or computational hardware 606, described with respect to FIG.6B, performs technique 400. [0074] In at least one embodiment, technique 400 includes, at a block 402, identifying a mode of a filter tensor that corresponds to a convolved mode of a first activation tensor that has data stored in a memory with a stride. In at least one embodiment, technique 400 includes, at a block 404, identifying a mode of an output tensor that corresponds to convolved mode of first activation tensor. 
In at least one embodiment, at a block 406, technique 400 includes setting a first mode of a second activation tensor to identified mode of filter tensor. In at least one embodiment, technique 400 includes, at a block 408, setting a second mode of second activation tensor to identified mode of output tensor. In at least one embodiment, at a block 410, technique 400 includes pointing elements of first mode and second mode of activation tensor to same data stored in memory with same stride. [0075] FIG.5 illustrates a block diagram 500 of a memory 502 to store tensor data, according to at least one embodiment. In at least one embodiment, memory 502 corresponds to code and/or data storage 601 and/or code and/or data storage 605 described with respect to FIGS.6A and 6B. In at least one embodiment, operations described with respect to block diagram 500 are performed by inference and/or training logic 615 described with respect to FIGS.6A and 6B. In at least one embodiment, operations described with respect to block diagram 500 are performed by arithmetic logic units (ALUs) 610 of inference and/or training logic 615. In at least one embodiment, inference and/or training logic 615 includes one or more processors to perform operations described with respect to block diagram 500. In at least one embodiment, inference and/or training logic 615 includes a machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors of inference and/or training logic 615, cause one or more processors of inference and/or training logic 615 to perform operations described with respect to block diagram 500. In at least one embodiment, set of instructions is provided to ALUs 610 to cause ALUs 610 to perform operations described with respect to block diagram 500. 
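Blocks 402-410 of technique 400 can be sketched as metadata-only mode splitting (the helper `split_mode` and the sizes are illustrative, assuming unit stride and no padding): the split halves of a convolved mode reuse that mode's stride and point at the same stored data.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def split_mode(shape, strides, axis, filter_extent):
    """Split `axis` into an output-side mode and a filter-side mode."""
    out_extent = shape[axis] - filter_extent + 1
    new_shape = shape[:axis] + (out_extent, filter_extent) + shape[axis + 1:]
    new_strides = (strides[:axis] + (strides[axis], strides[axis])
                   + strides[axis + 1:])
    return new_shape, new_strides

A = np.arange(2 * 3 * 6 * 5, dtype=float).reshape(2, 3, 6, 5)  # N, C, H, W
shp, strd = split_mode(A.shape, A.strides, 2, 3)   # H -> (P, R)
shp, strd = split_mode(shp, strd, 4, 3)            # W -> (Q, S)
Atilde = as_strided(A, shape=shp, strides=strd)    # N, C, P, R, Q, S view

assert Atilde.shape == (2, 3, 4, 3, 3, 3)
assert Atilde[0, 0, 1, 1, 0, 2] == A[0, 0, 2, 2]   # same stored element
assert np.shares_memory(A, Atilde)                 # block 410: no copy
```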
In at least one embodiment, computational hardware 602 and/or computational hardware 606, described with respect to FIG.6B, performs operations described with respect to block diagram 500. [0076] In at least one embodiment, a tensor convolution operation, if performed, generates an output tensor 504 by convolving an activation tensor 506 and a filter tensor 508. In at least one embodiment, tensor convolution operation corresponds to first type of operation identified in block 102 of FIG.1. In at least one embodiment, activation tensor 506 includes modes identified as ‘n’, ‘c’, ‘h’, and ‘w’. In at least one embodiment, output tensor 504 includes modes identified as ‘n’, ‘k’, ‘p’, and ‘q’. In at least one embodiment, filter tensor 508 includes modes identified as ‘k’, ‘c’, ‘r’, and ‘s’. In at least one embodiment, activation tensor 506 corresponds to first tensor discussed with respect to FIG.1. In at least one embodiment, activation tensor 506 corresponds to first activation tensor discussed with respect to FIG.2. In at least one embodiment, activation tensor 506 corresponds to activation tensor discussed with respect to FIG.3. In at least one embodiment, activation tensor 506 corresponds to first activation tensor discussed with respect to FIG.4. In at least one embodiment, activation tensor 506 can have less than four dimensions or more than four dimensions, activation tensor 516 can have a corresponding different number of dimensions, filter tensor 508 can have a different number of dimensions, number of convolved modes can be different than two, and output tensor 504 can have a corresponding different number of dimensions. [0077] In at least one embodiment, tensor convolution operation corresponds to convolution operation identified in block 202 of FIG.2. In at least one embodiment, memory 502 stores data elements of activation tensor 506 in memory locations 510. In at least one embodiment, mode ‘h’ of activation tensor 506 has a stride 512. 
In at least one embodiment, mode ‘w’ of activation tensor 506 has a stride 514. In at least one embodiment, stride 512 and stride 514 represent displacements in physical memory between two logically neighboring elements along ‘h’ mode and ‘w’ mode, respectively. Strides of ‘n’ and ‘c’ modes of activation tensor 506 in memory 502 are not shown for clarity. [0078] In at least one embodiment, an activation tensor 516 is constructed. In at least one embodiment, activation tensor 516 includes modes identified as ‘n’, ‘c’, ‘p’, ‘r’, ‘q’, and ‘s’. In at least one embodiment, activation tensor 516 corresponds to second tensor constructed in block 104 of FIG.1. In at least one embodiment, activation tensor 516 corresponds to second activation tensor constructed in block 206 of FIG.2. In at least one embodiment, activation tensor 516 is constructed by splitting mode ‘h’ and mode ‘w’ of activation tensor 506. In at least one embodiment, mode ‘h’ and mode ‘w’ of activation tensor 506 are identified as modes to be split as described with respect to FIG.3. In at least one embodiment, when activation tensor 516 is constructed, mode ‘h’ of activation tensor 506 is identified as a mode not present in filter tensor 508 or output tensor 504, as discussed with respect to decision block 304 of FIG.3. In at least one embodiment, mode ‘h’ of activation tensor 506 is split as described with respect to block 306 of FIG.3 and technique 400 of FIG.4. In at least one embodiment, mode ‘w’ of activation tensor 506 is an additional activation tensor mode identified at decision block 308 of FIG.3, and is split as described with respect to block 306 of FIG.3 and technique 400 of FIG.4. [0079] In at least one embodiment, activation tensor 516 includes non-convolved mode ‘n’ and non-convolved mode ‘c’ of activation tensor 506. 
In at least one embodiment, convolved mode ‘h’ of activation tensor 506 is not present in activation tensor 516, which instead includes mode ‘p’ of output tensor 504 and mode ‘r’ of filter tensor 508. In at least one embodiment, mode ‘r’ and mode ‘p’ are set in activation tensor 516 as described with respect to blocks 406 and 408 of FIG.4, respectively. In at least one embodiment, at least some elements of both mode ‘r’ and mode ‘p’ of activation tensor 516 point to same data stored in memory 502 with respect to mode ‘h’ of activation tensor 506. In at least one embodiment, mode ‘r’ and mode ‘p’ of activation tensor 516 are both set to stride 512. In at least one embodiment, convolved mode ‘w’ of activation tensor 506 is not present in activation tensor 516, which instead includes mode ‘q’ of output tensor 504 and mode ‘s’ of filter tensor 508. In at least one embodiment, mode ‘q’ and mode ‘s’ are set in activation tensor 516 as described with respect to blocks 406 and 408 of FIG.4, respectively. In at least one embodiment, at least some elements of both mode ‘q’ and mode ‘s’ of activation tensor 516 point to same data stored in memory 502 with respect to mode ‘w’ of activation tensor 506. In at least one embodiment, mode ‘q’ and mode ‘s’ of activation tensor 516 are both set to stride 514. In at least one embodiment, constructing activation tensor 516 includes copying at least one data element of activation tensor 506 to a different location in memory 502. In at least one embodiment, activation tensor 516 is constructed such that all data elements of activation tensor 506 remain in a same memory location in memory 502 and are pointed to by a data structure that represents activation tensor 516.
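For illustration only, the construction of activation tensor 516 described above can be sketched with NumPy stride tricks; the tensor sizes, and the assumption of a stride-1, unpadded convolution, are hypothetical and not taken from this description:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Illustrative sizes (assumptions, not from the specification).
N, C, H, W = 2, 3, 6, 6      # activation tensor 506: modes n, c, h, w
K, R, S = 4, 3, 3            # filter tensor 508: modes k, c, r, s
P, Q = H - R + 1, W - S + 1  # output tensor 504: modes n, k, p, q

act = np.random.default_rng(0).normal(size=(N, C, H, W))
filt = np.random.default_rng(1).normal(size=(K, C, R, S))

# Construct activation tensor 516 with modes (n, c, p, r, q, s) as a view:
# modes 'p' and 'r' both reuse the stride of mode 'h' (stride 512), and
# modes 'q' and 's' the stride of mode 'w' (stride 514), so no data
# elements are copied (overlapping strides).
sn, sc, sh, sw = act.strides
act516 = as_strided(act, shape=(N, C, P, R, Q, S),
                    strides=(sn, sc, sh, sh, sw, sw))

# The convolution is now a tensor contraction over modes c, r, and s.
out = np.einsum('ncprqs,kcrs->nkpq', act516, filt)

# Direct convolution for comparison.
ref = np.zeros((N, K, P, Q))
for p in range(P):
    for q in range(Q):
        ref[:, :, p, q] = np.einsum('nchw,kchw->nk',
                                    act[:, :, p:p + R, q:q + S], filt)

assert np.allclose(out, ref)
assert np.shares_memory(act516, act)  # act516 is a view, not a copy
```

Because `act516` aliases `act`, the sketch mirrors the memory storage advantage described above: the six-mode tensor is represented without materializing the data expansion that non-overlapping strides would require.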
In at least one embodiment, constructing activation tensor 516 such that some modes have overlapping strides provides a memory storage advantage by representing activation tensor 516 in a more compact form than if activation tensor 516 had been constructed with non-overlapping strides. In at least one embodiment, constructing activation tensor 516 such that data elements of activation tensor 506 remain in same memory location provides a processing time advantage because copying data and/or generating additional data elements would take additional time to perform. [0080] In at least one embodiment, A(i1, i2, …, in) represents an n-dimensional tensor with ik representing a convolved mode. In at least one embodiment, this mode is split into ik1 and ik2, which results in a logical (n+1)-dimensional tensor Ã(i1, i2, …, ik1, ik2, …, in) for which entries along ik1 and ik2 modes can have identical memory locations. In at least one embodiment, to be precise, for fixed i1, i2, …, ik-1, ik+1, …, in, LOC(Ã(i1, …, a, b, …, in)) == LOC(Ã(i1, …, b, a, …, in)), for two arbitrary (but valid) offsets a and b along ik1 and ik2 modes, such that those logical modes expose a symmetry. In at least one embodiment, a convolution is converted to a tensor contraction with a symmetry along newly introduced logical modes. In at least one embodiment, activation tensor 506 is an instance of n-dimensional tensor A, and activation tensor 516 is an instance of an (n+2)-dimensional tensor corresponding to (n+1)-dimensional tensor Ã, described above, after two convolved modes of activation tensor 506 have been split, resulting in an (n+2)-dimensional tensor rather than an (n+1)-dimensional tensor. INFERENCE AND TRAINING LOGIC [0081] FIG.6A illustrates inference and/or training logic 615 used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided below in conjunction with FIGS.6A and/or 6B.
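The LOC(·) symmetry above can be checked concretely on a small one-dimensional example; the extents chosen here are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(10.0)      # tensor A with one convolved mode ik of extent 10
r, p = 3, 8              # extents of new modes ik1 and ik2 (r + p - 1 == 10)
s, = a.strides
a_split = as_strided(a, shape=(r, p), strides=(s, s))  # logical tensor Ã

# Swapping valid offsets along the two new modes reads the same memory
# location: Ã(i, j) and Ã(j, i) both resolve to a[i + j].
assert a_split[1, 2] == a_split[2, 1] == a[3]
assert np.shares_memory(a_split, a)
```

Giving both new modes the same stride is what exposes the symmetry: the address computation i·s + j·s is invariant under swapping i and j.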
[0082] In at least one embodiment, inference and/or training logic 615 may include, without limitation, code and/or data storage 601 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 615 may include, or be coupled to, code and/or data storage 601 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which the code corresponds. In at least one embodiment, code and/or data storage 601 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 601 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory. [0083] In at least one embodiment, any portion of code and/or data storage 601 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 601 may be cache memory, dynamic randomly addressable memory (“DRAM”), static randomly addressable memory (“SRAM”), non-volatile memory (e.g., Flash memory), or other storage.
In at least one embodiment, choice of whether code and/or data storage 601 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. [0084] In at least one embodiment, inference and/or training logic 615 may include, without limitation, a code and/or data storage 605 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage 605 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, training logic 615 may include, or be coupled to, code and/or data storage 605 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which the code corresponds. In at least one embodiment, any portion of code and/or data storage 605 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.
In at least one embodiment, any portion of code and/or data storage 605 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 605 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, choice of whether code and/or data storage 605 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. [0085] In at least one embodiment, code and/or data storage 601 and code and/or data storage 605 may be separate storage structures. In at least one embodiment, code and/or data storage 601 and code and/or data storage 605 may be same storage structure. In at least one embodiment, code and/or data storage 601 and code and/or data storage 605 may be partially same storage structure and partially separate storage structures. In at least one embodiment, any portion of code and/or data storage 601 and code and/or data storage 605 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.
[0086] In at least one embodiment, inference and/or training logic 615 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 610, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part, on, or indicated by, training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 620 that are functions of input/output and/or weight parameter data stored in code and/or data storage 601 and/or code and/or data storage 605. In at least one embodiment, activations stored in activation storage 620 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 610 in response to performing instructions or other code, wherein weight values stored in code and/or data storage 605 and/or code and/or data storage 601 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 605 or code and/or data storage 601 or another storage on or off-chip. [0087] In at least one embodiment, ALU(s) 610 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 610 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 610 may be included within a processor’s execution units or otherwise within a bank of ALUs accessible by a processor’s execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.).
In at least one embodiment, code and/or data storage 601, code and/or data storage 605, and activation storage 620 may be on same processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 620 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor’s fetch, decode, scheduling, execution, retirement and/or other logical circuits. [0088] In at least one embodiment, activation storage 620 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, activation storage 620 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, choice of whether activation storage 620 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic 615 illustrated in FIG.6A may be used in conjunction with an application-specific integrated circuit (“ASIC”), such as Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp.
In at least one embodiment, inference and/or training logic 615 illustrated in FIG.6A may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”). [0089] FIG.6B illustrates inference and/or training logic 615, according to at least one embodiment. In at least one embodiment, inference and/or training logic 615 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 615 illustrated in FIG.6B may be used in conjunction with an application-specific integrated circuit (ASIC), such as Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 615 illustrated in FIG.6B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 615 includes, without limitation, code and/or data storage 601 and code and/or data storage 605, which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG.6B, each of code and/or data storage 601 and code and/or data storage 605 is associated with a dedicated computational resource, such as computational hardware 602 and computational hardware 606, respectively.
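For illustration, the pairing of code and/or data storage 601/605 with dedicated computational hardware 602/606 might be sketched as follows; the layer shapes, the ReLU nonlinearity, and function names are hypothetical assumptions, not drawn from this description:

```python
import numpy as np

rng = np.random.default_rng(2)
storage_601 = rng.normal(size=(8, 4))   # one layer's weights (storage 601)
storage_605 = rng.normal(size=(3, 8))   # next layer's weights (storage 605)

def hardware_602(x):
    # Dedicated compute: operates only on information in storage_601.
    return np.maximum(storage_601 @ x, 0.0)

def hardware_606(x):
    # Dedicated compute: operates only on information in storage_605.
    return storage_605 @ x

x = rng.normal(size=4)
# Activation from pair 601/602 feeds pair 605/606, mirroring the layered
# organization of a neural network; the result lands in activation storage 620.
activation_620 = hardware_606(hardware_602(x))
assert activation_620.shape == (3,)
```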
In at least one embodiment, each of computational hardware 602 and computational hardware 606 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 601 and code and/or data storage 605, respectively, a result of which is stored in activation storage 620. [0090] In at least one embodiment, each of code and/or data storage 601 and 605 and corresponding computational hardware 602 and 606, respectively, correspond to different layers of a neural network, such that resulting activation from one “storage/computational pair 601/602” of code and/or data storage 601 and computational hardware 602 is provided as an input to next “storage/computational pair 605/606” of code and/or data storage 605 and computational hardware 606, in order to mirror conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 601/602 and 605/606 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 601/602 and 605/606 may be included in inference and/or training logic 615. NEURAL NETWORK TRAINING AND DEPLOYMENT [0091] FIG.7 illustrates training and deployment of a deep neural network, according to at least one embodiment. In at least one embodiment, untrained neural network 706 is trained using a training dataset 702. In at least one embodiment, training framework 704 is a PyTorch framework, whereas in other embodiments, training framework 704 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 704 trains an untrained neural network 706 and enables it to be trained using processing resources described herein to generate a trained neural network 708.
In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner. [0092] In at least one embodiment, untrained neural network 706 is trained using supervised learning, wherein training dataset 702 includes an input paired with a desired output for an input, or where training dataset 702 includes input having a known output and an output of neural network 706 is manually graded. In at least one embodiment, untrained neural network 706, trained in a supervised manner, processes inputs from training dataset 702 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 706. In at least one embodiment, training framework 704 adjusts weights that control untrained neural network 706. In at least one embodiment, training framework 704 includes tools to monitor how well untrained neural network 706 is converging towards a model, such as trained neural network 708, suitable for generating correct answers, such as in result 714, based on known input data, such as new data 712. In at least one embodiment, training framework 704 trains untrained neural network 706 repeatedly while adjusting weights to refine an output of untrained neural network 706 using a loss function and adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 704 trains untrained neural network 706 until untrained neural network 706 achieves a desired accuracy. In at least one embodiment, trained neural network 708 can then be deployed to implement any number of machine learning operations.
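The supervised loop described above (forward pass, comparison against desired outputs, back-propagated errors, weight adjustment) can be condensed into a minimal sketch; the tiny linear model, the synthetic dataset, and the use of full-batch gradient descent in place of stochastic gradient descent are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(32, 4))                 # training dataset 702: inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])          # underlying mapping to learn
targets = inputs @ true_w                         # known desired outputs

weights = np.zeros(4)                             # untrained network 706
lr = 0.1
for _ in range(500):
    outputs = inputs @ weights                    # forward pass
    errors = outputs - targets                    # compare against desired outputs
    loss = np.mean(errors ** 2)                   # loss function
    grad = 2.0 * inputs.T @ errors / len(inputs)  # propagate errors back
    weights -= lr * grad                          # framework adjusts weights

assert loss < 1e-3                                # converged toward network 708
```

Training stops here after a fixed iteration count; a framework such as the training framework 704 described above would instead monitor convergence and stop at a desired accuracy.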
In at least one embodiment, training framework 704 trains untrained neural network 706 using inference and/or training logic 615, described with respect to FIGS.6A and 6B, based, at least in part, on at least one technique described with respect to FIGS.1-5, such as identifying a first type of operation with a first tensor, constructing a second tensor, and performing a second type of operation with second tensor, described with respect to FIG.1. In at least one embodiment, inference and/or training logic 615 performs an inferencing operation using trained neural network 708 based, at least in part, on at least one technique described with respect to FIGS.1-5, such as identifying a first type of operation with a first tensor, constructing a second tensor, and performing a second type of operation with second tensor, described with respect to FIG.1. [0093] In at least one embodiment, untrained neural network 706 is trained using unsupervised learning, wherein untrained neural network 706 attempts to train itself using unlabeled data. In at least one embodiment, unsupervised learning training dataset 702 will include input data without any associated output data or “ground truth” data. In at least one embodiment, untrained neural network 706 can learn groupings within training dataset 702 and can determine how individual inputs are related to training dataset 702. In at least one embodiment, unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 708 capable of performing operations useful in reducing dimensionality of new data 712. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new data 712 that deviate from normal patterns of new data 712. [0094] In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 702 includes a mix of labeled and unlabeled data.
In at least one embodiment, training framework 704 may be used to perform incremental learning, such as through transfer learning techniques. In at least one embodiment, incremental learning enables trained neural network 708 to adapt to new data 712 without forgetting knowledge instilled within network during initial training. DATA CENTER [0095] FIG.8 illustrates an example data center 800, in which at least one embodiment may be used. In at least one embodiment, data center 800 includes a data center infrastructure layer 810, a framework layer 820, a software layer 830 and an application layer 840. [0096] In at least one embodiment, as shown in FIG.8, data center infrastructure layer 810 may include a resource orchestrator 812, grouped computing resources 814, and node computing resources (“node C.R.s”) 816(1)-816(N), where “N” represents any whole, positive integer. In at least one embodiment, node C.R.s 816(1)-816(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory devices (e.g., dynamic random access memory), storage devices (e.g., solid state or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 816(1)-816(N) may be a server having one or more of above-mentioned computing resources. [0097] In at least one embodiment, grouped computing resources 814 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 814 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads.
In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination. [0098] In at least one embodiment, resource orchestrator 812 may configure or otherwise control one or more node C.R.s 816(1)-816(N) and/or grouped computing resources 814. In at least one embodiment, resource orchestrator 812 may include a software design infrastructure (“SDI”) management entity for data center 800. In at least one embodiment, resource orchestrator may include hardware, software or some combination thereof. [0099] In at least one embodiment, as shown in FIG.8, framework layer 820 includes a job scheduler 832, a configuration manager 834, a resource manager 836 and a distributed file system 838. In at least one embodiment, framework layer 820 may include a framework to support software 832 of software layer 830 and/or one or more application(s) 842 of application layer 840. In at least one embodiment, software 832 or application(s) 842 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 820 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter “Spark”) that may utilize distributed file system 838 for large-scale data processing (e.g., “big data”). In at least one embodiment, job scheduler 832 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 800.
In at least one embodiment, configuration manager 834 may be capable of configuring different layers such as software layer 830 and framework layer 820 including Spark and distributed file system 838 for supporting large-scale data processing. In at least one embodiment, resource manager 836 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 838 and job scheduler 832. In at least one embodiment, clustered or grouped computing resources may include grouped computing resource 814 at data center infrastructure layer 810. In at least one embodiment, resource manager 836 may coordinate with resource orchestrator 812 to manage these mapped or allocated computing resources. [0100] In at least one embodiment, software 832 included in software layer 830 may include software used by at least portions of node C.R.s 816(1)-816(N), grouped computing resources 814, and/or distributed file system 838 of framework layer 820. In at least one embodiment, one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software. [0101] In at least one embodiment, application(s) 842 included in application layer 840 may include one or more types of applications used by at least portions of node C.R.s 816(1)-816(N), grouped computing resources 814, and/or distributed file system 838 of framework layer 820. In at least one embodiment, one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute application, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments.
[0102] In at least one embodiment, any of configuration manager 834, resource manager 836, and resource orchestrator 812 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 800 from making possibly bad configuration decisions and possibly avoid underutilized and/or poor performing portions of a data center. [0103] In at least one embodiment, data center 800 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 800. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 800 by using weight parameters calculated through one or more training techniques described herein. [0104] In at least one embodiment, data center 800 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources. Moreover, one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services. [0105] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments.
Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.8 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0106] In at least one embodiment, at least one component shown or described with respect to FIG.8 is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. AUTONOMOUS VEHICLE [0107] FIG.9A illustrates an example of an autonomous vehicle 900, according to at least one embodiment. In at least one embodiment, autonomous vehicle 900 (alternatively referred to herein as “vehicle 900”) may be, without limitation, a passenger vehicle, such as a car, a truck, a bus, and/or another type of vehicle that accommodates one or more passengers. In at least one embodiment, vehicle 900 may be a semi-tractor-trailer truck used for hauling cargo. In at least one embodiment, vehicle 900 may be an airplane, robotic vehicle, or other kind of vehicle. 
[0108] Autonomous vehicles may be described in terms of automation levels, defined by National Highway Traffic Safety Administration (“NHTSA”), a division of US Department of Transportation, and Society of Automotive Engineers (“SAE”) "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (e.g., Standard No. J3016-201806, published on June 15, 2018, Standard No. J3016-201609, published on September 30, 2016, and previous and future versions of this standard). In one or more embodiments, vehicle 900 may be capable of functionality in accordance with one or more of level 1 – level 5 of autonomous driving levels. For example, in at least one embodiment, vehicle 900 may be capable of conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on embodiment. [0109] In at least one embodiment, vehicle 900 may include, without limitation, components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. In at least one embodiment, vehicle 900 may include, without limitation, a propulsion system 950, such as an internal combustion engine, hybrid electric power plant, an all-electric engine, and/or another propulsion system type. In at least one embodiment, propulsion system 950 may be connected to a drive train of vehicle 900, which may include, without limitation, a transmission, to enable propulsion of vehicle 900. In at least one embodiment, propulsion system 950 may be controlled in response to receiving signals from a throttle/accelerator(s) 952. [0110] In at least one embodiment, a steering system 954, which may include, without limitation, a steering wheel, is used to steer a vehicle 900 (e.g., along a desired path or route) when a propulsion system 950 is operating (e.g., when vehicle is in motion). In at least one embodiment, a steering system 954 may receive signals from steering actuator(s) 956. 
In at least one embodiment, a steering wheel may be optional for full automation (Level 5) functionality. In at least one embodiment, a brake sensor system 946 may be used to operate vehicle brakes in response to receiving signals from brake actuator(s) 948 and/or brake sensors. [0111] In at least one embodiment, controller(s) 936, which may include, without limitation, one or more system on chips (“SoCs”) (not shown in FIG.9A) and/or graphics processing unit(s) (“GPU(s)”), provide signals (e.g., representative of commands) to one or more components and/or systems of vehicle 900. For instance, in at least one embodiment, controller(s) 936 may send signals to operate vehicle brakes via brake actuators 948, to operate steering system 954 via steering actuator(s) 956, and to operate propulsion system 950 via throttle/accelerator(s) 952. In at least one embodiment, controller(s) 936 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving vehicle 900. In at least one embodiment, controller(s) 936 may include a first controller 936 for autonomous driving functions, a second controller 936 for functional safety functions, a third controller 936 for artificial intelligence functionality (e.g., computer vision), a fourth controller 936 for infotainment functionality, a fifth controller 936 for redundancy in emergency conditions, and/or other controllers. In at least one embodiment, a single controller 936 may handle two or more of above functionalities, two or more controllers 936 may handle a single functionality, and/or any combination thereof. [0112] In at least one embodiment, controller(s) 936 provide signals for controlling one or more components and/or systems of vehicle 900 in response to sensor data received from one or more sensors (e.g., sensor inputs).
In at least one embodiment, sensor data may be received from, for example and without limitation, global navigation satellite systems (“GNSS”) sensor(s) 958 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 960, ultrasonic sensor(s) 962, LIDAR sensor(s) 964, inertial measurement unit (“IMU”) sensor(s) 966 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 996, stereo camera(s) 968, wide-view camera(s) 970 (e.g., fisheye cameras), infrared camera(s) 972, surround camera(s) 974 (e.g., 360 degree cameras), long-range cameras (not shown in FIG.9A), mid-range camera(s) (not shown in FIG.9A), speed sensor(s) 944 (e.g., for measuring speed of vehicle 900), vibration sensor(s) 942, steering sensor(s) 940, brake sensor(s) (e.g., as part of brake sensor system 946), and/or other sensor types. [0113] In at least one embodiment, one or more of controller(s) 936 may receive inputs (e.g., represented by input data) from an instrument cluster 932 of vehicle 900 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (“HMI”) display 934, an audible annunciator, a loudspeaker, and/or via other components of vehicle 900. In at least one embodiment, outputs may include information such as vehicle velocity, speed, time, map data (e.g., a High Definition map (not shown in FIG.9A)), location data (e.g., vehicle’s 900 location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by controller(s) 936, etc. For example, in at least one embodiment, HMI display 934 may display information about presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.), and/or information about driving maneuvers vehicle has made, is making, or will make (e.g., changing lanes now, taking exit 34B in two miles, etc.).
[0114] In at least one embodiment, vehicle 900 further includes a network interface 924 which may use wireless antenna(s) 926 and/or modem(s) to communicate over one or more networks. For example, in at least one embodiment, network interface 924 may be capable of communication over Long-Term Evolution (“LTE”), Wideband Code Division Multiple Access (“WCDMA”), Universal Mobile Telecommunications System (“UMTS”), Global System for Mobile communication (“GSM”), IMT-CDMA Multi-Carrier (“CDMA2000”), etc. In at least one embodiment, wireless antenna(s) 926 may also enable communication between objects in environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth Low Energy (“LE”), Z-Wave, ZigBee, etc., and/or low power wide-area network(s) (“LPWANs”), such as LoRaWAN, SigFox, etc. [0115] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.9A for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0116] In at least one embodiment, at least one component shown or described with respect to FIG.9A is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor.
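As a non-limiting illustration (not the claimed implementation), the operation rewriting described in [0116] can be sketched in a few lines: a convolution of an activation tensor with a filter tensor is replaced by constructing a second activation tensor (an im2col-style patch tensor) and contracting it with the filter. All function names and values below are illustrative.

```python
# Sketch: rewriting a 2D convolution as a tensor contraction over a
# constructed "second activation tensor". Pure Python, tiny tensors only.

def conv2d_direct(act, filt):
    """Valid 2D convolution (cross-correlation) by direct summation."""
    H, W = len(act), len(act[0])
    kh, kw = len(filt), len(filt[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            row.append(sum(act[i + u][j + v] * filt[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

def conv2d_contraction(act, filt):
    """Same feature map via a second activation tensor plus a contraction."""
    H, W = len(act), len(act[0])
    kh, kw = len(filt), len(filt[0])
    oh, ow = H - kh + 1, W - kw + 1
    # Second activation tensor: patches[i][j] flattens the kh*kw window at (i, j).
    patches = [[[act[i + u][j + v] for u in range(kh) for v in range(kw)]
                for j in range(ow)] for i in range(oh)]
    fvec = [filt[u][v] for u in range(kh) for v in range(kw)]
    # Tensor contraction over the flattened kernel mode.
    return [[sum(p * f for p, f in zip(patches[i][j], fvec))
             for j in range(ow)] for i in range(oh)]

act = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
filt = [[1, 0], [0, -1]]
assert conv2d_direct(act, filt) == conv2d_contraction(act, filt)
```

In practice the contraction form is attractive because the contraction maps onto highly optimized matrix-multiply hardware (e.g., tensor cores), whereas the direct form does not.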
In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in vehicle 900 of FIG.9A. [0117] FIG.9B illustrates an example of camera locations and fields of view for autonomous vehicle 900 of FIG.9A, according to at least one embodiment. In at least one embodiment, cameras and respective fields of view are one example embodiment and are not intended to be limiting. For instance, in at least one embodiment, additional and/or alternative cameras may be included and/or cameras may be located at different locations on vehicle 900. [0118] In at least one embodiment, camera types for cameras may include, but are not limited to, digital cameras that may be adapted for use with components and/or systems of vehicle 900. In at least one embodiment, camera(s) may operate at automotive safety integrity level (“ASIL”) B and/or at another ASIL. In at least one embodiment, camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on embodiment. In at least one embodiment, cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. In at least one embodiment, color filter array may include a red clear clear clear (“RCCC”) color filter array, a red clear clear blue (“RCCB”) color filter array, a red blue green clear (“RBGC”) color filter array, a Foveon X3 color filter array, a Bayer sensor (“RGGB”) color filter array, a monochrome sensor color filter array, and/or another type of color filter array.
In at least one embodiment, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity. [0119] In at least one embodiment, one or more of camera(s) may be used to perform advanced driver assistance systems (“ADAS”) functions (e.g., as part of a redundant or fail-safe design). For example, in at least one embodiment, a Multi-Function Mono Camera may be installed to provide functions including lane departure warning, traffic sign assist and intelligent headlamp control. In at least one embodiment, one or more of camera(s) (e.g., all of cameras) may record and provide image data (e.g., video) simultaneously. [0120] In at least one embodiment, one or more of cameras may be mounted in a mounting assembly, such as a custom designed (three-dimensional (“3D”) printed) assembly, in order to cut out stray light and reflections from within car (e.g., reflections from dashboard reflected in windshield mirrors) which may interfere with camera’s image data capture abilities. With reference to wing-mirror mounting assemblies, in at least one embodiment, wing-mirror assemblies may be custom 3D printed so that camera mounting plate matches shape of wing-mirror. In at least one embodiment, camera(s) may be integrated into wing-mirror. For side-view cameras, camera(s) may also be integrated within four pillars at each corner of cabin, in at least one embodiment. [0121] In at least one embodiment, cameras with a field of view that include portions of environment in front of vehicle 900 (e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well as aid in, with help of one or more of controllers 936 and/or control SoCs, providing information critical to generating an occupancy grid and/or determining preferred vehicle paths.
In at least one embodiment, front-facing cameras may be used to perform many of same ADAS functions as LIDAR, including, without limitation, emergency braking, pedestrian detection, and collision avoidance. In at least one embodiment, front-facing cameras may also be used for ADAS functions and systems including, without limitation, Lane Departure Warnings (“LDW”), Autonomous Cruise Control (“ACC”), and/or other functions such as traffic sign recognition. [0122] In at least one embodiment, a variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a CMOS (“complementary metal oxide semiconductor”) color imager. In at least one embodiment, wide-view camera 970 may be used to perceive objects coming into view from periphery (e.g., pedestrians, crossing traffic or bicycles). Although only one wide-view camera 970 is illustrated in FIG.9B, in other embodiments, there may be any number (including zero) of wide-view camera(s) 970 on vehicle 900. In at least one embodiment, any number of long-range camera(s) 998 (e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. In at least one embodiment, long-range camera(s) 998 may also be used for object detection and classification, as well as basic object tracking. [0123] In at least one embodiment, any number of stereo camera(s) 968 may also be included in a front-facing configuration. In at least one embodiment, one or more of stereo camera(s) 968 may include an integrated control unit comprising a scalable processing unit, which may provide programmable logic (“FPGA”) and a multi-core micro-processor with an integrated Controller Area Network (“CAN”) or Ethernet interface on a single chip. In at least one embodiment, such a unit may be used to generate a 3D map of environment of vehicle 900, including a distance estimate for all points in image.
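The per-point distance estimates mentioned above follow from stereo triangulation: depth is focal length times baseline divided by disparity. The following non-limiting sketch uses hypothetical focal length, baseline, and disparity values, not parameters of stereo camera(s) 968.

```python
# Illustrative sketch of turning a per-pixel disparity (in pixels) into a
# distance estimate. focal_px and baseline_m are hypothetical camera values.

def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.12):
    """depth [m] = focal_length [px] * baseline [m] / disparity [px]."""
    if disparity_px <= 0:
        return float("inf")  # no stereo match / point at infinity
    return focal_px * baseline_m / disparity_px

disparity_map = [[42.0, 21.0], [84.0, 0.0]]
depth_map = [[depth_from_disparity(d) for d in row] for row in disparity_map]
# Larger disparity means a nearer object: 84 px is closer than 21 px.
```

A real stereo unit computes the disparity map itself (e.g., by block matching or semi-global matching) before applying this conversion.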
In at least one embodiment, one or more of stereo camera(s) 968 may include, without limitation, compact stereo vision sensor(s) that may include, without limitation, two camera lenses (one each on left and right) and an image processing chip that may measure distance from vehicle 900 to target object and use generated information (e.g., metadata) to activate autonomous emergency braking and lane departure warning functions. In at least one embodiment, other types of stereo camera(s) 968 may be used in addition to, or alternatively from, those described herein. [0124] In at least one embodiment, cameras with a field of view that include portions of environment to side of vehicle 900 (e.g., side-view cameras) may be used for surround view, providing information used to create and update occupancy grid, as well as to generate side impact collision warnings. For example, in at least one embodiment, surround camera(s) 974 (e.g., four surround cameras 974 as illustrated in FIG.9B) could be positioned on vehicle 900. In at least one embodiment, surround camera(s) 974 may include, without limitation, any number and combination of wide-view camera(s) 970, fisheye camera(s), 360 degree camera(s), and/or like. For instance, in at least one embodiment, four fisheye cameras may be positioned on front, rear, and sides of vehicle 900. In at least one embodiment, vehicle 900 may use three surround camera(s) 974 (e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround-view camera. [0125] In at least one embodiment, cameras with a field of view that include portions of environment to rear of vehicle 900 (e.g., rear-view cameras) may be used for park assistance, surround view, rear collision warnings, and creating and updating occupancy grid.
In at least one embodiment, a wide variety of cameras may be used including, but not limited to, cameras that are also suitable as front-facing camera(s) (e.g., long-range cameras 998 and/or mid-range camera(s) 976, stereo camera(s) 968, infrared camera(s) 972, etc.), as described herein. [0126] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.9B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0127] In at least one embodiment, at least one component shown or described with respect to FIG.9B is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in vehicle 900 of FIG.9B. [0128] FIG.9C is a block diagram illustrating an example system architecture for autonomous vehicle 900 of FIG.9A, according to at least one embodiment.
In at least one embodiment, each of components, features, and systems of vehicle 900 in FIG.9C are illustrated as being connected via a bus 902. In at least one embodiment, bus 902 may include, without limitation, a CAN data interface (alternatively referred to herein as a “CAN bus”). In at least one embodiment, a CAN may be a network inside vehicle 900 used to aid in control of various features and functionality of vehicle 900, such as actuation of brakes, acceleration, braking, steering, windshield wipers, etc. In at least one embodiment, bus 902 may be configured to have dozens or even hundreds of nodes, each with its own unique identifier (e.g., a CAN ID). In at least one embodiment, bus 902 may be read to find steering wheel angle, ground speed, engine revolutions per minute (“RPMs”), button positions, and/or other vehicle status indicators. In at least one embodiment, bus 902 may be a CAN bus that is ASIL B compliant. [0129] In at least one embodiment, in addition to, or alternatively from CAN, FlexRay and/or Ethernet may be used. In at least one embodiment, there may be any number of busses 902, which may include, without limitation, zero or more CAN busses, zero or more FlexRay busses, zero or more Ethernet busses, and/or zero or more other types of busses using a different protocol. In at least one embodiment, two or more busses 902 may be used to perform different functions, and/or may be used for redundancy. For example, a first bus 902 may be used for collision avoidance functionality and a second bus 902 may be used for actuation control. In at least one embodiment, each bus 902 may communicate with any of components of vehicle 900, and two or more busses 902 may communicate with same components. 
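The bus reading described in [0128] (steering wheel angle, ground speed, engine RPMs, button positions) can be illustrated with a small decoder. This is a non-limiting sketch: CAN frames carry an identifier plus up to eight payload bytes, but the IDs and payload layouts below are entirely hypothetical, not an actual vehicle's CAN database.

```python
# Sketch of decoding vehicle status from CAN-style frames. The IDs (0x25,
# 0xB4, 0x1C4) and little-endian payload layouts are hypothetical examples.
import struct

DECODERS = {
    0x25: ("steering_wheel_angle_deg", lambda p: struct.unpack("<h", p[:2])[0] / 10.0),
    0xB4: ("ground_speed_kph", lambda p: struct.unpack("<H", p[:2])[0] / 100.0),
    0x1C4: ("engine_rpm", lambda p: struct.unpack("<H", p[:2])[0]),
}

def decode_frame(can_id, payload):
    """Return (signal_name, value) for a known CAN ID, else None."""
    if can_id not in DECODERS:
        return None
    name, decode = DECODERS[can_id]
    return name, decode(payload)

frames = [
    (0x25, struct.pack("<h", -125)),   # -12.5 degrees
    (0xB4, struct.pack("<H", 8050)),   # 80.50 km/h
    (0x1C4, struct.pack("<H", 2200)),  # 2200 RPM
]
status = dict(decode_frame(can_id, payload) for can_id, payload in frames)
```

On real hardware the frames would come from a CAN interface (e.g., a SocketCAN device) rather than a hard-coded list, and the decoder table would come from the vehicle's CAN database.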
In at least one embodiment, each of any number of system(s) on chip(s) (“SoC(s)”) 904, each of controller(s) 936, and/or each computer within vehicle may have access to same input data (e.g., inputs from sensors of vehicle 900), and may be connected to a common bus, such as a CAN bus. [0130] In at least one embodiment, vehicle 900 may include one or more controller(s) 936, such as those described herein with respect to FIG.9A. In at least one embodiment, controller(s) 936 may be used for a variety of functions. In at least one embodiment, controller(s) 936 may be coupled to any of various other components and systems of vehicle 900, and may be used for control of vehicle 900, artificial intelligence of vehicle 900, infotainment for vehicle 900, and/or like. [0131] In at least one embodiment, vehicle 900 may include any number of SoCs 904. In at least one embodiment, each of SoCs 904 may include, without limitation, central processing units (“CPU(s)”) 906, graphics processing units (“GPU(s)”) 908, processor(s) 910, cache(s) 912, accelerator(s) 914, data store(s) 916, and/or other components and features not illustrated. In at least one embodiment, SoC(s) 904 may be used to control vehicle 900 in a variety of platforms and systems. For example, in at least one embodiment, SoC(s) 904 may be combined in a system (e.g., system of vehicle 900) with a High Definition (“HD”) map 922 which may obtain map refreshes and/or updates via network interface 924 from one or more servers (not shown in FIG.9C). [0132] In at least one embodiment, CPU(s) 906 may include a CPU cluster or CPU complex (alternatively referred to herein as a “CCPLEX”). In at least one embodiment, CPU(s) 906 may include multiple cores and/or level two (“L2”) caches. For instance, in at least one embodiment, CPU(s) 906 may include eight cores in a coherent multi-processor configuration. In at least one embodiment, CPU(s) 906 may include four dual-core clusters where each cluster has a dedicated L2 cache (e.g., a 2 MB L2 cache).
In at least one embodiment, CPU(s) 906 (e.g., CCPLEX) may be configured to support simultaneous cluster operation enabling any combination of clusters of CPU(s) 906 to be active at any given time. [0133] In at least one embodiment, one or more of CPU(s) 906 may implement power management capabilities that include, without limitation, one or more of following features: individual hardware blocks may be clock-gated automatically when idle to save dynamic power; each core clock may be gated when core is not actively executing instructions due to execution of Wait for Interrupt (“WFI”)/Wait for Event (“WFE”) instructions; each core may be independently power-gated; each core cluster may be independently clock-gated when all cores are clock-gated or power-gated; and/or each core cluster may be independently power-gated when all cores are power-gated. In at least one embodiment, CPU(s) 906 may further implement an enhanced algorithm for managing power states, where allowed power states and expected wakeup times are specified, and hardware/microcode determines best power state to enter for core, cluster, and CCPLEX. In at least one embodiment, processing cores may support simplified power state entry sequences in software with work offloaded to microcode. [0134] In at least one embodiment, GPU(s) 908 may include an integrated GPU (alternatively referred to herein as an “iGPU”). In at least one embodiment, GPU(s) 908 may be programmable and may be efficient for parallel workloads. In at least one embodiment, GPU(s) 908 may use an enhanced tensor instruction set. In at least one embodiment, GPU(s) 908 may include one or more streaming microprocessors, where each streaming microprocessor may include a level one (“L1”) cache (e.g., an L1 cache with at least 96KB storage capacity), and two or more of streaming microprocessors may share an L2 cache (e.g., an L2 cache with a 512 KB storage capacity).
In at least one embodiment, GPU(s) 908 may include at least eight streaming microprocessors. In at least one embodiment, GPU(s) 908 may use compute application programming interface(s) (API(s)). In at least one embodiment, GPU(s) 908 may use one or more parallel computing platforms and/or programming models (e.g., NVIDIA’s CUDA). [0135] In at least one embodiment, one or more of GPU(s) 908 may be power-optimized for best performance in automotive and embedded use cases. For example, in at least one embodiment, GPU(s) 908 could be fabricated on a Fin field-effect transistor (“FinFET”). In at least one embodiment, each streaming microprocessor may incorporate a number of mixed-precision processing cores partitioned into multiple blocks. For example, and without limitation, 64 FP32 cores and 32 FP64 cores could be partitioned into four processing blocks. In at least one embodiment, each processing block could be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA TENSOR COREs for deep learning matrix arithmetic, a level zero (“L0”) instruction cache, a warp scheduler, a dispatch unit, and/or a 64 KB register file. In at least one embodiment, streaming microprocessors may include independent parallel integer and floating-point data paths to provide for efficient execution of workloads with a mix of computation and addressing calculations. In at least one embodiment, streaming microprocessors may include independent thread scheduling capability to enable finer-grain synchronization and cooperation between parallel threads. In at least one embodiment, streaming microprocessors may include a combined L1 data cache and shared memory unit in order to improve performance while simplifying programming. [0136] In at least one embodiment, one or more of GPU(s) 908 may include a high bandwidth memory (“HBM”) and/or a 16 GB HBM2 memory subsystem to provide, in some examples, about 900 GB/second peak memory bandwidth.
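As a back-of-envelope check of the ~900 GB/second figure in [0136], peak bandwidth is interface width times transfer rate. The stack count, interface width, and data rate below are typical HBM2 parameters assumed for illustration; they are not taken from the text.

```python
# Illustrative peak-bandwidth arithmetic for an HBM2-style memory subsystem.
# Assumed (hypothetical) parameters: 4 stacks x 1024-bit interfaces, ~1.76 GT/s.

def peak_bandwidth_gb_s(bus_bits, transfers_per_s):
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfer rate)."""
    return bus_bits / 8 * transfers_per_s / 1e9

bw = peak_bandwidth_gb_s(bus_bits=4 * 1024, transfers_per_s=1.76e9)
# 4096 bits / 8 = 512 bytes per transfer; 512 B * 1.76e9 /s is roughly 900 GB/s
```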
In at least one embodiment, in addition to, or alternatively from, HBM memory, a synchronous graphics random-access memory (“SGRAM”) may be used, such as a graphics double data rate type five synchronous random-access memory (“GDDR5”). [0137] In at least one embodiment, GPU(s) 908 may include unified memory technology. In at least one embodiment, address translation services (“ATS”) support may be used to allow GPU(s) 908 to access CPU(s) 906 page tables directly. In at least one embodiment, when GPU(s) 908 memory management unit (“MMU”) experiences a miss, an address translation request may be transmitted to CPU(s) 906. In response, CPU(s) 906 may look in its page tables for virtual-to-physical mapping for address and transmit translation back to GPU(s) 908, in at least one embodiment. In at least one embodiment, unified memory technology may allow a single unified virtual address space for memory of both CPU(s) 906 and GPU(s) 908, thereby simplifying GPU(s) 908 programming and porting of applications to GPU(s) 908. [0138] In at least one embodiment, GPU(s) 908 may include any number of access counters that may keep track of frequency of access of GPU(s) 908 to memory of other processors. In at least one embodiment, access counter(s) may help ensure that memory pages are moved to physical memory of processor that is accessing pages most frequently, thereby improving efficiency for memory ranges shared between processors. [0139] In at least one embodiment, one or more of SoC(s) 904 may include any number of cache(s) 912, including those described herein. For example, in at least one embodiment, cache(s) 912 could include a level three (“L3”) cache that is available to both CPU(s) 906 and GPU(s) 908 (e.g., that is connected to both CPU(s) 906 and GPU(s) 908). In at least one embodiment, cache(s) 912 may include a write-back cache that may keep track of states of lines, such as by using a cache coherence protocol (e.g., MEI, MESI, MSI, etc.).
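The line-state tracking mentioned in [0139] can be illustrated with the textbook MESI states (Modified, Exclusive, Shared, Invalid). This non-limiting sketch shows a simplified transition table, not the SoC's actual coherence logic.

```python
# Illustrative write-back cache line states under a MESI-style protocol.
# Simplified textbook transitions; real controllers handle many more events.

MESI = {
    # (current state, event) -> next state
    ("I", "local_read"): "E",    # miss filled with no other sharers
    ("I", "local_write"): "M",
    ("E", "local_write"): "M",   # silent upgrade, no bus traffic needed
    ("E", "remote_read"): "S",
    ("S", "local_write"): "M",   # requires invalidating other sharers
    ("S", "remote_write"): "I",
    ("M", "remote_read"): "S",   # supply data, write back, downgrade
    ("M", "remote_write"): "I",
}

def next_state(state, event):
    """Apply one coherence event; unlisted events leave the state unchanged."""
    return MESI.get((state, event), state)

line = "I"
for event in ["local_read", "local_write", "remote_read"]:
    line = next_state(line, event)
# I (invalid) -> E (filled) -> M (dirty) -> S (downgraded on remote read)
```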
In at least one embodiment, L3 cache may include 4 MB or more, depending on embodiment, although smaller cache sizes may be used. [0140] In at least one embodiment, one or more of SoC(s) 904 may include one or more accelerator(s) 914 (e.g., hardware accelerators, software accelerators, or a combination thereof). In at least one embodiment, SoC(s) 904 may include a hardware acceleration cluster that may include optimized hardware accelerators and/or large on-chip memory. In at least one embodiment, large on-chip memory (e.g., 4MB of SRAM) may enable hardware acceleration cluster to accelerate neural networks and other calculations. In at least one embodiment, hardware acceleration cluster may be used to complement GPU(s) 908 and to off-load some of tasks of GPU(s) 908 (e.g., to free up more cycles of GPU(s) 908 for performing other tasks). In at least one embodiment, accelerator(s) 914 could be used for targeted workloads (e.g., perception, convolutional neural networks (“CNNs”), recurrent neural networks (“RNNs”), etc.) that are stable enough to be amenable to acceleration. In at least one embodiment, a CNN may include region-based or regional convolutional neural networks (“RCNNs”) and Fast RCNNs (e.g., as used for object detection) or other type of CNN. [0141] In at least one embodiment, accelerator(s) 914 (e.g., hardware acceleration cluster) may include deep learning accelerator(s) (“DLA(s)”). In at least one embodiment, DLA(s) may include, without limitation, one or more Tensor processing units (“TPUs”) that may be configured to provide an additional ten trillion operations per second for deep learning applications and inferencing. In at least one embodiment, TPUs may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.). In at least one embodiment, DLA(s) may further be optimized for a specific set of neural network types and floating point operations, as well as inferencing.
In at least one embodiment, design of DLA(s) may provide more performance per millimeter than a typical general-purpose GPU, and typically vastly exceeds performance of a CPU. In at least one embodiment, TPU(s) may perform several functions, including a single-instance convolution function, supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. In at least one embodiment, DLA(s) may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification using data from microphones 996; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events. [0142] In at least one embodiment, DLA(s) may perform any function of GPU(s) 908, and by using an inference accelerator, for example, a designer may target either DLA(s) or GPU(s) 908 for any function. For example, in at least one embodiment, designer may focus processing of CNNs and floating point operations on DLA(s) and leave other functions to GPU(s) 908 and/or other accelerator(s) 914. [0143] In at least one embodiment, accelerator(s) 914 (e.g., hardware acceleration cluster) may include programmable vision accelerator(s) (“PVA”), which may alternatively be referred to herein as a computer vision accelerator. In at least one embodiment, PVA(s) may be designed and configured to accelerate computer vision algorithms for advanced driver assistance system (“ADAS”) 938, autonomous driving, augmented reality (“AR”) applications, and/or virtual reality (“VR”) applications. In at least one embodiment, PVA(s) may provide a balance between performance and flexibility.
For example, in at least one embodiment, each PVA(s) may include, without limitation, any number of reduced instruction set computer (“RISC”) cores, direct memory access (“DMA”), and/or any number of vector processors. [0144] In at least one embodiment, RISC cores may interact with image sensors (e.g., image sensors of any of cameras described herein), image signal processor(s), and/or like. In at least one embodiment, each of RISC cores may include any amount of memory. In at least one embodiment, RISC cores may use any of a number of protocols, depending on embodiment. In at least one embodiment, RISC cores may execute a real-time operating system (“RTOS”). In at least one embodiment, RISC cores may be implemented using one or more integrated circuit devices, application specific integrated circuits (“ASICs”), and/or memory devices. For example, in at least one embodiment, RISC cores could include an instruction cache and/or a tightly coupled RAM. [0145] In at least one embodiment, DMA may enable components of PVA(s) to access system memory independently of CPU(s) 906. In at least one embodiment, DMA may support any number of features used to provide optimization to PVA including, but not limited to, supporting multi-dimensional addressing and/or circular addressing. In at least one embodiment, DMA may support up to six or more dimensions of addressing, which may include, without limitation, block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping. [0146] In at least one embodiment, vector processors may be programmable processors that may be designed to efficiently and flexibly execute programming for computer vision algorithms and provide signal processing capabilities. In at least one embodiment, PVA may include a PVA core and two vector processing subsystem partitions.
In at least one embodiment, PVA core may include a processor subsystem, DMA engine(s) (e.g., two DMA engines), and/or other peripherals. In at least one embodiment, vector processing subsystem may operate as primary processing engine of PVA, and may include a vector processing unit (“VPU”), an instruction cache, and/or vector memory (e.g., “VMEM”). In at least one embodiment, VPU core may include a digital signal processor such as, for example, a single instruction, multiple data (“SIMD”), very long instruction word (“VLIW”) digital signal processor. In at least one embodiment, a combination of SIMD and VLIW may enhance throughput and speed. [0147] In at least one embodiment, each of vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in at least one embodiment, each of vector processors may be configured to execute independently of other vector processors. In at least one embodiment, vector processors that are included in a particular PVA may be configured to employ data parallelism. For instance, in at least one embodiment, plurality of vector processors included in a single PVA may execute same computer vision algorithm, but on different regions of an image. In at least one embodiment, vector processors included in a particular PVA may simultaneously execute different computer vision algorithms, on same image, or even execute different algorithms on sequential images or portions of an image. In at least one embodiment, among other things, any number of PVAs may be included in hardware acceleration cluster and any number of vector processors may be included in each of PVAs. In at least one embodiment, PVA(s) may include additional error correcting code (“ECC”) memory, to enhance overall system safety. 
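The data parallelism described in [0147], where several vector processors execute the same vision algorithm on different regions of one image, can be sketched as follows. This is a non-limiting illustration: the "kernel" is a trivial brightness sum and the processors are simulated serially, not actual PVA hardware behavior.

```python
# Sketch of region-based data parallelism: every (simulated) vector processor
# runs the same kernel, each on its own band of image rows.

def kernel(region):
    """Same algorithm applied by every vector processor (here: pixel sum)."""
    return sum(sum(row) for row in region)

def split_rows(image, n_procs):
    """Assign contiguous row bands of the image to n_procs processors."""
    step = (len(image) + n_procs - 1) // n_procs
    return [image[i:i + step] for i in range(0, len(image), step)]

image = [[1] * 8 for _ in range(8)]            # 8x8 image, all ones
partials = [kernel(band) for band in split_rows(image, 4)]
total = sum(partials)                          # equals kernel(image)
```

Because the bands are independent, the four kernel calls could run on four vector processors simultaneously; only the final reduction needs synchronization.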
[0148] In at least one embodiment, accelerator(s) 914 (e.g., hardware acceleration cluster) may include a computer vision network on-chip and static random-access memory (“SRAM”), for providing a high-bandwidth, low-latency SRAM for accelerator(s) 914. In at least one embodiment, on-chip memory may include at least 4MB SRAM, consisting of, for example and without limitation, eight field-configurable memory blocks that may be accessible by both PVA and DLA. In at least one embodiment, each pair of memory blocks may include an advanced peripheral bus (“APB”) interface, configuration circuitry, a controller, and a multiplexer. In at least one embodiment, any type of memory may be used. In at least one embodiment, PVA and DLA may access memory via a backbone that provides PVA and DLA with high-speed access to memory. In at least one embodiment, backbone may include a computer vision network on-chip that interconnects PVA and DLA to memory (e.g., using APB). [0149] In at least one embodiment, computer vision network on-chip may include an interface that determines, before transmission of any control signal/address/data, that both PVA and DLA provide ready and valid signals. In at least one embodiment, an interface may provide for separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transfer. In at least one embodiment, an interface may comply with International Organization for Standardization (“ISO”) 26262 or International Electrotechnical Commission (“IEC”) 61508 standards, although other standards and protocols may be used. [0150] In at least one embodiment, one or more of SoC(s) 904 may include a real-time ray-tracing hardware accelerator.
In at least one embodiment, real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses. [0151] In at least one embodiment, accelerator(s) 914 (e.g., hardware accelerator cluster) have a wide array of uses for autonomous driving. In at least one embodiment, PVA may be a programmable vision accelerator that may be used for key processing stages in ADAS and autonomous vehicles. In at least one embodiment, PVA’s capabilities are a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, PVA performs well on semi-dense or dense regular computation, even on small data sets, which need predictable run-times with low latency and low power. In at least one embodiment, in autonomous vehicles, such as vehicle 900, PVAs are designed to run classic computer vision algorithms, as they are efficient at object detection and operating on integer math. [0152] For example, according to at least one embodiment of technology, PVA is used to perform computer stereo vision. In at least one embodiment, a semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. In at least one embodiment, applications for Level 3-5 autonomous driving use motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). In at least one embodiment, PVA may perform computer stereo vision function on inputs from two monocular cameras. [0153] In at least one embodiment, PVA may be used to perform dense optical flow.
For example, in at least one embodiment, PVA could process raw RADAR data (e.g., using a 4D Fast Fourier Transform) to provide processed RADAR data. In at least one embodiment, PVA is used for time of flight depth processing, by processing raw time of flight data to provide processed time of flight data, for example. [0154] In at least one embodiment, DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, confidence enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. For example, in at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. In at least one embodiment, DLA may run a neural network for regressing confidence value. In at least one embodiment, neural network may take as its input at least some subset of parameters, such as bounding box dimensions, ground plane estimate obtained (e.g., from another subsystem), output from IMU sensor(s) 966 that correlates with vehicle 900 orientation, distance, 3D location estimates of object obtained from neural network and/or other sensors (e.g., LIDAR sensor(s) 964 or RADAR sensor(s) 960), among others.
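The confidence-thresholding behavior described above for the DLA can be sketched as a simple filter over detections; the detection dictionary format and the 0.9 threshold are hypothetical values chosen for illustration.

```python
def true_positive_detections(detections, threshold=0.9):
    """Keep only detections whose regressed confidence exceeds the
    threshold; everything below is treated as a likely false positive."""
    return [d for d in detections if d["confidence"] > threshold]

def should_trigger_aeb(detections, threshold=0.9):
    """Only highly confident detections are allowed to trigger automatic
    emergency braking, so false positives do not cause spurious braking."""
    return len(true_positive_detections(detections, threshold)) > 0
```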
[0155] In at least one embodiment, one or more of SoC(s) 904 may include data store(s) 916 (e.g., memory). In at least one embodiment, data store(s) 916 may be on-chip memory of SoC(s) 904, which may store neural networks to be executed on GPU(s) 908 and/or DLA. In at least one embodiment, data store(s) 916 may be large enough in capacity to store multiple instances of neural networks for redundancy and safety. In at least one embodiment, data store(s) 916 may comprise L2 or L3 cache(s). [0156] In at least one embodiment, one or more of SoC(s) 904 may include any number of processor(s) 910 (e.g., embedded processors). In at least one embodiment, processor(s) 910 may include a boot and power management processor that may be a dedicated processor and subsystem to handle boot power and management functions and related security enforcement. In at least one embodiment, boot and power management processor may be a part of SoC(s) 904 boot sequence and may provide runtime power management services. In at least one embodiment, boot power and management processor may provide clock and voltage programming, assistance in system low power state transitions, management of SoC(s) 904 thermals and temperature sensors, and/or management of SoC(s) 904 power states. In at least one embodiment, each temperature sensor may be implemented as a ring-oscillator whose output frequency is proportional to temperature, and SoC(s) 904 may use ring-oscillators to detect temperatures of CPU(s) 906, GPU(s) 908, and/or accelerator(s) 914. In at least one embodiment, if temperatures are determined to exceed a threshold, then boot and power management processor may enter a temperature fault routine and put SoC(s) 904 into a lower power state and/or put vehicle 900 into a chauffeur to safe stop mode (e.g., bring vehicle 900 to a safe stop). [0157] In at least one embodiment, processor(s) 910 may further include a set of embedded processors that may serve as an audio processing engine.
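The ring-oscillator thermal scheme described above, in which output frequency is proportional to temperature and a fault routine is entered past a threshold, might look like this in outline; the calibration constant and the 100 °C limit are assumed values, not taken from the text.

```python
def ring_osc_temperature(freq_hz, hz_per_degree=1000.0):
    """Convert a ring-oscillator output frequency to a temperature
    estimate, assuming frequency is proportional to temperature.
    hz_per_degree is a hypothetical calibration constant."""
    return freq_hz / hz_per_degree

def check_thermal(freq_hz, limit_c=100.0):
    """Return the power action: enter a temperature-fault routine and
    drop to a lower power state when the limit is exceeded."""
    if ring_osc_temperature(freq_hz) > limit_c:
        return "lower_power_state"
    return "normal"
```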
In at least one embodiment, audio processing engine may be an audio subsystem that enables full hardware support for multi-channel audio over multiple interfaces, and a broad and flexible range of audio I/O interfaces. In at least one embodiment, audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM. [0158] In at least one embodiment, processor(s) 910 may further include an always on processor engine that may provide necessary hardware features to support low power sensor management and wake use cases. In at least one embodiment, always on processor engine may include, without limitation, a processor core, a tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic. [0159] In at least one embodiment, processor(s) 910 may further include a safety cluster engine that includes, without limitation, a dedicated processor subsystem to handle safety management for automotive applications. In at least one embodiment, safety cluster engine may include, without limitation, two or more processor cores, a tightly coupled RAM, support peripherals (e.g., timers, an interrupt controller, etc.), and/or routing logic. In a safety mode, two or more cores may operate, in at least one embodiment, in a lockstep mode and function as a single core with comparison logic to detect any differences between their operations. In at least one embodiment, processor(s) 910 may further include a real-time camera engine that may include, without limitation, a dedicated processor subsystem for handling real-time camera management. In at least one embodiment, processor(s) 910 may further include a high-dynamic range signal processor that may include, without limitation, an image signal processor that is a hardware engine that is part of camera processing pipeline. 
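The safety cluster's lockstep mode described above, in which two cores run the same work and comparison logic flags any divergence between their operations, can be sketched as a minimal software analogue:

```python
def run_lockstep(core_fn_a, core_fn_b, inputs):
    """Run the same computation on two cores and compare: any divergence
    is flagged as a fault instead of being silently propagated."""
    out_a = core_fn_a(inputs)
    out_b = core_fn_b(inputs)
    if out_a != out_b:
        raise RuntimeError("lockstep mismatch: possible hardware fault")
    return out_a
```

In real hardware the comparison happens cycle-by-cycle in logic; the point of the sketch is only that agreement yields one result and disagreement yields a fault, never a silently wrong answer.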
[0160] In at least one embodiment, processor(s) 910 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce final image for player window. In at least one embodiment, video image compositor may perform lens distortion correction on wide-view camera(s) 970, surround camera(s) 974, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC 904, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change vehicle’s destination, activate or change vehicle’s infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to driver when vehicle is operating in an autonomous mode and are disabled otherwise. [0161] In at least one embodiment, video image compositor may include enhanced temporal noise reduction, which performs both spatial and temporal noise reduction. For example, in at least one embodiment, where motion occurs in a video, noise reduction weights spatial information appropriately, decreasing weight of information provided by adjacent frames. In at least one embodiment, where an image or portion of an image does not include motion, temporal noise reduction performed by video image compositor may use information from previous image to reduce noise in current image. [0162] In at least one embodiment, video image compositor may also be configured to perform stereo rectification on input stereo lens frames.
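The motion-adaptive blending in paragraph [0161] — weighting spatial (current-frame) information heavily where there is motion, and leaning on the previous frame where there is not — can be sketched per pixel; the particular blend weights are illustrative assumptions, not the compositor's actual filter.

```python
def denoise_pixel(current, previous, motion):
    """Blend the current pixel with the previous frame's pixel.
    With motion, weight the current (spatial) value heavily; without
    motion, lean on the previous frame to average out temporal noise."""
    w_prev = 0.1 if motion else 0.6  # hypothetical weights
    return (1.0 - w_prev) * current + w_prev * previous
```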
In at least one embodiment, video image compositor may further be used for user interface composition when operating system desktop is in use, and GPU(s) 908 are not required to continuously render new surfaces. In at least one embodiment, when GPU(s) 908 are powered on and active doing 3D rendering, video image compositor may be used to offload GPU(s) 908 to improve performance and responsiveness. [0163] In at least one embodiment, one or more of SoC(s) 904 may further include a mobile industry processor interface (“MIPI”) camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for camera and related pixel input functions. In at least one embodiment, one or more of SoC(s) 904 may further include an input/output controller(s) that may be controlled by software and may be used for receiving I/O signals that are uncommitted to a specific role. [0164] In at least one embodiment, one or more of SoC(s) 904 may further include a broad range of peripheral interfaces to enable communication with peripherals, audio encoders/decoders (“codecs”), power management, and/or other devices. SoC(s) 904 may be used to process data from cameras (e.g., connected over Gigabit Multimedia Serial Link and Ethernet), sensors (e.g., LIDAR sensor(s) 964, RADAR sensor(s) 960, etc. that may be connected over Ethernet), data from bus 902 (e.g., speed of vehicle 900, steering wheel position, etc.), data from GNSS sensor(s) 958 (e.g., connected over Ethernet or CAN bus), etc. In at least one embodiment, one or more of SoC(s) 904 may further include dedicated high-performance mass storage controllers that may include their own DMA engines, and that may be used to free CPU(s) 906 from routine data management tasks. 
[0165] In at least one embodiment, SoC(s) 904 may be an end-to-end platform with a flexible architecture that spans automation levels 3-5, thereby providing a comprehensive functional safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, provides a platform for a flexible, reliable driving software stack, along with deep learning tools. In at least one embodiment, SoC(s) 904 may be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, in at least one embodiment, accelerator(s) 914, when combined with CPU(s) 906, GPU(s) 908, and data store(s) 916, may provide for a fast, efficient platform for level 3-5 autonomous vehicles. [0166] In at least one embodiment, computer vision algorithms may be executed on CPUs, which may be configured using high-level programming language, such as C programming language, to execute a wide variety of processing algorithms across a wide variety of visual data. However, in at least one embodiment, CPUs are oftentimes unable to meet performance requirements of many computer vision applications, such as those related to execution time and power consumption, for example. In at least one embodiment, many CPUs are unable to execute complex object detection algorithms in real-time, which is used in in-vehicle ADAS applications and in practical Level 3-5 autonomous vehicles. [0167] Embodiments described herein allow for multiple neural networks to be performed simultaneously and/or sequentially, and for results to be combined together to enable Level 3-5 autonomous driving functionality. For example, in at least one embodiment, a CNN executing on DLA or discrete GPU (e.g., GPU(s) 920) may include text and word recognition, allowing supercomputer to read and understand traffic signs, including signs for which neural network has not been specifically trained. 
In at least one embodiment, DLA may further include a neural network that is able to identify, interpret, and provide semantic understanding of sign, and to pass that semantic understanding to path planning modules running on CPU Complex. [0168] In at least one embodiment, multiple neural networks may be run simultaneously, as for Level 3, 4, or 5 driving. For example, in at least one embodiment, a warning sign consisting of “Caution: flashing lights indicate icy conditions,” along with an electric light, may be independently or collectively interpreted by several neural networks. In at least one embodiment, sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a neural network that has been trained), text “flashing lights indicate icy conditions” may be interpreted by a second deployed neural network, which informs vehicle’s path planning software (preferably executing on CPU Complex) that when flashing lights are detected, icy conditions exist. In at least one embodiment, flashing light may be identified by operating a third deployed neural network over multiple frames, informing vehicle’s path- planning software of presence (or absence) of flashing lights. In at least one embodiment, all three neural networks may run simultaneously, such as within DLA and/or on GPU(s) 908. [0169] In at least one embodiment, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify presence of an authorized driver and/or owner of vehicle 900. In at least one embodiment, an always on sensor processing engine may be used to unlock vehicle when owner approaches driver door and turn on lights, and, in security mode, to disable vehicle when owner leaves vehicle. In this way, SoC(s) 904 provide for security against theft and/or carjacking. 
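The three-network arrangement in paragraph [0168] — one network identifying the sign, one interpreting its text, and one watching for flashing lights across multiple frames — can be sketched as independent stages whose outputs are combined into advice for path-planning software. All three stage functions here are stand-in stubs, not trained networks.

```python
def detect_sign(frame):
    """Stand-in for a first deployed network: is a traffic sign present?"""
    return frame.get("sign") is not None

def interpret_text(frame):
    """Stand-in for a second network: read the sign's text."""
    return frame.get("sign", "")

def flashing_lights(frames):
    """Stand-in for a third network run over multiple frames: lights are
    'flashing' if their state changes between consecutive frames."""
    states = [f.get("light_on", False) for f in frames]
    return any(a != b for a, b in zip(states, states[1:]))

def advise_path_planner(frames):
    """Combine all three outputs into advice for path-planning software."""
    current = frames[-1]
    icy_warning = (detect_sign(current)
                   and "icy" in interpret_text(current)
                   and flashing_lights(frames))
    return "expect icy conditions" if icy_warning else "no action"
```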
[0170] In at least one embodiment, a CNN for emergency vehicle detection and identification may use data from microphones 996 to detect and identify emergency vehicle sirens. In at least one embodiment, SoC(s) 904 use CNN for classifying environmental and urban sounds, as well as classifying visual data. In at least one embodiment, CNN running on DLA is trained to identify relative closing speed of emergency vehicle (e.g., by using Doppler effect). In at least one embodiment, CNN may also be trained to identify emergency vehicles specific to local area in which vehicle is operating, as identified by GNSS sensor(s) 958. In at least one embodiment, when operating in Europe, CNN will seek to detect European sirens, and when in United States CNN will seek to identify only North American sirens. In at least one embodiment, once an emergency vehicle is detected, a control program may be used to execute an emergency vehicle safety routine, slowing vehicle, pulling over to side of road, parking vehicle, and/or idling vehicle, with assistance of ultrasonic sensor(s) 962, until emergency vehicle(s) passes. [0171] In at least one embodiment, vehicle 900 may include CPU(s) 918 (e.g., discrete CPU(s), or dCPU(s)), that may be coupled to SoC(s) 904 via a high-speed interconnect (e.g., PCIe). In at least one embodiment, CPU(s) 918 may include an X86 processor, for example. CPU(s) 918 may be used to perform any of a variety of functions, including arbitrating potentially inconsistent results between ADAS sensors and SoC(s) 904, and/or monitoring status and health of controller(s) 936 and/or an infotainment system on a chip (“infotainment SoC”) 930, for example. [0172] In at least one embodiment, vehicle 900 may include GPU(s) 920 (e.g., discrete GPU(s), or dGPU(s)), that may be coupled to SoC(s) 904 via a high-speed interconnect (e.g., NVIDIA’s NVLINK). 
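The Doppler-based closing-speed estimation mentioned in paragraph [0170] rests on the standard acoustic Doppler relation for a source moving toward a stationary listener; the sketch below shows only that arithmetic, not the CNN that would produce the frequency estimates.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def closing_speed(f_source_hz, f_observed_hz, c=SPEED_OF_SOUND):
    """Closing speed of an approaching siren from its Doppler shift
    (stationary listener, source moving toward it):
    f_obs = f_src * c / (c - v)  =>  v = c * (1 - f_src / f_obs)."""
    return c * (1.0 - f_source_hz / f_observed_hz)
```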
In at least one embodiment, GPU(s) 920 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based at least in part on input (e.g., sensor data) from sensors of vehicle 900. [0173] In at least one embodiment, vehicle 900 may further include network interface 924 which may include, without limitation, wireless antenna(s) 926 (e.g., one or more wireless antennas 926 for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). In at least one embodiment, network interface 924 may be used to enable wireless connectivity over Internet with cloud (e.g., with server(s) and/or other network devices), with other vehicles, and/or with computing devices (e.g., client devices of passengers). In at least one embodiment, to communicate with other vehicles, a direct link may be established between vehicle 900 and another vehicle and/or an indirect link may be established (e.g., across networks and over Internet). In at least one embodiment, direct links may be provided using a vehicle-to-vehicle communication link. Vehicle-to-vehicle communication link may provide vehicle 900 information about vehicles in proximity to vehicle 900 (e.g., vehicles in front of, on side of, and/or behind vehicle 900). In at least one embodiment, aforementioned functionality may be part of a cooperative adaptive cruise control functionality of vehicle 900. [0174] In at least one embodiment, network interface 924 may include an SoC that provides modulation and demodulation functionality and enables controller(s) 936 to communicate over wireless networks. In at least one embodiment, network interface 924 may include a radio frequency front-end for up-conversion from baseband to radio frequency, and down conversion from radio frequency to baseband.
In at least one embodiment, frequency conversions may be performed in any technically feasible fashion. For example, frequency conversions could be performed through well-known processes, and/or using super-heterodyne processes. In at least one embodiment, radio frequency front end functionality may be provided by a separate chip. In at least one embodiment, network interface may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols. [0175] In at least one embodiment, vehicle 900 may further include data store(s) 928 which may include, without limitation, off-chip (e.g., off SoC(s) 904) storage. In at least one embodiment, data store(s) 928 may include, without limitation, one or more storage elements including RAM, SRAM, dynamic random-access memory (“DRAM”), video random-access memory (“VRAM”), Flash, hard disks, and/or other components and/or devices that may store at least one bit of data. [0176] In at least one embodiment, vehicle 900 may further include GNSS sensor(s) 958 (e.g., GPS and/or assisted GPS sensors), to assist in mapping, perception, occupancy grid generation, and/or path planning functions. In at least one embodiment, any number of GNSS sensor(s) 958 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet to Serial (e.g., RS-232) bridge. [0177] In at least one embodiment, vehicle 900 may further include RADAR sensor(s) 960. RADAR sensor(s) 960 may be used by vehicle 900 for long-range vehicle detection, even in darkness and/or severe weather conditions. In at least one embodiment, RADAR functional safety levels may be ASIL B. RADAR sensor(s) 960 may use CAN and/or bus 902 (e.g., to transmit data generated by RADAR sensor(s) 960) for control and to access object tracking data, with access to Ethernet to access raw data in some examples. 
In at least one embodiment, a wide variety of RADAR sensor types may be used. For example, and without limitation, RADAR sensor(s) 960 may be suitable for front, rear, and side RADAR use. In at least one embodiment, one or more of RADAR sensor(s) 960 are Pulse Doppler RADAR sensor(s). [0178] In at least one embodiment, RADAR sensor(s) 960 may include different configurations, such as long-range with narrow field of view, short-range with wide field of view, short-range side coverage, etc. In at least one embodiment, long-range RADAR may be used for adaptive cruise control functionality. In at least one embodiment, long-range RADAR systems may provide a broad field of view realized by two or more independent scans, such as within a 250m range. In at least one embodiment, RADAR sensor(s) 960 may help in distinguishing between static and moving objects, and may be used by ADAS system 938 for emergency brake assist and forward collision warning. In at least one embodiment, sensor(s) 960 included in a long-range RADAR system may include, without limitation, monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennae and a high-speed CAN and FlexRay interface. In at least one embodiment, with six antennae, central four antennae may create a focused beam pattern, designed to record vehicle’s 900 surroundings at higher speeds with minimal interference from traffic in adjacent lanes. In at least one embodiment, other two antennae may expand field of view, making it possible to quickly detect vehicles entering or leaving vehicle’s 900 lane. [0179] In at least one embodiment, mid-range RADAR systems may include, as an example, a range of up to 160m (front) or 80m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). In at least one embodiment, short-range RADAR systems may include, without limitation, any number of RADAR sensor(s) 960 designed to be installed at both ends of rear bumper.
When installed at both ends of rear bumper, in at least one embodiment, a RADAR sensor system may create two beams that constantly monitor blind spot in rear and next to vehicle. In at least one embodiment, short-range RADAR systems may be used in ADAS system 938 for blind spot detection and/or lane change assist. [0180] In at least one embodiment, vehicle 900 may further include ultrasonic sensor(s) 962. In at least one embodiment, ultrasonic sensor(s) 962, which may be positioned at front, back, and/or sides of vehicle 900, may be used for park assist and/or to create and update an occupancy grid. In at least one embodiment, a wide variety of ultrasonic sensor(s) 962 may be used, and different ultrasonic sensor(s) 962 may be used for different ranges of detection (e.g., 2.5m, 4m). In at least one embodiment, ultrasonic sensor(s) 962 may operate at functional safety levels of ASIL B. [0181] In at least one embodiment, vehicle 900 may include LIDAR sensor(s) 964. LIDAR sensor(s) 964 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions. In at least one embodiment, LIDAR sensor(s) 964 may be functional safety level ASIL B. In at least one embodiment, vehicle 900 may include multiple LIDAR sensors 964 (e.g., two, four, six, etc.) that may use Ethernet (e.g., to provide data to a Gigabit Ethernet switch). [0182] In at least one embodiment, LIDAR sensor(s) 964 may be capable of providing a list of objects and their distances for a 360-degree field of view. In at least one embodiment, commercially available LIDAR sensor(s) 964 may have an advertised range of approximately 100m, with an accuracy of 2cm-3cm, and with support for a 100 Mbps Ethernet connection, for example. In at least one embodiment, one or more non-protruding LIDAR sensors 964 may be used. In such an embodiment, LIDAR sensor(s) 964 may be implemented as a small device that may be embedded into front, rear, sides, and/or corners of vehicle 900.
In at least one embodiment, LIDAR sensor(s) 964, in such an embodiment, may provide up to a 120-degree horizontal and 35-degree vertical field-of-view, with a 200m range even for low-reflectivity objects. In at least one embodiment, front-mounted LIDAR sensor(s) 964 may be configured for a horizontal field of view between 45 degrees and 135 degrees. [0183] In at least one embodiment, LIDAR technologies, such as 3D flash LIDAR, may also be used. 3D flash LIDAR uses a flash of a laser as a transmission source, to illuminate surroundings of vehicle 900 up to approximately 200m. In at least one embodiment, a flash LIDAR unit includes, without limitation, a receptor, which records laser pulse transit time and reflected light on each pixel, which in turn corresponds to range from vehicle 900 to objects. In at least one embodiment, flash LIDAR may allow for highly accurate and distortion-free images of surroundings to be generated with every laser flash. In at least one embodiment, four flash LIDAR sensors may be deployed, one at each side of vehicle 900. In at least one embodiment, 3D flash LIDAR systems include, without limitation, a solid-state 3D staring array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). In at least one embodiment, flash LIDAR device may use a 5 nanosecond class I (eye-safe) laser pulse per frame and may capture reflected laser light in form of 3D range point clouds and co-registered intensity data. [0184] In at least one embodiment, vehicle 900 may further include IMU sensor(s) 966. In at least one embodiment, IMU sensor(s) 966 may be located at a center of rear axle of vehicle 900. In at least one embodiment, IMU sensor(s) 966 may include, for example and without limitation, accelerometer(s), magnetometer(s), gyroscope(s), magnetic compass(es), and/or other sensor types.
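The per-pixel conversion from recorded laser pulse transit time to range, which underlies the flash LIDAR receptor described in paragraph [0183], follows directly from round-trip timing: the light travels out and back, so range is half the total path. This is a minimal sketch of that arithmetic only.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def transit_time_to_range(transit_time_s, c=SPEED_OF_LIGHT):
    """Range from a laser pulse's round-trip transit time."""
    return c * transit_time_s / 2.0

def range_image(transit_times):
    """Apply the conversion to every pixel of a 2D transit-time map,
    producing a depth image for the 3D range point cloud."""
    return [[transit_time_to_range(t) for t in row] for row in transit_times]
```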
In at least one embodiment, such as in six-axis applications, IMU sensor(s) 966 may include, without limitation, accelerometers and gyroscopes. In at least one embodiment, such as in nine-axis applications, IMU sensor(s) 966 may include, without limitation, accelerometers, gyroscopes, and magnetometers. [0185] In at least one embodiment, IMU sensor(s) 966 may be implemented as a miniature, high performance GPS-Aided Inertial Navigation System (“GPS/INS”) that combines micro-electro-mechanical systems (“MEMS”) inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. In at least one embodiment, IMU sensor(s) 966 may enable vehicle 900 to estimate heading without requiring input from a magnetic sensor by directly observing and correlating changes in velocity from GPS to IMU sensor(s) 966. In at least one embodiment, IMU sensor(s) 966 and GNSS sensor(s) 958 may be combined in a single integrated unit. [0186] In at least one embodiment, vehicle 900 may include microphone(s) 996 placed in and/or around vehicle 900. In at least one embodiment, microphone(s) 996 may be used for emergency vehicle detection and identification, among other things. [0187] In at least one embodiment, vehicle 900 may further include any number of camera types, including stereo camera(s) 968, wide-view camera(s) 970, infrared camera(s) 972, surround camera(s) 974, long-range camera(s) 998, mid-range camera(s) 976, and/or other camera types. In at least one embodiment, cameras may be used to capture image data around an entire periphery of vehicle 900. In at least one embodiment, types of cameras used depends on vehicle 900. In at least one embodiment, any combination of camera types may be used to provide necessary coverage around vehicle 900. In at least one embodiment, number of cameras may differ depending on embodiment.
For example, in at least one embodiment, vehicle 900 could include six cameras, seven cameras, ten cameras, twelve cameras, or another number of cameras. In at least one embodiment, cameras may support, as an example and without limitation, Gigabit Multimedia Serial Link (“GMSL”) and/or Gigabit Ethernet. In at least one embodiment, each of camera(s) is described with more detail previously herein with respect to FIG.9A and FIG.9B. [0188] In at least one embodiment, vehicle 900 may further include vibration sensor(s) 942. In at least one embodiment, vibration sensor(s) 942 may measure vibrations of components of vehicle 900, such as axle(s). For example, in at least one embodiment, changes in vibrations may indicate a change in road surfaces. In at least one embodiment, when two or more vibration sensors 942 are used, differences between vibrations may be used to determine friction or slippage of road surface (e.g., when difference in vibration is between a power-driven axle and a freely rotating axle). [0189] In at least one embodiment, vehicle 900 may include ADAS system 938. In at least one embodiment, ADAS system 938 may include, without limitation, an SoC, in some examples. In at least one embodiment, ADAS system 938 may include, without limitation, any number and combination of an autonomous/adaptive/automatic cruise control (“ACC”) system, a cooperative adaptive cruise control (“CACC”) system, a forward crash warning (“FCW”) system, an automatic emergency braking (“AEB”) system, a lane departure warning (“LDW”) system, a lane keep assist (“LKA”) system, a blind spot warning (“BSW”) system, a rear cross-traffic warning (“RCTW”) system, a collision warning (“CW”) system, a lane centering (“LC”) system, and/or other systems, features, and/or functionality. [0190] In at least one embodiment, ACC system may use RADAR sensor(s) 960, LIDAR sensor(s) 964, and/or any number of camera(s). In at least one embodiment, ACC system may include a longitudinal ACC system and/or a lateral ACC system.
In at least one embodiment, longitudinal ACC system monitors and controls distance to vehicle immediately ahead of vehicle 900 and automatically adjusts speed of vehicle 900 to maintain a safe distance from vehicles ahead. In at least one embodiment, lateral ACC system performs distance keeping, and advises vehicle 900 to change lanes when necessary. In at least one embodiment, lateral ACC is related to other ADAS applications such as LC and CW. [0191] In at least one embodiment, CACC system uses information from other vehicles that may be received via network interface 924 and/or wireless antenna(s) 926 from other vehicles via a wireless link, or indirectly, over a network connection (e.g., over Internet). In at least one embodiment, direct links may be provided by a vehicle-to-vehicle (“V2V”) communication link, while indirect links may be provided by an infrastructure-to-vehicle (“I2V”) communication link. In general, V2V communication concept provides information about immediately preceding vehicles (e.g., vehicles immediately ahead of and in same lane as vehicle 900), while I2V communication concept provides information about traffic further ahead. In at least one embodiment, CACC system may include either or both I2V and V2V information sources. In at least one embodiment, given information of vehicles ahead of vehicle 900, CACC system may be more reliable and has the potential to improve traffic flow smoothness and reduce congestion on road. [0192] In at least one embodiment, FCW system is designed to alert driver to a hazard, so that driver may take corrective action. In at least one embodiment, FCW system uses a front-facing camera and/or RADAR sensor(s) 960, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component.
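The longitudinal ACC behavior described in paragraph [0190]–[0191] — maintaining a safe following distance by adjusting speed — can be caricatured as a proportional controller. The gain, the time-gap policy, and the speed limits below are invented for illustration and are not the control law of this disclosure.

```python
def safe_gap(speed_mps, time_gap_s=2.0, min_gap_m=5.0):
    """Target following distance: a fixed time gap, floored at a minimum."""
    return max(min_gap_m, speed_mps * time_gap_s)

def acc_speed_command(speed_mps, gap_m, set_speed_mps, gain=0.5):
    """Proportional adjustment toward the safe gap, capped at the
    driver-selected set speed and floored at zero."""
    error = gap_m - safe_gap(speed_mps)
    command = speed_mps + gain * error
    return max(0.0, min(set_speed_mps, command))
```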
In at least one embodiment, FCW system may provide a warning, such as in form of a sound, visual warning, vibration, and/or a quick brake pulse. [0193] In at least one embodiment, AEB system detects an impending forward collision with another vehicle or other object, and may automatically apply brakes if driver does not take corrective action within a specified time or distance parameter. In at least one embodiment, AEB system may use front-facing camera(s) and/or RADAR sensor(s) 960, coupled to a dedicated processor, DSP, FPGA, and/or ASIC. In at least one embodiment, when AEB system detects a hazard, AEB system typically first alerts driver to take corrective action to avoid collision and, if driver does not take corrective action, AEB system may automatically apply brakes in an effort to prevent, or at least mitigate, impact of predicted collision. In at least one embodiment, AEB system may include techniques such as dynamic brake support and/or crash imminent braking. [0194] In at least one embodiment, LDW system provides visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert driver when vehicle 900 crosses lane markings. In at least one embodiment, LDW system does not activate when driver indicates an intentional lane departure by activating a turn signal. In at least one embodiment, LDW system may use front-side facing cameras, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, LKA system is a variation of LDW system. In at least one embodiment, LKA system provides steering input or braking to correct vehicle 900 if vehicle 900 starts to exit lane. [0195] In at least one embodiment, BSW system detects and warns driver of vehicles in an automobile’s blind spot. In at least one embodiment, BSW system may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. 
In at least one embodiment, BSW system may provide an additional warning when driver uses a turn signal. In at least one embodiment, BSW system may use rear-side facing camera(s) and/or RADAR sensor(s) 960, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component. [0196] In at least one embodiment, RCTW system may provide visual, audible, and/or tactile notification when an object is detected outside rear-camera range when vehicle 900 is backing up. In at least one embodiment, RCTW system includes AEB system to ensure that vehicle brakes are applied to avoid a crash. In at least one embodiment, RCTW system may use one or more rear-facing RADAR sensor(s) 960, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component. [0197] In at least one embodiment, conventional ADAS systems may be prone to false positive results which may be annoying and distracting to a driver, but typically are not catastrophic, because conventional ADAS systems alert driver and allow driver to decide whether a safety condition truly exists and act accordingly. In at least one embodiment, vehicle 900 itself decides, in case of conflicting results, whether to heed result from a primary computer or a secondary computer (e.g., first controller 936 or second controller 936). For example, in at least one embodiment, ADAS system 938 may be a backup and/or secondary computer for providing perception information to a backup computer rationality module. In at least one embodiment, backup computer rationality monitor may run a redundant diverse software on hardware components to detect faults in perception and dynamic driving tasks. In at least one embodiment, outputs from ADAS system 938 may be provided to a supervisory MCU. 
In at least one embodiment, if outputs from primary computer and secondary computer conflict, supervisory MCU determines how to reconcile conflict to ensure safe operation. [0198] In at least one embodiment, primary computer may be configured to provide supervisory MCU with a confidence score, indicating primary computer’s confidence in chosen result. In at least one embodiment, if confidence score exceeds a threshold, supervisory MCU may follow primary computer’s direction, regardless of whether secondary computer provides a conflicting or inconsistent result. In at least one embodiment, where confidence score does not meet threshold, and where primary and secondary computer indicate different results (e.g., a conflict), supervisory MCU may arbitrate between computers to determine appropriate outcome. [0199] In at least one embodiment, supervisory MCU may be configured to run a neural network(s) that is trained and configured to determine, based at least in part on outputs from primary computer and secondary computer, conditions under which secondary computer provides false alarms. In at least one embodiment, neural network(s) in supervisory MCU may learn when secondary computer’s output may be trusted, and when it cannot. For example, in at least one embodiment, when secondary computer is a RADAR-based FCW system, a neural network(s) in supervisory MCU may learn when FCW system is identifying metallic objects that are not, in fact, hazards, such as a drainage grate or manhole cover that triggers an alarm. In at least one embodiment, when secondary computer is a camera-based LDW system, a neural network in supervisory MCU may learn to override LDW when bicyclists or pedestrians are present and a lane departure is, in fact, safest maneuver. In at least one embodiment, supervisory MCU may include at least one of a DLA or GPU suitable for running neural network(s) with associated memory. 
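The arbitration behavior described in paragraphs [0198]-[0199], in which supervisory MCU follows primary computer when its confidence score exceeds a threshold and otherwise resolves a conflict between primary and secondary computers, can be sketched as follows. This is an illustrative simplification, not the claimed implementation; the function name, threshold value, and result encoding are assumptions made for the example.

```python
# Hypothetical sketch of supervisory MCU arbitration between a primary
# and a secondary (e.g., ADAS) computer. Names and values are assumed.
CONFIDENCE_THRESHOLD = 0.9  # assumed threshold for illustration

def arbitrate(primary_result, primary_confidence, secondary_result):
    # If primary computer's confidence score exceeds threshold, follow
    # primary computer regardless of any conflicting secondary result.
    if primary_confidence >= CONFIDENCE_THRESHOLD:
        return primary_result
    # Below threshold with agreement: either result is acceptable.
    if primary_result == secondary_result:
        return primary_result
    # Below threshold with a conflict: arbitrate; here we conservatively
    # prefer whichever result requests braking, as a stand-in policy.
    return primary_result if primary_result == "brake" else secondary_result

print(arbitrate("continue", 0.95, "brake"))  # high confidence: "continue"
print(arbitrate("continue", 0.50, "brake"))  # conflict below threshold: "brake"
```

A production arbiter would of course weigh far richer state than a single label; the sketch only illustrates the threshold-then-arbitrate control flow.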
In at least one embodiment, supervisory MCU may comprise and/or be included as a component of SoC(s) 904. [0200] In at least one embodiment, ADAS system 938 may include a secondary computer that performs ADAS functionality using traditional rules of computer vision. In at least one embodiment, secondary computer may use classic computer vision rules (if-then), and presence of a neural network(s) in supervisory MCU may improve reliability, safety and performance. For example, in at least one embodiment, diverse implementation and intentional non-identity makes overall system more fault-tolerant, especially to faults caused by software (or software-hardware interface) functionality. For example, in at least one embodiment, if there is a software bug or error in software running on primary computer, and non-identical software code running on secondary computer provides same overall result, then supervisory MCU may have greater confidence that overall result is correct, and bug in software or hardware on primary computer is not causing material error. [0201] In at least one embodiment, output of ADAS system 938 may be fed into primary computer’s perception block and/or primary computer’s dynamic driving task block. For example, in at least one embodiment, if ADAS system 938 indicates a forward crash warning due to an object immediately ahead, perception block may use this information when identifying objects. In at least one embodiment, secondary computer may have its own neural network which is trained and thus reduces risk of false positives, as described herein. [0202] In at least one embodiment, vehicle 900 may further include infotainment SoC 930 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as an SoC, infotainment system 930, in at least one embodiment, may not be an SoC, and may include, without limitation, two or more discrete components. 
In at least one embodiment, infotainment SoC 930 may include, without limitation, a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.), video (e.g., TV, movies, streaming, etc.), phone (e.g., hands-free calling), network connectivity (e.g., LTE, WiFi, etc.), and/or information services (e.g., navigation systems, rear-parking assistance, a radio data system, vehicle related information such as fuel level, total distance covered, brake fluid level, oil level, door open/close, air filter information, etc.) to vehicle 900. For example, infotainment SoC 930 could include radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, carputers, in-car entertainment, WiFi, steering wheel audio controls, hands-free voice control, a heads-up display (“HUD”), HMI display 934, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. In at least one embodiment, infotainment SoC 930 may further be used to provide information (e.g., visual and/or audible) to user(s) of vehicle, such as information from ADAS system 938, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information. [0203] In at least one embodiment, infotainment SoC 930 may include any amount and type of GPU functionality. In at least one embodiment, infotainment SoC 930 may communicate over bus 902 (e.g., CAN bus, Ethernet, etc.) with other devices, systems, and/or components of vehicle 900. 
In at least one embodiment, infotainment SoC 930 may be coupled to a supervisory MCU such that GPU of infotainment system may perform some self-driving functions in event that primary controller(s) 936 (e.g., primary and/or backup computers of vehicle 900) fail. In at least one embodiment, infotainment SoC 930 may put vehicle 900 into a chauffeur to safe stop mode, as described herein. [0204] In at least one embodiment, vehicle 900 may further include instrument cluster 932 (e.g., a digital dash, an electronic instrument cluster, a digital instrument panel, etc.). In at least one embodiment, instrument cluster 932 may include, without limitation, a controller and/or supercomputer (e.g., a discrete controller or supercomputer). In at least one embodiment, instrument cluster 932 may include, without limitation, any number and combination of a set of instrumentation such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn indicators, gearshift position indicator, seat belt warning light(s), parking-brake warning light(s), engine-malfunction light(s), supplemental restraint system (e.g., airbag) information, lighting controls, safety system controls, navigation information, etc. In some examples, information may be displayed and/or shared among infotainment SoC 930 and instrument cluster 932. In at least one embodiment, instrument cluster 932 may be included as part of infotainment SoC 930, or vice versa. [0205] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.9C for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. 
[0206] In at least one embodiment, at least one component shown or described with respect to vehicle 900 system architecture of FIG.9C is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system architecture of FIG.9C. [0207] FIG.9D is a diagram of a system 976 for communication between cloud-based server(s) and autonomous vehicle 900 of FIG.9A, according to at least one embodiment. In at least one embodiment, system 976 may include, without limitation, server(s) 978, network(s) 990, and any number and type of vehicles, including vehicle 900. Server(s) 978 may include, without limitation, a plurality of GPUs 984(A)-984(H) (collectively referred to herein as GPUs 984), PCIe switches 982(A)-982(H) (collectively referred to herein as PCIe switches 982), and/or CPUs 980(A)-980(B) (collectively referred to herein as CPUs 980). GPUs 984, CPUs 980, and PCIe switches 982 may be interconnected with high-speed interconnects such as, for example and without limitation, NVLink interfaces 988 developed by NVIDIA and/or PCIe connections 986. In at least one embodiment, GPUs 984 are connected via an NVLink and/or NVSwitch SoC and GPUs 984 and PCIe switches 982 are connected via PCIe interconnects. 
In at least one embodiment, although eight GPUs 984, two CPUs 980, and four PCIe switches 982 are illustrated, this is not intended to be limiting. In at least one embodiment, each of server(s) 978 may include, without limitation, any number of GPUs 984, CPUs 980, and/or PCIe switches 982, in any combination. For example, in at least one embodiment, server(s) 978 could each include eight, sixteen, thirty-two, and/or more GPUs 984. [0208] In at least one embodiment, server(s) 978 may receive, over network(s) 990 and from vehicles, image data representative of images showing unexpected or changed road conditions, such as recently commenced road-work. In at least one embodiment, server(s) 978 may transmit, over network(s) 990 and to vehicles, neural networks 992, updated neural networks 992, and/or map information 994, including, without limitation, information regarding traffic and road conditions. In at least one embodiment, updates to map information 994 may include, without limitation, updates for HD map 922, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions. In at least one embodiment, neural networks 992, updated neural networks 992, and/or map information 994 may have resulted from new training and/or experiences represented in data received from any number of vehicles in environment, and/or based at least in part on training performed at a data center (e.g., using server(s) 978 and/or other servers). [0209] In at least one embodiment, server(s) 978 may be used to train machine learning models (e.g., neural networks) based at least in part on training data. Training data may be generated by vehicles, and/or may be generated in a simulation (e.g., using a game engine). In at least one embodiment, any amount of training data is tagged (e.g., where associated neural network benefits from supervised learning) and/or undergoes other pre-processing. 
In at least one embodiment, any amount of training data is not tagged and/or pre-processed (e.g., where associated neural network does not require supervised learning). In at least one embodiment, once machine learning models are trained, machine learning models may be used by vehicles (e.g., transmitted to vehicles over network(s) 990), and/or machine learning models may be used by server(s) 978 to remotely monitor vehicles. [0210] In at least one embodiment, server(s) 978 may receive data from vehicles and apply data to up-to-date real-time neural networks for real-time intelligent inferencing. In at least one embodiment, server(s) 978 may include deep-learning supercomputers and/or dedicated AI computers powered by GPU(s) 984, such as DGX and DGX Station machines developed by NVIDIA. However, in at least one embodiment, server(s) 978 may include deep learning infrastructure that uses CPU-powered data centers. [0211] In at least one embodiment, deep-learning infrastructure of server(s) 978 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify health of processors, software, and/or associated hardware in vehicle 900. For example, in at least one embodiment, deep-learning infrastructure may receive periodic updates from vehicle 900, such as a sequence of images and/or objects that vehicle 900 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). In at least one embodiment, deep-learning infrastructure may run its own neural network to identify objects and compare them with objects identified by vehicle 900 and, if results do not match and deep-learning infrastructure concludes that AI in vehicle 900 is malfunctioning, then server(s) 978 may transmit a signal to vehicle 900 instructing a fail-safe computer of vehicle 900 to assume control, notify passengers, and complete a safe parking maneuver. 
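The health-check comparison described in paragraph [0211], where deep-learning infrastructure of server(s) 978 runs its own neural network on images received from vehicle 900 and compares its detections against objects reported by the vehicle, might be sketched as below. The function name, set-overlap metric, and agreement threshold are illustrative assumptions, not details taken from the claimed system.

```python
# Hypothetical sketch of a server-side AI health check: compare the
# vehicle's reported object detections against the server's own
# independent detections on the same image sequence.
def ai_results_match(vehicle_objects, server_objects, min_agreement=0.8):
    """True when the vehicle's detections agree closely enough with the
    server's detections; False suggests the vehicle AI may be faulty."""
    server_set, vehicle_set = set(server_objects), set(vehicle_objects)
    if not server_set:
        # Server saw nothing; agree only if the vehicle also saw nothing.
        return not vehicle_set
    agreement = len(vehicle_set & server_set) / len(server_set)
    return agreement >= min_agreement

# Matching detections: vehicle AI judged healthy.
print(ai_results_match(["car", "pedestrian"], ["car", "pedestrian"]))  # True
# Vehicle missed a pedestrian the server found: flag a possible fault
# (a real system would then signal the fail-safe computer).
print(ai_results_match(["car"], ["car", "pedestrian"]))  # False
```

A real comparison would match detections spatially and temporally rather than by label sets; the sketch only shows the agree-or-escalate decision.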
[0212] In at least one embodiment, server(s) 978 may include GPU(s) 984 and one or more programmable inference accelerators (e.g., NVIDIA’s TensorRT 3). In at least one embodiment, combination of GPU-powered servers and inference acceleration may make real-time responsiveness possible. In at least one embodiment, such as where performance is less critical, servers powered by CPUs, FPGAs, and other processors may be used for inferencing. In at least one embodiment, hardware structure(s) 615 are used to perform one or more embodiments. Details regarding hardware structure(s) 615 are provided herein in conjunction with FIGS.6A and/or 6B. [0213] In at least one embodiment, at least one component shown or described with respect to vehicle 900 of FIGS.9A-9D is utilized to implement techniques described in connection with FIGS.1-5 to identify one or more features of a vehicle operating environment. In at least one embodiment, vehicle 900 includes a computer vision system that includes one or more processors to identify one or more features of a vehicle operating environment based at least in part on using one or more neural networks to generate one or more outputs of one or more convolution operations on image data by at least contracting one or more tensors to generate one or more feature maps, and one or more of a propulsion system and a directional control system to control one or more movements of vehicle 900 based at least in part on identified one or more features. In at least one embodiment, computer vision system, propulsion system, and directional control system may be included in some other type of vehicle such as an aerial vehicle (e.g., airplane, helicopter, quadcopter), an aquatic vehicle (e.g., boat, submarine), or a space vehicle (e.g., satellite, spacecraft). 
In at least one embodiment, vehicle 900 performs at least one convolution operation with respect to three-dimensional point cloud data using a contraction operation, as described with respect to at least one of FIGS.1-5. COMPUTER SYSTEMS [0214] FIG.10 is a block diagram illustrating an exemplary computer system, which may be a system with interconnected devices and components, a system-on-a-chip (SOC), or some combination thereof 1000 formed with a processor that may include execution units to execute an instruction, according to at least one embodiment. In at least one embodiment, computer system 1000 may include, without limitation, a component, such as a processor 1002, to employ execution units including logic to perform algorithms for processing data, in accordance with present disclosure, such as in embodiments described herein. In at least one embodiment, computer system 1000 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and like) may also be used. In at least one embodiment, computer system 1000 may execute a version of WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces may also be used. [0215] Embodiments may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (“PDAs”), and handheld PCs. 
In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor (“DSP”), system on a chip, network computers (“NetPCs”), set-top boxes, network hubs, wide area network (“WAN”) switches, or any other system that may perform one or more instructions in accordance with at least one embodiment. [0216] In at least one embodiment, computer system 1000 may include, without limitation, processor 1002 that may include, without limitation, one or more execution units 1008 to perform machine learning model training and/or inferencing according to techniques described herein. In at least one embodiment, computer system 1000 is a single processor desktop or server system, but in another embodiment computer system 1000 may be a multiprocessor system. In at least one embodiment, processor 1002 may include, without limitation, a complex instruction set computer (“CISC”) microprocessor, a reduced instruction set computing (“RISC”) microprocessor, a very long instruction word (“VLIW”) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In at least one embodiment, processor 1002 may be coupled to a processor bus 1010 that may transmit data signals between processor 1002 and other components in computer system 1000. [0217] In at least one embodiment, processor 1002 may include, without limitation, a Level 1 (“L1”) internal cache memory (“cache”) 1004. In at least one embodiment, processor 1002 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 1002. Other embodiments may also include a combination of both internal and external caches depending on particular implementation and needs. 
In at least one embodiment, register file 1006 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and instruction pointer register. [0218] In at least one embodiment, execution unit 1008, including, without limitation, logic to perform integer and floating point operations, also resides in processor 1002. In at least one embodiment, processor 1002 may also include a microcode (“ucode”) read only memory (“ROM”) that stores microcode for certain macro instructions. In at least one embodiment, execution unit 1008 may include logic to handle a packed instruction set 1009. In at least one embodiment, by including packed instruction set 1009 in instruction set of a general-purpose processor 1002, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in a general-purpose processor 1002. In one or more embodiments, many multimedia applications may be accelerated and executed more efficiently by using full width of a processor's data bus for performing operations on packed data, which may eliminate need to transfer smaller units of data across processor's data bus to perform one or more operations one data element at a time. [0219] In at least one embodiment, execution unit 1008 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 1000 may include, without limitation, a memory 1020. In at least one embodiment, memory 1020 may be implemented as a Dynamic Random Access Memory (“DRAM”) device, a Static Random Access Memory (“SRAM”) device, flash memory device, or other memory device. In at least one embodiment, memory 1020 may store instruction(s) 1019 and/or data 1021 represented by data signals that may be executed by processor 1002. [0220] In at least one embodiment, system logic chip may be coupled to processor bus 1010 and memory 1020. 
In at least one embodiment, system logic chip may include, without limitation, a memory controller hub (“MCH”) 1016, and processor 1002 may communicate with MCH 1016 via processor bus 1010. In at least one embodiment, MCH 1016 may provide a high bandwidth memory path 1018 to memory 1020 for instruction and data storage and for storage of graphics commands, data and textures. In at least one embodiment, MCH 1016 may direct data signals between processor 1002, memory 1020, and other components in computer system 1000 and to bridge data signals between processor bus 1010, memory 1020, and a system I/O 1022. In at least one embodiment, system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, MCH 1016 may be coupled to memory 1020 through a high bandwidth memory path 1018 and graphics/video card 1012 may be coupled to MCH 1016 through an Accelerated Graphics Port (“AGP”) interconnect 1014. [0221] In at least one embodiment, computer system 1000 may use system I/O 1022 that is a proprietary hub interface bus to couple MCH 1016 to I/O controller hub (“ICH”) 1030. In at least one embodiment, ICH 1030 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 1020, chipset, and processor 1002. Examples may include, without limitation, an audio controller 1029, a firmware hub (“flash BIOS”) 1028, a wireless transceiver 1026, a data storage 1024, a legacy I/O controller 1023 containing user input and keyboard interfaces, a serial expansion port 1027, such as Universal Serial Bus (“USB”), and a network controller 1034. Data storage 1024 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. 
[0222] In at least one embodiment, FIG.10 illustrates a system, which includes interconnected hardware devices or “chips”, whereas in other embodiments, FIG.10 may illustrate an exemplary System on a Chip (“SoC”). In at least one embodiment, devices illustrated in FIG.10 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of system 1000 are interconnected using compute express link (CXL) interconnects. [0223] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.10 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0224] In at least one embodiment, at least one component shown or described with respect to FIG.10 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system 1000 of FIG.10. 
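As a rough illustration of the repeatedly referenced technique of generating a feature map through tensor contraction, the sketch below rearranges a 2-D activation tensor into patches over the convolved modes (a constructed "second activation tensor") and produces the feature map with a single contraction against the filter tensor via `numpy.einsum`. The shapes, names, and single-channel simplification are assumptions for illustration, not the claimed implementation.

```python
# Illustrative sketch: 2-D convolution expressed as a tensor contraction.
import numpy as np

def conv2d_as_contraction(activation, filters):
    """activation: (H, W); filters: (kH, kW) -> feature map (H-kH+1, W-kW+1)."""
    H, W = activation.shape
    kH, kW = filters.shape
    oH, oW = H - kH + 1, W - kW + 1
    # Construct the second activation tensor: one kH x kW patch per
    # output position (the convolved modes become explicit axes k, l).
    patches = np.empty((oH, oW, kH, kW))
    for i in range(oH):
        for j in range(oW):
            patches[i, j] = activation[i:i + kH, j:j + kW]
    # Feature map via a single contraction over the convolved modes.
    return np.einsum("ijkl,kl->ij", patches, filters)

x = np.arange(16.0).reshape(4, 4)   # toy single-channel activation
k = np.ones((2, 2))                 # toy filter: sums each 2x2 patch
feature_map = conv2d_as_contraction(x, k)
print(feature_map.shape)  # (3, 3)
```

Optimized implementations would fuse the patch construction into the contraction (or use strided views) rather than materializing `patches`; the sketch keeps the two steps separate to mirror the construct-then-contract description in the text.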
[0225] FIG.11 is a block diagram illustrating an electronic device 1100 for utilizing a processor 1110, according to at least one embodiment. In at least one embodiment, electronic device 1100 may be, for example and without limitation, a notebook, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device. [0226] In at least one embodiment, system 1100 may include, without limitation, processor 1110 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor 1110 may be coupled using a bus or interface, such as an I2C bus, a System Management Bus (“SMBus”), a Low Pin Count (LPC) bus, a Serial Peripheral Interface (“SPI”), a High Definition Audio (“HDA”) bus, a Serial Advance Technology Attachment (“SATA”) bus, a Universal Serial Bus (“USB”) (versions 1, 2, 3), or a Universal Asynchronous Receiver/Transmitter (“UART”) bus. In at least one embodiment, FIG.11 illustrates a system, which includes interconnected hardware devices or “chips”, whereas in other embodiments, FIG.11 may illustrate an exemplary System on a Chip (“SoC”). In at least one embodiment, devices illustrated in FIG.11 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of FIG.11 are interconnected using compute express link (CXL) interconnects. 
[0227] In at least one embodiment, FIG.11 may include a display 1124, a touch screen 1125, a touch pad 1130, a Near Field Communications unit (“NFC”) 1145, a sensor hub 1140, a thermal sensor 1146, an Express Chipset (“EC”) 1135, a Trusted Platform Module (“TPM”) 1138, BIOS/firmware/flash memory (“BIOS, FW Flash”) 1122, a DSP 1160, a drive (“SSD or HDD”) 1120 such as a Solid State Disk (“SSD”) or a Hard Disk Drive (“HDD”), a wireless local area network unit (“WLAN”) 1150, a Bluetooth unit 1152, a Wireless Wide Area Network unit (“WWAN”) 1156, a Global Positioning System (GPS) 1155, a camera (“USB 3.0 camera”) 1154 such as a USB 3.0 camera, or a Low Power Double Data Rate (“LPDDR”) memory unit (“LPDDR3”) 1115 implemented in, for example, LPDDR3 standard. These components may each be implemented in any suitable manner. [0228] In at least one embodiment, other components may be communicatively coupled to processor 1110 through components discussed above. In at least one embodiment, an accelerometer 1141, Ambient Light Sensor (“ALS”) 1142, compass 1143, and a gyroscope 1144 may be communicatively coupled to sensor hub 1140. In at least one embodiment, thermal sensor 1139, a fan 1137, a keyboard 1146, and a touch pad 1130 may be communicatively coupled to EC 1135. In at least one embodiment, a speaker 1163, headphones 1164, and a microphone (“mic”) 1165 may be communicatively coupled to an audio unit (“audio codec and class d amp”) 1164, which may in turn be communicatively coupled to DSP 1160. In at least one embodiment, audio unit 1164 may include, for example and without limitation, an audio coder/decoder (“codec”) and a class D amplifier. In at least one embodiment, SIM card (“SIM”) 1157 may be communicatively coupled to WWAN unit 1156. In at least one embodiment, components such as WLAN unit 1150 and Bluetooth unit 1152, as well as WWAN unit 1156, may be implemented in a Next Generation Form Factor (“NGFF”). 
[0229] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.11 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0230] In at least one embodiment, at least one component shown or described with respect to FIG.11 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system 1100 of FIG.11. [0231] FIG.12 illustrates a computer system 1200, according to at least one embodiment. In at least one embodiment, computer system 1200 is configured to implement various processes and methods described throughout this disclosure. 
[0232] In at least one embodiment, computer system 1200 comprises, without limitation, at least one central processing unit (“CPU”) 1202 that is connected to a communication bus 1210 implemented using any suitable protocol, such as PCI (“Peripheral Component Interconnect”), peripheral component interconnect express (“PCI-Express”), AGP (“Accelerated Graphics Port”), HyperTransport, or any other bus or point-to-point communication protocol(s). In at least one embodiment, computer system 1200 includes, without limitation, a main memory 1204; control logic (e.g., implemented as hardware, software, or a combination thereof) and data are stored in main memory 1204, which may take the form of random access memory (“RAM”). In at least one embodiment, a network interface subsystem (“network interface”) 1222 provides an interface to other computing devices and networks for receiving data from and transmitting data to other systems from computer system 1200. [0233] In at least one embodiment, computer system 1200 includes, without limitation, input devices 1208, parallel processing system 1212, and display devices 1206, which can be implemented using a conventional cathode ray tube (“CRT”), liquid crystal display (“LCD”), light emitting diode (“LED”), plasma display, or other suitable display technologies. In at least one embodiment, user input is received from input devices 1208 such as a keyboard, mouse, touchpad, microphone, and more. In at least one embodiment, each of the foregoing modules can be situated on a single semiconductor platform to form a processing system. [0234] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. 
In at least one embodiment, inference and/or training logic 615 may be used in system FIG.12 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0235] In at least one embodiment, at least one component shown or described with respect to FIG.12 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system 1200 of FIG.12. [0236] FIG.13 illustrates a computer system 1300, according to at least one embodiment. In at least one embodiment, computer system 1300 includes, without limitation, a computer 1310 and a USB stick 1320. In at least one embodiment, computer 1310 may include, without limitation, any number and type of processor(s) (not shown) and a memory (not shown). In at least one embodiment, computer 1310 includes, without limitation, a server, a cloud instance, a laptop, and a desktop computer. [0237] In at least one embodiment, USB stick 1320 includes, without limitation, a processing unit 1330, a USB interface 1340, and USB interface logic 1350. In at least one embodiment, processing unit 1330 may be any instruction execution system, apparatus, or device capable of executing instructions. 
In at least one embodiment, processing unit 1330 may include, without limitation, any number and type of processing cores (not shown). In at least one embodiment, processing unit 1330 comprises an application specific integrated circuit (“ASIC”) that is optimized to perform any amount and type of operations associated with machine learning. For instance, in at least one embodiment, processing unit 1330 is a tensor processing unit (“TPU”) that is optimized to perform machine learning inference operations. In at least one embodiment, processing unit 1330 is a vision processing unit (“VPU”) that is optimized to perform machine vision and machine learning inference operations. [0238] In at least one embodiment, USB interface 1340 may be any type of USB connector or USB socket. For instance, in at least one embodiment, USB interface 1340 is a USB 3.0 Type-C socket for data and power. In at least one embodiment, USB interface 1340 is a USB 3.0 Type-A connector. In at least one embodiment, USB interface logic 1350 may include any amount and type of logic that enables processing unit 1330 to interface with other devices (e.g., computer 1310) via USB connector 1340. [0239] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system FIG.13 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0240] In at least one embodiment, at least one component shown or described with respect to FIG.13 is utilized to implement techniques described in connection with FIGS.1-5. 
In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system 1300 of FIG.13. [0241] FIG.14A illustrates an exemplary architecture in which a plurality of GPUs 1410- 1413 is communicatively coupled to a plurality of multi-core processors 1405-1406 over high-speed links 1440-1443 (e.g., buses, point-to-point interconnects, etc.). In one embodiment, high-speed links 1440-1443 support a communication throughput of 4GB/s, 30GB/s, 80GB/s or higher. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. [0242] In addition, and in one embodiment, two or more of GPUs 1410-1413 are interconnected over high-speed links 1429-1430, which may be implemented using same or different protocols/links than those used for high-speed links 1440-1443. Similarly, two or more of multi-core processors 1405-1406 may be connected over high speed link 1428 which may be symmetric multi-processor (SMP) buses operating at 20GB/s, 30GB/s, 120GB/s or higher. Alternatively, all communication between various system components shown in FIG. 14A may be accomplished using same protocols/links (e.g., over a common interconnection fabric). 
[0243] In one embodiment, each multi-core processor 1405-1406 is communicatively coupled to a processor memory 1401-1402, via memory interconnects 1426-1427, respectively, and each GPU 1410-1413 is communicatively coupled to GPU memory 1420- 1423 over GPU memory interconnects 1450-1453, respectively. Memory interconnects 1426- 1427 and 1450-1453 may utilize same or different memory access technologies. By way of example, and not limitation, processor memories 1401-1402 and GPU memories 1420-1423 may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In one embodiment, some portion of processor memories 1401-1402 may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy). [0244] As described herein, although various processors 1405-1406 and GPUs 1410-1413 may be physically coupled to a particular memory 1401-1402, 1420-1423, respectively, a unified memory architecture may be implemented in which a same virtual system address space (also referred to as “effective address” space) is distributed among various physical memories. For example, processor memories 1401-1402 may each comprise 64GB of system memory address space and GPU memories 1420-1423 may each comprise 32GB of system memory address space (resulting in a total of 256GB addressable memory in this example). [0245] FIG.14B illustrates additional details for an interconnection between a multi-core processor 1407 and a graphics acceleration module 1446 in accordance with one exemplary embodiment. Graphics acceleration module 1446 may include one or more GPU chips integrated on a line card which is coupled to processor 1407 via high-speed link 1440. Alternatively, graphics acceleration module 1446 may be integrated on a same package or chip as processor 1407. 
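The unified address space example in paragraph [0244] (two 64GB processor memories and four 32GB GPU memories totaling 256GB) can be checked with a short sketch. The region names and the contiguous layout are illustrative assumptions, not part of this disclosure:

```python
# Hypothetical layout of the unified ("effective") address space described
# above: contiguous portions assigned to each processor and GPU memory.
GB = 1 << 30
regions = [("proc_mem_1401", 64 * GB), ("proc_mem_1402", 64 * GB),
           ("gpu_mem_1420", 32 * GB), ("gpu_mem_1421", 32 * GB),
           ("gpu_mem_1422", 32 * GB), ("gpu_mem_1423", 32 * GB)]

address_map, base = {}, 0
for name, size in regions:
    address_map[name] = (base, base + size)  # [start, end) in effective space
    base += size

def backing_memory(vaddr):
    """Return which physical memory backs an effective address."""
    for name, (start, end) in address_map.items():
        if start <= vaddr < end:
            return name
    raise ValueError("address outside unified space")

assert base == 256 * GB                        # total addressable, as in the text
assert backing_memory(100 * GB) == "proc_mem_1402"
```

Because every processor and GPU shares this one effective space, any of them can reach any physical memory through a virtual address mapped to it, which is the programmability benefit the paragraph describes.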
[0246] In at least one embodiment, illustrated processor 1407 includes a plurality of cores 1460A-1460D, each with a translation lookaside buffer 1461A-1461D and one or more caches 1462A-1462D. In at least one embodiment, cores 1460A-1460D may include various other components for executing instructions and processing data which are not illustrated. Caches 1462A-1462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 1456 may be included in caches 1462A-1462D and shared by sets of cores 1460A-1460D. For example, one embodiment of processor 1407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one or more L2 and L3 caches are shared by two adjacent cores. Processor 1407 and graphics acceleration module 1446 connect with system memory 1414, which may include processor memories 1401-1402 of FIG.14A. [0247] Coherency is maintained for data and instructions stored in various caches 1462A-1462D, 1456 and system memory 1414 via inter-core communication over a coherence bus 1464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over coherence bus 1464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over coherence bus 1464 to snoop cache accesses. [0248] In one embodiment, a proxy circuit 1425 communicatively couples graphics acceleration module 1446 to coherence bus 1464, allowing graphics acceleration module 1446 to participate in a cache coherence protocol as a peer of cores 1460A-1460D. In particular, an interface 1435 provides connectivity to proxy circuit 1425 over high-speed link 1440 (e.g., a PCIe bus, NVLink, etc.) and an interface 1437 connects graphics acceleration module 1446 to link 1440. 
[0249] In one implementation, an accelerator integration circuit 1436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 1431, 1432, N of graphics acceleration module 1446. Graphics processing engines 1431, 1432, N may each comprise a separate graphics processing unit (GPU). Alternatively, graphics processing engines 1431, 1432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one embodiment, graphics acceleration module 1446 may be a GPU with a plurality of graphics processing engines 1431-1432, N or graphics processing engines 1431- 1432, N may be individual GPUs integrated on a common package, line card, or chip. [0250] In one embodiment, accelerator integration circuit 1436 includes a memory management unit (MMU) 1439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 1414. MMU 1439 may also include a translation lookaside buffer (TLB) (not shown) for caching virtual/effective to physical/real address translations. In one implementation, a cache 1438 stores commands and data for efficient access by graphics processing engines 1431-1432, N. In one embodiment, data stored in cache 1438 and graphics memories 1433-1434, M is kept coherent with core caches 1462A-1462D, 1456 and system memory 1414. As mentioned, this may be accomplished via proxy circuit 1425 on behalf of cache 1438 and memories 1433- 1434, M (e.g., sending updates to cache 1438 related to modifications/accesses of cache lines on processor caches 1462A-1462D, 1456 and receiving updates from cache 1438). 
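The effective-to-real translation path of paragraph [0250], in which a TLB caches recent virtual/effective-to-physical/real translations and a table walk fills it on a miss, can be sketched as follows. The page size, page-table layout, and class names are assumptions for illustration only:

```python
# Hedged sketch of an MMU with a TLB, as described above: the TLB caches
# recent translations; a miss falls back to the page table.
PAGE_SIZE = 4096  # assumed page size

class MMU:
    def __init__(self, page_table):
        self.page_table = page_table   # virtual page number -> physical page number
        self.tlb = {}                  # small cache of recent translations

    def translate(self, vaddr):
        """Translate a virtual/effective address to a physical/real address."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.tlb:        # TLB miss: walk the page table
            self.tlb[vpn] = self.page_table[vpn]
        return self.tlb[vpn] * PAGE_SIZE + offset

mmu = MMU({0: 7, 1: 3})
assert mmu.translate(4100) == 3 * PAGE_SIZE + 4   # vpn 1, offset 4
```

After the first access to a page, the translation is served from the TLB without another table walk, which is the caching benefit the paragraph attributes to MMU 1439.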
[0251] A set of registers 1445 store context data for threads executed by graphics processing engines 1431-1432, N and a context management circuit 1448 manages thread contexts. For example, context management circuit 1448 may perform save and restore operations to save and restore contexts of various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that a second thread can be executed by a graphics processing engine). For example, on a context switch, context management circuit 1448 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In one embodiment, an interrupt management circuit 1447 receives and processes interrupts received from system devices. [0252] In one implementation, virtual/effective addresses from a graphics processing engine 1431 are translated to real/physical addresses in system memory 1414 by MMU 1439. One embodiment of accelerator integration circuit 1436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 1446 and/or other accelerator devices. Graphics accelerator module 1446 may be dedicated to a single application executed on processor 1407 or may be shared between multiple applications. In one embodiment, a virtualized graphics execution environment is presented in which resources of graphics processing engines 1431-1432, N are shared with multiple applications or virtual machines (VMs). In at least one embodiment, resources may be subdivided into “slices” which are allocated to different VMs and/or applications based on processing requirements and priorities associated with VMs and/or applications. [0253] In at least one embodiment, accelerator integration circuit 1436 performs as a bridge to a system for graphics acceleration module 1446 and provides address translation and system memory cache services. 
In addition, accelerator integration circuit 1436 may provide virtualization facilities for a host processor to manage virtualization of graphics processing engines 1431-1432, interrupts, and memory management. [0254] Because hardware resources of graphics processing engines 1431-1432, N are mapped explicitly to a real address space seen by host processor 1407, any host processor can address these resources directly using an effective address value. One function of accelerator integration circuit 1436, in one embodiment, is physical separation of graphics processing engines 1431-1432, N so that they appear to a system as independent units. [0255] In at least one embodiment, one or more graphics memories 1433-1434, M are coupled to each of graphics processing engines 1431-1432, N, respectively. Graphics memories 1433-1434, M store instructions and data being processed by each of graphics processing engines 1431-1432, N. Graphics memories 1433-1434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. [0256] In one embodiment, to reduce data traffic over link 1440, biasing techniques are used to ensure that data stored in graphics memories 1433-1434, M is data which will be used most frequently by graphics processing engines 1431-1432, N and preferably not used by cores 1460A-1460D (at least not frequently). Similarly, a biasing mechanism attempts to keep data needed by cores (and preferably not graphics processing engines 1431-1432, N) within caches 1462A-1462D, 1456 of cores and system memory 1414. [0257] FIG.14C illustrates another exemplary embodiment in which accelerator integration circuit 1436 is integrated within processor 1407. 
In this embodiment, graphics processing engines 1431-1432, N communicate directly over high-speed link 1440 to accelerator integration circuit 1436 via interface 1437 and interface 1435 (which, again, may utilize any form of bus or interface protocol). Accelerator integration circuit 1436 may perform same operations as those described with respect to FIG.14B, but potentially at a higher throughput given its close proximity to coherence bus 1464 and caches 1462A-1462D, 1456. One embodiment supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization), which may include programming models which are controlled by accelerator integration circuit 1436 and programming models which are controlled by graphics acceleration module 1446. [0258] In at least one embodiment, graphics processing engines 1431-1432, N are dedicated to a single application or process under a single operating system. In at least one embodiment, a single application can funnel other application requests to graphics processing engines 1431-1432, N, providing virtualization within a VM/partition. [0259] In at least one embodiment, graphics processing engines 1431-1432, N, may be shared by multiple VM/application partitions. In at least one embodiment, shared models may use a system hypervisor to virtualize graphics processing engines 1431-1432, N to allow access by each operating system. For single-partition systems without a hypervisor, graphics processing engines 1431-1432, N are owned by an operating system. In at least one embodiment, an operating system can virtualize graphics processing engines 1431-1432, N to provide access to each process or application. [0260] In at least one embodiment, graphics acceleration module 1446 or an individual graphics processing engine 1431-1432, N selects a process element using a process handle. 
In one embodiment, process elements are stored in system memory 1414 and are addressable using effective address to real address translation techniques described herein. In at least one embodiment, a process handle may be an implementation-specific value provided to a host process when registering its context with graphics processing engine 1431-1432, N (that is, calling system software to add a process element to a process element linked list). In at least one embodiment, lower 16 bits of a process handle may be an offset of the process element within a process element linked list. [0261] FIG.14D illustrates an exemplary accelerator integration slice 1490. As used herein, a “slice” comprises a specified portion of processing resources of accelerator integration circuit 1436. Application effective address space 1482 within system memory 1414 stores process elements 1483. In one embodiment, process elements 1483 are stored in response to GPU invocations 1481 from applications 1480 executed on processor 1407. A process element 1483 contains process state for corresponding application 1480. A work descriptor (WD) 1484 contained in process element 1483 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1484 is a pointer to a job request queue in an application’s address space 1482. [0262] Graphics acceleration module 1446 and/or individual graphics processing engines 1431-1432, N can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process state and sending a WD 1484 to a graphics acceleration module 1446 to start a job in a virtualized environment may be included. [0263] In at least one embodiment, a dedicated-process programming model is implementation-specific. In this model, a single process owns graphics acceleration module 1446 or an individual graphics processing engine 1431. 
Because graphics acceleration module 1446 is owned by a single process, a hypervisor initializes accelerator integration circuit 1436 for an owning partition and an operating system initializes accelerator integration circuit 1436 for an owning process when graphics acceleration module 1446 is assigned. [0264] In operation, a WD fetch unit 1491 in accelerator integration slice 1490 fetches next WD 1484 which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 1446. Data from WD 1484 may be stored in registers 1445 and used by MMU 1439, interrupt management circuit 1447 and/or context management circuit 1448 as illustrated. For example, one embodiment of MMU 1439 includes segment/page walk circuitry for accessing segment/page tables 1486 within OS virtual address space 1485. Interrupt management circuit 1447 may process interrupt events 1492 received from graphics acceleration module 1446. When performing graphics operations, an effective address 1493 generated by a graphics processing engine 1431-1432, N is translated to a real address by MMU 1439. [0265] In one embodiment, a same set of registers 1445 are duplicated for each graphics processing engine 1431-1432, N and/or graphics acceleration module 1446 and may be initialized by a hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 1490. Exemplary registers that may be initialized by a hypervisor are shown in Table 1.

Table 1 – Hypervisor Initialized Registers

[0266] Exemplary registers that may be initialized by an operating system are shown in Table 2.

Table 2 – Operating System Initialized Registers

[0267] In one embodiment, each WD 1484 is specific to a particular graphics acceleration module 1446 and/or graphics processing engines 1431-1432, N. 
It contains all information required by a graphics processing engine 1431-1432, N to do work or it can be a pointer to a memory location where an application has set up a command queue of work to be completed. [0268] FIG.14E illustrates additional details for one exemplary embodiment of a shared model. This embodiment includes a hypervisor real address space 1498 in which a process element list 1499 is stored. Hypervisor real address space 1498 is accessible via a hypervisor 1496 which virtualizes graphics acceleration module engines for operating system 1495. [0269] In at least one embodiment, shared programming models allow for all or a subset of processes from all or a subset of partitions in a system to use a graphics acceleration module 1446. There are two programming models where graphics acceleration module 1446 is shared by multiple processes and partitions: time-sliced shared and graphics directed shared. [0270] In this model, system hypervisor 1496 owns graphics acceleration module 1446 and makes its function available to all operating systems 1495. For a graphics acceleration module 1446 to support virtualization by system hypervisor 1496, graphics acceleration module 1446 may adhere to the following: 1) An application’s job request must be autonomous (that is, state does not need to be maintained between jobs), or graphics acceleration module 1446 must provide a context save and restore mechanism; 2) An application’s job request is guaranteed by graphics acceleration module 1446 to complete in a specified amount of time, including any translation faults, or graphics acceleration module 1446 provides an ability to preempt processing of a job; 3) Graphics acceleration module 1446 must be guaranteed fairness between processes when operating in a directed shared programming model. 
[0271] In at least one embodiment, application 1480 is required to make an operating system 1495 system call with a graphics acceleration module 1446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). In at least one embodiment, graphics acceleration module 1446 type describes a targeted acceleration function for a system call. In at least one embodiment, graphics acceleration module 1446 type may be a system-specific value. In at least one embodiment, WD is formatted specifically for graphics acceleration module 1446 and can be in a form of a graphics acceleration module 1446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe work to be done by graphics acceleration module 1446. In one embodiment, an AMR value is an AMR state to use for a current process. In at least one embodiment, a value passed to an operating system is similar to an application setting an AMR. If accelerator integration circuit 1436 and graphics acceleration module 1446 implementations do not support a User Authority Mask Override Register (UAMOR), an operating system may apply a current UAMOR value to an AMR value before passing an AMR in a hypervisor call. Hypervisor 1496 may optionally apply a current Authority Mask Override Register (AMOR) value before placing an AMR into process element 1483. In at least one embodiment, CSRP is one of registers 1445 containing an effective address of an area in an application’s address space 1482 for graphics acceleration module 1446 to save and restore context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. In at least one embodiment, context save/restore area may be pinned system memory. 
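The authority-mask flow of paragraph [0271] can be sketched by modeling “apply” as a bitwise AND. That modeling is an assumption for illustration; the actual masking semantics of the AMR, UAMOR, and AMOR are implementation-specific:

```python
# Sketch of the authority-mask flow described above, with "apply" modeled
# as a bitwise AND -- an illustrative assumption, not a defined semantic.
def os_prepare_amr(amr, uamor):
    """OS applies the current UAMOR to the AMR value before the hypervisor call."""
    return amr & uamor

def hypervisor_place_amr(amr, amor):
    """Hypervisor optionally applies AMOR before placing the AMR into a process element."""
    return amr & amor

amr = 0b1111_0000                                  # AMR value from the application
amr = os_prepare_amr(amr, uamor=0b1010_1010)       # OS-side masking
amr = hypervisor_place_amr(amr, amor=0b1100_1100)  # hypervisor-side masking
assert amr == 0b1000_0000
```

Under this model, each layer can only narrow the authority carried by the AMR on its way into process element 1483, never widen it.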
[0272] Upon receiving a system call, operating system 1495 may verify that application 1480 has registered and been given authority to use graphics acceleration module 1446. Operating system 1495 then calls hypervisor 1496 with information shown in Table 3.

Table 3 – OS to Hypervisor Call Parameters

[0273] Upon receiving a hypervisor call, hypervisor 1496 verifies that operating system 1495 has registered and been given authority to use graphics acceleration module 1446. Hypervisor 1496 then puts process element 1483 into a process element linked list for a corresponding graphics acceleration module 1446 type. A process element may include information shown in Table 4.

Table 4 – Process Element Information

[0274] In at least one embodiment, hypervisor initializes a plurality of accelerator integration slice 1490 registers 1445. [0275] As illustrated in FIG.14F, in at least one embodiment, a unified memory is used, addressable via a common virtual memory address space used to access physical processor memories 1401-1402 and GPU memories 1420-1423. In this implementation, operations executed on GPUs 1410-1413 utilize a same virtual/effective memory address space to access processor memories 1401-1402 and vice versa, thereby simplifying programmability. In one embodiment, a first portion of a virtual/effective address space is allocated to processor memory 1401, a second portion to second processor memory 1402, a third portion to GPU memory 1420, and so on. In at least one embodiment, an entire virtual/effective memory space (sometimes referred to as an effective address space) is thereby distributed across each of processor memories 1401-1402 and GPU memories 1420-1423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory. 
[0276] In one embodiment, bias/coherence management circuitry 1494A-1494E within one or more of MMUs 1439A-1439E ensures cache coherence between caches of one or more host processors (e.g., 1405) and GPUs 1410-1413 and implements biasing techniques indicating physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 1494A-1494E are illustrated in FIG.14F, bias/coherence circuitry may be implemented within an MMU of one or more host processors 1405 and/or within accelerator integration circuit 1436. [0277] One embodiment allows GPU-attached memory 1420-1423 to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering performance drawbacks associated with full system cache coherence. In at least one embodiment, an ability for GPU-attached memory 1420-1423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows host processor 1405 software to set up operands and access computation results, without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. In at least one embodiment, an ability to access GPU attached memory 1420-1423 without cache coherence overheads can be critical to execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce an effective write bandwidth seen by a GPU 1410-1413. In at least one embodiment, efficiency of operand setup, efficiency of results access, and efficiency of GPU computation may play a role in determining effectiveness of a GPU offload. [0278] In at least one embodiment, selection of GPU bias and host processor bias is driven by a bias tracker data structure. 
A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at a granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. In at least one embodiment, a bias table may be implemented in a stolen memory range of one or more GPU-attached memories 1420-1423, with or without a bias cache in GPU 1410-1413 (e.g., to cache frequently/recently used entries of a bias table). Alternatively, an entire bias table may be maintained within a GPU. [0279] In at least one embodiment, a bias table entry associated with each access to GPU- attached memory 1420-1423 is accessed prior to actual access to a GPU memory, causing the following operations. First, local requests from GPU 1410-1413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 1420-1423. Local requests from a GPU that find their page in host bias are forwarded to processor 1405 (e.g., over a high- speed link as discussed above). In one embodiment, requests from processor 1405 that find a requested page in host processor bias complete a request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to GPU 1410-1413. In at least one embodiment, a GPU may then transition a page to a host processor bias if it is not currently using a page. In at least one embodiment, bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism. [0280] One mechanism for changing bias state employs an API call (e.g. OpenCL), which, in turn, calls a GPU’s device driver which, in turn, sends a message (or enqueues a command descriptor) to a GPU directing it to change a bias state and, for some transitions, perform a cache flushing operation in a host. 
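The page-granular bias table and request routing described in paragraphs [0278] and [0279] can be sketched as follows. The page size, the use of a single bias bit per page, and all names are illustrative assumptions:

```python
# Illustrative sketch of a page-granular bias table consulted before each
# access to GPU-attached memory, routing requests as described above.
PAGE_SIZE = 4096                    # assumed page size
GPU_BIAS, HOST_BIAS = 1, 0
bias_table = {}                     # page number -> bias bit (default: host bias)

def route_request(requester, addr):
    """Decide which memory services an access, based on the page's bias."""
    bias = bias_table.get(addr // PAGE_SIZE, HOST_BIAS)
    if requester == "gpu":
        # GPU requests that find their page in GPU bias go straight to GPU
        # memory; host-biased pages are forwarded to the host processor.
        return "gpu_memory" if bias == GPU_BIAS else "host_processor"
    # Host requests to host-biased pages complete like a normal memory read;
    # requests to GPU-biased pages may be forwarded to the owning GPU.
    return "normal_memory_read" if bias == HOST_BIAS else "gpu"

bias_table[5] = GPU_BIAS
assert route_request("gpu", 5 * PAGE_SIZE) == "gpu_memory"
assert route_request("host", 5 * PAGE_SIZE) == "gpu"
assert route_request("host", 0) == "normal_memory_read"
```

Keeping GPU-biased pages local to the GPU and host-biased pages local to the host is what minimizes traffic over the high-speed link, per the preceding paragraphs.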
In at least one embodiment, cache flushing operation is used for a transition from host processor 1405 bias to GPU bias, but is not used for an opposite transition. [0281] In one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by host processor 1405. To access these pages, processor 1405 may request access from GPU 1410 which may or may not grant access right away. Thus, to reduce communication between processor 1405 and GPU 1410 it is beneficial to ensure that GPU-biased pages are those which are required by a GPU but not host processor 1405 and vice versa. [0282] Hardware structure(s) 615 are used to perform one or more embodiments. Details regarding the hardware structure(s) 615 are provided herein in conjunction with FIGS.6A and/or 6B. [0283] In at least one embodiment, at least one component of FIG.14A, FIG.14B, 14C, 14D, 14E, and/or FIG.14F is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. [0284] FIG.15 illustrates exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.
[0285] FIG.15 is a block diagram illustrating an exemplary system on a chip integrated circuit 1500 that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, integrated circuit 1500 includes one or more application processor(s) 1505 (e.g., CPUs), at least one graphics processor 1510, and may additionally include an image processor 1515 and/or a video processor 1520, any of which may be a modular IP core. In at least one embodiment, integrated circuit 1500 includes peripheral or bus logic including a USB controller 1525, UART controller 1530, an SPI/SDIO controller 1535, and an I²S/I²C controller 1540. In at least one embodiment, integrated circuit 1500 can include a display device 1545 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1550 and a mobile industry processor interface (MIPI) display interface 1555. In at least one embodiment, storage may be provided by a flash memory subsystem 1560 including flash memory and a flash memory controller. In at least one embodiment, memory interface may be provided via a memory controller 1565 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 1570. [0286] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in integrated circuit 1500 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
[0287] In at least one embodiment, at least one component shown or described with respect to FIG.15 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system on chip integrated circuit 1500 of FIG.15. [0288] FIGS.16A-16B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. [0289] FIGS.16A-16B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG.16A illustrates an exemplary graphics processor 1610 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. FIG.16B illustrates an additional exemplary graphics processor 1640 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, graphics processor 1610 of FIG.16A is a low power graphics processor core. In at least one embodiment, graphics processor 1640 of FIG. 
16B is a higher performance graphics processor core. In at least one embodiment, each of graphics processors 1610, 1640 can be variants of graphics processor 1510 of FIG.15. [0290] In at least one embodiment, graphics processor 1610 includes a vertex processor 1605 and one or more fragment processor(s) 1615A-1615N (e.g., 1615A, 1615B, 1615C, 1615D, through 1615N-1, and 1615N). In at least one embodiment, graphics processor 1610 can execute different shader programs via separate logic, such that vertex processor 1605 is optimized to execute operations for vertex shader programs, while one or more fragment processor(s) 1615A-1615N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 1605 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s) 1615A-1615N use primitive and vertex data generated by vertex processor 1605 to produce a framebuffer that is displayed on a display device. In at least one embodiment, fragment processor(s) 1615A-1615N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct 3D API. [0291] In at least one embodiment, graphics processor 1610 additionally includes one or more memory management units (MMUs) 1620A-1620B, cache(s) 1625A-1625B, and circuit interconnect(s) 1630A-1630B. In at least one embodiment, one or more MMU(s) 1620A- 1620B provide for virtual to physical address mapping for graphics processor 1610, including for vertex processor 1605 and/or fragment processor(s) 1615A-1615N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s) 1625A-1625B. 
In at least one embodiment, one or more MMU(s) 1620A-1620B may be synchronized with other MMUs within system, including one or more MMUs associated with one or more application processor(s) 1505, image processors 1515, and/or video processors 1520 of FIG.15, such that each processor 1505-1520 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s) 1630A-1630B enable graphics processor 1610 to interface with other IP cores within SoC, either via an internal bus of SoC or via a direct connection. [0292] In at least one embodiment, graphics processor 1640 includes one or more MMU(s) 1620A-1620B, caches 1625A-1625B, and circuit interconnects 1630A-1630B of graphics processor 1610 of FIG.16A. In at least one embodiment, graphics processor 1640 includes one or more shader core(s) 1655A-1655N (e.g., 1655A, 1655B, 1655C, 1655D, 1655E, 1655F, through 1655N-1, and 1655N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor 1640 includes an inter-core task manager 1645, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1655A-1655N and a tiling unit 1658 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. [0293] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B.
In at least one embodiment, inference and/or training logic 615 may be used in graphics processor 1610 of FIG.16A and/or graphics processor 1640 of FIG.16B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0294] In at least one embodiment, at least one component shown or described with respect to FIG.16A and/or FIG.16B is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics processor 1610 of FIG.16A and/or graphics processor 1640 of FIG.16B. [0295] FIGS.17A-17B illustrate additional exemplary graphics processor logic according to embodiments described herein. FIG.17A illustrates a graphics core 1700 that may be included within graphics processor 1510 of FIG.15, in at least one embodiment, and may be a unified shader core 1655A-1655N as in FIG.16B in at least one embodiment. FIG.17B illustrates a highly-parallel general-purpose graphics processing unit 1730 suitable for deployment on a multi-chip module in at least one embodiment. [0296] In at least one embodiment, graphics core 1700 includes a shared instruction cache 1702, a texture unit 1718, and a cache/shared memory 1720 that are common to execution resources within graphics core 1700.
In at least one embodiment, graphics core 1700 can include multiple slices 1701A-1701N or partitions for each core, and a graphics processor can include multiple instances of graphics core 1700. Slices 1701A-1701N can include support logic including a local instruction cache 1704A-1704N, a thread scheduler 1706A-1706N, a thread dispatcher 1708A-1708N, and a set of registers 1710A-1710N. In at least one embodiment, slices 1701A-1701N can include a set of additional function units (AFUs 1712A-1712N), floating-point units (FPU 1714A-1714N), integer arithmetic logic units (ALUs 1716A-1716N), address computational units (ACU 1713A-1713N), double-precision floating-point units (DPFPU 1715A-1715N), and matrix processing units (MPU 1717A-1717N). [0297] In at least one embodiment, FPUs 1714A-1714N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 1715A-1715N perform double precision (64-bit) floating point operations. In at least one embodiment, ALUs 1716A-1716N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs 1717A-1717N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs 1717A-1717N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix to matrix multiplication (GEMM). In at least one embodiment, AFUs 1712A-1712N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., Sine, Cosine, etc.). [0298] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments.
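The mixed-precision matrix operations described for MPUs 1717A-1717N in [0297] can be illustrated with a minimal sketch: inputs are rounded to half precision while products are accumulated in a wider format. This is a software model for illustration only; the function names are hypothetical, and Python's `struct` format `'e'` is used solely to emulate IEEE 754 half-precision storage.

```python
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

def gemm_mixed(a, b):
    """C = A x B with half-precision inputs and a wide accumulator."""
    n, k, m = len(a), len(a[0]), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0  # accumulate in double precision (Python float)
            for p in range(k):
                # each operand is quantized to fp16 before the multiply,
                # mimicking fp16-in / wide-accumulate mixed precision
                acc += to_fp16(a[i][p]) * to_fp16(b[p][j])
            c[i][j] = acc
    return c
```

Accumulating in a wider format than the inputs is what keeps long GEMM reductions from losing precision even though the stored operands are only 16 bits.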
Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in graphics core 1700 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0299] In at least one embodiment, at least one component shown or described with respect to FIG.17A is utilized to implement techniques described in connection with FIGS. 1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics core 1700 of FIG.17A. [0300] FIG.17B illustrates a general-purpose graphics processing unit (GPGPU) 1730 that can be configured to enable highly-parallel compute operations to be performed by an array of graphics processing units, in at least one embodiment. In at least one embodiment, GPGPU 1730 can be linked directly to other instances of GPGPU 1730 to create a multi-GPU cluster to improve training speed for deep neural networks. In at least one embodiment, GPGPU 1730 includes a host interface 1732 to enable a connection with a host processor. In at least one embodiment, host interface 1732 is a PCI Express interface.
In at least one embodiment, host interface 1732 can be a vendor specific communications interface or communications fabric. In at least one embodiment, GPGPU 1730 receives commands from a host processor and uses a global scheduler 1734 to distribute execution threads associated with those commands to a set of compute clusters 1736A-1736H. In at least one embodiment, compute clusters 1736A-1736H share a cache memory 1738. In at least one embodiment, cache memory 1738 can serve as a higher-level cache for cache memories within compute clusters 1736A-1736H. [0301] In at least one embodiment, GPGPU 1730 includes memory 1744A-1744B coupled with compute clusters 1736A-1736H via a set of memory controllers 1742A-1742B. In at least one embodiment, memory 1744A-1744B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. [0302] In at least one embodiment, compute clusters 1736A-1736H each include a set of graphics cores, such as graphics core 1700 of FIG.17A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, in at least one embodiment, at least a subset of floating point units in each of compute clusters 1736A-1736H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations. [0303] In at least one embodiment, multiple instances of GPGPU 1730 can be configured to operate as a compute cluster. In at least one embodiment, communication used by compute clusters 1736A-1736H for synchronization and data exchange varies across embodiments.
In at least one embodiment, multiple instances of GPGPU 1730 communicate over host interface 1732. In at least one embodiment, GPGPU 1730 includes an I/O hub 1739 that couples GPGPU 1730 with a GPU link 1740 that enables a direct connection to other instances of GPGPU 1730. In at least one embodiment, GPU link 1740 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU 1730. In at least one embodiment GPU link 1740 couples with a high speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In at least one embodiment, multiple instances of GPGPU 1730 are located in separate data processing systems and communicate via a network device that is accessible via host interface 1732. In at least one embodiment GPU link 1740 can be configured to enable a connection to a host processor in addition to or as an alternative to host interface 1732. [0304] In at least one embodiment, GPGPU 1730 can be configured to train neural networks. In at least one embodiment, GPGPU 1730 can be used within an inferencing platform. In at least one embodiment, in which GPGPU 1730 is used for inferencing, GPGPU may include fewer compute clusters 1736A-1736H relative to when GPGPU is used for training a neural network. In at least one embodiment, memory technology associated with memory 1744A-1744B may differ between inferencing and training configurations, with higher bandwidth memory technologies devoted to training configurations. In at least one embodiment, inferencing configuration of GPGPU 1730 can support inferencing specific instructions. For example, in at least one embodiment, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which may be used during inferencing operations for deployed neural networks.
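The 8-bit integer dot product instructions mentioned in [0304] can be modeled as a 4-element dot-product-accumulate. The name `dp4a` and the 4-wide grouping are illustrative assumptions for this sketch, not instruction semantics taken from the embodiments.

```python
def dp4a(acc, a4, b4):
    """Accumulate the dot product of two 4-element signed int8 vectors."""
    assert len(a4) == len(b4) == 4
    for a, b in zip(a4, b4):
        assert -128 <= a <= 127 and -128 <= b <= 127  # int8 operand range
        acc += a * b  # products accumulate into a wide (32-bit) accumulator
    return acc
```

Quantized inference maps well onto such an instruction: four 8-bit multiplies and their accumulation complete in one operation, while the wide accumulator avoids overflow across long reductions.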
[0305] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in GPGPU 1730 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0306] In at least one embodiment, at least one component shown or described with respect to FIG.17B is utilized to implement techniques described in connection with FIGS. 1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in GPGPU 1730 of FIG.17B. [0307] FIG.18 is a block diagram illustrating a computing system 1800 according to at least one embodiment. In at least one embodiment, computing system 1800 includes a processing subsystem 1801 having one or more processor(s) 1802 and a system memory 1804 communicating via an interconnection path that may include a memory hub 1805. In at least one embodiment, memory hub 1805 may be a separate component within a chipset component or may be integrated within one or more processor(s) 1802. 
In at least one embodiment, memory hub 1805 couples with an I/O subsystem 1811 via a communication link 1806. In at least one embodiment, I/O subsystem 1811 includes an I/O hub 1807 that can enable computing system 1800 to receive input from one or more input device(s) 1808. In at least one embodiment, I/O hub 1807 can enable a display controller, which may be included in one or more processor(s) 1802, to provide outputs to one or more display device(s) 1810A. In at least one embodiment, one or more display device(s) 1810A coupled with I/O hub 1807 can include a local, internal, or embedded display device. [0308] In at least one embodiment, processing subsystem 1801 includes one or more parallel processor(s) 1812 coupled to memory hub 1805 via a bus or other communication link 1813. In at least one embodiment, communication link 1813 may be one of any number of standards based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. In at least one embodiment, one or more parallel processor(s) 1812 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. In at least one embodiment, one or more parallel processor(s) 1812 form a graphics processing subsystem that can output pixels to one of one or more display device(s) 1810A coupled via I/O Hub 1807. In at least one embodiment, one or more parallel processor(s) 1812 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 1810B. [0309] In at least one embodiment, a system storage unit 1814 can connect to I/O hub 1807 to provide a storage mechanism for computing system 1800. 
In at least one embodiment, an I/O switch 1816 can be used to provide an interface mechanism to enable connections between I/O hub 1807 and other components, such as a network adapter 1818 and/or wireless network adapter 1819 that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s) 1820. In at least one embodiment, network adapter 1818 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter 1819 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios. [0310] In at least one embodiment, computing system 1800 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, that may also be connected to I/O hub 1807. In at least one embodiment, communication paths interconnecting various components in FIG.18 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or other bus or point-to-point communication interfaces and/or protocol(s), such as NV-Link high-speed interconnect, or interconnect protocols. [0311] In at least one embodiment, one or more parallel processor(s) 1812 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). In at least one embodiment, one or more parallel processor(s) 1812 incorporate circuitry optimized for general purpose processing. In at least one embodiment, components of computing system 1800 may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, one or more parallel processor(s) 1812, memory hub 1805, processor(s) 1802, and I/O hub 1807 can be integrated into a system on chip (SoC) integrated circuit.
In at least one embodiment, components of computing system 1800 can be integrated into a single package to form a system in package (SIP) configuration. In at least one embodiment, at least a portion of components of computing system 1800 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system. [0312] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in system 1800 of FIG.18 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0313] In at least one embodiment, at least one component shown or described with respect to FIG.18 is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in processing subsystem 1801 of FIG.18. PROCESSORS [0314] FIG.19A illustrates a parallel processor 1900 according to at least one embodiment.
In at least one embodiment, various components of parallel processor 1900 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). In at least one embodiment, illustrated parallel processor 1900 is a variant of one or more parallel processor(s) 1812 shown in FIG.18 according to an exemplary embodiment. [0315] In at least one embodiment, parallel processor 1900 includes a parallel processing unit 1902. In at least one embodiment, parallel processing unit 1902 includes an I/O unit 1904 that enables communication with other devices, including other instances of parallel processing unit 1902. In at least one embodiment, I/O unit 1904 may be directly connected to other devices. In at least one embodiment, I/O unit 1904 connects with other devices via use of a hub or switch interface, such as memory hub 1805. In at least one embodiment, connections between memory hub 1805 and I/O unit 1904 form a communication link 1813. In at least one embodiment, I/O unit 1904 connects with a host interface 1906 and a memory crossbar 1916, where host interface 1906 receives commands directed to performing processing operations and memory crossbar 1916 receives commands directed to performing memory operations. [0316] In at least one embodiment, when host interface 1906 receives a command buffer via I/O unit 1904, host interface 1906 can direct work operations to perform those commands to a front end 1908. In at least one embodiment, front end 1908 couples with a scheduler 1910, which is configured to distribute commands or other work items to a processing cluster array 1912. In at least one embodiment, scheduler 1910 ensures that processing cluster array 1912 is properly configured and in a valid state before tasks are distributed to clusters of processing cluster array 1912.
In at least one embodiment, scheduler 1910 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller implemented scheduler 1910 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array 1912. In at least one embodiment, host software can provide workloads for scheduling on processing array 1912 via one of multiple graphics processing doorbells. In at least one embodiment, workloads can then be automatically distributed across processing array 1912 by scheduler 1910 logic within a microcontroller including scheduler 1910. [0317] In at least one embodiment, processing cluster array 1912 can include up to “N” processing clusters (e.g., cluster 1914A, cluster 1914B, through cluster 1914N). In at least one embodiment, each cluster 1914A-1914N of processing cluster array 1912 can execute a large number of concurrent threads. In at least one embodiment, scheduler 1910 can allocate work to clusters 1914A-1914N of processing cluster array 1912 using various scheduling and/or work distribution algorithms, which may vary depending on workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler 1910, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing cluster array 1912. In at least one embodiment, different clusters 1914A-1914N of processing cluster array 1912 can be allocated for processing different types of programs or for performing different types of computations. [0318] In at least one embodiment, processing cluster array 1912 can be configured to perform various types of parallel processing operations. In at least one embodiment, processing cluster array 1912 is configured to perform general-purpose parallel compute operations.
For example, in at least one embodiment, processing cluster array 1912 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations. [0319] In at least one embodiment, processing cluster array 1912 is configured to perform parallel graphics processing operations. In at least one embodiment, processing cluster array 1912 can include additional logic to support execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing cluster array 1912 can be configured to execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, parallel processing unit 1902 can transfer data from system memory via I/O unit 1904 for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., parallel processor memory 1922), then written back to system memory. [0320] In at least one embodiment, when parallel processing unit 1902 is used to perform graphics processing, scheduler 1910 can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters 1914A-1914N of processing cluster array 1912. In at least one embodiment, portions of processing cluster array 1912 can be configured to perform different types of processing.
For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters 1914A-1914N may be stored in buffers to allow intermediate data to be transmitted between clusters 1914A-1914N for further processing. [0321] In at least one embodiment, processing cluster array 1912 can receive processing tasks to be executed via scheduler 1910, which receives commands defining processing tasks from front end 1908. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler 1910 may be configured to fetch indices corresponding to tasks or may receive indices from front end 1908. In at least one embodiment, front end 1908 can be configured to ensure processing cluster array 1912 is configured to a valid state before a workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated. [0322] In at least one embodiment, each of one or more instances of parallel processing unit 1902 can couple with parallel processor memory 1922. In at least one embodiment, parallel processor memory 1922 can be accessed via memory crossbar 1916, which can receive memory requests from processing cluster array 1912 as well as I/O unit 1904. In at least one embodiment, memory crossbar 1916 can access parallel processor memory 1922 via a memory interface 1918. 
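The division of a processing workload into approximately equal sized tasks, mentioned above for distributing graphics work to clusters 1914A-1914N, can be sketched as a simple range-partitioning helper. The function below is an assumed illustration, not the scheduler's actual partitioning logic:

```python
def split_workload(total_items, num_tasks):
    """Divide total_items work items into num_tasks contiguous (start, end) ranges
    whose sizes differ by at most one."""
    base, extra = divmod(total_items, num_tasks)
    tasks, start = [], 0
    for i in range(num_tasks):
        size = base + (1 if i < extra else 0)  # spread the remainder over leading tasks
        tasks.append((start, start + size))
        start += size
    return tasks

tasks = split_workload(100, 8)
```

For 100 items over 8 tasks this yields four tasks of 13 items followed by four of 12, covering the whole range with no gaps or overlaps.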
In at least one embodiment, memory interface 1918 can include multiple partition units (e.g., partition unit 1920A, partition unit 1920B, through partition unit 1920N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 1922. In at least one embodiment, a number of partition units 1920A-1920N is configured to be equal to a number of memory units, such that a first partition unit 1920A has a corresponding first memory unit 1924A, a second partition unit 1920B has a corresponding memory unit 1924B, and an Nth partition unit 1920N has a corresponding Nth memory unit 1924N. In at least one embodiment, a number of partition units 1920A-1920N may not be equal to a number of memory devices. [0323] In at least one embodiment, memory units 1924A-1924N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In at least one embodiment, memory units 1924A-1924N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). In at least one embodiment, render targets, such as frame buffers or texture maps, may be stored across memory units 1924A-1924N, allowing partition units 1920A-1920N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory 1922. In at least one embodiment, a local instance of parallel processor memory 1922 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory. [0324] In at least one embodiment, any one of clusters 1914A-1914N of processing cluster array 1912 can process data that will be written to any of memory units 1924A-1924N within parallel processor memory 1922.
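The striping of a render target across partition units so that writes proceed in parallel can be illustrated with a simple address-interleaving sketch. The 256-byte stripe size below is an assumed value for illustration, not a property of parallel processor memory 1922:

```python
def partition_for_address(addr, num_partitions, stripe_bytes=256):
    """Map a linear address to the partition unit owning its stripe.
    Consecutive stripes rotate round-robin across partitions."""
    return (addr // stripe_bytes) % num_partitions

# Four consecutive 256-byte stripes land on four different partition units,
# so writes to adjacent regions of a render target can proceed in parallel.
stripe_owners = [partition_for_address(a, 4) for a in range(0, 1024, 256)]
```

Because adjacent stripes map to different partition units, a burst of writes to one render target is spread over all memory units rather than serializing on a single one.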
In at least one embodiment, memory crossbar 1916 can be configured to transfer an output of each cluster 1914A-1914N to any partition unit 1920A-1920N or to another cluster 1914A-1914N, which can perform additional processing operations on an output. In at least one embodiment, each cluster 1914A-1914N can communicate with memory interface 1918 through memory crossbar 1916 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 1916 has a connection to memory interface 1918 to communicate with I/O unit 1904, as well as a connection to a local instance of parallel processor memory 1922, enabling processing units within different processing clusters 1914A-1914N to communicate with system memory or other memory that is not local to parallel processing unit 1902. In at least one embodiment, memory crossbar 1916 can use virtual channels to separate traffic streams between clusters 1914A-1914N and partition units 1920A-1920N. [0325] In at least one embodiment, multiple instances of parallel processing unit 1902 can be provided on a single add-in card, or multiple add-in cards can be interconnected. In at least one embodiment, different instances of parallel processing unit 1902 can be configured to inter-operate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit 1902 can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit 1902 or parallel processor 1900 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems. 
[0326] FIG.19B is a block diagram of a partition unit 1920 according to at least one embodiment. In at least one embodiment, partition unit 1920 is an instance of one of partition units 1920A-1920N of FIG.19A. In at least one embodiment, partition unit 1920 includes an L2 cache 1921, a frame buffer interface 1925, and a ROP 1926 (raster operations unit). L2 cache 1921 is a read/write cache that is configured to perform load and store operations received from memory crossbar 1916 and ROP 1926. In at least one embodiment, read misses and urgent write-back requests are output by L2 cache 1921 to frame buffer interface 1925 for processing. In at least one embodiment, updates can also be sent to a frame buffer via frame buffer interface 1925 for processing. In at least one embodiment, frame buffer interface 1925 interfaces with one of memory units in parallel processor memory, such as memory units 1924A-1924N of FIG.19 (e.g., within parallel processor memory 1922). [0327] In at least one embodiment, ROP 1926 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. In at least one embodiment, ROP 1926 then outputs processed graphics data that is stored in graphics memory. In at least one embodiment, ROP 1926 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. In at least one embodiment, compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. In at least one embodiment, a type of compression that is performed by ROP 1926 can vary based on statistical characteristics of data to be compressed. For example, in at least one embodiment, delta color compression is performed on depth and color data on a per-tile basis. [0328] In at least one embodiment, ROP 1926 is included within each processing cluster (e.g., cluster 1914A-1914N of FIG.19) instead of within partition unit 1920.
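The per-tile delta compression mentioned above can be sketched in a few lines: a tile is encoded as one base value plus small per-pixel differences, which is lossless and compact when neighboring pixels are similar. This is a generic illustration of the delta idea, not ROP 1926's actual compression format:

```python
def delta_compress(tile):
    """Losslessly encode a tile of pixel values as (base, per-pixel deltas)."""
    base = tile[0]
    return base, [p - base for p in tile]

def delta_decompress(base, deltas):
    """Reconstruct the original tile exactly from the delta encoding."""
    return [base + d for d in deltas]

tile = [100, 101, 99, 100, 102]        # neighboring pixels differ only slightly
base, deltas = delta_compress(tile)     # deltas stay small: [0, 1, -1, 0, 2]
restored = delta_decompress(base, deltas)
```

Because the deltas span a much smaller numeric range than the raw pixels, they can be stored in fewer bits, which is where the bandwidth saving comes from.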
In at least one embodiment, read and write requests for pixel data are transmitted over memory crossbar 1916 instead of pixel fragment data. In at least one embodiment, processed graphics data may be displayed on a display device, such as one of one or more display device(s) 1810 of FIG.18, routed for further processing by processor(s) 1802, or routed for further processing by one of processing entities within parallel processor 1900 of FIG.19A. [0329] FIG.19C is a block diagram of a processing cluster 1914 within a parallel processing unit according to at least one embodiment. In at least one embodiment, a processing cluster is an instance of one of processing clusters 1914A-1914N of FIG.19. In at least one embodiment, processing cluster 1914 can be configured to execute many threads in parallel, where term “thread” refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of processing clusters. [0330] In at least one embodiment, operation of processing cluster 1914 can be controlled via a pipeline manager 1932 that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager 1932 receives instructions from scheduler 1910 of FIG.19 and manages execution of those instructions via a graphics multiprocessor 1934 and/or a texture unit 1936. In at least one embodiment, graphics multiprocessor 1934 is an exemplary instance of a SIMT parallel processor. 
However, in at least one embodiment, various types of SIMT parallel processors of differing architectures may be included within processing cluster 1914. In at least one embodiment, one or more instances of graphics multiprocessor 1934 can be included within a processing cluster 1914. In at least one embodiment, graphics multiprocessor 1934 can process data and a data crossbar 1940 can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager 1932 can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar 1940. [0331] In at least one embodiment, each graphics multiprocessor 1934 within processing cluster 1914 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present. [0332] In at least one embodiment, instructions transmitted to processing cluster 1914 constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, thread group executes a program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 1934.
In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor 1934. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor 1934. In at least one embodiment, when a thread group includes more threads than number of processing engines within graphics multiprocessor 1934, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can be executed concurrently on a graphics multiprocessor 1934. [0333] In at least one embodiment, graphics multiprocessor 1934 includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor 1934 can forego an internal cache and use a cache memory (e.g., L1 cache 1948) within processing cluster 1914. In at least one embodiment, each graphics multiprocessor 1934 also has access to L2 caches within partition units (e.g., partition units 1920A-1920N of FIG.19) that are shared among all processing clusters 1914 and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor 1934 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 1902 may be used as global memory. In at least one embodiment, when processing cluster 1914 includes multiple instances of graphics multiprocessor 1934, those instances can share common instructions and data, which may be stored in L1 cache 1948.
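The relationship between thread group size and processing engine count described above reduces to simple arithmetic: a group larger than the engine count takes multiple consecutive cycles, and a group smaller than the engine count leaves engines idle in its cycle. The helpers below sketch that arithmetic with assumed, illustrative engine counts:

```python
import math

def cycles_for_thread_group(group_size, num_engines):
    """Consecutive clock cycles needed to run a thread group on num_engines
    processing engines (ceiling division)."""
    return math.ceil(group_size / num_engines)

def idle_engines_last_cycle(group_size, num_engines):
    """Engines left idle in the final cycle when the group does not fill them."""
    rem = group_size % num_engines
    return 0 if rem == 0 else num_engines - rem
```

For example, with 16 engines, a 32-thread group takes two cycles with no idle engines, while an 8-thread group takes one cycle but leaves eight engines idle.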
[0334] In at least one embodiment, each processing cluster 1914 may include an MMU 1945 (memory management unit) that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU 1945 may reside within memory interface 1918 of FIG.19. In at least one embodiment, MMU 1945 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU 1945 may include address translation lookaside buffers (TLB) or caches that may reside within graphics multiprocessor 1934 or L1 cache or processing cluster 1914. In at least one embodiment, physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. In at least one embodiment, cache line index may be used to determine whether a request for a cache line is a hit or miss. [0335] In at least one embodiment, a processing cluster 1914 may be configured such that each graphics multiprocessor 1934 is coupled to a texture unit 1936 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor 1934 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor 1934 outputs processed tasks to data crossbar 1940 to provide processed task to another processing cluster 1914 for further processing or to store processed task in an L2 cache, local parallel processor memory, or system memory via memory crossbar 1916.
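The PTE-based translation described above, mapping a virtual address to a physical tile address plus a cache line index, can be sketched as follows. The tile size, cache line size, and page table contents are assumed illustrative values, not MMU 1945's actual parameters:

```python
TILE_SIZE = 4096   # assumed tile size in bytes (illustrative)
CACHE_LINE = 128   # assumed cache line size in bytes (illustrative)

# Illustrative page table: virtual tile number -> physical tile number.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_addr):
    """Translate a virtual address into (physical address, cache line index)
    using a per-tile page table entry, keeping the within-tile offset."""
    vtile, offset = divmod(virtual_addr, TILE_SIZE)
    phys_tile = page_table[vtile]          # PTE lookup
    phys_addr = phys_tile * TILE_SIZE + offset
    return phys_addr, offset // CACHE_LINE

pa, cl = translate(4096 + 300)  # virtual tile 1, offset 300
```

Here virtual tile 1 maps to physical tile 3, so the physical address is 3 * 4096 + 300 = 12588, and offset 300 falls in cache line index 2; that index is what a cache can compare against its tags to decide hit or miss.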
In at least one embodiment, preROP 1942 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 1934, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 1920A-1920N of FIG.19). In at least one embodiment, PreROP 1942 unit can perform optimizations for color blending, organize pixel color data, and perform address translations. [0336] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in graphics processing cluster 1914 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0337] In at least one embodiment, at least one component of FIG.19A, FIG.19B, and/or FIG.19C is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in parallel processor 1900 of FIG.19A. In at least one embodiment, feature map is used in graphics multiprocessor 1934 of FIG.19C.
[0338] FIG.19D shows a graphics multiprocessor 1934 according to at least one embodiment. In at least one embodiment, graphics multiprocessor 1934 couples with pipeline manager 1932 of processing cluster 1914. In at least one embodiment, graphics multiprocessor 1934 has an execution pipeline including but not limited to an instruction cache 1952, an instruction unit 1954, an address mapping unit 1956, a register file 1958, one or more general purpose graphics processing unit (GPGPU) cores 1962, and one or more load/store units 1966. GPGPU cores 1962 and load/store units 1966 are coupled with cache memory 1972 and shared memory 1970 via a memory and cache interconnect 1968. [0339] In at least one embodiment, instruction cache 1952 receives a stream of instructions to execute from pipeline manager 1932. In at least one embodiment, instructions are cached in instruction cache 1952 and dispatched for execution by instruction unit 1954. In at least one embodiment, instruction unit 1954 can dispatch instructions as thread groups (e.g., warps), with each thread of thread group assigned to a different execution unit within GPGPU core 1962. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit 1956 can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by load/store units 1966. [0340] In at least one embodiment, register file 1958 provides a set of registers for functional units of graphics multiprocessor 1934. In at least one embodiment, register file 1958 provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores 1962, load/store units 1966) of graphics multiprocessor 1934. 
In at least one embodiment, register file 1958 is divided between each of functional units such that each functional unit is allocated a dedicated portion of register file 1958. In at least one embodiment, register file 1958 is divided between different warps being executed by graphics multiprocessor 1934. [0341] In at least one embodiment, GPGPU cores 1962 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of graphics multiprocessor 1934. GPGPU cores 1962 can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores 1962 include a single precision FPU and an integer ALU while a second portion of GPGPU cores include a double precision FPU. In at least one embodiment, FPUs can implement IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor 1934 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment one or more of GPGPU cores can also include fixed or special function logic. [0342] In at least one embodiment, GPGPU cores 1962 include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment GPGPU cores 1962 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction.
For example, in at least one embodiment, eight SIMT threads that perform same or similar operations can be executed in parallel via a single SIMD8 logic unit. [0343] In at least one embodiment, memory and cache interconnect 1968 is an interconnect network that connects each functional unit of graphics multiprocessor 1934 to register file 1958 and to shared memory 1970. In at least one embodiment, memory and cache interconnect 1968 is a crossbar interconnect that allows load/store unit 1966 to implement load and store operations between shared memory 1970 and register file 1958. In at least one embodiment, register file 1958 can operate at a same frequency as GPGPU cores 1962, thus data transfer between GPGPU cores 1962 and register file 1958 is very low latency. In at least one embodiment, shared memory 1970 can be used to enable communication between threads that execute on functional units within graphics multiprocessor 1934. In at least one embodiment, cache memory 1972 can be used as a data cache, for example, to cache texture data communicated between functional units and texture unit 1936. In at least one embodiment, shared memory 1970 can also be used as a program managed cache. In at least one embodiment, threads executing on GPGPU cores 1962 can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory 1972. [0344] In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink).
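The distinction drawn earlier between physically executed SIMD widths (e.g., SIMD8) and logically executed wider instructions (e.g., SIMD32) can be sketched as a logically wide operation processed in multiple passes over a narrow physical unit. The 8-lane width and the loop structure below are illustrative assumptions, not the hardware's actual control logic:

```python
PHYSICAL_LANES = 8  # assumed width of the physical SIMD logic unit

def execute_logical_simd(op, a, b):
    """Execute a logically wide elementwise SIMD op by iterating the operand
    vectors through an 8-lane physical unit, one pass per 8 elements."""
    out = []
    for i in range(0, len(a), PHYSICAL_LANES):
        lane_a = a[i:i + PHYSICAL_LANES]
        lane_b = b[i:i + PHYSICAL_LANES]
        out.extend(op(x, y) for x, y in zip(lane_a, lane_b))
    return out

# A logical SIMD32 add runs as four passes over the SIMD8 unit.
a = list(range(32))
b = [1] * 32
res = execute_logical_simd(lambda x, y: x + y, a, b)
```

A logical SIMD32 instruction thus completes in four passes of the SIMD8 unit, which is why the document can describe SIMD32 as logically executed on narrower physical hardware.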
In at least one embodiment, GPU may be integrated on same package or chip as cores and communicatively coupled to cores over an internal processor bus/interconnect (i.e., internal to package or chip). In at least one embodiment, regardless of manner in which GPU is connected, processor cores may allocate work to GPU in form of sequences of commands/instructions contained in a work descriptor. In at least one embodiment, GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions. [0345] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in graphics multiprocessor 1934 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0346] In at least one embodiment, at least one component of FIG.19D is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics multiprocessor 1934 of FIG.19D. 
[0347] FIG.20 illustrates a multi-GPU computing system 2000, according to at least one embodiment. In at least one embodiment, multi-GPU computing system 2000 can include a processor 2002 coupled to multiple general purpose graphics processing units (GPGPUs) 2006A-D via a host interface switch 2004. In at least one embodiment, host interface switch 2004 is a PCI express switch device that couples processor 2002 to a PCI express bus over which processor 2002 can communicate with GPGPUs 2006A-D. GPGPUs 2006A-D can interconnect via a set of high-speed point to point GPU to GPU links 2016. In at least one embodiment, GPU to GPU links 2016 connect to each of GPGPUs 2006A-D via a dedicated GPU link. In at least one embodiment, P2P GPU links 2016 enable direct communication between each of GPGPUs 2006A-D without requiring communication over host interface bus 2004 to which processor 2002 is connected. In at least one embodiment, with GPU-to-GPU traffic directed to P2P GPU links 2016, host interface bus 2004 remains available for system memory access or to communicate with other instances of multi-GPU computing system 2000, for example, via one or more network devices. While in at least one embodiment GPGPUs 2006A-D connect to processor 2002 via host interface switch 2004, in at least one embodiment processor 2002 includes direct support for P2P GPU links 2016 and can connect directly to GPGPUs 2006A-D. [0348] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. 
In at least one embodiment, inference and/or training logic 615 may be used in multi-GPU computing system 2000 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0349] In at least one embodiment, at least one component of FIG.20 is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in multi-GPU computing system 2000 of FIG.20. [0350] FIG.21 is a block diagram of a graphics processor 2100, according to at least one embodiment. In at least one embodiment, graphics processor 2100 includes a ring interconnect 2102, a pipeline front-end 2104, a media engine 2137, and graphics cores 2180A-2180N. In at least one embodiment, ring interconnect 2102 couples graphics processor 2100 to other processing units, including other graphics processors or one or more general-purpose processor cores. In at least one embodiment, graphics processor 2100 is one of many processors integrated within a multi-core processing system. [0351] In at least one embodiment, graphics processor 2100 receives batches of commands via ring interconnect 2102. In at least one embodiment, incoming commands are interpreted by a command streamer 2103 in pipeline front-end 2104. 
In at least one embodiment, graphics processor 2100 includes scalable execution logic to perform 3D geometry processing and media processing via graphics core(s) 2180A-2180N. In at least one embodiment, for 3D geometry processing commands, command streamer 2103 supplies commands to geometry pipeline 2136. In at least one embodiment, for at least some media processing commands, command streamer 2103 supplies commands to a video front end 2134, which couples with a media engine 2137. In at least one embodiment, media engine 2137 includes a Video Quality Engine (VQE) 2130 for video and image post-processing and a multi-format encode/decode (MFX) 2133 engine to provide hardware-accelerated media data encode and decode. In at least one embodiment, geometry pipeline 2136 and media engine 2137 each generate execution threads for thread execution resources provided by at least one graphics core 2180A. [0352] In at least one embodiment, graphics processor 2100 includes scalable thread execution resources featuring modular cores 2180A-2180N (sometimes referred to as core slices), each having multiple sub-cores 2150A-2150N, 2160A-2160N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor 2100 can have any number of graphics cores 2180A through 2180N. In at least one embodiment, graphics processor 2100 includes a graphics core 2180A having at least a first sub-core 2150A and a second sub-core 2160A. In at least one embodiment, graphics processor 2100 is a low power processor with a single sub-core (e.g., 2150A). In at least one embodiment, graphics processor 2100 includes multiple graphics cores 2180A-2180N, each including a set of first sub-cores 2150A-2150N and a set of second sub-cores 2160A-2160N. In at least one embodiment, each sub-core in first sub-cores 2150A-2150N includes at least a first set of execution units 2152A-2152N and media/texture samplers 2154A-2154N.
In at least one embodiment, each sub-core in second sub-cores 2160A-2160N includes at least a second set of execution units 2162A-2162N and samplers 2164A-2164N. In at least one embodiment, each sub-core 2150A-2150N, 2160A-2160N shares a set of shared resources 2170A-2170N. In at least one embodiment, shared resources include shared cache memory and pixel operation logic. [0353] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, inference and/or training logic 615 may be used in graphics processor 2100 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. [0354] In at least one embodiment, at least one component shown or described with respect to FIG.21 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics processor 2100 of FIG.21. [0355] FIG.22 is a block diagram illustrating micro-architecture for a processor 2200 that may include logic circuits to perform instructions, according to at least one embodiment. 
In at least one embodiment, processor 2200 may perform instructions, including x86 instructions, ARM instructions, specialized instructions for application-specific integrated circuits (ASICs), etc. In at least one embodiment, processor 2200 may include registers to store packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif. In at least one embodiment, MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany single instruction, multiple data (“SIMD”) and streaming SIMD extensions (“SSE”) instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (referred to generically as “SSEx”) technology may hold such packed data operands. In at least one embodiment, processor 2200 may perform instructions to accelerate machine learning or deep learning algorithms, training, or inferencing. [0356] In at least one embodiment, processor 2200 includes an in-order front end (“front end”) 2201 to fetch instructions to be executed and prepare instructions to be used later in processor pipeline. In at least one embodiment, front end 2201 may include several units. In at least one embodiment, an instruction prefetcher 2226 fetches instructions from memory and feeds instructions to an instruction decoder 2228 which in turn decodes or interprets instructions. For example, in at least one embodiment, instruction decoder 2228 decodes a received instruction into one or more operations called “micro-instructions” or “micro-operations” (also called “micro ops” or “uops”) that machine may execute. In at least one embodiment, instruction decoder 2228 parses instruction into an opcode and corresponding data and control fields that may be used by micro-architecture to perform operations in accordance with at least one embodiment.
In at least one embodiment, a trace cache 2230 may assemble decoded uops into program ordered sequences or traces in a uop queue 2234 for execution. In at least one embodiment, when trace cache 2230 encounters a complex instruction, a microcode ROM 2232 provides uops needed to complete operation. [0357] In at least one embodiment, some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete full operation. In at least one embodiment, if more than four micro-ops are needed to complete an instruction, instruction decoder 2228 may access microcode ROM 2232 to perform instruction. In at least one embodiment, an instruction may be decoded into a small number of micro-ops for processing at instruction decoder 2228. In at least one embodiment, an instruction may be stored within microcode ROM 2232 should a number of micro-ops be needed to accomplish operation. In at least one embodiment, trace cache 2230 refers to an entry point programmable logic array (“PLA”) to determine a correct micro-instruction pointer for reading microcode sequences to complete one or more instructions from microcode ROM 2232 in accordance with at least one embodiment. In at least one embodiment, after microcode ROM 2232 finishes sequencing micro-ops for an instruction, front end 2201 of machine may resume fetching micro-ops from trace cache 2230. [0358] In at least one embodiment, out-of-order execution engine (“out of order engine”) 2203 may prepare instructions for execution. In at least one embodiment, out-of-order execution logic has a number of buffers to smooth out and re-order flow of instructions to optimize performance as they go down pipeline and get scheduled for execution.
In at least one embodiment, out-of-order execution engine 2203 includes, without limitation, an allocator/register renamer 2240, a memory uop queue 2242, an integer/floating point uop queue 2244, a memory scheduler 2246, a fast scheduler 2202, a slow/general floating point scheduler (“slow/general FP scheduler”) 2204, and a simple floating point scheduler (“simple FP scheduler”) 2206. In at least one embodiment, fast scheduler 2202, slow/general floating point scheduler 2204, and simple floating point scheduler 2206 are also collectively referred to herein as “uop schedulers 2202, 2204, 2206.” In at least one embodiment, allocator/register renamer 2240 allocates machine buffers and resources that each uop needs in order to execute. In at least one embodiment, allocator/register renamer 2240 renames logic registers onto entries in a register file. In at least one embodiment, allocator/register renamer 2240 also allocates an entry for each uop in one of two uop queues, memory uop queue 2242 for memory operations and integer/floating point uop queue 2244 for non-memory operations, in front of memory scheduler 2246 and uop schedulers 2202, 2204, 2206. In at least one embodiment, uop schedulers 2202, 2204, 2206 determine when a uop is ready to execute based on readiness of their dependent input register operand sources and availability of execution resources uops need to complete their operation. In at least one embodiment, fast scheduler 2202 may schedule on each half of main clock cycle while slow/general floating point scheduler 2204 and simple floating point scheduler 2206 may schedule once per main processor clock cycle. In at least one embodiment, uop schedulers 2202, 2204, 2206 arbitrate for dispatch ports to schedule uops for execution.
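The readiness-based dispatch just described (a uop issues only once its dependent input register operand sources are ready) can be illustrated with a minimal software sketch. The `Uop` structure and `schedule` function below are hypothetical simplifications for illustration only; they do not describe the hardware scheduler itself:

```python
from dataclasses import dataclass

@dataclass
class Uop:
    """A micro-op with the registers it reads and the register it writes."""
    name: str
    sources: tuple  # input register operands this uop depends on
    dest: str       # register this uop produces

def schedule(uops, initially_ready):
    """Dispatch uops in dependency order: a uop is selected only once
    every one of its source registers has been produced."""
    ready_regs = set(initially_ready)
    pending = list(uops)
    order = []
    while pending:
        for uop in pending:
            if all(src in ready_regs for src in uop.sources):
                order.append(uop.name)
                ready_regs.add(uop.dest)  # result becomes available
                pending.remove(uop)
                break
        else:
            raise RuntimeError("unsatisfiable dependency among pending uops")
    return order
```

Even when uops arrive out of program order, the model dispatches a dependent uop only after its producer, mirroring the readiness check performed by uop schedulers 2202, 2204, 2206.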
[0359] In at least one embodiment, execution block 2211 includes, without limitation, an integer register file/bypass network 2208, a floating point register file/bypass network (“FP register file/bypass network”) 2210, address generation units (“AGUs”) 2212 and 2214, fast Arithmetic Logic Units (ALUs) (“fast ALUs”) 2216 and 2218, a slow Arithmetic Logic Unit (“slow ALU”) 2220, a floating point ALU (“FP”) 2222, and a floating point move unit (“FP move”) 2224. In at least one embodiment, integer register file/bypass network 2208 and floating point register file/bypass network 2210 are also referred to herein as “register files 2208, 2210.” In at least one embodiment, AGUs 2212 and 2214, fast ALUs 2216 and 2218, slow ALU 2220, floating point ALU 2222, and floating point move unit 2224 are also referred to herein as “execution units 2212, 2214, 2216, 2218, 2220, 2222, and 2224.” In at least one embodiment, execution block 2211 may include, without limitation, any number (including zero) and type of register files, bypass networks, address generation units, and execution units, in any combination. [0360] In at least one embodiment, register files 2208, 2210 may be arranged between uop schedulers 2202, 2204, 2206, and execution units 2212, 2214, 2216, 2218, 2220, 2222, and 2224. In at least one embodiment, integer register file/bypass network 2208 performs integer operations. In at least one embodiment, floating point register file/bypass network 2210 performs floating point operations. In at least one embodiment, each of register files 2208, 2210 may include, without limitation, a bypass network that may bypass or forward just completed results that have not yet been written into register file to new dependent uops. In at least one embodiment, register files 2208, 2210 may communicate data with each other.
In at least one embodiment, integer register file/bypass network 2208 may include, without limitation, two separate register files, one register file for low-order thirty-two bits of data and a second register file for high order thirty-two bits of data. In at least one embodiment, floating point register file/bypass network 2210 may include, without limitation, 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width. [0361] In at least one embodiment, execution units 2212, 2214, 2216, 2218, 2220, 2222, 2224 may execute instructions. In at least one embodiment, register files 2208, 2210 store integer and floating point data operand values that micro-instructions need to execute. In at least one embodiment, processor 2200 may include, without limitation, any number and combination of execution units 2212, 2214, 2216, 2218, 2220, 2222, 2224. In at least one embodiment, floating point ALU 2222 and floating point move unit 2224 may execute floating point, MMX, SIMD, AVX and SSE, or other operations, including specialized machine learning instructions. In at least one embodiment, floating point ALU 2222 may include, without limitation, a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro ops. In at least one embodiment, instructions involving a floating point value may be handled with floating point hardware. In at least one embodiment, ALU operations may be passed to fast ALUs 2216, 2218. In at least one embodiment, fast ALUs 2216, 2218 may execute fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to slow ALU 2220 as slow ALU 2220 may include, without limitation, integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. In at least one embodiment, memory load/store operations may be executed by AGUs 2212, 2214.
In at least one embodiment, fast ALU 2216, fast ALU 2218, and slow ALU 2220 may perform integer operations on 64-bit data operands. In at least one embodiment, fast ALU 2216, fast ALU 2218, and slow ALU 2220 may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. In at least one embodiment, floating point ALU 2222 and floating point move unit 2224 may be implemented to support a range of operands having bits of various widths. In at least one embodiment, floating point ALU 2222 and floating point move unit 2224 may operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions. [0362] In at least one embodiment, uop schedulers 2202, 2204, 2206 dispatch dependent operations before parent load has finished executing. In at least one embodiment, as uops may be speculatively scheduled and executed in processor 2200, processor 2200 may also include logic to handle memory misses. In at least one embodiment, if a data load misses in data cache, there may be dependent operations in flight in pipeline that have left scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that use incorrect data. In at least one embodiment, dependent operations might need to be replayed and independent ones may be allowed to complete. In at least one embodiment, schedulers and replay mechanism of at least one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations. [0363] In at least one embodiment, term “registers” may refer to on-board processor storage locations that may be used as part of instructions to identify operands. In at least one embodiment, registers may be those that may be usable from outside of processor (from a programmer's perspective). In at least one embodiment, registers might not be limited to a particular type of circuit.
Rather, in at least one embodiment, a register may store data, provide data, and perform functions described herein. In at least one embodiment, registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In at least one embodiment, integer registers store 32-bit integer data. A register file of at least one embodiment also contains eight multimedia SIMD registers for packed data. [0364] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment portions or all of inference and/or training logic 615 may be incorporated into EXE Block 2211 and other memory or registers shown or not shown. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs illustrated in EXE Block 2211. Moreover, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of EXE Block 2211 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. [0365] In at least one embodiment, at least one component shown or described with respect to FIG.22 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. 
In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in processor 2200 of FIG.22. [0366] FIG.23 illustrates a deep learning application processor 2300, according to at least one embodiment. In at least one embodiment, deep learning application processor 2300 uses instructions that, if executed by deep learning application processor 2300, cause deep learning application processor 2300 to perform some or all of processes and techniques described throughout this disclosure. In at least one embodiment, deep learning application processor 2300 is an application-specific integrated circuit (ASIC). In at least one embodiment, deep learning application processor 2300 performs matrix multiply operations that are either “hard-wired” into hardware, performed as a result of executing one or more instructions, or both. In at least one embodiment, deep learning application processor 2300 includes, without limitation, processing clusters 2310(1)-2310(12), Inter-Chip Links (“ICLs”) 2320(1)-2320(12), Inter-Chip Controllers (“ICCs”) 2330(1)-2330(2), high bandwidth memory second generation (“HBM2”) 2340(1)-2340(4), memory controllers (“Mem Ctrlrs”) 2342(1)-2342(4), high bandwidth memory physical layer (“HBM PHY”) 2344(1)-2344(4), a management-controller central processing unit (“management-controller CPU”) 2350, a Serial Peripheral Interface, Inter-Integrated Circuit, and General Purpose Input/Output block (“SPI, I2C, GPIO”) 2360, a peripheral component interconnect express controller and direct memory access block (“PCIe Controller and DMA”) 2370, and a sixteen-lane peripheral component interconnect express port (“PCI Express x 16”) 2380.
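The convolution-as-tensor-contraction technique recited above (identifying convolved modes of a first activation tensor, constructing a second activation tensor, and generating a feature map as a tensor contraction of the second activation tensor with a filter tensor) can be illustrated with a minimal sketch. The function names and the im2col-style patch construction are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def conv2d_direct(activation, filt):
    """Reference 2-D valid convolution (cross-correlation form, as in
    most deep learning frameworks)."""
    H, W = activation.shape
    kH, kW = filt.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(activation[i:i+kH, j:j+kW] * filt)
    return out

def conv2d_as_contraction(activation, filt):
    """Construct a second activation tensor whose extra modes unfold the
    convolved modes of the first, then generate the feature map as a
    tensor contraction of that tensor with the filter tensor."""
    H, W = activation.shape
    kH, kW = filt.shape
    oH, oW = H - kH + 1, W - kW + 1
    # second activation tensor: one kH x kW patch per output position
    patches = np.empty((oH, oW, kH, kW))
    for i in range(oH):
        for j in range(oW):
            patches[i, j] = activation[i:i+kH, j:j+kW]
    # contract the patch modes (k, l) against the filter modes
    return np.einsum("ijkl,kl->ij", patches, filt)
```

Both functions produce the same feature map; the contraction form exposes the operation to matrix-multiply hardware such as the processing clusters described above.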
[0367] In at least one embodiment, processing clusters 2310 may perform deep learning operations, including inference or prediction operations based on weight parameters calculated using one or more training techniques, including those described herein. In at least one embodiment, each processing cluster 2310 may include, without limitation, any number and type of processors. In at least one embodiment, deep learning application processor 2300 may include any number and type of processing clusters 2310. In at least one embodiment, Inter-Chip Links 2320 are bi-directional. In at least one embodiment, Inter-Chip Links 2320 and Inter-Chip Controllers 2330 enable multiple deep learning application processors 2300 to exchange information, including activation information resulting from performing one or more machine learning algorithms embodied in one or more neural networks. In at least one embodiment, deep learning application processor 2300 may include any number (including zero) and type of ICLs 2320 and ICCs 2330. [0368] In at least one embodiment, HBM2s 2340 provide a total of 32 Gigabytes (GB) of memory. In at least one embodiment, HBM2 2340(i) is associated with both memory controller 2342(i) and HBM PHY 2344(i). In at least one embodiment, any number of HBM2s 2340 may provide any type and total amount of high bandwidth memory and may be associated with any number (including zero) and type of memory controllers 2342 and HBM PHYs 2344. In at least one embodiment, SPI, I2C, GPIO 2360, PCIe Controller and DMA 2370, and/or PCIe 2380 may be replaced with any number and type of blocks that enable any number and type of communication standards in any technically feasible fashion. [0369] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B.
In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to deep learning application processor 2300. In at least one embodiment, deep learning application processor 2300 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by deep learning application processor 2300. In at least one embodiment, processor 2300 may be used to perform one or more neural network use cases described herein. [0370] In at least one embodiment, at least one component shown or described with respect to FIG.23 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in deep learning application processor 2300 of FIG.23. [0371] FIG.24 is a block diagram of a neuromorphic processor 2400, according to at least one embodiment. In at least one embodiment, neuromorphic processor 2400 may receive one or more inputs from sources external to neuromorphic processor 2400. In at least one embodiment, these inputs may be transmitted to one or more neurons 2402 within neuromorphic processor 2400. 
In at least one embodiment, neurons 2402 and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor 2400 may include, without limitation, thousands or millions of instances of neurons 2402, but any suitable number of neurons 2402 may be used. In at least one embodiment, each instance of neuron 2402 may include a neuron input 2404 and a neuron output 2406. In at least one embodiment, neurons 2402 may generate outputs that may be transmitted to inputs of other instances of neurons 2402. For example, in at least one embodiment, neuron inputs 2404 and neuron outputs 2406 may be interconnected via synapses 2408. [0372] In at least one embodiment, neurons 2402 and synapses 2408 may be interconnected such that neuromorphic processor 2400 operates to process or analyze information received by neuromorphic processor 2400. In at least one embodiment, neurons 2402 may transmit an output pulse (or “fire” or “spike”) when inputs received through neuron input 2404 exceed a threshold. In at least one embodiment, neurons 2402 may sum or integrate signals received at neuron inputs 2404. For example, in at least one embodiment, neurons 2402 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a “membrane potential”) exceeds a threshold value, neuron 2402 may generate an output (or “fire”) using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2404 into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2404 rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). 
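The leaky integrate-and-fire behavior described above (summing inputs into a membrane potential, applying a decay factor, and firing only when rapidly arriving inputs push the potential above a threshold) can be modeled with a short sketch. The parameter values for threshold and decay are illustrative assumptions, not values taken from neuromorphic processor 2400:

```python
def simulate_lif(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: each step, leak the membrane
    potential by a decay factor, add the new input, and fire (emit 1)
    when the potential exceeds the threshold, resetting it to 0."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * decay + x  # integrate with leak
        if potential > threshold:
            spikes.append(1)
            potential = 0.0  # disregard previously received input
        else:
            spikes.append(0)
    return spikes
```

Two closely spaced inputs of 0.6 cross a threshold of 1.0 and fire, while the same inputs spread far apart decay below threshold before the second arrives and never fire, illustrating the "rapidly enough" condition.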
In at least one embodiment, neurons 2402 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Furthermore, in at least one embodiment, neurons 2402 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2406 when result of applying a transfer function to neuron input 2404 exceeds a threshold. In at least one embodiment, once neuron 2402 fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. In at least one embodiment, once membrane potential is reset to 0, neuron 2402 may resume normal operation after a suitable period of time (or refractory period). [0373] In at least one embodiment, neurons 2402 may be interconnected through synapses 2408. In at least one embodiment, synapses 2408 may operate to transmit signals from an output of a first neuron 2402 to an input of a second neuron 2402. In at least one embodiment, neurons 2402 may transmit information over more than one instance of synapse 2408. In at least one embodiment, one or more instances of neuron output 2406 may be connected, via an instance of synapse 2408, to an instance of neuron input 2404 in same neuron 2402. In at least one embodiment, an instance of neuron 2402 generating an output to be transmitted over an instance of synapse 2408 may be referred to as a “pre-synaptic neuron” with respect to that instance of synapse 2408. In at least one embodiment, an instance of neuron 2402 receiving an input transmitted over an instance of synapse 2408 may be referred to as a “post-synaptic neuron” with respect to that instance of synapse 2408.
Because an instance of neuron 2402 may receive inputs from one or more instances of synapse 2408, and may also transmit outputs over one or more instances of synapse 2408, a single instance of neuron 2402 may therefore be both a “pre-synaptic neuron” and “post-synaptic neuron,” with respect to various instances of synapses 2408, in at least one embodiment. [0374] In at least one embodiment, neurons 2402 may be organized into one or more layers. Each instance of neuron 2402 may have one neuron output 2406 that may fan out through one or more synapses 2408 to one or more neuron inputs 2404. In at least one embodiment, neuron outputs 2406 of neurons 2402 in a first layer 2410 may be connected to neuron inputs 2404 of neurons 2402 in a second layer 2412. In at least one embodiment, layer 2410 may be referred to as a “feed-forward layer.” In at least one embodiment, each instance of neuron 2402 in an instance of first layer 2410 may fan out to each instance of neuron 2402 in second layer 2412. In at least one embodiment, first layer 2410 may be referred to as a “fully connected feed-forward layer.” In at least one embodiment, each instance of neuron 2402 in an instance of second layer 2412 may fan out to fewer than all instances of neuron 2402 in a third layer 2414. In at least one embodiment, second layer 2412 may be referred to as a “sparsely connected feed-forward layer.” In at least one embodiment, neurons 2402 in second layer 2412 may fan out to neurons 2402 in multiple other layers, including to neurons 2402 in (same) second layer 2412. In at least one embodiment, second layer 2412 may be referred to as a “recurrent layer.” In at least one embodiment, neuromorphic processor 2400 may include, without limitation, any suitable combination of recurrent layers and feed-forward layers, including, without limitation, both sparsely connected feed-forward layers and fully connected feed-forward layers.
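The fully connected and sparsely connected fan-out patterns described above can be sketched as boolean synapse masks, where entry (i, j) indicates a synapse from pre-synaptic neuron i to post-synaptic neuron j. The function names and the random choice of sparse targets are illustrative assumptions:

```python
import numpy as np

def fully_connected_mask(n_pre, n_post):
    """Fully connected feed-forward layer: every pre-synaptic neuron
    fans out to every post-synaptic neuron."""
    return np.ones((n_pre, n_post), dtype=bool)

def sparsely_connected_mask(n_pre, n_post, fan_out, seed=0):
    """Sparsely connected feed-forward layer: each pre-synaptic neuron
    fans out to fewer than all post-synaptic neurons (fan_out randomly
    chosen targets per neuron)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_pre, n_post), dtype=bool)
    for i in range(n_pre):
        mask[i, rng.choice(n_post, size=fan_out, replace=False)] = True
    return mask
```

A reconfigurable interconnect, as described for neuromorphic processor 2400, amounts to being able to rewrite such a mask as network topology and neuron fan-in/out requirements change.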
[0375] In at least one embodiment, neuromorphic processor 2400 may include, without limitation, a reconfigurable interconnect architecture or dedicated hard-wired interconnects to connect synapses 2408 to neurons 2402. In at least one embodiment, neuromorphic processor 2400 may include, without limitation, circuitry or logic that allows synapses to be allocated to different neurons 2402 as needed based on neural network topology and neuron fan-in/out. For example, in at least one embodiment, synapses 2408 may be connected to neurons 2402 using an interconnect fabric, such as network-on-chip, or with dedicated connections. In at least one embodiment, synapse interconnections and components thereof may be implemented using circuitry or logic. [0376] In at least one embodiment, at least one component shown or described with respect to FIG.24 is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, circuitry and/or logic of neurons 2402 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, circuitry and/or logic of neurons 2402 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in neuromorphic processor 2400 of FIG.24. [0377] FIG.25 is a block diagram of a processing system, according to at least one embodiment. In at least one embodiment, system 2500 includes one or more processors 2502 and one or more graphics processors 2508, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 2502 or processor cores 2507.
In at least one embodiment, system 2500 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices. [0378] In at least one embodiment, system 2500 can include, or be incorporated within, a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, system 2500 is a mobile phone, smart phone, tablet computing device or mobile Internet device. In at least one embodiment, processing system 2500 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In at least one embodiment, processing system 2500 is a television or set top box device having one or more processors 2502 and a graphical interface generated by one or more graphics processors 2508. [0379] In at least one embodiment, one or more processors 2502 each include one or more processor cores 2507 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 2507 is configured to process a specific instruction set 2509. In at least one embodiment, instruction set 2509 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). In at least one embodiment, processor cores 2507 may each process a different instruction set 2509, which may include instructions to facilitate emulation of other instruction sets. In at least one embodiment, processor core 2507 may also include other processing devices, such as a Digital Signal Processor (DSP). [0380] In at least one embodiment, processor 2502 includes cache memory 2504.
In at least one embodiment, processor 2502 can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor 2502. In at least one embodiment, processor 2502 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 2507 using known cache coherency techniques. In at least one embodiment, register file 2506 is additionally included in processor 2502 which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file 2506 may include general-purpose registers or other registers. [0381] In at least one embodiment, one or more processor(s) 2502 are coupled with one or more interface bus(es) 2510 to transmit communication signals such as address, data, or control signals between processor 2502 and other components in system 2500. In at least one embodiment, interface bus 2510 can be a processor bus, such as a version of a Direct Media Interface (DMI) bus. In at least one embodiment, interface 2510 is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. In at least one embodiment, processor(s) 2502 include an integrated memory controller 2516 and a platform controller hub 2530. In at least one embodiment, memory controller 2516 facilitates communication between a memory device and other components of system 2500, while platform controller hub (PCH) 2530 provides connections to I/O devices via a local I/O bus.
[0382] In at least one embodiment, memory device 2520 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In at least one embodiment memory device 2520 can operate as system memory for system 2500, to store data 2522 and instructions 2521 for use when one or more processors 2502 executes an application or process. In at least one embodiment, memory controller 2516 also couples with an optional external graphics processor 2512, which may communicate with one or more graphics processors 2508 in processors 2502 to perform graphics and media operations. In at least one embodiment, a display device 2511 can connect to processor(s) 2502. In at least one embodiment display device 2511 can include one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device 2511 can include a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications. [0383] In at least one embodiment, platform controller hub 2530 enables peripherals to connect to memory device 2520 and processor 2502 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller 2546, a network controller 2534, a firmware interface 2528, a wireless transceiver 2526, touch sensors 2525, a data storage device 2524 (e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage device 2524 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). 
In at least one embodiment, touch sensors 2525 can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver 2526 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. In at least one embodiment, firmware interface 2528 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). In at least one embodiment, network controller 2534 can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus 2510. In at least one embodiment, audio controller 2546 is a multi-channel high definition audio controller. In at least one embodiment, system 2500 includes an optional legacy I/O controller 2540 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. In at least one embodiment, platform controller hub 2530 can also connect to one or more Universal Serial Bus (USB) controllers 2542 to connect input devices, such as keyboard and mouse 2543 combinations, a camera 2544, or other USB input devices. [0384] In at least one embodiment, an instance of memory controller 2516 and platform controller hub 2530 may be integrated into a discrete external graphics processor, such as external graphics processor 2512. In at least one embodiment, platform controller hub 2530 and/or memory controller 2516 may be external to one or more processor(s) 2502. For example, in at least one embodiment, system 2500 can include an external memory controller 2516 and platform controller hub 2530, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s) 2502. [0385] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments.
Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment portions or all of inference and/or training logic 615 may be incorporated into graphics processor 2500. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2512. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS.6A or 6B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2500 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. [0386] In at least one embodiment, at least one component shown or described with respect to FIG.25 is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in system 2500 of FIG.25. [0387] FIG.26 is a block diagram of a processor 2600 having one or more processor cores 2602A-2602N, an integrated memory controller 2614, and an integrated graphics processor 2608, according to at least one embodiment.
In at least one embodiment, processor 2600 can include additional cores up to and including additional core 2602N represented by dashed lined boxes. In at least one embodiment, each of processor cores 2602A-2602N includes one or more internal cache units 2604A-2604N. In at least one embodiment, each processor core also has access to one or more shared cache units 2606. [0388] In at least one embodiment, internal cache units 2604A-2604N and shared cache units 2606 represent a cache memory hierarchy within processor 2600. In at least one embodiment, cache memory units 2604A-2604N may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where a highest level of cache before external memory is classified as an LLC. In at least one embodiment, cache coherency logic maintains coherency between various cache units 2606 and 2604A-2604N. [0389] In at least one embodiment, processor 2600 may also include a set of one or more bus controller units 2616 and a system agent core 2610. In at least one embodiment, one or more bus controller units 2616 manage a set of peripheral buses, such as one or more PCI or PCI Express buses. In at least one embodiment, system agent core 2610 provides management functionality for various processor components. In at least one embodiment, system agent core 2610 includes one or more integrated memory controllers 2614 to manage access to various external memory devices (not shown). [0390] In at least one embodiment, one or more of processor cores 2602A-2602N include support for simultaneous multi-threading. In at least one embodiment, system agent core 2610 includes components for coordinating and operating cores 2602A-2602N during multi-threaded processing.
In at least one embodiment, system agent core 2610 may additionally include a power control unit (PCU), which includes logic and components to regulate one or more power states of processor cores 2602A-2602N and graphics processor 2608. [0391] In at least one embodiment, processor 2600 additionally includes graphics processor 2608 to execute graphics processing operations. In at least one embodiment, graphics processor 2608 couples with shared cache units 2606, and system agent core 2610, including one or more integrated memory controllers 2614. In at least one embodiment, system agent core 2610 also includes a display controller 2611 to drive graphics processor output to one or more coupled displays. In at least one embodiment, display controller 2611 may also be a separate module coupled with graphics processor 2608 via at least one interconnect, or may be integrated within graphics processor 2608. [0392] In at least one embodiment, a ring based interconnect unit 2612 is used to couple internal components of processor 2600. In at least one embodiment, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, graphics processor 2608 couples with ring interconnect 2612 via an I/O link 2613. [0393] In at least one embodiment, I/O link 2613 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 2618, such as an eDRAM module. In at least one embodiment, each of processor cores 2602A-2602N and graphics processor 2608 use embedded memory modules 2618 as a shared Last Level Cache. [0394] In at least one embodiment, processor cores 2602A-2602N are homogenous cores executing a common instruction set architecture. 
In at least one embodiment, processor cores 2602A-2602N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 2602A-2602N execute a common instruction set, while one or more other cores of processor cores 2602A-2602N execute a subset of a common instruction set or a different instruction set. In at least one embodiment, processor cores 2602A-2602N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. In at least one embodiment, processor 2600 can be implemented on one or more chips or as an SoC integrated circuit. [0395] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment portions or all of inference and/or training logic 615 may be incorporated into graphics processor 2610. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2512, graphics core(s) 2615A, shared function logic 2616, graphics core(s) 2615B, shared function logic 2620, or other logic in FIG.26. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS.6A or 6B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2610 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
[0396] In at least one embodiment, at least one component shown or described with respect to FIG.26 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in processor 2600 of FIG.26. [0397] FIG.27 is a block diagram of a graphics processor 2700, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In at least one embodiment, graphics processor 2700 communicates via a memory mapped I/O interface to registers on graphics processor 2700 and with commands placed into memory. In at least one embodiment, graphics processor 2700 includes a memory interface 2714 to access memory. In at least one embodiment, memory interface 2714 is an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. [0398] In at least one embodiment, graphics processor 2700 also includes a display controller 2702 to drive display output data to a display device 2720. In at least one embodiment, display controller 2702 includes hardware for one or more overlay planes for display device 2720 and composition of multiple layers of video or user interface elements. In at least one embodiment, display device 2720 can be an internal or external display device. 
In at least one embodiment, display device 2720 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In at least one embodiment, graphics processor 2700 includes a video codec engine 2706 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats. [0399] In at least one embodiment, graphics processor 2700 includes a block image transfer (BLIT) engine 2704 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in at least one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 2710. In at least one embodiment, GPE 2710 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. [0400] In at least one embodiment, GPE 2710 includes a 3D pipeline 2712 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). 3D pipeline 2712 includes programmable and fixed function elements that perform various tasks and/or spawn execution threads to a 3D/Media sub-system 2715. While 3D pipeline 2712 can be used to perform media operations, in at least one embodiment, GPE 2710 also includes a media pipeline 2716 that is used to perform media operations, such as video post-processing and image enhancement.
[0401] In at least one embodiment, media pipeline 2716 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine 2706. In at least one embodiment, media pipeline 2716 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 2715. In at least one embodiment, spawned threads perform computations for media operations on one or more graphics execution units included in 3D/Media sub-system 2715. [0402] In at least one embodiment, 3D/Media subsystem 2715 includes logic for executing threads spawned by 3D pipeline 2712 and media pipeline 2716. In at least one embodiment, 3D pipeline 2712 and media pipeline 2716 send thread execution requests to 3D/Media subsystem 2715, which includes thread dispatch logic for arbitrating and dispatching various requests to available thread execution resources. In at least one embodiment, execution resources include an array of graphics execution units to process 3D and media threads. In at least one embodiment, 3D/Media subsystem 2715 includes one or more internal caches for thread instructions and data. In at least one embodiment, subsystem 2715 also includes shared memory, including registers and addressable memory, to share data between threads and to store output data. [0403] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment portions or all of inference and/or training logic 615 may be incorporated into graphics processor 2700. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2712. 
Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS.6A or 6B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2700 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. [0404] In at least one embodiment, at least one component shown or described with respect to FIG.27 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics processor 2700 of FIG.27. [0405] FIG.28 is a block diagram of a graphics processing engine 2810 of a graphics processor in accordance with at least one embodiment. In at least one embodiment, graphics processing engine (GPE) 2810 is a version of GPE 2710 shown in FIG.27. In at least one embodiment, media pipeline 2816 is optional and may not be explicitly included within GPE 2810. In at least one embodiment, a separate media and/or image processor is coupled to GPE 2810. [0406] In at least one embodiment, GPE 2810 is coupled to or includes a command streamer 2803, which provides a command stream to 3D pipeline 2812 and/or media pipelines 2816. 
In at least one embodiment, command streamer 2803 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In at least one embodiment, command streamer 2803 receives commands from memory and sends commands to 3D pipeline 2812 and/or media pipeline 2816. In at least one embodiment, commands are instructions, primitives, or micro-operations fetched from a ring buffer, which stores commands for 3D pipeline 2812 and media pipeline 2816. In at least one embodiment, a ring buffer can additionally include batch command buffers storing batches of multiple commands. In at least one embodiment, commands for 3D pipeline 2812 can also include references to data stored in memory, such as but not limited to vertex and geometry data for 3D pipeline 2812 and/or image data and memory objects for media pipeline 2816. In at least one embodiment, 3D pipeline 2812 and media pipeline 2816 process commands and data by performing operations or by dispatching one or more execution threads to a graphics core array 2814. In at least one embodiment graphics core array 2814 includes one or more blocks of graphics cores (e.g., graphics core(s) 2815A, graphics core(s) 2815B), each block including one or more graphics cores. In at least one embodiment, each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic, including inference and/or training logic 615 in FIG.6A and FIG.6B. 
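The command flow described above — a command streamer fetching from a ring buffer whose entries may themselves reference batch command buffers holding multiple commands — can be sketched as follows. The names and command tuples are illustrative, not taken from the described hardware:

```python
from collections import deque

# Sketch of a command streamer draining a ring buffer. A command is either
# a direct operation or a reference to a batch buffer holding several
# commands, mirroring the batch command buffers described above.
class RingBuffer:
    def __init__(self, size):
        self.slots = deque(maxlen=size)   # oldest entries drop when full

    def submit(self, command):
        self.slots.append(command)

    def fetch(self):
        return self.slots.popleft() if self.slots else None


def stream_commands(ring, dispatch):
    """Drain the ring, expanding batch buffers inline."""
    while (cmd := ring.fetch()) is not None:
        if cmd[0] == "batch":
            for sub in cmd[1]:            # batch buffer: a list of commands
                dispatch(sub)
        else:
            dispatch(cmd)


ring = RingBuffer(size=8)
ring.submit(("3d", "draw_triangles"))
ring.submit(("batch", [("media", "decode"), ("media", "post_process")]))

executed = []
stream_commands(ring, executed.append)
print(executed)
# [('3d', 'draw_triangles'), ('media', 'decode'), ('media', 'post_process')]
```

In hardware the dispatch step would route 3D commands to 3D pipeline 2812 and media commands to media pipeline 2816; the sketch only shows the fetch-and-expand order.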
[0407] In at least one embodiment, 3D pipeline 2812 includes fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing instructions and dispatching execution threads to graphics core array 2814. In at least one embodiment, graphics core array 2814 provides a unified block of execution resources for use in processing shader programs. In at least one embodiment, multi-purpose execution logic (e.g., execution units) within graphics core(s) 2815A-2815B of graphics core array 2814 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders. [0408] In at least one embodiment, graphics core array 2814 also includes execution logic to perform media functions, such as video and/or image processing. In at least one embodiment, execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. [0409] In at least one embodiment, data generated by threads executing on graphics core array 2814 can be output to memory in a unified return buffer (URB) 2818. In at least one embodiment, URB 2818 can store data for multiple threads. In at least one embodiment, URB 2818 may be used to send data between different threads executing on graphics core array 2814. In at least one embodiment, URB 2818 may additionally be used for synchronization between threads on graphics core array 2814 and fixed function logic within shared function logic 2820. [0410] In at least one embodiment, graphics core array 2814 is scalable, such that graphics core array 2814 includes a variable number of graphics cores, each having a variable number of execution units based on a target power and performance level of GPE 2810.
In at least one embodiment, execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed. [0411] In at least one embodiment, graphics core array 2814 is coupled to shared function logic 2820 that includes multiple resources that are shared between graphics cores in graphics core array 2814. In at least one embodiment, shared functions performed by shared function logic 2820 are embodied in hardware logic units that provide specialized supplemental functionality to graphics core array 2814. In at least one embodiment, shared function logic 2820 includes but is not limited to sampler 2821, math 2822, and inter-thread communication (ITC) 2823 logic. In at least one embodiment, one or more cache(s) 2825 are included in or coupled to shared function logic 2820. [0412] In at least one embodiment, a shared function is used if demand for a specialized function is insufficient for inclusion within graphics core array 2814. In at least one embodiment, a single instantiation of a specialized function is used in shared function logic 2820 and shared among other execution resources within graphics core array 2814. In at least one embodiment, specific shared functions within shared function logic 2820 that are used extensively by graphics core array 2814 may be included within shared function logic 2816 within graphics core array 2814. In at least one embodiment, shared function logic 2816 within graphics core array 2814 can include some or all logic within shared function logic 2820. In at least one embodiment, all logic elements within shared function logic 2820 may be duplicated within shared function logic 2816 of graphics core array 2814. In at least one embodiment, shared function logic 2820 is excluded in favor of shared function logic 2816 within graphics core array 2814. [0413] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments.
Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment portions or all of inference and/or training logic 615 may be incorporated into graphics processor 2810. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2812, graphics core(s) 2815A, shared function logic 2816, graphics core(s) 2815B, shared function logic 2820, or other logic in FIG.28. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS.6A or 6B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2810 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. [0414] In at least one embodiment, at least one component shown or described with respect to FIG.28 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics processing engine 2810 of FIG.28. 
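The rewrite described in paragraph [0414] above — identifying a convolution over a first activation tensor, constructing a second activation tensor from its convolved modes, and generating the feature map as a tensor contraction of that tensor with the filter tensor — can be illustrated with NumPy. This im2col-style patch construction is one common way to realize such a contraction and is offered as a sketch, not as the specific hardware implementation:

```python
import numpy as np

def conv_as_contraction(activation, filt):
    """Compute a valid 2D convolution (cross-correlation, as in deep
    learning) by constructing a second activation tensor whose extra
    modes index filter offsets, then contracting it with the filter."""
    H, W = activation.shape
    kH, kW = filt.shape
    oH, oW = H - kH + 1, W - kW + 1
    # Second activation tensor: patches[i, j, a, b] = activation[i + a, j + b]
    patches = np.empty((oH, oW, kH, kW))
    for a in range(kH):
        for b in range(kW):
            patches[:, :, a, b] = activation[a:a + oH, b:b + oW]
    # Feature map as a tensor contraction over the convolved modes (a, b).
    return np.einsum("ijab,ab->ij", patches, filt)

x = np.arange(16.0).reshape(4, 4)
k = np.ones((2, 2))
fmap = conv_as_contraction(x, k)
# Cross-check against a direct sliding-window computation.
direct = np.array([[x[i:i + 2, j:j + 2].sum() for j in range(3)]
                   for i in range(3)])
assert np.allclose(fmap, direct)
```

Once the convolved modes are exposed as explicit tensor axes, the contraction itself is a dense multiply-accumulate, which is the kind of operation the ALUs and matrix-multiplication logic described in this section accelerate.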
[0415] FIG.29 is a block diagram of hardware logic of a graphics processor core 2900, according to at least one embodiment described herein. In at least one embodiment, graphics processor core 2900 is included within a graphics core array. In at least one embodiment, graphics processor core 2900, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core 2900 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. In at least one embodiment, each graphics core 2900 can include a fixed function block 2930 coupled with multiple sub-cores 2901A-2901F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. [0416] In at least one embodiment, fixed function block 2930 includes a geometry/fixed function pipeline 2936 that can be shared by all sub-cores in graphics processor 2900, for example, in lower performance and/or lower power graphics processor implementations. In at least one embodiment, geometry/fixed function pipeline 2936 includes a 3D fixed function pipeline, a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers. [0417] In at least one embodiment fixed function block 2930 also includes a graphics SoC interface 2937, a graphics microcontroller 2938, and a media pipeline 2939. Graphics SoC interface 2937 provides an interface between graphics core 2900 and other processor cores within a system on a chip integrated circuit. In at least one embodiment, graphics microcontroller 2938 is a programmable sub-processor that is configurable to manage various functions of graphics processor 2900, including thread dispatch, scheduling, and pre-emption. 
In at least one embodiment, media pipeline 2939 includes logic to facilitate decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. In at least one embodiment, media pipeline 2939 implements media operations via requests to compute or sampling logic within sub-cores 2901A-2901F. [0418] In at least one embodiment, SoC interface 2937 enables graphics core 2900 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM. In at least one embodiment, SoC interface 2937 can also enable communication with fixed function devices within an SoC, such as camera imaging pipelines, and enables use of and/or implements global memory atomics that may be shared between graphics core 2900 and CPUs within an SoC. In at least one embodiment, SoC interface 2937 can also implement power management controls for graphics core 2900 and enable an interface between a clock domain of graphics core 2900 and other clock domains within an SoC. In at least one embodiment, SoC interface 2937 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions can be dispatched to media pipeline 2939, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 2936, geometry and fixed function pipeline 2914) when graphics processing operations are to be performed. [0419] In at least one embodiment, graphics microcontroller 2938 can be configured to perform various scheduling and management tasks for graphics core 2900.
In at least one embodiment, graphics microcontroller 2938 can perform graphics and/or compute workload scheduling on various graphics parallel engines within execution unit (EU) arrays 2902A-2902F, 2904A-2904F within sub-cores 2901A-2901F. In at least one embodiment, host software executing on a CPU core of an SoC including graphics core 2900 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on an appropriate graphics engine. In at least one embodiment, scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In at least one embodiment, graphics microcontroller 2938 can also facilitate low-power or idle states for graphics core 2900, providing graphics core 2900 with an ability to save and restore registers within graphics core 2900 across low-power state transitions independently from an operating system and/or graphics driver software on a system. [0420] In at least one embodiment, graphics core 2900 may have more or fewer than the illustrated sub-cores 2901A-2901F, up to N modular sub-cores. For each set of N sub-cores, in at least one embodiment, graphics core 2900 can also include shared function logic 2910, shared and/or cache memory 2912, a geometry/fixed function pipeline 2914, as well as additional fixed function logic 2916 to accelerate various graphics and compute processing operations. In at least one embodiment, shared function logic 2910 can include logic units (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of N sub-cores within graphics core 2900. In at least one embodiment, shared and/or cache memory 2912 can be a last-level cache for N sub-cores 2901A-2901F within graphics core 2900 and can also serve as shared memory that is accessible by multiple sub-cores.
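The doorbell-driven scheduling loop described in paragraph [0419] above — host software rings a doorbell, the microcontroller decides which workload runs next, submits it to an engine, and notifies the host on completion — can be sketched as follows. All class and engine names are illustrative, not taken from the described hardware:

```python
from collections import deque

# Sketch of doorbell-driven scheduling: the host enqueues a workload and
# rings an engine's doorbell; the scheduler picks the next workload,
# runs it (standing in for submission to a command streamer), and
# records a completion notification for host software.
class DoorbellScheduler:
    def __init__(self, engines):
        self.doorbells = {e: deque() for e in engines}
        self.completed = []

    def ring(self, engine, workload):
        """Host side: submit a workload and ring the engine's doorbell."""
        self.doorbells[engine].append(workload)
        self.schedule(engine)

    def schedule(self, engine):
        """Scheduler side: decide what runs next and dispatch it."""
        queue = self.doorbells[engine]
        while queue:
            workload = queue.popleft()               # which workload runs next
            result = workload()                      # submit to engine (stubbed)
            self.completed.append((engine, result))  # notify host software

sched = DoorbellScheduler(engines=["render", "compute"])
sched.ring("compute", lambda: "matmul done")
sched.ring("render", lambda: "frame done")
print(sched.completed)
# [('compute', 'matmul done'), ('render', 'frame done')]
```

The hardware description also covers pre-emption and progress monitoring, which this sketch omits; it shows only the submit/dispatch/notify ordering.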
In at least one embodiment, geometry/fixed function pipeline 2914 can be included instead of geometry/fixed function pipeline 2936 within fixed function block 2930 and can include same or similar logic units. [0421] In at least one embodiment, graphics core 2900 includes additional fixed function logic 2916 that can include various fixed function acceleration logic for use by graphics core 2900. In at least one embodiment, additional fixed function logic 2916 includes an additional geometry pipeline for use in position-only shading. In at least one embodiment, in position-only shading, two geometry pipelines exist: a full geometry pipeline within geometry/fixed function pipeline 2916, 2936, and a cull pipeline, which is an additional geometry pipeline that may be included within additional fixed function logic 2916. In at least one embodiment, cull pipeline is a trimmed down version of a full geometry pipeline. In at least one embodiment, a full pipeline and a cull pipeline can execute different instances of an application, each instance having a separate context. In at least one embodiment, position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, in at least one embodiment, cull pipeline logic within additional fixed function logic 2916 can execute position shaders in parallel with a main application and generally generates critical results faster than a full pipeline, as cull pipeline fetches and shades only position attributes of vertices, without performing rasterization and rendering of pixels to a frame buffer. In at least one embodiment, cull pipeline can use generated critical results to compute visibility information for all triangles without regard to whether those triangles are culled.
In at least one embodiment, a full pipeline (which in this instance may be referred to as a replay pipeline) can consume visibility information to skip culled triangles to shade only visible triangles that are finally passed to a rasterization phase. [0422] In at least one embodiment, additional fixed function logic 2916 can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing. [0423] In at least one embodiment, each graphics sub-core 2901A-2901F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. In at least one embodiment, graphics sub-cores 2901A-2901F include multiple EU arrays 2902A-2902F, 2904A-2904F, thread dispatch and inter-thread communication (TD/IC) logic 2903A-2903F, a 3D (e.g., texture) sampler 2905A-2905F, a media sampler 2906A-2906F, a shader processor 2907A-2907F, and shared local memory (SLM) 2908A-2908F. In at least one embodiment, EU arrays 2902A-2902F, 2904A-2904F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. In at least one embodiment, TD/IC logic 2903A-2903F performs local thread dispatch and thread control operations for execution units within a sub-core and facilitates communication between threads executing on execution units of a sub-core. In at least one embodiment, 3D sampler 2905A-2905F can read texture or other 3D graphics related data into memory. In at least one embodiment, a 3D sampler can read texture data differently based on a configured sample state and texture format associated with a given texture.
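The cull/replay split described in the preceding paragraphs can be modeled with a toy sketch: a cull pass that fetches and shades only positions to produce per-triangle visibility information, and a replay pass that consumes that visibility to shade only visible triangles. The triangle records, the `max_z` field, and the visibility test are invented stand-ins for a real z-test, not the actual pipeline interface:

```python
# Toy model of position-only shading: a cull pipeline computes visibility,
# a replay pipeline shades only the triangles that survived.
def cull_pipeline(triangles):
    """Shade only position attributes; record per-triangle visibility."""
    return [tri["max_z"] >= 0.0 for tri in triangles]  # toy visibility test

def replay_pipeline(triangles, visibility):
    """Consume visibility info and shade only visible triangles."""
    return [tri["id"] for tri, vis in zip(triangles, visibility) if vis]

tris = [{"id": 0, "max_z": 0.5},
        {"id": 1, "max_z": -1.0},   # a discarded triangle
        {"id": 2, "max_z": 0.1}]
vis = cull_pipeline(tris)
assert replay_pipeline(tris, vis) == [0, 2]  # culled triangle 1 is skipped
```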
In at least one embodiment, media sampler 2906A-2906F can perform similar read operations based on a type and format associated with media data. In at least one embodiment, each graphics sub-core 2901A-2901F can alternately include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each of sub-cores 2901A-2901F can make use of shared local memory 2908A-2908F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory. [0424] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, portions or all of inference and/or training logic 615 may be incorporated into graphics core 2900. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2910, graphics microcontroller 2938, geometry and fixed function pipeline 2914 and 2936, or other logic in FIG.29. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS.6A or 6B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2900 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. [0425] In at least one embodiment, at least one component shown or described with respect to FIG.29 is utilized to implement techniques described in connection with FIGS.1-5.
In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in graphics processor core 2900 of FIG.29. [0426] FIGS.30A-30B illustrate thread execution logic 3000 including an array of processing elements of a graphics processor core according to at least one embodiment. FIG.30A illustrates at least one embodiment, in which thread execution logic 3000 is used. FIG.30B illustrates exemplary internal details of an execution unit, according to at least one embodiment. [0427] As illustrated in FIG.30A, in at least one embodiment, thread execution logic 3000 includes a shader processor 3002, a thread dispatcher 3004, instruction cache 3006, a scalable execution unit array including a plurality of execution units 3008A-3008N, a sampler 3010, a data cache 3012, and a data port 3014. In at least one embodiment, a scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 3008A, 3008B, 3008C, 3008D, through 3008N-1 and 3008N) based on computational requirements of a workload, for example. In at least one embodiment, scalable execution units are interconnected via an interconnect fabric that links to each of execution units.
In at least one embodiment, thread execution logic 3000 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 3006, data port 3014, sampler 3010, and execution units 3008A-3008N. In at least one embodiment, each execution unit (e.g., 3008A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In at least one embodiment, array of execution units 3008A-3008N is scalable to include any number of individual execution units. [0428] In at least one embodiment, execution units 3008A-3008N are primarily used to execute shader programs. In at least one embodiment, shader processor 3002 can process various shader programs and dispatch execution threads associated with shader programs via a thread dispatcher 3004. In at least one embodiment, thread dispatcher 3004 includes logic to arbitrate thread initiation requests from graphics and media pipelines and instantiate requested threads on one or more execution units in execution units 3008A-3008N. For example, in at least one embodiment, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to thread execution logic for processing. In at least one embodiment, thread dispatcher 3004 can also process runtime thread spawning requests from executing shader programs. [0429] In at least one embodiment, execution units 3008A-3008N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation.
In at least one embodiment, execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). In at least one embodiment, each of execution units 3008A-3008N, which include one or more arithmetic logic units (ALUs), is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment despite higher latency memory accesses. In at least one embodiment, each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. In at least one embodiment, execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. In at least one embodiment, while waiting for data from memory or one of shared functions, dependency logic within execution units 3008A-3008N causes a waiting thread to sleep until requested data has been returned. In at least one embodiment, while a waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, in at least one embodiment, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader. [0430] In at least one embodiment, each execution unit in execution units 3008A-3008N operates on arrays of data elements. In at least one embodiment, a number of data elements is "execution size," or number of channels for an instruction. In at least one embodiment, an execution channel is a logical unit of execution for data element access, masking, and flow control within instructions.
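The dependency logic described above (a thread waiting on a long-latency memory access sleeps while the unit issues work from other ready threads) can be modeled as a toy issue loop. The thread records, the `load` opcode, and the fixed `memory_latency` are illustrative assumptions, not an actual instruction set:

```python
# Toy model of per-thread dependency logic: a thread that issues a load
# sleeps for memory_latency cycles while other ready threads execute.
def run(threads, memory_latency=2):
    timeline = []
    wait = {}                         # thread id -> cycles until data returns
    while any(t["work"] for t in threads):
        for tid in list(wait):
            wait[tid] -= 1
            if wait[tid] == 0:
                del wait[tid]         # requested data has been returned
        ready = [t for t in threads if t["work"] and t["id"] not in wait]
        if ready:
            t = ready[0]
            op = t["work"].pop(0)
            timeline.append((t["id"], op))
            if op == "load":          # long-latency access: thread sleeps
                wait[t["id"]] = memory_latency
        else:
            timeline.append((None, "stall"))
    return timeline

threads = [{"id": 0, "work": ["load", "add"]},
           {"id": 1, "work": ["mul"]}]
timeline = run(threads)
# thread 1 issues while thread 0 sleeps on its load; no stall cycles occur
assert timeline == [(0, "load"), (1, "mul"), (0, "add")]
```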
In at least one embodiment, a number of channels may be independent of a number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In at least one embodiment, execution units 3008A-3008N support integer and floating-point data types. [0431] In at least one embodiment, an execution unit instruction set includes SIMD instructions. In at least one embodiment, various data elements can be stored as a packed data type in a register and execution unit will process various elements based on data size of elements. For example, in at least one embodiment, when operating on a 256-bit wide vector, 256 bits of a vector are stored in a register and an execution unit operates on a vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, in at least one embodiment, different vector widths and register sizes are possible. [0432] In at least one embodiment, one or more execution units can be combined into a fused execution unit 3009A-3009N having thread control logic (3007A-3007N) that is common to fused EUs. In at least one embodiment, multiple EUs can be fused into an EU group. In at least one embodiment, each EU in a fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to various embodiments. In at least one embodiment, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. In at least one embodiment, each fused graphics execution unit 3009A-3009N includes at least two execution units.
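The 256-bit packed-register example above can be demonstrated directly: the same 32 bytes of register contents can be viewed as four QW, eight DW, sixteen W, or thirty-two B elements. The byte pattern is arbitrary; only the element counts follow the text:

```python
# The same 256-bit (32-byte) register contents, reinterpreted at four
# different packed element widths (little-endian, as in struct's "<").
import struct

register = bytes(range(32))          # 256-bit register contents (32 bytes)

qw = struct.unpack("<4Q", register)  # four 64-bit Quad-Word (QW) elements
dw = struct.unpack("<8I", register)  # eight 32-bit Double-Word (DW) elements
w  = struct.unpack("<16H", register) # sixteen 16-bit Word (W) elements
b  = struct.unpack("<32B", register) # thirty-two 8-bit byte (B) elements

assert len(qw) == 4 and len(dw) == 8 and len(w) == 16 and len(b) == 32
```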
For example, in at least one embodiment, fused execution unit 3009A includes a first EU 3008A, second EU 3008B, and thread control logic 3007A that is common to first EU 3008A and second EU 3008B. In at least one embodiment, thread control logic 3007A controls threads executed on fused graphics execution unit 3009A, allowing each EU within fused execution units 3009A-3009N to execute using a common instruction pointer register. [0433] In at least one embodiment, one or more internal instruction caches (e.g., 3006) are included in thread execution logic 3000 to cache thread instructions for execution units. In at least one embodiment, one or more data caches (e.g., 3012) are included to cache thread data during thread execution. In at least one embodiment, a sampler 3010 is included to provide texture sampling for 3D operations and media sampling for media operations. In at least one embodiment, sampler 3010 includes specialized texture or media sampling functionality to process texture or media data during sampling process before providing sampled data to an execution unit. [0434] During execution, in at least one embodiment, graphics and media pipelines send thread initiation requests to thread execution logic 3000 via thread spawning and dispatch logic. In at least one embodiment, once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within shader processor 3002 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In at least one embodiment, a pixel shader or fragment shader calculates values of various vertex attributes that are to be interpolated across a rasterized object. 
In at least one embodiment, pixel processor logic within shader processor 3002 then executes an application programming interface (API)-supplied pixel or fragment shader program. In at least one embodiment, to execute a shader program, shader processor 3002 dispatches threads to an execution unit (e.g., 3008A) via thread dispatcher 3004. In at least one embodiment, shader processor 3002 uses texture sampling logic in sampler 3010 to access texture data in texture maps stored in memory. In at least one embodiment, arithmetic operations on texture data and input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing. [0435] In at least one embodiment, data port 3014 provides a memory access mechanism for thread execution logic 3000 to output processed data to memory for further processing on a graphics processor output pipeline. In at least one embodiment, data port 3014 includes or couples to one or more cache memories (e.g., data cache 3012) to cache data for memory access via a data port. [0436] As illustrated in FIG.30B, in at least one embodiment, a graphics execution unit 3008 can include an instruction fetch unit 3037, a general register file array (GRF) 3024, an architectural register file array (ARF) 3026, a thread arbiter 3022, a send unit 3030, a branch unit 3032, a set of SIMD floating point units (FPUs) 3034, and a set of dedicated integer SIMD ALUs 3035. In at least one embodiment, GRF 3024 and ARF 3026 include a set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in graphics execution unit 3008. In at least one embodiment, per-thread architectural state is maintained in ARF 3026, while data used during thread execution is stored in GRF 3024.
In at least one embodiment, execution state of each thread, including instruction pointers for each thread, can be held in thread-specific registers in ARF 3026. [0437] In at least one embodiment, graphics execution unit 3008 has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). In at least one embodiment, architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. [0438] In at least one embodiment, graphics execution unit 3008 can co-issue multiple instructions, which may each be different instructions. In at least one embodiment, thread arbiter 3022 of graphics execution unit 3008 can dispatch instructions to one of send unit 3030, branch unit 3032, or SIMD FPU(s) 3034 for execution. In at least one embodiment, each execution thread can access 128 general-purpose registers within GRF 3024, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In at least one embodiment, each execution unit thread has access to 4 Kbytes within GRF 3024, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In at least one embodiment, up to seven threads can execute simultaneously, although a number of threads per execution unit can also vary according to embodiments. In at least one embodiment, in which seven threads may access 4 Kbytes, GRF 3024 can store a total of 28 Kbytes. In at least one embodiment, flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.
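The register-file arithmetic in the paragraph above can be checked as a short worked example, using only the numbers stated in the text (128 registers of 32 bytes per thread, seven resident threads):

```python
# Register-file arithmetic from the paragraph above: 128 registers of
# 32 bytes give each thread 4 Kbytes, and seven threads total 28 Kbytes.
REGISTERS_PER_THREAD = 128
BYTES_PER_REGISTER = 32
THREADS = 7

per_thread = REGISTERS_PER_THREAD * BYTES_PER_REGISTER  # bytes per thread
total = per_thread * THREADS                            # bytes in GRF

assert per_thread == 4 * 1024    # 4 Kbytes per execution thread
assert total == 28 * 1024        # 28 Kbytes total in GRF 3024
# Each 32-byte register is also addressable as a SIMD 8-element vector of
# 32-bit data elements: 8 elements * 4 bytes == 32 bytes.
assert 8 * 4 == BYTES_PER_REGISTER
```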
[0439] In at least one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by message passing send unit 3030. In at least one embodiment, branch instructions are dispatched to a dedicated branch unit 3032 to facilitate SIMD divergence and eventual convergence. [0440] In at least one embodiment, graphics execution unit 3008 includes one or more SIMD floating point units (FPU(s)) 3034 to perform floating-point operations. In at least one embodiment, FPU(s) 3034 also support integer computation. In at least one embodiment, FPU(s) 3034 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In at least one embodiment, at least one of FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In at least one embodiment, a set of 8-bit integer SIMD ALUs 3035 are also present, and may be specifically optimized to perform operations associated with machine learning computations. [0441] In at least one embodiment, arrays of multiple instances of graphics execution unit 3008 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). In at least one embodiment, execution unit 3008 can execute instructions across a plurality of execution channels. In at least one embodiment, each thread executed on graphics execution unit 3008 is executed on a different channel. [0442] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, portions or all of inference and/or training logic 615 may be incorporated into execution logic 3000.
Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIGS.6A or 6B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of execution logic 3000 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. [0443] In at least one embodiment, at least one component shown or described with respect to FIG.30A and/or FIG.30B is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in thread execution logic 3000 of FIG.30A and/or graphics execution unit 3008 of FIG.30B. [0444] FIG.31 illustrates a parallel processing unit (“PPU”) 3100, according to at least one embodiment. In at least one embodiment, PPU 3100 is configured with machine- readable code that, if executed by PPU 3100, causes PPU 3100 to perform some or all of processes and techniques described throughout this disclosure. 
In at least one embodiment, PPU 3100 is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU 3100. In at least one embodiment, PPU 3100 is a graphics processing unit (“GPU”) configured to implement a graphics rendering pipeline for processing three-dimensional (“3D”) graphics data in order to generate two-dimensional (“2D”) image data for display on a display device such as a liquid crystal display (“LCD”) device. In at least one embodiment, PPU 3100 is utilized to perform computations such as linear algebra operations and machine-learning operations. FIG.31 illustrates an example parallel processor for illustrative purposes only and should be construed as a non-limiting example of processor architectures contemplated within scope of this disclosure, such that any suitable processor may be employed to supplement and/or substitute for same. [0445] In at least one embodiment, one or more PPUs 3100 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications.
In at least one embodiment, PPU 3100 is configured to accelerate deep learning systems and applications, including following non-limiting examples: autonomous vehicle platforms, deep learning, high-accuracy speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, personalized user recommendations, and more. [0446] In at least one embodiment, PPU 3100 includes, without limitation, an Input/Output (“I/O”) unit 3106, a front-end unit 3110, a scheduler unit 3112, a work distribution unit 3114, a hub 3116, a crossbar (“Xbar”) 3120, one or more general processing clusters (“GPCs”) 3118, and one or more partition units (“memory partition units”) 3122. In at least one embodiment, PPU 3100 is connected to a host processor or other PPUs 3100 via one or more high-speed GPU interconnects (“GPU interconnects”) 3108. In at least one embodiment, PPU 3100 is connected to a host processor or other peripheral devices via a system bus 3102. In at least one embodiment, PPU 3100 is connected to a local memory comprising one or more memory devices (“memory”) 3104. In at least one embodiment, memory devices 3104 include, without limitation, one or more dynamic random access memory (“DRAM”) devices. In at least one embodiment, one or more DRAM devices are configured and/or configurable as high-bandwidth memory (“HBM”) subsystems, with multiple DRAM dies stacked within each device. [0447] In at least one embodiment, high-speed GPU interconnect 3108 may refer to a wire-based multi-lane communications link that is used by systems to scale and include one or more PPUs 3100 combined with one or more central processing units (“CPUs”), and supports cache coherence between PPUs 3100 and CPUs, and CPU mastering.
In at least one embodiment, data and/or commands are transmitted by high-speed GPU interconnect 3108 through hub 3116 to/from other units of PPU 3100 such as one or more copy engines, video encoders, video decoders, power management units, and other components which may not be explicitly illustrated in FIG.31. [0448] In at least one embodiment, I/O unit 3106 is configured to transmit and receive communications (e.g., commands, data) from a host processor (not illustrated in FIG.31) over system bus 3102. In at least one embodiment, I/O unit 3106 communicates with host processor directly via system bus 3102 or through one or more intermediate devices such as a memory bridge. In at least one embodiment, I/O unit 3106 may communicate with one or more other processors, such as one or more of PPUs 3100 via system bus 3102. In at least one embodiment, I/O unit 3106 implements a Peripheral Component Interconnect Express (“PCIe”) interface for communications over a PCIe bus. In at least one embodiment, I/O unit 3106 implements interfaces for communicating with external devices. [0449] In at least one embodiment, I/O unit 3106 decodes packets received via system bus 3102. In at least one embodiment, at least some packets represent commands configured to cause PPU 3100 to perform various operations. In at least one embodiment, I/O unit 3106 transmits decoded commands to various other units of PPU 3100 as specified by commands. In at least one embodiment, commands are transmitted to front-end unit 3110 and/or transmitted to hub 3116 or other units of PPU 3100 such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly illustrated in FIG.31). In at least one embodiment, I/O unit 3106 is configured to route communications between and among various logical units of PPU 3100. 
[0450] In at least one embodiment, a program executed by host processor encodes a command stream in a buffer that provides workloads to PPU 3100 for processing. In at least one embodiment, a workload comprises instructions and data to be processed by those instructions. In at least one embodiment, buffer is a region in a memory that is accessible (e.g., read/write) by both host processor and PPU 3100 — a host interface unit may be configured to access buffer in a system memory connected to system bus 3102 via memory requests transmitted over system bus 3102 by I/O unit 3106. In at least one embodiment, host processor writes command stream to buffer and then transmits a pointer to start of command stream to PPU 3100 such that front-end unit 3110 receives pointers to one or more command streams and manages one or more command streams, reading commands from command streams and forwarding commands to various units of PPU 3100. [0451] In at least one embodiment, front-end unit 3110 is coupled to scheduler unit 3112 that configures various GPCs 3118 to process tasks defined by one or more command streams. In at least one embodiment, scheduler unit 3112 is configured to track state information related to various tasks managed by scheduler unit 3112 where state information may indicate which of GPCs 3118 a task is assigned to, whether task is active or inactive, a priority level associated with task, and so forth. In at least one embodiment, scheduler unit 3112 manages execution of a plurality of tasks on one or more of GPCs 3118. [0452] In at least one embodiment, scheduler unit 3112 is coupled to work distribution unit 3114 that is configured to dispatch tasks for execution on GPCs 3118. In at least one embodiment, work distribution unit 3114 tracks a number of scheduled tasks received from scheduler unit 3112 and work distribution unit 3114 manages a pending task pool and an active task pool for each of GPCs 3118. 
In at least one embodiment, pending task pool comprises a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC 3118; active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by GPCs 3118 such that as one of GPCs 3118 completes execution of a task, that task is evicted from active task pool for GPC 3118 and one of other tasks from pending task pool is selected and scheduled for execution on GPC 3118. In at least one embodiment, if an active task is idle on GPC 3118, such as while waiting for a data dependency to be resolved, then active task is evicted from GPC 3118 and returned to pending task pool while another task in pending task pool is selected and scheduled for execution on GPC 3118. [0453] In at least one embodiment, work distribution unit 3114 communicates with one or more GPCs 3118 via XBar 3120. In at least one embodiment, XBar 3120 is an interconnect network that couples many of units of PPU 3100 to other units of PPU 3100 and can be configured to couple work distribution unit 3114 to a particular GPC 3118. In at least one embodiment, one or more other units of PPU 3100 may also be connected to XBar 3120 via hub 3116. [0454] In at least one embodiment, tasks are managed by scheduler unit 3112 and dispatched to one of GPCs 3118 by work distribution unit 3114. In at least one embodiment, GPC 3118 is configured to process task and generate results. In at least one embodiment, results may be consumed by other tasks within GPC 3118, routed to a different GPC 3118 via XBar 3120, or stored in memory 3104. In at least one embodiment, results can be written to memory 3104 via partition units 3122, which implement a memory interface for reading and writing data to/from memory 3104. In at least one embodiment, results can be transmitted to another PPU 3100 or CPU via high-speed GPU interconnect 3108.
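A minimal sketch of the pending/active task-pool behavior described above, using the example slot counts from the text (32 pending, 4 active). The helper names and list-based pools are illustrative assumptions, not the work distribution unit's actual interface:

```python
# Toy model of the pending/active task pools: completing or idle tasks are
# evicted from the active pool and replaced from the pending pool.
from collections import deque

PENDING_SLOTS, ACTIVE_SLOTS = 32, 4    # example slot counts from the text

pending = deque(f"task{i}" for i in range(6))
active = []

def fill_active():
    while pending and len(active) < ACTIVE_SLOTS:
        active.append(pending.popleft())   # schedule from pending pool

def complete(task):
    active.remove(task)                    # evict completed task
    fill_active()                          # select another pending task

def go_idle(task):                         # e.g., waiting on a data dependency
    active.remove(task)
    pending.append(task)                   # returned to pending pool
    fill_active()

fill_active()
assert active == ["task0", "task1", "task2", "task3"]
complete("task1")
assert active == ["task0", "task2", "task3", "task4"]
go_idle("task0")
assert "task0" in pending and len(active) == ACTIVE_SLOTS
```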
In at least one embodiment, PPU 3100 includes, without limitation, a number U of partition units 3122 that is equal to number of separate and distinct memory devices 3104 coupled to PPU 3100. In at least one embodiment, partition unit 3122 will be described in more detail herein in conjunction with FIG.33. [0455] In at least one embodiment, a host processor executes a driver kernel that implements an application programming interface (“API”) that enables one or more applications executing on host processor to schedule operations for execution on PPU 3100. In at least one embodiment, multiple compute applications are simultaneously executed by PPU 3100 and PPU 3100 provides isolation, quality of service (“QoS”), and independent address spaces for multiple compute applications. In at least one embodiment, an application generates instructions (e.g., in form of API calls) that cause driver kernel to generate one or more tasks for execution by PPU 3100 and driver kernel outputs tasks to one or more streams being processed by PPU 3100. In at least one embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In at least one embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In at least one embodiment, cooperating threads can refer to a plurality of threads including instructions to perform task and that exchange data through shared memory. In at least one embodiment, threads and cooperating threads are described in more detail, in accordance with at least one embodiment, in conjunction with FIG.33. [0456] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. 
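The warp decomposition described above (each task comprises one or more groups of related threads, and a warp comprises a plurality of related threads, e.g., 32, that can be executed in parallel) can be sketched as follows; `to_warps` is an illustrative helper, not an actual API:

```python
# Splitting a task's threads into warps of up to 32 related threads,
# following the example warp size given in the text.
WARP_SIZE = 32

def to_warps(thread_count):
    """Group a task's thread ids into warps of up to WARP_SIZE threads."""
    warps = []
    for start in range(0, thread_count, WARP_SIZE):
        warps.append(list(range(start, min(start + WARP_SIZE, thread_count))))
    return warps

warps = to_warps(70)            # a task with 70 threads
assert len(warps) == 3          # two full warps and one partial warp
assert len(warps[0]) == 32 and len(warps[2]) == 6
```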
In at least one embodiment, PPU 3100 is used to train a machine learning model, such as a neural network, to predict or infer information provided to PPU 3100. In at least one embodiment, PPU 3100 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by PPU 3100. In at least one embodiment, PPU 3100 may be used to perform one or more neural network use cases described herein. [0457] In at least one embodiment, at least one component shown or described with respect to FIG.31 is utilized to implement techniques described in connection with FIGS.1-5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in parallel processing unit 3100 of FIG.31. [0458] FIG.32 illustrates a general processing cluster (“GPC”) 3200, according to at least one embodiment. In at least one embodiment, GPC 3200 is GPC 3118 of FIG.31. In at least one embodiment, each GPC 3200 includes, without limitation, a number of hardware units for processing tasks and each GPC 3200 includes, without limitation, a pipeline manager 3202, a pre-raster operations unit (“PROP”) 3204, a raster engine 3208, a work distribution crossbar (“WDX”) 3216, a memory management unit (“MMU”) 3218, one or more Data Processing Clusters (“DPCs”) 3206, and any suitable combination of parts.
[0459] In at least one embodiment, operation of GPC 3200 is controlled by pipeline manager 3202. In at least one embodiment, pipeline manager 3202 manages configuration of one or more DPCs 3206 for processing tasks allocated to GPC 3200. In at least one embodiment, pipeline manager 3202 configures at least one of one or more DPCs 3206 to implement at least a portion of a graphics rendering pipeline. In at least one embodiment, DPC 3206 is configured to execute a vertex shader program on a programmable streaming multi-processor (“SM”) 3214. In at least one embodiment, pipeline manager 3202 is configured to route packets received from a work distribution unit to appropriate logical units within GPC 3200, and some packets may be routed to fixed function hardware units in PROP 3204 and/or raster engine 3208 while other packets may be routed to DPCs 3206 for processing by a primitive engine 3212 or SM 3214. In at least one embodiment, pipeline manager 3202 configures at least one of DPCs 3206 to implement a neural network model and/or a computing pipeline. [0460] In at least one embodiment, PROP unit 3204 is configured to route data generated by raster engine 3208 and DPCs 3206 to a Raster Operations (“ROP”) unit in partition unit 3122, described in more detail above in conjunction with FIG.31. In at least one embodiment, PROP unit 3204 is configured to perform optimizations for color blending, organize pixel data, perform address translations, and more. In at least one embodiment, raster engine 3208 includes, without limitation, a number of fixed function hardware units configured to perform various raster operations; raster engine 3208 includes, without limitation, a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof.
In at least one embodiment, setup engine receives transformed vertices and generates plane equations associated with geometric primitive defined by vertices; plane equations are transmitted to coarse raster engine to generate coverage information (e.g., an x, y coverage mask for a tile) for primitive; output of coarse raster engine is transmitted to culling engine where fragments associated with primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. In at least one embodiment, fragments that survive clipping and culling are passed to fine raster engine to generate attributes for pixel fragments based on plane equations generated by setup engine. In at least one embodiment, output of raster engine 3208 comprises fragments to be processed by any suitable entity such as by a fragment shader implemented within DPC 3206. [0461] In at least one embodiment, each DPC 3206 included in GPC 3200 comprises, without limitation, an M-Pipe Controller (“MPC”) 3210; primitive engine 3212; one or more SMs 3214; and any suitable combination thereof. In at least one embodiment, MPC 3210 controls operation of DPC 3206, routing packets received from pipeline manager 3202 to appropriate units in DPC 3206. In at least one embodiment, packets associated with a vertex are routed to primitive engine 3212, which is configured to fetch vertex attributes associated with vertex from memory; in contrast, packets associated with a shader program may be transmitted to SM 3214. [0462] In at least one embodiment, SM 3214 comprises, without limitation, a programmable streaming processor that is configured to process tasks represented by a number of threads.
In at least one embodiment, SM 3214 is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently and implements a Single-Instruction, Multiple-Data (“SIMD”) architecture where each thread in a group of threads (e.g., a warp) is configured to process a different set of data based on same set of instructions. In at least one embodiment, all threads in group of threads execute same instructions. In at least one embodiment, SM 3214 implements a Single-Instruction, Multiple Thread (“SIMT”) architecture wherein each thread in a group of threads is configured to process a different set of data based on same set of instructions, but where individual threads in group of threads are allowed to diverge during execution. In at least one embodiment, a program counter, call stack, and execution state are maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within warp diverge. In another embodiment, a program counter, call stack, and execution state are maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. In at least one embodiment, execution state is maintained for each individual thread and threads executing same instructions may be converged and executed in parallel for better efficiency. At least one embodiment of SM 3214 is described in more detail herein. [0463] In at least one embodiment, MMU 3218 provides an interface between GPC 3200 and memory partition unit (e.g., partition unit 3122 of FIG.31) and MMU 3218 provides translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In at least one embodiment, MMU 3218 provides one or more translation lookaside buffers (“TLBs”) for performing translation of virtual addresses into physical addresses in memory.
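The virtual-to-physical translation with TLB caching described for MMU 3218 in paragraph [0463] above can be sketched as follows. This is a minimal single-level model under stated assumptions: the 4 KB page size, the dictionary-backed page table, and the class/method names are all illustrative, not details of the PPU design.

```python
# Minimal sketch of MMU-style address translation with a TLB (paragraph [0463]).
# Page size, table structure, and names are assumptions for illustration only.
PAGE_SIZE = 4096  # assumed page size

class SimpleMMU:
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page number -> physical frame
        self.tlb = {}                 # cached translations (the TLB)

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.tlb:           # TLB miss: consult page table
            self.tlb[vpn] = self.page_table[vpn]
        return self.tlb[vpn] * PAGE_SIZE + offset  # TLB hit path

mmu = SimpleMMU({0: 7, 1: 3})
assert mmu.translate(0x10) == 7 * PAGE_SIZE + 0x10
assert mmu.translate(PAGE_SIZE + 4) == 3 * PAGE_SIZE + 4
assert 0 in mmu.tlb and 1 in mmu.tlb      # both translations now cached
```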
[0464] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to GPC 3200. In at least one embodiment, GPC 3200 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by GPC 3200. In at least one embodiment, GPC 3200 may be used to perform one or more neural network use cases described herein. [0465] In at least one embodiment, at least one component shown or described with respect to FIG.32 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in general processing cluster 3200 of FIG.32. [0466] FIG.33 illustrates a memory partition unit 3300 of a parallel processing unit (“PPU”), in accordance with at least one embodiment. In at least one embodiment, memory partition unit 3300 includes, without limitation, a Raster Operations (“ROP”) unit 3302; a level two (“L2”) cache 3304; a memory interface 3306; and any suitable combination thereof. 
In at least one embodiment, memory interface 3306 is coupled to memory. In at least one embodiment, memory interface 3306 may implement 32, 64, 128, 1024-bit data buses, or like, for high-speed data transfer. In at least one embodiment, PPU incorporates U memory interfaces 3306, one memory interface 3306 per pair of partition units 3300, where each pair of partition units 3300 is connected to a corresponding memory device. For example, in at least one embodiment, PPU may be connected to up to Y memory devices, such as high bandwidth memory stacks or graphics double-data-rate, version 5, synchronous dynamic random access memory (“GDDR5 SDRAM”). [0467] In at least one embodiment, memory interface 3306 implements a high bandwidth memory second generation (“HBM2”) memory interface and Y equals half U. In at least one embodiment, HBM2 memory stacks are located on same physical package as PPU, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems. In at least one embodiment, each HBM2 stack includes, without limitation, four memory dies and Y equals 4, with each HBM2 stack including two 128-bit channels per die for a total of 8 channels and a data bus width of 1024 bits. In at least one embodiment, memory supports Single-Error Correcting Double-Error Detecting (“SECDED”) Error Correction Code (“ECC”) to protect data. In at least one embodiment, ECC provides higher reliability for compute applications that are sensitive to data corruption. [0468] In at least one embodiment, PPU implements a multi-level memory hierarchy. In at least one embodiment, memory partition unit 3300 supports a unified memory to provide a single unified virtual address space for central processing unit (“CPU”) and PPU memory, enabling data sharing between virtual memory systems. In at least one embodiment, frequency of accesses by a PPU to memory located on other processors is traced to ensure that memory pages are moved to physical memory of PPU that is accessing pages more frequently.
In at least one embodiment, high-speed GPU interconnect 3108 supports address translation services allowing PPU to directly access a CPU’s page tables and providing full access to CPU memory by PPU. [0469] In at least one embodiment, copy engines transfer data between multiple PPUs or between PPUs and CPUs. In at least one embodiment, copy engines can generate page faults for addresses that are not mapped into page tables and memory partition unit 3300 then services page faults, mapping addresses into page table, after which copy engine performs transfer. In at least one embodiment, memory is pinned (i.e., non-pageable) for multiple copy engine operations between multiple processors, substantially reducing available memory. In at least one embodiment, with hardware page faulting, addresses can be passed to copy engines without regard as to whether memory pages are resident, and copy process is transparent. [0470] Data from memory 3104 of FIG.31 or other system memory is fetched by memory partition unit 3300 and stored in L2 cache 3304, which is located on-chip and is shared between various GPCs, in accordance with at least one embodiment. Each memory partition unit 3300, in at least one embodiment, includes, without limitation, at least a portion of L2 cache associated with a corresponding memory device. In at least one embodiment, lower level caches are implemented in various units within GPCs. In at least one embodiment, each of SMs 3214 may implement a level one (“L1”) cache wherein L1 cache is private memory that is dedicated to a particular SM 3214 and data from L2 cache 3304 is fetched and stored in each of L1 caches for processing in functional units of SMs 3214. In at least one embodiment, L2 cache 3304 is coupled to memory interface 3306 and XBar 3120. [0471] ROP unit 3302 performs graphics raster operations related to pixel color, such as color compression, pixel blending, and more, in at least one embodiment.
ROP unit 3302, in at least one embodiment, implements depth testing in conjunction with raster engine 3208, receiving a depth for a sample location associated with a pixel fragment from culling engine of raster engine 3208. In at least one embodiment, depth is tested against a corresponding depth in a depth buffer for a sample location associated with fragment. In at least one embodiment, if fragment passes depth test for sample location, then ROP unit 3302 updates depth buffer and transmits a result of depth test to raster engine 3208. It will be appreciated that number of partition units 3300 may be different than number of GPCs and, therefore, each ROP unit 3302 can, in at least one embodiment, be coupled to each of GPCs. In at least one embodiment, ROP unit 3302 tracks packets received from different GPCs and determines which GPC a result generated by ROP unit 3302 is routed to through XBar 3120. [0472] FIG.34 illustrates a streaming multi-processor (“SM”) 3400, according to at least one embodiment. In at least one embodiment, SM 3400 is SM 3214 of FIG.32. In at least one embodiment, SM 3400 includes, without limitation, an instruction cache 3402; one or more scheduler units 3404; a register file 3408; one or more processing cores (“cores”) 3410; one or more special function units (“SFUs”) 3412; one or more load/store units (“LSUs”) 3414; an interconnect network 3416; a shared memory/level one (“L1”) cache 3418; and any suitable combination thereof. In at least one embodiment, a work distribution unit dispatches tasks for execution on general processing clusters (“GPCs”) of parallel processing units (“PPUs”) and each task is allocated to a particular Data Processing Cluster (“DPC”) within a GPC and, if task is associated with a shader program, task is allocated to one of SMs 3400. In at least one embodiment, scheduler unit 3404 receives tasks from work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM 3400.
In at least one embodiment, scheduler unit 3404 schedules thread blocks for execution as warps of parallel threads, wherein each thread block is allocated at least one warp. In at least one embodiment, each warp executes threads. In at least one embodiment, scheduler unit 3404 manages a plurality of different thread blocks, allocating warps to different thread blocks and then dispatching instructions from plurality of different cooperative groups to various functional units (e.g., processing cores 3410, SFUs 3412, and LSUs 3414) during each clock cycle. [0473] In at least one embodiment, Cooperative Groups may refer to a programming model for organizing groups of communicating threads that allows developers to express granularity at which threads are communicating, enabling expression of richer, more efficient parallel decompositions. In at least one embodiment, cooperative launch APIs support synchronization amongst thread blocks for execution of parallel algorithms. In at least one embodiment, applications of conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., syncthreads( ) function). However, in at least one embodiment, programmers may define groups of threads at smaller than thread block granularities and synchronize within defined groups to enable greater performance, design flexibility, and software reuse in form of collective group-wide function interfaces. In at least one embodiment, Cooperative Groups enables programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations such as synchronization on threads in a cooperative group. In at least one embodiment, programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence.
In at least one embodiment, Cooperative Groups primitives enable new patterns of cooperative parallelism, including, without limitation, producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks. [0474] In at least one embodiment, a dispatch unit 3406 is configured to transmit instructions to one or more of functional units and scheduler unit 3404 includes, without limitation, two dispatch units 3406 that enable two different instructions from same warp to be dispatched during each clock cycle. In at least one embodiment, each scheduler unit 3404 includes a single dispatch unit 3406 or additional dispatch units 3406. [0475] In at least one embodiment, each SM 3400 includes, without limitation, register file 3408 that provides a set of registers for functional units of SM 3400. In at least one embodiment, register file 3408 is divided between each of functional units such that each functional unit is allocated a dedicated portion of register file 3408. In at least one embodiment, register file 3408 is divided between different warps being executed by SM 3400 and register file 3408 provides temporary storage for operands connected to data paths of functional units. In at least one embodiment, each SM 3400 comprises, without limitation, a plurality of L processing cores 3410. In at least one embodiment, SM 3400 includes, without limitation, a large number (e.g., 128 or more) of distinct processing cores 3410. In at least one embodiment, each processing core 3410 includes, without limitation, a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes, without limitation, a floating point arithmetic logic unit and an integer arithmetic logic unit. In at least one embodiment, floating point arithmetic logic units implement IEEE 754-2008 standard for floating point arithmetic.
In at least one embodiment, processing cores 3410 include, without limitation, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores. [0476] Tensor cores are configured to perform matrix operations in accordance with at least one embodiment. In at least one embodiment, one or more tensor cores are included in processing cores 3410. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4x4 matrix and performs a matrix multiply and accumulate operation D = A X B + C, where A, B, C, and D are 4x4 matrices. [0477] In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices and accumulation matrices C and D are 16-bit floating point or 32-bit floating point matrices. In at least one embodiment, tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation. In at least one embodiment, 16-bit floating point multiply uses 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with other intermediate products for a 4x4x4 matrix multiply. Tensor cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements, in at least one embodiment. In at least one embodiment, an API, such as CUDA 9 C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. In at least one embodiment, at CUDA level, warp-level interface assumes 16x16 size matrices spanning all 32 threads of warp.
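The tensor-core operation D = A X B + C of paragraph [0476], with 16-bit floating point inputs and 32-bit floating point accumulation per paragraph [0477], can be modeled numerically as a sketch. This NumPy model only mimics the mixed-precision numerics; it is not the hardware datapath, and the function name is an assumption for illustration.

```python
import numpy as np

def mma_4x4(a_fp16, b_fp16, c_fp32):
    # Sketch of D = A x B + C: fp16 inputs A and B, products accumulated
    # in fp32 into matrix C (mixed-precision accumulate, paragraph [0477]).
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

a = np.eye(4, dtype=np.float16)              # 4x4 fp16 input A
b = np.full((4, 4), 2.0, dtype=np.float16)   # 4x4 fp16 input B
c = np.ones((4, 4), dtype=np.float32)        # 4x4 fp32 accumulator C
d = mma_4x4(a, b, c)
assert d.dtype == np.float32                 # result accumulated in fp32
assert np.allclose(d, 3.0)                   # I @ (all 2s) + (all 1s) = all 3s
```

Larger matrix multiplies, as the text notes, are built up by tiling many such 4x4x4 operations and reusing the fp32 accumulator across tiles.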
[0478] In at least one embodiment, each SM 3400 comprises, without limitation, M SFUs 3412 that perform special functions (e.g., attribute evaluation, reciprocal square root, and like). In at least one embodiment, SFUs 3412 include, without limitation, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFUs 3412 include, without limitation, a texture unit configured to perform texture map filtering operations. In at least one embodiment, texture units are configured to load texture maps (e.g., a 2D array of texels) from memory and sample texture maps to produce sampled texture values for use in shader programs executed by SM 3400. In at least one embodiment, texture maps are stored in shared memory/L1 cache 3418. In at least one embodiment, texture units implement texture operations such as filtering operations using mip-maps (e.g., texture maps of varying levels of detail). In at least one embodiment, each SM 3400 includes, without limitation, two texture units. [0479] Each SM 3400 comprises, without limitation, N LSUs 3414 that implement load and store operations between shared memory/L1 cache 3418 and register file 3408, in at least one embodiment. Each SM 3400 includes, without limitation, interconnect network 3416 that connects each of functional units to register file 3408 and LSU 3414 to register file 3408 and shared memory/L1 cache 3418 in at least one embodiment. In at least one embodiment, interconnect network 3416 is a crossbar that can be configured to connect any of functional units to any of registers in register file 3408 and connect LSUs 3414 to register file 3408 and memory locations in shared memory/L1 cache 3418. [0480] In at least one embodiment, shared memory/L1 cache 3418 is an array of on-chip memory that allows for data storage and communication between SM 3400 and primitive engine and between threads in SM 3400.
In at least one embodiment, shared memory/L1 cache 3418 comprises, without limitation, 128KB of storage capacity and is in path from SM 3400 to partition unit. In at least one embodiment, shared memory/L1 cache 3418 is used to cache reads and writes. In at least one embodiment, one or more of shared memory/L1 cache 3418, L2 cache, and memory are backing stores. [0481] Combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses, in at least one embodiment. In at least one embodiment, capacity is used or is usable as a cache by programs that do not use shared memory; for example, if shared memory is configured to use half of capacity, texture and load/store operations can use remaining capacity. In at least one embodiment, integration within shared memory/L1 cache 3418 enables shared memory/L1 cache 3418 to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data. In at least one embodiment, when configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. In at least one embodiment, fixed function graphics processing units are bypassed, creating a much simpler programming model. In general purpose parallel computation configuration, work distribution unit assigns and distributes blocks of threads directly to DPCs, in at least one embodiment. In at least one embodiment, threads in a block execute same program, using a unique thread ID in calculation to ensure each thread generates unique results, using SM 3400 to execute program and perform calculations, shared memory/L1 cache 3418 to communicate between threads, and LSU 3414 to read and write global memory through shared memory/L1 cache 3418 and memory partition unit.
In at least one embodiment, when configured for general purpose parallel computation, SM 3400 writes commands that scheduler unit 3404 can use to launch new work on DPCs. [0482] In at least one embodiment, PPU is included in or coupled to a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (“PDA”), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and more. In at least one embodiment, PPU is embodied on a single semiconductor substrate. In at least one embodiment, PPU is included in a system-on-a-chip (“SoC”) along with one or more other devices such as additional PPUs, memory, a reduced instruction set computer (“RISC”) CPU, a memory management unit (“MMU”), a digital-to-analog converter (“DAC”), and like. [0483] In at least one embodiment, PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In at least one embodiment, PPU may be an integrated graphics processing unit (“iGPU”) included in chipset of motherboard. [0484] Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 615 are provided herein in conjunction with FIGS.6A and/or 6B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to SM 3400. In at least one embodiment, SM 3400 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by SM 3400. In at least one embodiment, SM 3400 may be used to perform one or more neural network use cases described herein.
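The per-thread unique result generation described in paragraph [0481] above (every thread in a block executes the same program, with a unique thread ID feeding the calculation) can be sketched as follows. The grid and block sizes, kernel name, and the squaring operation are assumptions chosen only to make the pattern concrete.

```python
# Sketch of paragraph [0481]: threads in a block execute the same program,
# and a unique (global) thread ID ensures each thread generates unique results.
BLOCK_DIM = 4
NUM_BLOCKS = 2

def kernel(block_idx, thread_idx, out):
    # Same program for every thread; the global ID makes each result unique.
    gid = block_idx * BLOCK_DIM + thread_idx
    out[gid] = gid * gid

out = [0] * (BLOCK_DIM * NUM_BLOCKS)
for b in range(NUM_BLOCKS):          # simulate blocks distributed to DPCs
    for t in range(BLOCK_DIM):       # simulate threads within a block
        kernel(b, t, out)
assert out == [g * g for g in range(8)]
assert len(set(out)) == 8            # every thread produced a distinct result
```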
[0485] In at least one embodiment, at least one component shown or described with respect to FIG.34 is utilized to implement techniques described in connection with FIGS.1- 5. In at least one embodiment, inference and/or training logic 615 are used to identify a first type of operation with a first tensor, construct a second tensor, and perform a second type of operation with second tensor. In at least one embodiment, inference and/or training logic 615 identify a convolution operation with a first activation tensor and a filter tensor that generates a feature map, identify convolved modes of first activation tensor, construct a second activation tensor, and generate feature map using a tensor contraction of second activation tensor and filter tensor. In at least one embodiment, feature map is used in SM 3400 of FIG. 34. [0486] At least one embodiment can be described in view of the following clauses: 1. A processor, comprising: one or more arithmetic logic units (ALUs) to perform one or more convolution operations on image data by at least contracting one or more tensors to generate one or more feature maps. 2. The processor of clause 1, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to generate a first feature map represented by an output tensor, and the one or more ALUs are to: construct a second activation tensor that has a higher number of modes than the first activation tensor; and generate the first feature map by performing a tensor contraction with the second activation tensor and the filter tensor. 3. 
The processor of clause 2, wherein the one or more ALUs are to construct the second activation tensor based at least in part on: identifying a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor. 4. The processor of clause 3, wherein the one or more ALUs are to construct the second activation tensor such that the first mode and the second mode of the second activation tensor have overlapping strides. 5. The processor of clause 4, wherein the identified mode of the first activation tensor has an identified stride, and the one or more ALUs are to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride. 6. The processor of any one of clauses 2-5, wherein the one or more ALUs are to construct the second activation tensor using data elements of the first activation tensor without adding additional data elements. 7. A system, comprising: one or more processors to perform a first type of operation on a tensor to generate an output by: changing a representation of the tensor from a first number of dimensions to a second number of dimensions; and performing a second type of operation on the representation of the tensor with the second number of dimensions to generate the output. 8. The system of clause 7, wherein the first type of operation is a convolution, the second type of operation is a tensor contraction, and the second number of dimensions is greater than the first number of dimensions. 9. 
The system of any one of clauses 7-8, wherein the output is a feature map represented by an output tensor, the tensor is an activation tensor, the convolution is a convolution of the activation tensor and a filter tensor, and the one or more processors are to: identify a dimension of the activation tensor that is not present in the filter tensor and is not present in the output tensor; and replace the identified dimension with a first dimension from the output tensor and a second dimension from the filter tensor in the changed representation of the tensor. 10. The system of clause 9, wherein the first dimension and the second dimension have overlapping strides. 11. The system of any one of clauses 7-10, further comprising a memory, wherein the tensor includes one or more data elements stored in the memory, and the one or more processors are to change the representation of the tensor such that two dimensions of the tensor refer to a common set of data elements included in the one or more data elements. 12. The system of clause 7, wherein the first type of operation is a tensor contraction and the second type of operation is a convolution. 13. The system of any one of clauses 7-12, further comprising one or more memories to store parameters corresponding to one or more neural networks, wherein the one or more processors are to perform an inferencing operation using the one or more neural networks based, at least in part, on the output of the tensor contraction. 14. A machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least generate one or more feature map outputs of one or more convolution operations on image data by at least contracting one or more tensors. 15. 
The machine-readable medium of clause 14, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to produce a first feature map represented by an output tensor, and wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to: construct a second activation tensor that has a higher number of modes than the first activation tensor; and perform a tensor contraction with the second activation tensor and the filter tensor to generate the first feature map. 16. The machine-readable medium of clause 14 or 15, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to: identify a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor; and replace the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor. 17. The machine-readable medium of clause 16, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to construct the second activation tensor such that the first mode and the second mode of the second activation tensor have overlapping strides. 18. The machine-readable medium of any one of clauses 16-17, wherein the identified mode of the first activation tensor has an identified stride, and the set of instructions, which if performed by the one or more processors, further cause the one or more processors to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride. 19. The machine-readable medium of any one of clauses 15-18, wherein the first convolution operation is a two-dimensional (2D) convolution operation. 20. 
The machine-readable medium of any one of clauses 14-19, wherein the set of instructions, which if performed by the one or more processors, further cause the one or more processors to perform an inferencing operation using a neural network based, at least in part, on the first feature map. 21. A vehicle, comprising: a computer vision system that includes one or more processors to identify one or more features of a vehicle operating environment based at least in part on using one or more neural networks to generate one or more outputs of one or more convolution operations on image data by at least contracting one or more tensors to generate one or more feature maps; and one or more of a propulsion system and a directional control system to control one or more movements of the vehicle based at least in part on the identified one or more features. 22. The vehicle of clause 21, wherein the one or more convolution operations include a first convolution operation with a first activation tensor and a filter tensor to generate a first feature map represented by an output tensor, and the one or more processors are to: construct a second activation tensor that has a higher number of modes than the first activation tensor; and generate the first feature map by performing a tensor contraction with the second activation tensor and the filter tensor. 23. The vehicle of clause 22, wherein the one or more processors are to construct the second activation tensor based at least in part on: identifying a mode of the first activation tensor that is not present in the filter tensor and is not present in the output tensor; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second activation tensor. 24. The vehicle of clause 23, wherein the one or more processors are to construct the second activation tensor such that the first mode and the second mode of the second activation tensor have overlapping strides. 25. 
The vehicle of any one of clauses 23-24, wherein the identified mode of the first activation tensor has an identified stride, and the one or more processors are to set a first stride of the first mode and a second stride of the second mode of the second activation tensor to the identified stride. 26. The vehicle of any one of clauses 22-25, wherein the computer vision system includes a memory, the first activation tensor includes a plurality of data elements stored in the memory, and the one or more processors are to construct the second activation tensor such that two modes of the second activation tensor refer to a common set of data elements included in the plurality of data elements. 27. A method, comprising: identifying a first type of operation with a first tensor to generate an output; and generating the output by: constructing a second tensor based at least in part on changing a number of dimensions of the first tensor from a first number of dimensions to a second number of dimensions; and performing a second type of operation with the second tensor to generate the output. 28. The method of clause 27, wherein the first type of operation is a convolution, the second type of operation is a tensor contraction, and the second number of dimensions is greater than the first number of dimensions. 29. The method of clause 28, wherein the output is a feature map represented by an output tensor, the first tensor is an activation tensor, the convolution is a convolution of the activation tensor and a filter tensor, and the method further includes: identifying a mode of the activation tensor that is not present in the filter tensor and is not present in the output tensor; and replacing the identified mode with a first mode from the output tensor and a second mode from the filter tensor in the second tensor. 30. 
The method of clause 29, wherein constructing the second tensor includes constructing the second tensor such that the first mode and the second mode have overlapping strides. 31. The method of any one of clauses 28-30, wherein the convolution is a two-dimensional (2D) convolution. 32. The method of any one of clauses 28-31, further comprising: performing an inferencing operation using a neural network based, at least in part, on the tensor contraction. 33. The method of any one of clauses 27-32, wherein the first type of operation is a tensor contraction and the second type of operation is a convolution. [0487] In at least one embodiment, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip. In at least one embodiment, multi-chip modules may be used with increased connectivity which simulate on-chip operation, and make substantial improvements over utilizing a conventional central processing unit (“CPU”) and bus implementation. In at least one embodiment, various modules may also be situated separately or in various combinations of semiconductor platforms per desires of user. [0488] In at least one embodiment, computer programs in form of machine-readable executable code or computer control logic algorithms are stored in main memory 1204 and/or secondary storage. Computer programs, if executed by one or more processors, enable system 1200 to perform various functions in accordance with at least one embodiment. Memory 1204, storage, and/or any other storage are possible examples of computer-readable media. In at least one embodiment, secondary storage may refer to any suitable storage device or system such as a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, digital versatile disk (“DVD”) drive, recording device, universal serial bus (“USB”) flash memory, etc.
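The convolution-as-contraction construction recited in the clauses above (replacing a spatial mode of the activation tensor with one output mode and one filter mode that share overlapping strides, then contracting with the filter tensor) can be sketched in NumPy. This is an illustrative sketch, not code from the disclosure; the function names and the use of `as_strided`/`einsum` are assumptions chosen for demonstration.

```python
# Illustrative sketch of a 2D convolution expressed as a tensor contraction
# via a higher-mode activation view with overlapping strides (no data copy).
import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv2d_direct(act, filt):
    """Reference valid 2D cross-correlation, computed with explicit loops."""
    H, W = act.shape
    R, S = filt.shape
    out = np.empty((H - R + 1, W - S + 1))
    for p in range(out.shape[0]):
        for q in range(out.shape[1]):
            out[p, q] = np.sum(act[p:p + R, q:q + S] * filt)
    return out

def conv2d_as_contraction(act, filt):
    """Build a 4-mode view of `act` whose output modes (P, Q) and filter
    modes (R, S) reuse the original strides (overlapping strides), so two
    modes of the view refer to a common set of data elements; then contract
    over the filter modes to produce the feature map."""
    H, W = act.shape
    R, S = filt.shape
    P, Q = H - R + 1, W - S + 1
    sH, sW = act.strides
    # view[p, q, r, s] == act[p + r, q + s]; no data is duplicated.
    view = as_strided(act, shape=(P, Q, R, S), strides=(sH, sW, sH, sW))
    # Tensor contraction over modes r and s yields the convolution output.
    return np.einsum('pqrs,rs->pq', view, filt)

act = np.arange(36, dtype=float).reshape(6, 6)   # hypothetical activation
filt = np.ones((3, 3))                           # hypothetical filter
assert np.allclose(conv2d_direct(act, filt), conv2d_as_contraction(act, filt))
```

The contraction path can then be dispatched to a tensor-contraction kernel, which is the point of the higher-mode construction: the convolution is evaluated without materializing the expanded tensor.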
In at least one embodiment, architecture and/or functionality of various previous figures are implemented in context of CPU 1202; parallel processing system 1212; an integrated circuit capable of at least a portion of capabilities of both CPU 1202 and parallel processing system 1212; a chipset (e.g., a group of integrated circuits designed to work and sold as a unit for performing related functions, etc.); and any suitable combination of integrated circuit(s). [0489] In at least one embodiment, architecture and/or functionality of various previous figures are implemented in context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and more. In at least one embodiment, computer system 1200 may take form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (“PDA”), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, a mobile phone device, a television, workstation, game consoles, embedded system, and/or any other type of logic. [0490] In at least one embodiment, parallel processing system 1212 includes, without limitation, a plurality of parallel processing units (“PPUs”) 1214 and associated memories 1216. In at least one embodiment, PPUs 1214 are connected to a host processor or other peripheral devices via an interconnect 1218 and a switch 1220 or multiplexer. In at least one embodiment, parallel processing system 1212 distributes computational tasks across PPUs 1214 which can be parallelizable — for example, as part of distribution of computational tasks across multiple graphics processing unit (“GPU”) thread blocks.
In at least one embodiment, memory is shared and accessible (e.g., for read and/or write access) across some or all of PPUs 1214, although such shared memory may incur performance penalties relative to use of local memory and registers resident to a PPU 1214. In at least one embodiment, operation of PPUs 1214 is synchronized through use of a command such as __syncthreads(), wherein all threads in a block (e.g., executed across multiple PPUs 1214) are to reach a certain point of execution of code before proceeding. [0491] Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail. It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims. [0492] Use of terms “a” and “an” and “the” and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (meaning “including, but not limited to,”) unless otherwise noted. Term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening.
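The barrier semantics described above for a command such as __syncthreads(), in which every thread in a block must reach a certain point before any thread proceeds, can be mimicked on a CPU as an illustrative analogy (this is not code from the disclosure; the thread count and variable names are hypothetical) using Python's `threading.Barrier`:

```python
# Hypothetical CPU analogy for barrier synchronization (like __syncthreads()):
# no thread passes barrier.wait() until all NUM_THREADS threads have reached it.
import threading

NUM_THREADS = 4
barrier = threading.Barrier(NUM_THREADS)
partials = [0] * NUM_THREADS   # phase-1 results, one slot per thread
totals = [0] * NUM_THREADS     # phase-2 results

def worker(tid):
    # Phase 1: each thread writes only its own partial result.
    partials[tid] = tid * tid
    # Barrier: analogous to __syncthreads() at the end of phase 1.
    barrier.wait()
    # Phase 2: every thread can now safely read all phase-1 results.
    totals[tid] = sum(partials)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every thread observed the complete phase-1 state: 0 + 1 + 4 + 9 = 14.
```

Without the barrier, a fast thread could read `partials` before slower threads had written their entries; the barrier establishes the same happens-before ordering that block-level synchronization provides on a GPU.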
Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein, and each separate value is incorporated into specification as if it were individually recited herein. Use of term “set” (e.g., “a set of items”) or “subset,” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term “subset” of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal. [0493] Conjunctive language, such as phrases of form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. For instance, in illustrative example of a set having three members, conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). Number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context.
Further, unless stated otherwise or otherwise clear from context, phrase “based on” means “based at least in part on” and not “based solely on.” [0494] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In at least one embodiment, code is stored on a computer-readable storage medium, for example, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein.
A set of non-transitory computer-readable storage media, in at least one embodiment, comprises multiple non-transitory computer-readable storage media, and one or more of individual non-transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors — for example, a non-transitory computer-readable storage medium stores instructions and a main central processing unit (“CPU”) executes some of instructions while a graphics processing unit (“GPU”) executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions. [0495] Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations. [0496] Use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed. No language in specification should be construed as indicating any non-claimed element as essential to practice of disclosure.
[0497] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. [0498] In description and claims, terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms may not be intended as synonyms for each other. Rather, in particular examples, “connected” or “coupled” may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. “Coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. [0499] Unless specifically stated otherwise, it may be appreciated that throughout specification terms such as “processing,” “computing,” “calculating,” “determining,” or like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computing system’s registers and/or memories into other data similarly represented as physical quantities within computing system’s memories, registers or other such information storage, transmission or display devices. [0500] In a similar manner, term “processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory and transforms that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting examples, “processor” may be a CPU or a GPU. A “computing platform” may comprise one or more processors. As used herein, “software” processes may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents.
Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. Terms “system” and “method” are used herein interchangeably insofar as system may embody one or more methods and methods may be considered a system. [0501] In present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. Process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways such as by receiving data as a parameter of a function call or a call to an application programming interface. In some implementations, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In another implementation, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from providing entity to acquiring entity. References may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data. In various examples, process of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface or interprocess communication mechanism. [0502] Although discussion above sets forth example implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances.
[0503] Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims.
According to various examples, a device is described. The device may include an interposer. The device may also include a plurality of first through-silicon vias disposed in the interposer, where the plurality of first through-silicon vias have a first diameter. The device may also include a plurality of second through-silicon vias disposed in the interposer, where the plurality of second through-silicon vias have a second diameter greater than the first diameter. The device may also include a first recess positioned in the interposer at a bottom end of the plurality of second through-silicon vias.
1. A device comprising: an interposer; a plurality of first through-silicon vias disposed in the interposer, wherein the plurality of first through-silicon vias have a first diameter; a plurality of second through-silicon vias disposed in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter; and a first recess positioned in the interposer at bottom ends of the plurality of second through-silicon vias. 2. The device of claim 1, further comprising: a plurality of first solder bumps having a first bump diameter, the plurality of first solder bumps disposed under the plurality of first through-silicon vias; and a plurality of second solder bumps having a second bump diameter, the plurality of second solder bumps disposed in the first recess under the plurality of second through-silicon vias, wherein the second bump diameter is greater than the first bump diameter; wherein the plurality of first solder bumps and the plurality of second solder bumps are configured to couple the interposer to a package substrate. 3. The device of claim 2, further comprising: at least one semiconductor device disposed on the interposer; wherein the plurality of first through-silicon vias are configured to transmit signals from the interposer to the at least one semiconductor device; and wherein the plurality of second through-silicon vias are configured to transfer power from the interposer to the at least one semiconductor device. 4. The device of any one of claims 2 or 3, further comprising: a plurality of third through-silicon vias having a second via length, the plurality of third through-silicon vias disposed in the interposer, wherein the second via length is shorter than a first via length of the plurality of first through-silicon vias; and a first passive device coupled to
the plurality of third through-silicon vias. 5. The device of claim 4, wherein the first passive device is disposed in the first recess. 6. The device of claim 4, further comprising: a second recess positioned in the interposer at bottom ends of the plurality of third through-silicon vias, wherein the first passive device is disposed in the second recess. 7. The device of claim 4, further comprising: a plurality of fourth through-silicon vias having a third via length, the plurality of fourth through-silicon vias disposed in the interposer, wherein the third via length is shorter than the second via length; and a plurality of second passive devices coupled to the plurality of fourth through-silicon vias. 8. The device of claim 7, further comprising: a third recess positioned in the interposer at bottom ends of the plurality of fourth through-silicon vias, wherein the plurality of second passive devices are disposed in the third recess. 9. A method comprising: forming an interposer; forming a first recess in the interposer; forming a plurality of first through-silicon vias in the interposer, wherein the plurality of first through-silicon vias have a first diameter; and forming a plurality of second through-silicon vias in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter, and wherein the first recess is positioned at bottom ends of the plurality of second through-silicon vias. 10. The method of claim 9, further comprising: forming a plurality of first solder bumps having a first bump diameter under the plurality of first through-silicon vias; forming a plurality of second solder bumps having a second bump diameter in the first recess under the plurality of second through-silicon vias, wherein the second bump diameter is greater than the first bump diameter; and coupling the interposer to a package substrate through the
plurality of first solder bumps and the plurality of second solder bumps. 11. The method of claim 10, further comprising: forming at least one semiconductor device on the interposer; transmitting signals from the interposer to the at least one semiconductor device using the plurality of first through-silicon vias; and transferring power from the interposer to the at least one semiconductor device using the plurality of second through-silicon vias. 12. The method of any one of claims 10 or 11, further comprising: forming, in the interposer, a plurality of third through-silicon vias having a second via length, wherein the second via length is shorter than a first via length of the plurality of first through-silicon vias; and coupling a first passive device to the plurality of third through-silicon vias. 13. The method of claim 12, wherein the first passive device is disposed in the first recess. 14. The method of claim 12, further comprising: forming a second recess in the interposer positioned at bottom ends of the plurality of third through-silicon vias, wherein the first passive device is disposed in the second recess. 15. The method of claim 12, further comprising: forming a plurality of fourth through-silicon vias having a third via length, wherein the third via length is shorter than the second via length; and coupling a plurality of second passive devices to the plurality of fourth through-silicon vias. 16. The method of claim 15, further comprising: forming a third recess in the interposer positioned at bottom ends of the plurality of fourth through-silicon vias, wherein the plurality of second passive devices are disposed in the third recess. 17. A computing device comprising: a printed circuit board; and a device coupled to the printed circuit board, the device comprising: an interposer; a plurality of first through-silicon vias disposed in the interposer, wherein the plurality of first through-silicon vias have a first
diameter; a plurality of second through-silicon vias disposed in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter; and a first recess positioned in the interposer at bottom ends of the plurality of second through-silicon vias. 18. The computing device of claim 17, further comprising: a plurality of first solder bumps having a first bump diameter, the plurality of first solder bumps disposed under the plurality of first through-silicon vias; and a plurality of second solder bumps having a second bump diameter, the plurality of second solder bumps disposed in the first recess under the plurality of second through-silicon vias, wherein the second bump diameter is greater than the first bump diameter; wherein the plurality of first solder bumps and the plurality of second solder bumps are configured to couple the interposer to a package substrate. 19. The computing device of claim 18, further comprising: at least one semiconductor device disposed on the interposer; wherein the plurality of first through-silicon vias are configured to transmit signals from the interposer to the at least one semiconductor device; and wherein the plurality of second through-silicon vias are configured to transfer power from the interposer to the at least one semiconductor device. 20. The computing device of any of claims 18 or 19, further comprising: a plurality of third through-silicon vias having a second via length, the plurality of third through-silicon vias disposed in the interposer, wherein the second via length is shorter than a first via length of the plurality of first through-silicon vias; and a first passive device coupled to the plurality of third through-silicon vias.
Semiconductor Packages with Hybrid Through-Silicon Vias

Background

Conventional semiconductor packages have through-silicon vias (TSVs) in the interposer of the semiconductor package to deliver power and signals. Due to device miniaturization, TSV geometries have been scaled down in order to reduce the footprint. However, this may result in a maximum current (Imax) constraint caused by the reduced current-carrying capacity of the smaller TSVs. This can lead to device reliability risks and reduced computing performance. In addition, conventional semiconductor packages have passive devices located away from the stacked integrated circuit devices on the land side of the package substrate. This can lead to progressively increasing power supply noise jitter and Vmin/IR drop performance degradation due to the large amount of power loop inductance between the stacked integrated circuit chiplets and the passive devices. Existing solutions to the above problems include: increasing the metal-insulator-metal (MIM) capacitance of the chiplet or base die to suppress the peak impedance of the power delivery network (ZPDN); increasing the device voltage supply (for example, from 0.9V to 1.1V) to allow performance scaling; and increasing the number of TSVs to meet the required current density (Imax) and reliability risk. However, these existing solutions may result in increased device power consumption and/or an increased silicon device form factor.

Brief Description of the Drawings

In the drawings, the same reference numbers generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the disclosure. The dimensions of various features or elements may be arbitrarily expanded or reduced for clarity.
In the following description, various aspects of the present disclosure are described with reference to the following drawings, in which: FIG. 1A shows a cross-sectional view of a semiconductor package according to an aspect of the present disclosure; FIG. 1B shows a top view of a semiconductor package according to aspects of the semiconductor package shown in FIG. 1A; FIG. 2 shows a flowchart illustrating a method of forming a semiconductor package according to an aspect of the present disclosure; FIG. 3 shows a cross-sectional view of a semiconductor package according to an aspect of the present disclosure; FIGS. 4A-4G illustrate cross-sectional views of exemplary process flows related to a method for forming a semiconductor package in accordance with an aspect of the present disclosure; and FIG. 5 shows an illustration of a computing device including a semiconductor package in accordance with yet another aspect of the present disclosure.

Detailed Description

The following detailed description refers to the accompanying drawings, which show by way of illustration specific details and aspects in which the present disclosure may be practiced. These aspects are described in sufficient detail to enable those skilled in the art to practice the disclosure. Various aspects are provided for the device, and various aspects are provided for the method. It should be understood that fundamental properties of the device also apply to the method and vice versa. Other aspects may be utilized and structural and logical changes may be made without departing from the scope of the present disclosure.
The various aspects are not necessarily mutually exclusive, as some aspects may be combined with one or more other aspects to form new aspects. Advantages of the present disclosure may include Fmax performance gains by mitigating direct current (DC) and alternating current (AC) losses through reduced Vmin and LL3 impedances. Advantages of the present disclosure may include improved power integrity by reducing power delivery network (PDN) parasitic impedances with embedded decoupling capacitors, thereby minimizing device power consumption. This can lower the supply voltage threshold. Advantages of the present disclosure may include increased Imax capacity through the configurable geometry of TSV interconnects and hybrid solder bumps with increased geometry to circumvent geometry constraints in conventional silicon interposers, thereby improving device reliability and computational performance. These and other foregoing advantages and features of the aspects disclosed herein will be apparent by reference to the following description and accompanying drawings. Furthermore, it should be understood that the features of the various aspects described herein are not mutually exclusive and may exist in various combinations and permutations. The present disclosure generally relates to a device. The device may include an interposer. The device may also include a plurality of first through-silicon vias disposed in the interposer, wherein the plurality of first through-silicon vias have a first diameter. The device may also include a plurality of second through-silicon vias disposed in the interposer, wherein the plurality of second through-silicon vias have a second diameter that is greater than the first diameter. The device may also include a first recess positioned in the interposer at bottom ends of the plurality of second through-silicon vias. The present disclosure generally relates to a method of forming a device. The method may include forming an interposer.
The method may include forming a first recess in the interposer. The method may also include forming a plurality of first through-silicon vias in the interposer, wherein the plurality of first through-silicon vias have a first diameter. The method may also include forming a plurality of second through-silicon vias in the interposer, wherein the plurality of second through-silicon vias have a second diameter that is greater than the first diameter, and wherein the first recess is positioned at the bottom ends of the plurality of second through-silicon vias.

The present disclosure generally relates to a computing device. The computing device may include a printed circuit board. The computing device may include a semiconductor package including an interposer coupled to the printed circuit board. The semiconductor package may include a plurality of first through-silicon vias disposed in the interposer, wherein the plurality of first through-silicon vias have a first diameter. The semiconductor package may also include a plurality of second through-silicon vias disposed in the interposer, wherein the plurality of second through-silicon vias have a second diameter that is greater than the first diameter. The semiconductor package may also include a first recess positioned in the interposer at bottom ends of the plurality of second through-silicon vias.

So that they may be more easily understood and put into practical use, the apparatus, computing device, method, and other specific aspects will now be described by way of example, and not limitation, with reference to the accompanying drawings. Repeated descriptions of features and properties may be omitted for brevity.

FIG. 1A shows a cross-sectional view of a semiconductor package according to an aspect of the present disclosure. FIG. 1B shows a top view of a semiconductor package in accordance with aspects of the semiconductor package shown in FIG. 1A.

In one aspect of the present disclosure, a semiconductor package 100 is shown in FIGS. 1A and 1B. The semiconductor package 100 may be a device. The semiconductor package 100 may be a stacked semiconductor package, such as a 2.5D or 3D semiconductor package.

In one aspect of the present disclosure, the semiconductor package 100 may include a package substrate 102. Package substrate 102 may include any contact pads, electrical interconnects, routing, and other features not shown in this figure. The package substrate 102 may have one or more rigid core layers for improved structural stability, or may be a coreless substrate package for a reduced form factor. In other aspects, the package substrate 102 may be part of a larger substrate that supports additional semiconductor packages and/or components.

In one aspect of the present disclosure, the semiconductor package 100 may include a plurality of solder balls 104. The package substrate 102 may be connected to a motherboard (not shown) through the plurality of solder balls 104. The motherboard can be a PCB. In one aspect, the plurality of solder balls 104 may provide electrical connections between the package substrate 102 and the motherboard.

In one aspect of the present disclosure, the semiconductor package 100 may include an interposer 106. Interposer 106 may be an electrical interface that routes between one connection and another. The purpose of the interposer 106 may be to redistribute connections to a wider pitch or to reroute connections to different connections. Interposer 106 may be an active interposer (i.e., including one or more transceiver devices) or a passive interposer (i.e., without transceiver devices). The interposer 106 may be a silicon interposer, a ceramic interposer, or an organic interposer.

In one aspect of the present disclosure, the semiconductor package 100 may include a plurality of first package bumps 108 disposed on the package substrate 102.
In an aspect, each first package bump 108 of the plurality of first package bumps 108 may have a first bump diameter. The first bump diameter may be between 30 μm and 80 μm.

In an aspect of the present disclosure, the semiconductor package 100 may include a plurality of second package bumps 110 disposed on the package substrate 102. In an aspect, each second package bump 110 of the plurality of second package bumps 110 may have a second bump diameter. The second bump diameter may be between 90 μm and 200 μm. In one aspect, the second bump diameter may be larger than the first bump diameter. In one aspect, the plurality of first package bumps 108 and/or the plurality of second package bumps 110 may be controlled collapse chip attach (C4) bumps.

In one aspect of the present disclosure, an underfill layer 122 may be deposited in a conventional manner to cover and protect the plurality of first package bumps 108 and the plurality of second package bumps 110. The underfill layer 122 may be provided to enhance the mechanical reliability of the plurality of first package bumps 108 and the plurality of second package bumps 110. The underfill layer 122 may be provided using a conventional underfill process or a no-flow underfill process to reduce the effects of thermal expansion and to reduce stress and strain on the plurality of first package bumps 108 and the plurality of second package bumps 110.

In one aspect of the present disclosure, the interposer 106 may be disposed on the package substrate 102. In one aspect, the interposer 106 may be connected to the package substrate 102 through the plurality of first package bumps 108 and/or the plurality of second package bumps 110.
The plurality of first package bumps 108 and/or the plurality of second package bumps 110 may also provide electrical connections between the interposer 106 and the package substrate 102.

In one aspect of the present disclosure, the interposer may include a mix of TSVs of different diameters and/or different heights. In an aspect of the present disclosure, the interposer 106 may include a plurality of first TSVs 112. In an aspect, the plurality of first package bumps 108 may be disposed under the plurality of first TSVs 112. In an aspect, the plurality of first package bumps 108 may provide electrical connections between the plurality of first TSVs 112 and the package substrate 102. In an aspect, each first TSV 112 of the plurality of first TSVs 112 may have a first via diameter. The first via diameter may be between 30 μm and 60 μm. In an aspect, each first TSV 112 of the plurality of first TSVs 112 may have a first via height. The first via height may be between 100 μm and 700 μm.

In an aspect of the present disclosure, the interposer 106 may include a plurality of second TSVs 114. In an aspect, the plurality of second package bumps 110 may be disposed under the plurality of second TSVs 114. In one aspect, the plurality of second package bumps 110 may provide electrical connections between the plurality of second TSVs 114 and the package substrate 102. In an aspect, each second TSV 114 of the plurality of second TSVs 114 may have a second via diameter. The second via diameter may be between 90 μm and 200 μm. In one aspect, the second via diameter may be larger than the first via diameter. In an aspect, each second TSV 114 of the plurality of second TSVs 114 may have a second via height. The second via height may be between 40 μm and 500 μm. In one aspect, the second via height may be shorter than the first via height.
In one aspect, the plurality of second TSVs 114 are adjacent to the plurality of first TSVs 112.

In an aspect of the present disclosure, the interposer 106 may include a plurality of third TSVs 116. In an aspect, each third TSV 116 of the plurality of third TSVs 116 may have a third via diameter. In one aspect, the third via diameter may be smaller than the second via diameter. In one aspect, the third via diameter may be substantially similar to the first via diameter. In another aspect, the third via diameter may be substantially similar to the second via diameter. In an aspect, each third TSV 116 of the plurality of third TSVs 116 may have a third via height. The third via height may be between 40 μm and 500 μm. In one aspect, the third via height may be shorter than the first via height. In one aspect, the third via height may be substantially similar to the second via height. In an aspect, the plurality of third TSVs 116 may not be coupled or electrically connected to the package substrate 102. In one aspect, the plurality of third TSVs 116 are adjacent to the plurality of first TSVs 112 and/or the plurality of second TSVs 114.

In one aspect of the present disclosure, the semiconductor package 100 may include a recess 128. In one aspect, the recess 128 may be in the interposer 106. In an aspect, the recess 128 may be below the plurality of second TSVs 114. In one aspect, the recess 128 may be below the plurality of third TSVs 116. In one aspect, the recess 128 may be below both the plurality of second TSVs 114 and the plurality of third TSVs 116. In one aspect, the recess 128 may have a depth ranging from 20% to 60% of the thickness of the interposer 106.

In an aspect of the present disclosure, the plurality of second package bumps 110 may be disposed in the recess 128.
In one aspect, the depth of the recess 128 may be selected based on the difference between the dimensions of the second package bumps 110 and the dimensions of the first package bumps 108. In one aspect, the overall length of a first TSV and a first package bump is substantially similar to the overall length of a second TSV and a second package bump.

In one aspect of the present disclosure, the semiconductor package 100 may include passive devices 118. Passive devices are electrical components that allow noise filtering of signal and/or power delivery to improve electrical performance. In one aspect, the passive devices 118 may be inductors, resistors, diodes, or decoupling capacitors, e.g., multilayer ceramic capacitors or silicon capacitors.

In one aspect of the present disclosure, the passive devices 118 may be disposed in the recess 128. In an aspect, the passive devices 118 may be coupled to the plurality of third TSVs 116. In an aspect, the passive devices 118 may not contact the package substrate 102. In an aspect, a gap may exist between the passive devices 118 and the package substrate 102. In one aspect, the depth of the recess 128 may be selected to accommodate the passive devices 118 without the passive devices 118 contacting the package substrate 102.

Alternatively, in an aspect of the present disclosure, active devices may be disposed in the recess 128 instead of the passive devices 118. Active devices are capable of transmitting and/or processing electrical signals. Active devices may include one or more transistor devices. In one aspect, the active devices may be coupled to the plurality of third TSVs 116. In an aspect, the active devices may not contact the package substrate 102. In one aspect, a gap may exist between the active devices and the package substrate 102.
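The height-matching relation described above (the overall length of a first TSV plus first bump being substantially similar to that of a second TSV plus second bump) implies that the recess depth roughly equals the difference in bump heights. A minimal arithmetic sketch, using illustrative assumed dimensions within the ranges stated in this disclosure:

```python
# Illustrative (assumed) dimensions in micrometers.
interposer_thickness = 400.0   # first TSVs span the full interposer thickness
first_bump_height = 60.0       # smaller C4 bump under a first TSV
second_bump_height = 120.0     # larger C4 bump under a second TSV

# Choose the recess depth from the bump-height difference so that both
# TSV-plus-bump stacks span the same distance to the package substrate.
recess_depth = second_bump_height - first_bump_height

first_tsv_height = interposer_thickness
second_tsv_height = interposer_thickness - recess_depth  # TSV ends at the recess

first_stack = first_tsv_height + first_bump_height
second_stack = second_tsv_height + second_bump_height
print(f"recess depth = {recess_depth} um")
print(f"overall stack heights: {first_stack} um vs {second_stack} um")
```

With the recess depth set this way, both bump types land coplanar on the package substrate even though the second bumps are much taller.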
In one aspect, the depth of the recess 128 may be selected to accommodate the active devices without the active devices contacting the package substrate 102.

In one aspect of the present disclosure, the semiconductor package 100 may include at least one semiconductor device 124. In one aspect, the at least one semiconductor device 124 may be fabricated from any suitable semiconductor (e.g., silicon or gallium arsenide). The at least one semiconductor device 124 may be a semiconductor die, chip, or chiplet, e.g., a system on a chip (SOC), platform controller hub (PCH)/chiplet, memory device, field programmable gate array (FPGA) device, central processing unit (CPU), or graphics processing unit (GPU). In the aspect shown in FIG. 1A, the at least one semiconductor device 124 may be a chiplet, which may include a first semiconductor device 124A, a second semiconductor device 124B, and a third semiconductor device 124C.

In one aspect of the present disclosure, the at least one semiconductor device 124 may be disposed on the interposer 106. In an aspect of the present disclosure, a plurality of solder bumps 126 may be provided on the interposer 106. The plurality of solder bumps 126 may be disposed on the interposer chiplet surface of the interposer 106. The plurality of solder bumps 126 may provide electrical connections between the plurality of first TSVs 112, the plurality of second TSVs 114, the plurality of third TSVs 116, and the at least one semiconductor device 124. In an aspect, the plurality of first TSVs 112 may be configured to transmit signals between the package substrate 102 and the semiconductor device 124. In an aspect, the plurality of second TSVs 114 may be configured to transfer power between the package substrate 102 and the semiconductor device 124.
Since the plurality of second TSVs 114 and the plurality of second package bumps 110 have a larger diameter than the plurality of first TSVs 112 and the plurality of first package bumps 108, there may be a lower resistance in the plurality of second TSVs 114 and the plurality of second package bumps 110 compared to the plurality of first TSVs 112 and the plurality of first package bumps 108. Therefore, in a preferred aspect, supply voltages, e.g., the supply reference voltage (Vcc) and the ground reference voltage (Vss), may be delivered through the plurality of second TSVs 114 rather than the plurality of first TSVs 112.

In an aspect of the present disclosure, the semiconductor device 124 may be electrically coupled to the package substrate 102 through the plurality of first TSVs 112 and the plurality of second TSVs 114. In an aspect, the semiconductor device 124 may be electrically coupled to the passive devices 118 through the plurality of third TSVs 116. In an aspect, the plurality of third TSVs 116 may facilitate a short power loop with low inductance between the semiconductor device 124 and the passive devices 118 without passing through the package substrate 102.

In an aspect of the present disclosure, the semiconductor devices 124, which may include the first semiconductor device 124A, the second semiconductor device 124B, and the third semiconductor device 124C, may communicate signals and/or power to each other through an RDL 120 within the interposer 106. In one aspect, the RDL 120 may include a plurality of conductive traces interleaved with a plurality of dielectric layers. In other aspects, the RDL 120 is coupled to the plurality of first TSVs 112, the plurality of second TSVs 114, and the plurality of third TSVs 116 within the interposer 106.
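The resistance advantage of the larger-diameter second TSVs 114 noted above follows from the conductor resistance formula R = ρL/A. The copper resistivity and the specific diameters and heights below are illustrative assumptions chosen from within the ranges stated in this disclosure, not prescribed values:

```python
import math

RHO_CU = 1.68e-8  # resistivity of copper in ohm*m (assumed TSV fill material)

def tsv_resistance(diameter_m, height_m, resistivity=RHO_CU):
    """DC resistance of a cylindrical TSV: R = rho * L / A."""
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity * height_m / area

# Assumed dimensions within the stated ranges.
r_first = tsv_resistance(45e-6, 400e-6)    # first TSV: 45 um diameter, 400 um tall
r_second = tsv_resistance(150e-6, 250e-6)  # second TSV: 150 um diameter, 250 um tall

print(f"first TSV:  {r_first*1e3:.2f} mOhm")
print(f"second TSV: {r_second*1e3:.2f} mOhm")
# The wider (and here shorter) power TSV has a far lower resistance,
# which is why Vcc/Vss delivery prefers the second TSVs.
```

Because resistance scales with the inverse square of diameter, even a modest increase in TSV diameter yields a large reduction in IR drop on the power path.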
In one aspect, the semiconductor device 124 may communicate signal I/O and/or power from the package substrate 102 between the first semiconductor device 124A, the second semiconductor device 124B, and the third semiconductor device 124C through the RDL 120.

FIG. 2 shows a flowchart illustrating a method of forming a semiconductor package according to an aspect of the present disclosure.

As shown in FIG. 2, there may be a method 200 of forming a device. In the method 200, a first operation 202 may include forming an interposer. A second operation 204 may include forming a first recess in the interposer. A third operation 206 may include forming a plurality of first through-silicon vias in the interposer, wherein the plurality of first through-silicon vias have a first diameter. A fourth operation 208 may include forming a plurality of second through-silicon vias in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter, and wherein the first recess is positioned at the bottom ends of the plurality of second through-silicon vias.

It should be understood that the operations described above in relation to FIG. 2 are not limited to this particular order. Any suitable, modified sequence of operations may be used.

FIG. 3 shows a cross-sectional view of a semiconductor package according to an aspect of the present disclosure.

In one aspect of the present disclosure, a semiconductor package 300 is shown in FIG. 3. The semiconductor package 300 may be a device. The semiconductor package 300 may be a stacked semiconductor package, such as a 2.5D or 3D semiconductor package.

In one aspect of the present disclosure, the semiconductor package 300 may include a package substrate 302. Package substrate 302 may include any contact pads, electrical interconnects, routing, and other features not shown in this figure.
Package substrate 302 may have one or more rigid core layers for improved structural stability, or may be a coreless substrate package for a reduced form factor. In other aspects, package substrate 302 may be part of a larger substrate that supports additional semiconductor packages and/or components.

In one aspect of the present disclosure, the semiconductor package 300 may include a plurality of solder balls 304. The package substrate 302 may be connected to a motherboard (not shown) through the plurality of solder balls 304. The motherboard can be a PCB. In one aspect, the plurality of solder balls 304 may provide electrical connections between the package substrate 302 and the motherboard.

In one aspect of the present disclosure, the semiconductor package 300 may include an interposer 306. Interposer 306 may be an electrical interface that routes between one connection and another. The purpose of the interposer 306 may be to redistribute connections to a wider pitch or to reroute connections to different connections. Interposer 306 may be an active interposer (i.e., including one or more transceiver devices) or a passive interposer (i.e., without transceiver devices). The interposer 306 may be a silicon interposer, a ceramic interposer, or an organic interposer.

In one aspect of the present disclosure, the semiconductor package 300 may include a plurality of first package bumps 308 disposed on the package substrate 302. In an aspect, each first package bump 308 of the plurality of first package bumps 308 may have a first bump diameter. The first bump diameter may be between 30 μm and 80 μm.

In an aspect of the present disclosure, the semiconductor package 300 may include a plurality of second package bumps 310 disposed on the package substrate 302. In an aspect, each second package bump 310 of the plurality of second package bumps 310 may have a second bump diameter. The second bump diameter may be between 90 μm and 200 μm.
In one aspect, the second bump diameter may be larger than the first bump diameter. In one aspect, the plurality of first package bumps 308 and/or the plurality of second package bumps 310 may be controlled collapse chip attach (C4) bumps.

In one aspect of the present disclosure, an underfill layer 322 may be deposited in a conventional manner to cover and protect the plurality of first package bumps 308 and the plurality of second package bumps 310. The underfill layer 322 may enhance the mechanical reliability of the plurality of first package bumps 308 and the plurality of second package bumps 310. The underfill layer 322 may be provided using a conventional underfill process or a no-flow underfill process to reduce the effects of thermal expansion and to reduce stress and strain on the plurality of first package bumps 308 and the plurality of second package bumps 310.

In one aspect of the present disclosure, the interposer 306 may be disposed on the package substrate 302. In one aspect, the interposer 306 may be connected to the package substrate 302 through the plurality of first package bumps 308 and/or the plurality of second package bumps 310. The plurality of first package bumps 308 and/or the plurality of second package bumps 310 may also provide electrical connections between the interposer 306 and the package substrate 302.

In one aspect of the present disclosure, the interposer may include a mix of TSVs of different diameters and/or different heights. In an aspect of the present disclosure, the interposer 306 may include a plurality of first TSVs 312. In an aspect, the plurality of first package bumps 308 may be disposed under the plurality of first TSVs 312. In one aspect, the plurality of first package bumps 308 may provide electrical connections between the plurality of first TSVs 312 and the package substrate 302. In an aspect, each first TSV 312 of the plurality of first TSVs 312 may have a first via diameter.
The first via diameter may be between 30 μm and 60 μm. In an aspect, each first TSV 312 of the plurality of first TSVs 312 may have a first via height. The first via height may be between 100 μm and 700 μm.

In an aspect of the present disclosure, the interposer 306 may include a plurality of second TSVs 314. In an aspect, the plurality of second package bumps 310 may be disposed under the plurality of second TSVs 314. In one aspect, the plurality of second package bumps 310 may provide electrical connections between the plurality of second TSVs 314 and the package substrate 302. In an aspect, each second TSV 314 of the plurality of second TSVs 314 may have a second via diameter. The second via diameter may be between 90 μm and 200 μm. In one aspect, the second via diameter may be larger than the first via diameter. In an aspect, each second TSV 314 of the plurality of second TSVs 314 may have a second via height. The second via height may be between 40 μm and 500 μm. In one aspect, the second via height may be shorter than the first via height. In one aspect, the plurality of second TSVs 314 are adjacent to the plurality of first TSVs 312.

In an aspect of the present disclosure, the interposer 306 may include a plurality of third TSVs 316. In an aspect, each third TSV 316 of the plurality of third TSVs 316 may have a third via diameter. In one aspect, the third via diameter may be smaller than the second via diameter. In one aspect, the third via diameter may be substantially similar to the first via diameter. In another aspect, the third via diameter may be substantially similar to the second via diameter. In an aspect, each third TSV 316 of the plurality of third TSVs 316 may have a third via height. The third via height may be between 40 μm and 500 μm. In one aspect, the third via height may be shorter than the first via height.
In one aspect, the third via height may be substantially similar to the second via height. In an aspect, the plurality of third TSVs 316 may not be coupled or electrically connected to the package substrate 302. In one aspect, the plurality of third TSVs 316 are adjacent to the plurality of first TSVs 312 and/or the plurality of second TSVs 314.

In an aspect of the present disclosure, the interposer 306 may include a plurality of fourth TSVs 334. In an aspect, each fourth TSV 334 of the plurality of fourth TSVs 334 may have a fourth via diameter. In one aspect, the fourth via diameter may be smaller than the second via diameter. In one aspect, the fourth via diameter may be substantially similar to the first via diameter. In another aspect, the fourth via diameter may be substantially similar to the second via diameter. In an aspect, each fourth TSV 334 of the plurality of fourth TSVs 334 may have a fourth via height. The fourth via height may be between 20 μm and 250 μm. In one aspect, the fourth via height may be shorter than the first via height and/or the second via height and/or the third via height. In an aspect, the plurality of fourth TSVs 334 may not be coupled or electrically connected to the package substrate 302. In one aspect, the plurality of fourth TSVs 334 are adjacent to the plurality of first TSVs 312 and/or the plurality of second TSVs 314 and/or the plurality of third TSVs 316.

In an aspect of the present disclosure, the semiconductor package 300 may include a first recess 328. In one aspect, the first recess 328 may be in the interposer 306. In an aspect, the first recess 328 may be below the plurality of second TSVs 314. In one aspect, the first recess 328 may have a first recess depth ranging from 20% to 60% of the thickness of the interposer 306.
In one aspect, the first recess 328 may be positioned at the bottom ends of the plurality of second TSVs 314.

In an aspect of the present disclosure, the plurality of second package bumps 310 may be disposed in the first recess 328. In an aspect, the depth of the first recess 328 may be selected based on the difference between the size of the second package bumps 310 and the size of the first package bumps 308. In one aspect, the overall length of a first TSV 312 and a first package bump 308 is substantially similar to the overall length of a second TSV 314 and a second package bump 310.

In an aspect of the present disclosure, the semiconductor package 300 may include a second recess 330. In one aspect, the second recess 330 may be in the interposer 306. In an aspect, the second recess 330 may be below the plurality of third TSVs 316. In one aspect, the second recess 330 may be positioned at the bottom ends of the plurality of third TSVs 316. In one aspect, the second recess 330 may be adjacent to the first recess 328. In one aspect, the second recess 330 may have a second recess depth ranging from 20% to 60% of the thickness of the interposer 306. In one aspect, the second recess depth may be substantially similar to the first recess depth of the first recess 328.

In an aspect of the present disclosure, the semiconductor package 300 may include a third recess 332. In one aspect, the third recess 332 may be in the interposer 306. In an aspect, the third recess 332 may be below the plurality of fourth TSVs 334. In one aspect, the third recess 332 may be positioned at the bottom ends of the plurality of fourth TSVs 334. In one aspect, the third recess 332 may be adjacent to the first recess 328 and/or the second recess 330. In one aspect, the third recess 332 may have a third recess depth ranging from 40% to 80% of the thickness of the interposer 306.
In one aspect, the third recess depth may be greater than the first recess depth and/or the second recess depth.

In one aspect of the present disclosure, the semiconductor package 300 may include passive devices 318. Passive devices are electrical components that allow noise filtering of signal and/or power delivery to improve electrical performance. In one aspect, the passive devices 318 may be inductors, resistors, diodes, or decoupling capacitors, e.g., multilayer ceramic capacitors or silicon capacitors.

In an aspect of the present disclosure, a passive device 318 may be disposed in the second recess 330. In an aspect, the passive device 318 may be coupled to the plurality of third TSVs 316. In an aspect, the passive device 318 may not contact the package substrate 302. In an aspect, a gap may exist between the passive device 318 and the package substrate 302. In one aspect, the depth of the second recess 330 may be selected to accommodate the passive device 318 without the passive device 318 contacting the package substrate 302.

In an aspect of the present disclosure, a plurality of passive devices 318 (e.g., two passive devices 318) may be disposed in the third recess 332. In an aspect, the plurality of passive devices 318 may be coupled to the plurality of fourth TSVs 334. In an aspect, the plurality of passive devices 318 may have a stacked configuration. In an aspect, the plurality of passive devices 318 may not contact the package substrate 302. In an aspect, gaps may exist between the plurality of passive devices 318 and the package substrate 302.
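Because the third recess 332 is deeper (40% to 80% of the interposer thickness) and holds stacked passive devices 318, its depth must cover the stack height plus a clearance gap so the stack never touches the package substrate 302. A minimal sketch of that check, using illustrative assumed dimensions (not values from this disclosure):

```python
# Illustrative (assumed) dimensions in micrometers.
interposer_thickness = 500.0
third_recess_depth = 0.6 * interposer_thickness  # within the stated 40%-80% range

device_height = 120.0   # assumed height of one passive device
num_stacked = 2         # stacked configuration in the third recess
min_gap = 20.0          # clearance so the stack does not contact the substrate

stack_height = num_stacked * device_height

# The recess accommodates the stack only if depth >= stack height + clearance.
fits = third_recess_depth >= stack_height + min_gap
print(f"recess depth {third_recess_depth} um, stack {stack_height} um, fits: {fits}")
```

The same check applies to the shallower second recess 330, with `num_stacked = 1`.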
In one aspect, the depth of the third recess 332 may be selected to accommodate the plurality of passive devices 318 without the plurality of passive devices 318 contacting the package substrate 302.

In one aspect, the first recess 328, the second recess 330, the third recess 332, and the passive devices 318 may be within a projected footprint of the at least one semiconductor device 324 on the interposer 306.

In an aspect of the present disclosure, one or more active devices may be disposed in the second recess 330 and/or the third recess 332 instead of the passive devices 318. Active devices are capable of transmitting and/or processing electrical signals. The one or more active devices may include one or more transistor devices. In one aspect, the one or more active devices may be coupled to the plurality of third TSVs 316 and/or the plurality of fourth TSVs 334. In an aspect, the one or more active devices may not contact the package substrate 302. In one aspect, a gap may exist between the one or more active devices and the package substrate 302. In one aspect, the depth of the second recess 330 and/or the third recess 332 may be selected to accommodate the one or more active devices without the one or more active devices contacting the package substrate 302.

In one aspect of the present disclosure, the semiconductor package 300 may include at least one semiconductor device 324. In one aspect, the at least one semiconductor device 324 may be fabricated from any suitable semiconductor (e.g., silicon or gallium arsenide). The at least one semiconductor device 324 may be a semiconductor die, chip, or chiplet, e.g., a system on a chip (SOC), platform controller hub (PCH)/chiplet, memory device, field programmable gate array (FPGA) device, central processing unit (CPU), or graphics processing unit (GPU). In the aspect shown in FIG. 3, the at least one semiconductor device 324 may be a chiplet, which may include a first semiconductor device 324A, a second semiconductor device 324B, and a third semiconductor device 324C.

In one aspect of the present disclosure, the at least one semiconductor device 324 may be disposed on the interposer 306. In an aspect of the present disclosure, a plurality of solder bumps 326 may be provided on the interposer 306. The plurality of solder bumps 326 may be disposed on the interposer chiplet surface of the interposer 306. The plurality of solder bumps 326 may provide electrical connections between the plurality of first TSVs 312, the plurality of second TSVs 314, the plurality of third TSVs 316, the plurality of fourth TSVs 334, and the at least one semiconductor device 324. In an aspect, the plurality of first TSVs 312 may be configured to transmit signals between the package substrate 302 and the semiconductor device 324. In an aspect, the plurality of second TSVs 314 may be configured to transfer power between the package substrate 302 and the semiconductor device 324.

Since the plurality of second TSVs 314 and the plurality of second package bumps 310 have a larger diameter than the plurality of first TSVs 312 and the plurality of first package bumps 308, there may be a lower resistance in the plurality of second TSVs 314 and the plurality of second package bumps 310 compared to the plurality of first TSVs 312 and the plurality of first package bumps 308. Thus, in a preferred aspect, supply voltages, such as a supply reference voltage (Vcc) and a ground reference voltage (Vss), may be delivered through the plurality of second TSVs 314 rather than the plurality of first TSVs 312.

In an aspect of the present disclosure, the semiconductor device 324 may be electrically coupled to the package substrate 302 through the plurality of first TSVs 312 and the plurality of second TSVs 314.
In one aspect, the semiconductor device 324 may be electrically coupled to the passive device 318 through the plurality of third TSVs 316. In an aspect, the plurality of third TSVs 316 may facilitate a short power loop with low inductance between the semiconductor device 324 and the passive devices 318 without passing through the package substrate 302. In an aspect, the semiconductor device 324 may be electrically coupled to the plurality of passive devices 318 through the plurality of fourth TSVs 334. In an aspect, the plurality of fourth TSVs 334 may facilitate a short power loop with low inductance between the semiconductor device 324 and the plurality of passive devices 318 without passing through the package substrate 302.

In an aspect of the present disclosure, the semiconductor devices 324, which may include the first semiconductor device 324A, the second semiconductor device 324B, and the third semiconductor device 324C, may communicate signals and/or power to each other through an RDL 320 within the interposer 306. In one aspect, the RDL 320 may include a plurality of conductive traces interleaved with a plurality of dielectric layers. In other aspects, the RDL 320 is coupled to the plurality of first TSVs 312, the plurality of second TSVs 314, the plurality of third TSVs 316, and the plurality of fourth TSVs 334 within the interposer 306. In one aspect, the semiconductor device 324 may communicate signal I/O and/or power from the package substrate 302 between the first semiconductor device 324A, the second semiconductor device 324B, and the third semiconductor device 324C through the RDL 320.

FIGS. 4A-4G illustrate cross-sectional views of an exemplary process flow related to a method for forming a semiconductor package in accordance with an aspect of the present disclosure.

As shown in FIG. 4A, an interposer 406 with an RDL 420 may be provided on a carrier 440. The RDL 420 may be disposed on the interposer 406 by photolithography, electroplating, and/or etching processes.
The interposer 406 may be disposed on the carrier 440 by lamination, thermocompression, and/or mechanical attachment processes. As shown in FIG. 4B, recesses 428 may be formed in the interposer 406 using mechanical drilling and/or laser drilling. As shown in FIG. 4C, a plurality of via openings may be formed in the interposer 406 using mechanical drilling and/or laser drilling. A plurality of TSVs may be formed in the plurality of via openings using electroplating processes, solder paste printing, and/or coating processes. The plurality of TSVs may include a plurality of first TSVs 412, a plurality of second TSVs 414, and a plurality of third TSVs 416. As shown in FIG. 4D, the passive device 418 may be attached to the interposer 406 using a thermocompression bonding process or a solder reflow process. As shown in FIG. 4E, the structure of FIG. 4D may be flipped. A first semiconductor device 424A, a second semiconductor device 424B, and a third semiconductor device 424C may be attached to the flipped structure using thermocompression bonding and/or solder reflow processes. As shown in FIG. 4F, a package substrate 402 may be prepared according to conventional methods. A plurality of first package bumps 408 and a plurality of second package bumps 410 may be disposed on the package substrate 402 using a solder reflow process. As shown in FIG. 4G, the interposer 406 may be disposed on the package substrate 402 using a solder reflow process. The plurality of first TSVs 412 are disposed on the plurality of first package bumps 408, and the plurality of second TSVs 414 are disposed on the plurality of second package bumps 410. The package substrate 402 may have solder balls 404 on the opposing surface for connection to a motherboard. Additionally, an underfill may be provided using conventional underfill processes and/or no-flow underfill processes to reduce the effects of thermal expansion. It should be understood that the exemplary processes described above in relation to FIGS.
4A-4G are not limited to this particular order. Any suitable, modified sequence of operations may be used. Aspects of the present disclosure may be implemented into a system using any suitable hardware and/or software. FIG. 5 schematically illustrates a computing device 500 that may include a semiconductor package as described herein, in accordance with some aspects. As shown in FIG. 5, the computing device 500 may house a board, such as a motherboard 502. The motherboard 502 may include various components including, but not limited to, a processor 504 and at least one communication chip 506. The processor 504 may be physically and electrically coupled to the motherboard 502. In some embodiments, the at least one communication chip 506 may also be physically and electrically coupled to the motherboard 502. In other embodiments, the communication chip 506 may be part of the processor 504. Depending on its application, the computing device 500 may include other components that may or may not be physically and electrically coupled to the motherboard 502. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, a Geiger counter, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (e.g., hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth). In another aspect, the processor 504 of the computing device 500 may be packaged in a semiconductor package as described herein, and/or other semiconductor devices may be packaged together in a semiconductor package as described herein. The communication chip 506 may enable wireless communication for the transfer of data to and from the computing device 500.
The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some aspects they might not. The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards, including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., the IEEE 802.16-2005 Amendment), the Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., the LTE-Advanced project, the Ultra Mobile Broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible BWA networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 506 may also operate in accordance with a Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 506 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 506 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond.
In other aspects, the communication chip 506 may operate in accordance with other wireless protocols. The computing device 500 may include a plurality of communication chips 506. For example, a first communication chip 506 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 506 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, and others. In various embodiments, the computing device 500 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet computer, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In one aspect, the computing device 500 may be a mobile computing device. In other implementations, the computing device 500 may be any other electronic device that processes data.

Examples

Example 1 may include a device comprising: an interposer; a plurality of first through-silicon vias disposed in the interposer, wherein the plurality of first through-silicon vias have a first diameter; a plurality of second through-silicon vias disposed in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter; and a first recess positioned in the interposer at bottom ends of the plurality of second through-silicon vias.

Example 2 may include the device of Example 1 and/or any other example disclosed herein, wherein the device further includes: a plurality of first solder bumps having a first bump diameter, the plurality of first solder bumps disposed under the plurality of first through-silicon vias; and a plurality of second solder bumps having a second bump diameter, the plurality of second solder bumps disposed in the first recess under the plurality of second through-silicon vias, wherein the second bump diameter is larger than the first bump diameter; wherein the plurality of first solder bumps and the plurality of second solder bumps are configured to couple the interposer to a package substrate.

Example 3 may include the device of Example 2 and/or any other example disclosed herein, wherein the device further includes: at least one semiconductor device disposed on the interposer; wherein the plurality of first through-silicon vias are configured to transmit signals from the interposer to the at least one semiconductor device; and wherein the plurality of second through-silicon vias are configured to transmit power from the interposer to the at least one semiconductor device.

Example 4 may include the device of Example 2 and/or any other example disclosed herein, wherein the device further includes: a plurality of third through-silicon vias having a second via length, the plurality of third through-silicon vias disposed in the interposer, wherein the second via length is shorter than a first via length of the plurality of first through-silicon vias; and a first passive device coupled to the plurality of third through-silicon vias.

Example 5 may include the device of Example 4 and/or any other example disclosed herein, wherein the first passive device is disposed in the first recess.

Example 6 may include the device of Example 4 and/or any other example disclosed herein, wherein the device further comprises: a second recess positioned in the interposer at bottom ends of the plurality of third through-silicon vias, wherein the first passive device is disposed in the second recess.

Example 7 may include the device of Example 4 and/or any other example disclosed herein, wherein the device further includes: a plurality of fourth through-silicon vias having a third via length, the plurality of fourth through-silicon vias disposed in the interposer, wherein the third via length is shorter than the second via length; and a plurality of second passive devices coupled to the plurality of fourth through-silicon vias.

Example 8 may include the device of Example 7 and/or any other example disclosed herein, wherein the device further comprises: a third recess positioned in the interposer at bottom ends of the plurality of fourth through-silicon vias, wherein the plurality of second passive devices are disposed in the third recess.

Example 9 may include a method comprising: forming an interposer; forming a first recess in the interposer; forming a plurality of first through-silicon vias in the interposer, wherein the plurality of first through-silicon vias have a first diameter; and forming a plurality of second through-silicon vias in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter, and wherein the first recess is positioned at bottom ends of the plurality of second through-silicon vias.

Example 10 may include the method of Example 9 and/or any other example disclosed herein, wherein the method further comprises: forming a plurality of first solder bumps having a first bump diameter under the plurality of first through-silicon vias; forming a plurality of second solder bumps having a second bump diameter in the first recess under the plurality of second through-silicon vias, wherein the second bump diameter is larger than the first bump diameter; and coupling the interposer to a package substrate using the plurality of first solder bumps and the plurality of second solder bumps.

Example 11 may include the method of Example 10 and/or any other example disclosed herein, wherein the method further comprises: forming at least one semiconductor device on the interposer; using
a plurality of first through-silicon vias to transmit signals from the interposer to the at least one semiconductor device; and transferring power from the interposer to the at least one semiconductor device using a plurality of second through-silicon vias.

Example 12 may include the method of Example 10 and/or any other example disclosed herein, wherein the method further comprises: forming a plurality of third through-silicon vias having a second via length in the interposer, wherein the second via length is shorter than a first via length of the plurality of first through-silicon vias; and coupling a first passive device to the plurality of third through-silicon vias.

Example 13 may include the method of Example 12 and/or any other example disclosed herein, wherein the first passive device is disposed in the first recess.

Example 14 may include the method of Example 12 and/or any other example disclosed herein, wherein the method further comprises: forming a second recess in the interposer, the second recess positioned at bottom ends of the plurality of third through-silicon vias, wherein the first passive device is disposed in the second recess.

Example 15 may include the method of Example 12 and/or any other example disclosed herein, wherein the method further comprises: forming a plurality of fourth through-silicon vias having a third via length, wherein the third via length is shorter than the second via length; and coupling a plurality of second passive devices to the plurality of fourth through-silicon vias.

Example 16 may include the method of Example 15 and/or any other example disclosed herein, wherein the method further comprises: forming a third recess in the interposer, the third recess positioned at bottom ends of the plurality of fourth through-silicon vias, wherein the plurality of second passive devices are disposed in the third recess.

Example 17 may include a computing device including a printed circuit board and a device coupled to the printed circuit board, the device
including: an interposer; a plurality of first through-silicon vias disposed in the interposer, wherein the plurality of first through-silicon vias have a first diameter; a plurality of second through-silicon vias disposed in the interposer, wherein the plurality of second through-silicon vias have a second diameter greater than the first diameter; and a first recess positioned in the interposer at bottom ends of the plurality of second through-silicon vias.

Example 18 may include the computing device of Example 17 and/or any other example disclosed herein, wherein the computing device further includes: a plurality of first solder bumps having a first bump diameter, the plurality of first solder bumps disposed under the plurality of first through-silicon vias; and a plurality of second solder bumps having a second bump diameter, the plurality of second solder bumps disposed in the first recess under the plurality of second through-silicon vias, wherein the second bump diameter is larger than the first bump diameter; wherein the plurality of first solder bumps and the plurality of second solder bumps are configured to couple the interposer to a package substrate.

Example 19 may include the computing device of Example 18 and/or any other example disclosed herein, wherein the computing device further includes: at least one semiconductor device disposed on the interposer; wherein the plurality of first through-silicon vias are configured to transmit signals from the interposer to the at least one semiconductor device; and wherein the plurality of second through-silicon vias are configured to transmit power from the interposer to the at least one semiconductor device.

Example 20 may include the computing device of Example 18 and/or any other
example disclosed herein, wherein the computing device further includes: a plurality of third through-silicon vias having a second via length, the plurality of third through-silicon vias disposed in the interposer, wherein the second via length is shorter than a first via length of the plurality of first through-silicon vias; and a first passive device coupled to the plurality of third through-silicon vias.

These and other advantages and features of the aspects disclosed herein will become apparent by reference to the following description and drawings. Furthermore, it is to be understood that the features of the various aspects described herein are not mutually exclusive and may exist in various combinations and permutations.

It should be understood that any property described herein for a particular package or device may also apply to any other package or device described herein. It should also be understood that any property described herein for a particular method may apply to any other method described herein. Furthermore, it should be understood that for any package, device, or method described herein, not all of the described components or operations are necessarily included; rather, only some (but not all) of the components or operations may be included.

The term "comprising" shall be understood to have a broad meaning similar to the term "including" and will be understood to imply the inclusion of a stated integer or operation or group of integers or operations, but not the exclusion of any other integer or operation or group of integers or operations.
This definition also applies to variations of the term "comprising", such as "comprise" and "comprises". The term "coupled" (or "connected") used herein may be understood as electrically coupled or as mechanically coupled, for example attached or fixed, or just in contact without any fixation, and it will be understood that both direct coupling and indirect coupling (in other words, coupling without direct contact) may be provided. While the present disclosure has been particularly shown and described with reference to specific aspects, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the disclosure as defined by the appended claims. The scope of the disclosure is thus indicated by the appended claims, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.
A variable resistance memory array, methods of programming a variable resistance memory element, and methods of forming the array. A variable resistance memory array is formed with a plurality of word line transistors surrounding each phase change memory element. To program a selected variable resistance memory element, all of the bitlines are grounded or biased at the same voltage. A top electrode select line that is in contact with the selected variable resistance memory element is selected. The word line having the word line transistors surrounding the selected variable resistance memory element is turned on to supply programming current to the element. Current flows from the selected top electrode select line through the variable resistance memory element into the common source/drain region of the surrounding word line transistors, and across the transistors to the nearest bitline contacts. The word lines are patterned in various lattice configurations.
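The selection scheme summarized above can be sketched as a toy software model. All names and data structures below are hypothetical illustrations for clarity only; they do not represent the patented circuit or any real device behavior.

```python
# Toy model of the programming scheme: equal-bias all bitlines, select one
# top electrode select line, and turn on the word line whose transistors
# surround the target element. Names are illustrative, not from the patent.

RESET_CURRENT = 1.0  # arbitrary units; a real reset pulse melts the material

def make_array(rows, cols):
    """Build a minimal stand-in for the memory array state."""
    return {
        "bitlines": [{"voltage": None} for _ in range(cols)],
        "top_lines": [{"selected": False} for _ in range(cols)],
        "word_lines": [{"on": False} for _ in range(rows)],
        "cells": {(r, c): {"state": "crystalline"}
                  for r in range(rows) for c in range(cols)},
    }

def program_cell(array, row, col, current):
    """Program one variable-resistance element per the scheme above."""
    # Step 1: ground (or equally bias) every bitline in the array.
    for bl in array["bitlines"]:
        bl["voltage"] = 0.0
    # Step 2: select the top electrode select line contacting the element.
    array["top_lines"][col]["selected"] = True
    # Step 3: turn on the word line; its transistors surrounding the element
    # conduct current from the shared source/drain region to the nearest
    # bitline contacts.
    array["word_lines"][row]["on"] = True
    cell = array["cells"][(row, col)]
    # A high (reset) current leaves the material amorphous (high resistance);
    # a lower, longer (set) pulse leaves it crystalline (low resistance).
    cell["state"] = "amorphous" if current >= RESET_CURRENT else "crystalline"
    return cell["state"]
```

The key point the sketch captures is that no individual bitline is decoded: selection comes entirely from the intersection of one top electrode select line and one word line, while all bitlines sit at a common potential to sink the programming current.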
CLAIMS What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A memory array comprising: a plurality of memory elements; a plurality of top electrode select lines for selecting a memory element; and a plurality of word lines arranged such that the plurality of word lines form a pattern of transistors in which at least three transistors are adjacent to each memory element. 2. The memory array of claim 1, wherein the plurality of memory elements comprises a plurality of phase change memory elements. 3. The memory array of claim 2, wherein each phase change memory element is electrically connected to a respective doped region in the substrate. 4. The memory array of claim 3, wherein the respective doped regions are common source/drain regions for at least three transistors. 5. The memory array of claim 1, wherein the at least three transistors surround each memory element. 6. The memory array of claim 1, wherein the at least three transistors have a common source/drain region that is electrically connected to the bottom electrode of the memory element. 7. The memory array of claim 1, wherein the plurality of top electrode select lines is arranged such that no two adjacent memory elements are electrically connected to the same top electrode select line. 8. The memory array of claim 1, wherein the plurality of word lines comprises a first plurality of word lines and a second plurality of word lines, the first plurality of word lines arranged substantially perpendicularly to the second plurality of word lines. 9. The memory array of claim 1, wherein the plurality of word lines comprises a first plurality of word lines, a second plurality of word lines, and a third plurality of word lines, the second plurality of word lines arranged at a substantially 60 degree angle to the first plurality of word lines and the third plurality of word lines arranged at a substantially negative 60 degree angle to the first plurality of word lines. 
10. The memory array of claim 1, wherein the plurality of word lines comprises a plurality of substantially ladder-shaped word lines, each ladder-shaped word line having two substantially parallel segments and a plurality of rung segments connecting the two substantially parallel segments. 11. The memory array of claim 1, wherein the plurality of word lines comprises a plurality of word lines that each comprise a plurality of substantially diamond shapes. 12. The memory array of claim 1, wherein the plurality of word lines comprises a plurality of word lines that each comprise a plurality of substantially triangle shapes. 13. A memory array comprising: a plurality of phase change memory elements; a plurality of top electrode select lines for selecting a phase change memory element; a first plurality of word lines arranged substantially parallel to each other; and a second plurality of word lines arranged substantially parallel to each other; wherein the first plurality of word lines and the second plurality of word lines are arranged to form at least three transistors adjacent to each phase change memory element. 14. The memory array of claim 13, wherein the first plurality of word lines are arranged substantially perpendicularly to the second plurality of word lines, and wherein the first plurality of word lines intersect, but do not come into contact with, the second plurality of word lines. 15. The memory array of claim 14, wherein four transistors surround each phase change memory element. 16. The memory array of claim 14, wherein the plurality of top electrode select lines is arranged such that no two adjacent phase change memory elements are electrically connected to the same top electrode select line. 17. The memory array of claim 16, wherein the plurality of top electrode select lines comprises wavy top electrode select lines. 18. The memory array of claim 14, wherein a unit cell area of the memory array is 8F². 19.
The memory array of claim 13, further comprising a third plurality of word lines arranged substantially parallel to each other, wherein the first, second, and third plurality of word lines intersect, but do not come into contact with, each other at a 60 degree angle to form a triangular grid pattern. 20. The memory array of claim 19, wherein three transistors surround each phase change memory element. 21. The memory array of claim 19, wherein the plurality of top electrode select lines is arranged such that no two adjacent phase change memory elements are electrically connected to the same top electrode select line. 22. The memory array of claim 19, wherein a unit cell area of the memory array is 2√3F². 23. A memory array comprising: a plurality of phase change memory elements; a plurality of top electrode select lines for selecting a phase change memory element; and a plurality of word lines, each word line forming at least three transistors adjacent to each phase change memory element in a row. 24. The memory array of claim 23, wherein the plurality of word lines comprises a plurality of substantially ladder-shaped word lines, each ladder-shaped word line having two generally parallel segments and a plurality of rung segments connecting the two generally parallel segments. 25. The memory array of claim 24, wherein each of the plurality of substantially ladder-shaped word lines forms four transistors adjacent to each phase change memory element in a row. 26. The memory array of claim 24, wherein the generally parallel segments and the rung segments are substantially straight lines. 27. The memory array of claim 24, wherein the generally parallel segments and the rung segments are rounded. 28. The memory array of claim 24, wherein a unit cell area of the memory array is less than 14F². 29. The memory array of claim 24, wherein a unit cell area of the memory array is less than 8F². 30.
The memory array of claim 23, wherein the plurality of word lines comprises a plurality of word lines that each comprise a plurality of substantially diamond shapes and each form four transistors adjacent to each phase change memory element in a row. 31. The memory array of claim 30, wherein a unit cell area of the memory array is less than 9.5F². 32. The memory array of claim 23, wherein the plurality of word lines comprises a plurality of word lines that each comprise a plurality of substantially triangle shapes and each form three transistors adjacent to each phase change memory element in a row. 33. The memory array of claim 32, wherein a unit cell area of the memory array is less than 16F². 34. A method of programming a phase change memory array comprising: biasing bitline contacts of the array at a same voltage; turning on a word line forming a plurality of transistors, wherein at least two transistors surround a selected phase change memory element, the at least two transistors sharing a common source/drain region and the selected phase change memory element being in contact with the common source/drain region; and turning on a top electrode select line to transfer a current through the selected phase change memory element, wherein the current is transferred from the common source/drain region and across the at least two transistors to the bitline contacts. 35. A method of programming a phase change memory array comprising: biasing bitline contacts of the array at a same voltage; and turning on a selected top electrode select line to transfer a current through a selected phase change memory element, wherein at least three word line transistors share a common source/drain region, the common source/drain region is electrically connected to the selected phase change memory element, and the source/drain regions of the at least three word line transistors are electrically connected to a plurality of bitline contacts. 36.
A method of forming a memory array comprising: providing a plurality of word lines to form groups of at least three word line transistors, each group sharing one of a plurality of common source/drain regions; providing a plurality of bitline contacts electrically connected to the plurality of source/drain regions; providing a plurality of phase change memory elements, wherein each phase change memory element is electrically connected to one of the plurality of common source/drain regions; and providing a plurality of top electrode select lines in contact with the phase change memory elements. 37. The method of claim 36, further comprising: providing gate materials over a substrate; etching an array of patterns into the substrate by photolithography and dry etch processes; forming shallow trench isolation regions in the etched patterns; forming a first plurality of gate stacks along a first direction of the etched patterns and a second plurality of gate stacks along a second direction of the etched patterns; and forming a first electrical connection between the first plurality of gate stacks to form the first plurality of word lines and forming a second electrical connection between the second plurality of gate stacks to form the second plurality of word lines. 38. The method of claim 37, further comprising forming a third plurality of gate stacks along a third direction of the etched patterns, wherein the etched patterns comprise a hexagonal array pattern. 39. The method of claim 36, further comprising: providing gate materials over a substrate; forming a first plurality of gate stacks arranged in a first direction and forming a second plurality of gate stacks arranged in a second direction; and forming a first electrical connection between the first plurality of gate stacks to form the first plurality of word lines and forming a second electrical connection between the second plurality of gate stacks to form the second plurality of word lines. 40.
The method of claim 36, wherein providing the plurality of word lines comprises providing a first plurality of word lines and a second plurality of word lines, the first plurality of word lines arranged substantially perpendicularly to the second plurality of word lines. 41. The method of claim 36, wherein providing the plurality of word lines comprises providing a first plurality of word lines, a second plurality of word lines, and a third plurality of word lines, the second plurality of word lines arranged at a substantially 60 degree angle to the first plurality of word lines and the third plurality of word lines arranged at a substantially negative 60 degree angle to the first plurality of word lines. 42. The method of claim 36, wherein providing the plurality of word lines comprises providing a plurality of substantially ladder-shaped word lines, each ladder-shaped word line having two substantially parallel segments and a plurality of rung segments connecting the two substantially parallel segments. 43. The method of claim 36, wherein providing the plurality of word lines comprises providing a plurality of word lines that each comprise a plurality of substantially diamond shapes. 44. The method of claim 36, wherein providing the plurality of word lines comprises providing a plurality of word lines that each comprise a plurality of substantially triangle shapes.
VARIABLE RESISTANCE MEMORY WITH LATTICE ARRAY USING ENCLOSING TRANSISTORS

FIELD OF THE INVENTION

[0001] Embodiments of the invention relate to semiconductor devices, and in particular, to variable resistance memory arrays and methods of forming and using the same.

BACKGROUND OF THE INVENTION

[0002] Non-volatile memories are useful storage devices due to their ability to maintain data absent a power supply. Various materials have been investigated for use in non-volatile memory cells. One class of programmable resistance materials is phase change materials, such as chalcogenide alloys, which are capable of stably transitioning between amorphous and crystalline phases. Each phase exhibits a particular resistance state, and the resistance states distinguish the logic values of a memory element formed with such materials. Specifically, an amorphous state exhibits a relatively high resistance, and a crystalline state exhibits a relatively low resistance.

[0003] A conventional phase change memory element 1, illustrated in FIGS. 1A and 1B, often has a layer of phase change material 8 between first and second electrodes 2, 4. The first electrode 2 is within a dielectric material 6. The phase change material 8 is set to a particular resistance state according to the amount of current applied between the first and second electrodes 2, 4. To obtain an amorphous state (FIG. 1B), a relatively high write current pulse (a reset pulse) is applied through the phase change memory element 1 to melt at least a portion 9 of the phase change material 8 covering the first electrode 2 for a first period of time. The current is removed and the phase change material 8 cools rapidly to a temperature below the crystallization temperature, which results in the portion 9 of the phase change material 8 covering the first electrode 2 having the amorphous state. To obtain a crystalline state (FIG.
1A), a lower current write pulse (a set pulse) is applied to the phase change memory element 1 for a second period of time (typically longer in duration than the first period of time and than the crystallization time of the amorphous phase change material) to heat the amorphous portion 9 of the phase change material 8 to a temperature below its melting point, but above its crystallization temperature. This causes the amorphous portion 9 of the phase change material 8 to re-crystallize to the crystalline state that is maintained once the current is removed and the phase change memory element 1 is cooled. The phase change memory element 1 is read by applying a read voltage, which does not change the phase state of the phase change material 8. [0004] One drawback of conventional phase change memory elements is the large programming current needed to achieve the phase change. This requirement leads to a large access transistor to achieve adequate current drive. Accordingly, it is desirable to have phase change memory elements with reduced programming requirements. It is also desirable to implement novel transistors with a large current drive or provide an innovative circuit layout that can provide more transistor current drive within the same silicon area, or both. BRIEF DESCRIPTION OF THE DRAWINGS [0005] FIGS. 1A and 1B illustrate a cross-sectional view of a conventional phase change memory element. [0006] FIG. 2 illustrates a top view of a phase change memory array according to a first embodiment. [0007] FIG. 3A illustrates an expanded top view of the phase change memory array of FIG. 2. [0008] FIG. 3B illustrates a cross-section taken along line 3B-3B of the phase change memory array of FIG. 3A. [0009] FIG. 4A illustrates an expanded top view of the phase change memory array of FIG. 2 at an initial stage of a first method of fabrication. [0010] FIG. 4B illustrates a cross-section taken along line 4B-4B of the phase change memory array of FIG. 4A. [0011] FIG.
5A illustrates a top view of the phase change memory array of FIG. 2 at a stage of fabrication subsequent to FIG. 4A. [0012] FIG. 5B illustrates a cross-section taken along line 5B-5B of the phase change memory array of FIG. 5A. [0013] FIG. 6A illustrates an expanded top view of the phase change memory array of FIG. 2 at an initial stage of a second method of fabrication. [0014] FIG. 6B illustrates a cross-section taken along line 6B-6B of the phase change memory array of FIG. 6A. [0015] FIG. 7A illustrates a top view of the phase change memory array of FIG. 2 at a stage of fabrication subsequent to FIG. 6A. [0016] FIG. 7B illustrates a cross-section taken along line 7B-7B of the phase change memory array of FIG. 7A. [0017] FIG. 8A illustrates a top view of the phase change memory array of FIG. 2 at a stage of fabrication subsequent to FIG. 7A. [0018] FIG. 8B illustrates a cross-section taken along line 8B-8B of the phase change memory array of FIG. 8A. [0019] FIG. 9A illustrates a top view of the phase change memory array of FIG. 2 at a stage of fabrication subsequent to FIG. 8A. [0020] FIG. 9B illustrates a cross-section taken along line 9B-9B of the phase change memory array of FIG. 9A. [0021] FIG. 10A illustrates a top view of the phase change memory array of FIG. 2 at a stage of fabrication subsequent to FIG. 9A. [0022] FIG. 10B illustrates a cross-section taken along line 10B-10B of the phase change memory array of FIG. 10A. [0023] FIG. 11A illustrates an expanded top view of the phase change memory array of FIG. 2 at an initial stage of a third method of fabrication. [0024] FIG. 11B illustrates a cross-section taken along line 11B-11B of the phase change memory array of FIG. 11A. [0025] FIG. 12A illustrates a top view of the phase change memory array of FIG. 2 at a stage of fabrication subsequent to FIG. 11A. [0026] FIG. 12B illustrates a cross-section taken along line 12B-12B of the phase change memory array of FIG. 12A. [0027] FIG.
13A illustrates an expanded top view of the phase change memory array of FIG. 2 at an initial stage of a fourth method of fabrication. [0028] FIG. 13B illustrates a cross-section taken along line 13B-13B of the phase change memory array of FIG. 13A. [0029] FIG. 14 illustrates a top view of a phase change memory array according to a second embodiment. [0030] FIG. 15A illustrates an expanded top view of the phase change memory array of FIG. 14. [0031] FIG. 15B illustrates a cross-section taken along line 15B-15B of the phase change memory array of FIG. 15A. [0032] FIG. 16 illustrates a top view of a phase change memory array according to a third embodiment. [0033] FIG. 17A illustrates an expanded top view of the phase change memory array of FIG. 16 at an initial stage of fabrication. [0034] FIG. 17B illustrates a cross-section taken along line 17B-17B of the phase change memory array of FIG. 17A. [0035] FIG. 18A illustrates a top view of the phase change memory array of FIG. 16 at a stage of fabrication subsequent to FIG. 17A. [0036] FIG. 18B illustrates a cross-section taken along line 18B-18B of the phase change memory array of FIG. 18A. [0037] FIG. 19A illustrates a top view of the phase change memory array of FIG. 16 at a stage of fabrication subsequent to FIG. 18A. [0038] FIG. 19B illustrates a cross-section taken along line 19B-19B of the phase change memory array of FIG. 19A. [0039] FIG. 20A illustrates a top view of the phase change memory array of FIG. 16 at a stage of fabrication subsequent to FIG. 19A. [0040] FIG. 20B illustrates a cross-section taken along line 20B-20B of the phase change memory array of FIG. 20A. [0041] FIG. 21A illustrates a top view of the phase change memory array of FIG. 16 at a stage of fabrication subsequent to FIG. 20A. [0042] FIG. 21B illustrates a cross-section taken along line 21B-21B of the phase change memory array of FIG. 21A. [0043] FIG.
22 illustrates a top view of a phase change memory array according to a fourth embodiment. [0044] FIG. 23 illustrates a top view of a phase change memory array according to a fifth embodiment. [0045] FIG. 24 illustrates a top view of a phase change memory array according to a sixth embodiment. [0046] FIG. 25A illustrates an expanded top view of the phase change memory array of FIG. 24 at an initial stage of fabrication. [0047] FIG. 25B illustrates a cross-section taken along line 25B-25B of the phase change memory array of FIG. 25A. [0048] FIG. 26A illustrates an expanded top view of the phase change memory array of FIG. 24 at a stage of fabrication subsequent to FIG. 25A. [0049] FIG. 26B illustrates a cross-section taken along line 26B-26B of the phase change memory array of FIG. 26A. [0050] FIG. 27A illustrates an expanded top view of the phase change memory array of FIG. 24 at a stage of fabrication subsequent to FIG. 26A. [0051] FIG. 27B illustrates a cross-section taken along line 27B-27B of the phase change memory array of FIG. 27A. [0052] FIG. 28 illustrates a top view of a phase change memory array according to a seventh embodiment. [0053] FIG. 29 illustrates a top view of a phase change memory array according to an eighth embodiment. [0054] FIG. 30A illustrates an expanded top view of the phase change memory array of FIG. 28 at an initial stage of fabrication. [0055] FIG. 30B illustrates a cross-section taken along line 30B-30B of the phase change memory array of FIG. 30A. [0056] FIG. 31 illustrates a cross-section of the phase change memory array of FIG. 28 at a stage of fabrication subsequent to FIG. 30A. [0057] FIG. 32 illustrates a cross-section of the phase change memory array of FIG. 28 at a stage of fabrication subsequent to FIG. 31. [0058] FIG. 33 illustrates a top view of a phase change memory array according to a ninth embodiment. [0059] FIG. 34 illustrates a top view of a phase change memory array according to a tenth embodiment. [0060] FIG.
35 is a block diagram of a processor system having a memory element incorporating a phase change memory array constructed in accordance with an embodiment of the invention. DETAILED DESCRIPTION OF THE INVENTION [0061] In the following detailed description, reference is made to various embodiments of the invention. These embodiments are described with sufficient detail to enable those skilled in the art to practice them. It is to be understood that other embodiments may be employed, and that various structural, logical and electrical changes may be made.[0062] The term "substrate" used in the following description may include any supporting structure including, but not limited to, a semiconductor substrate that has an exposed substrate surface. A semiconductor substrate should be understood to include silicon, silicon-on-insulator (SOI), silicon-on-sapphire (SOS), doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures, including those made of semiconductors other than silicon. When reference is made to a semiconductor substrate or wafer in the following description, previous process steps may have been utilized to form regions or junctions in or over the base semiconductor or foundation. The substrate also need not be semiconductor-based, but may be any support structure suitable for supporting an integrated circuit, including, but not limited to, metals, alloys, glasses, polymers, ceramics, and any other supportive materials as is known in the art. [0063] Embodiments are now explained with reference to the figures, throughout which like reference numbers indicate like features. FIG. 2 illustrates a first embodiment, in which word lines 20 run horizontally and vertically in a square lattice configuration. Each word line 20 forms transistor gates which have source/drain regions on both sides of the gate. 
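The set and reset behavior summarized in paragraphs [0002]-[0003] above can be illustrated with a minimal two-state model. The current thresholds, pulse durations, and resistance values below are illustrative assumptions only and are not taken from the specification:

```python
# Minimal model of a two-state phase change memory element.
# All numeric parameters here are illustrative assumptions.

AMORPHOUS = "amorphous"      # high resistance state
CRYSTALLINE = "crystalline"  # low resistance state

class PhaseChangeElement:
    RESET_CURRENT_UA = 600  # assumed: high, short pulse melts the material
    SET_CURRENT_UA = 300    # assumed: lower, longer pulse recrystallizes it
    RESISTANCE = {AMORPHOUS: 1_000_000, CRYSTALLINE: 10_000}  # ohms (assumed)

    def __init__(self):
        self.state = CRYSTALLINE

    def apply_pulse(self, current_ua, duration_ns):
        """Program the element; a read-level (low current) pulse leaves it unchanged."""
        if current_ua >= self.RESET_CURRENT_UA:
            self.state = AMORPHOUS        # melt, then rapid quench
        elif current_ua >= self.SET_CURRENT_UA and duration_ns >= 100:
            self.state = CRYSTALLINE      # anneal below the melting point

    def read(self):
        """Return the resistance without disturbing the stored state."""
        return self.RESISTANCE[self.state]

cell = PhaseChangeElement()
cell.apply_pulse(current_ua=600, duration_ns=50)   # reset pulse
assert cell.read() == 1_000_000                    # amorphous: high resistance
cell.apply_pulse(current_ua=300, duration_ns=200)  # set pulse
assert cell.read() == 10_000                       # crystalline: low resistance
```

The read operation only returns the resistance, matching the requirement that the read voltage not change the phase state.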
Phase change memory elements 25 are positioned within the lattice of the word lines 20, alternating horizontally and vertically with bitline contacts 26. Bitlines 21 run diagonally between bitline contacts 26. For ease of illustration, not all bitlines are shown. [0064] To program a selected phase change memory element 25a, two adjacent vertical word lines 20a and two adjacent horizontal word lines 20b enclosing the selected phase change memory element 25a are turned on. A top electrode select line 22a that is in contact with the selected phase change memory element 25a is also selected. For ease of illustration, not all top electrode select lines 22 are shown. All of the bitlines 21 are grounded or biased at the same voltage. The four transistors associated with the word lines 20a, 20b enclosing the phase change memory element 25a are turned on to supply programming current to the element 25a. Current flows from the selected top electrode select line 22a through the transistors associated with the word lines surrounding the phase change memory element 25a into the nearest bitline contacts 26a. [0065] Turning now to FIG. 3A, an expanded top view of a portion of the phase change memory array of FIG. 2 is shown. The selected phase change memory element 25a is enclosed by word lines 20a, 20b. FIG. 3B illustrates a cross-section taken along line 3B-3B of the phase change memory array of FIG. 3A. Top electrode select lines 22 run above the phase change memory elements 25, contacting their top electrodes. When the selected top electrode select line 22a is turned on, current is supplied by the selected top electrode select line 22a and passes through the selected phase change memory element 25a. Since the bitlines 21 are grounded or biased at the same voltage, the current through the selected phase change memory element 25a goes across all four transistors defined by four segments of word lines 20a, 20b to adjacent bitline contacts 26a. [0066] FIGS.
4A-5B illustrate a first method of forming the phase change memory array of FIG. 2. FIG. 4A is an expanded top view of the memory array at an initial stage of fabrication according to the first method. FIG. 4B is a cross-section of FIG. 4A, taken across line 4B-4B. A first array of vertically-aligned word lines 20 is formed on a silicon substrate 10 using any known fabrication method. An ion implantation process may be performed to dope regions in the silicon that are not protected by the vertically-aligned word lines 20 so that the desired silicon doping profile is preserved. No trench isolation regions are necessary. [0067] A cleaning process may be performed to remove damaged oxide on the silicon substrate 10 before forming a second array of horizontally-aligned word lines 20', as shown in FIGS. 5A and 5B. Methods such as photolithography and dry etching may be used to form the horizontally-aligned word lines 20'. The horizontally-aligned word lines 20' are perpendicular to the vertically-aligned word lines 20. An optional strip of nitride spacers may be formed on the word lines 20, before source/drain regions 23 are formed by one or more high-dose implants. A silicide metal such as Co, Ni, or Ti is deposited for silicidation (or salicidation if the gate stacks of the word lines are polysilicon/TEOS gate stacks) of the source/drain regions 23. [0068] Self-aligned metal contacts and bitline contacts 26a are formed over the source/drain regions 23, as shown in FIG. 3B. Material for bitlines 21 is deposited and patterned. The phase change memory elements 25 are formed in layers in the shape of mesas or stripes, as shown in FIGS. 1A and 1B, and the top electrode select lines 22 are formed with a contact to the top electrode 4 of the phase change memory element 25, which is in contact with the phase change memory material 8 having a portion 9 in contact with the bottom electrode 2.
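The selection scheme of paragraph [0064], in which the two adjacent vertical word lines and two adjacent horizontal word lines enclosing the target element are turned on, can be sketched as follows. The index convention (the element in lattice cell (row, col) sits between horizontal lines row and row + 1 and between vertical lines col and col + 1) is an assumption for illustration only:

```python
def enclosing_word_lines(row, col):
    """Return the indices of the two horizontal and two vertical word lines
    enclosing the memory element in lattice cell (row, col).
    The coordinate convention is illustrative, not from the specification."""
    horizontal = (row, row + 1)
    vertical = (col, col + 1)
    return horizontal, vertical

# To program the element in cell (2, 3), turn on horizontal word lines 2 and 3
# and vertical word lines 3 and 4; all bitlines are grounded or equally biased,
# and the element's top electrode select line is selected.
h, v = enclosing_word_lines(2, 3)
assert h == (2, 3) and v == (3, 4)
```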
Depending upon the desired orientation of the top electrode select lines 22, they may be provided in one or more layers, as long as no two adjacent phase change memory elements 25 are contacted by the same top electrode select line 22. [0069] In a second method of forming the phase change memory array of FIG. 2, word line gate materials 127 are deposited over a silicon substrate 110, as shown in FIGS. 6A and 6B. FIG. 6B is a cross-section taken across line 6B-6B in the expanded top view of FIG. 6A. The silicon substrate 110 may be provided with ion implantation to define a desired dopant profile. Photolithography and dry etch processes may be used to etch an array of square patterns into the silicon substrate 110, which are filled with high-density plasma (HDP) oxide to form shallow trench isolation (STI) regions 128. [0070] As shown in FIG. 7A, a resist pattern 137 is provided over the substrate 110, such that strips of resist material intersect perpendicularly over the STI regions 128. FIG. 7B illustrates a cross-section taken across line 7B-7B in the expanded top view of FIG. 7A. [0071] A photolithography and dry etch process is performed to produce vertically- and horizontally-aligned gate stacks of word lines 120, 120' that intersect over the STI regions 128, as shown in the expanded top view of FIG. 8A. The photolithography and dry etch process is used to etch isolated gate stacks of word lines 120, 120', stopping above the silicon substrate 110, as shown in the cross-section illustrated in FIG. 8B, taken across line 8B-8B of FIG. 8A. Nitride spacers 120" are formed to complete the formation of the transistors and source/drain regions 123 are formed. A silicide metal (such as Co, Ni or Ti) is deposited for source/drain silicidation (or salicidation for polysilicon/TEOS gate stacks). [0072] Because the gate stacks of word lines 120, 120' are isolated from each other, they must be electrically connected in order to form continuous word lines. FIG.
9A illustrates an expanded top view of this connection and FIG. 9B is a cross-section taken along line 9B-9B of FIG. 9A. As shown in FIG. 9B, contacts 130 are formed over the vertically-aligned gate stacks of word lines 120 to electrically connect the vertically-aligned gate stacks of word lines 120 with vertically-aligned straps 129. Contacts 130' are formed over the horizontally-aligned gate stacks of word lines 120' to electrically connect the horizontally-aligned gate stacks of word lines 120' with horizontally-aligned straps 129'. Both vertically- and horizontally-aligned straps 129, 129' are typically conductive metal lines having a nitride encapsulating layer provided over them to electrically isolate the straps 129, 129'. [0073] Depending upon the desired orientation of the top electrode select lines 122, they may be provided in one or more layers, as long as no two adjacent phase change memory elements 125 are contacted by the same top electrode select line 122, as shown in FIG. 10B, which is a cross-section of the expanded top view of FIG. 10A taken along line 10B-10B. [0074] In a third method of forming the phase change memory array of FIG. 2, gate materials 227 are deposited over a silicon substrate 210, as shown in FIGS. 11A and 11B. FIG. 11B is a cross-section taken across line 11B-11B in the expanded top view of FIG. 11A. The silicon substrate 210 may be provided with ion implantation to define a desired dopant profile. A resist 227 is patterned as shown in FIG. 11A. The pattern of the resist 227 defines the location of the isolated gate stacks, as will be described below. [0075] A photolithography and dry etch process is performed to produce vertically- and horizontally-aligned word lines 220, 220', as shown in the expanded top view of FIG. 12A. The photolithography and dry etch process is used to etch isolated gate stacks, stopping above the silicon substrate 210, as shown in the cross-section illustrated in FIG. 12B, taken across line 12B-12B of FIG. 12A.
Nitride spacers 220" are formed to complete the formation of the transistors and source/drain regions 223 are formed. The remainder of the steps are performed in accordance with the second method described above with respect to FIGS. 9A and 9B. [0076] In a fourth method of forming the phase change memory array of FIG. 2, a first array of parallel word lines 320 is formed on a substrate 310 using a recessed transistor process, as shown in FIG. 13B, which is a cross-section taken across line 13B-13B in the expanded top view of FIG. 13A. Because the bottom layer 321 of the recessed word lines 320 is formed within trenches in the substrate 310, the recessed word lines 320 have a lower topography than the word lines in the arrays described above. By forming recessed word lines 320, the second array of parallel word lines that will be formed perpendicular to the first array 320 may also have a reduced topography. The remainder of the steps are performed in accordance with the first method described above with respect to FIGS. 1A, 1B, 3B, 5A and 5B. [0077] A phase change memory array with word lines configured in a lattice configuration having enclosing transistors around the phase change memory elements can provide to each phase change memory element a current that is more than four times greater than a conventional planar transistor. At the same time, this array optimizes the silicon area by taking advantage of the symmetry of the array to minimize the unit cell area by sharing transistor source/drain regions with adjacent transistors in a two-dimensional configuration. In the embodiment of FIG. 2, the unit cell area is 8F² with more than four times the transistor current drive that can be obtained from a one-transistor drive in a conventional 8F² unit cell layout. The circuit biasing scheme is similar to the conventional planar transistor circuits with perpendicular word lines and top electrode select lines.
However, the fabrication process is simpler since no trench isolation regions are needed for element isolation. [0078] FIG. 14 illustrates a second embodiment in which, similar to the embodiment of FIG. 2, the word lines 20 run horizontally and vertically in a square lattice configuration. The phase change memory elements 25 are positioned within the lattice of the word lines 20, alternating horizontally and vertically with bitline contacts 26. The bitlines 21 run diagonally between bitline contacts 26. For ease of illustration, not all bitlines are shown. [0079] The top electrode select lines 322 have a "wavy" configuration such that every other diagonally adjacent phase change memory element 25 is in contact with the same top electrode select line 322, but no two adjacent phase change memory elements 25 are in contact with the same top electrode select line 322. For ease of illustration, not all top electrode select lines are shown. [0080] This configuration of top electrode select lines 322 has a benefit over the configuration of FIG. 2, since fewer top electrode select lines 322 are necessary and they may be relatively easier to pattern. [0081] Otherwise, the methods for forming the second embodiment illustrated in FIG. 14 are the same as the methods for forming the first embodiment illustrated in FIG. 2. As shown in the expanded top view of FIG. 15A and the cross-section taken along line 15B-15B in FIG. 15B, the word lines 20, phase change memory elements 25, bitline contacts 26 and bitlines 21 have the same configuration as the embodiment in FIG. 2. Only the top electrode select lines 322 have a different configuration, being curved around every other phase change memory element and making contact with every other phase change memory element on a diagonal line. [0082] FIG. 16 illustrates a third embodiment in which the word lines 420a, 420b, 420c run at 60 degree angles with respect to each other in a hexagonal lattice configuration.
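A layout rule repeated across these embodiments is that no two adjacent phase change memory elements may be contacted by the same top electrode select line. That rule can be checked programmatically; the element coordinates and the adjacency list below are illustrative assumptions, not a layout from the specification:

```python
def valid_select_line_assignment(assignment, adjacency):
    """assignment maps element -> select line index; adjacency lists pairs of
    elements that are adjacent in the layout. Returns True when no two
    adjacent elements share the same top electrode select line."""
    return all(assignment[a] != assignment[b] for a, b in adjacency)

# Four elements in a small illustrative grid, each adjacent to two neighbors:
adjacency = [((0, 0), (0, 1)), ((0, 0), (1, 0)),
             ((1, 1), (0, 1)), ((1, 1), (1, 0))]
ok = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
bad = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}
assert valid_select_line_assignment(ok, adjacency)
assert not valid_select_line_assignment(bad, adjacency)
```

The "wavy" select lines of FIG. 14 are one routing that satisfies this constraint with fewer distinct lines than the straight routing of FIG. 2.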
The phase change memory elements 425 are positioned within a lattice formed of a first array of horizontal word lines 420a, a second array of word lines 420b rotated at a +60 degree angle from the first array of word lines 420a, and a third array of word lines 420c rotated at a -60 degree angle from the horizontal word lines 420a. The bitline contacts 426 are also positioned within the lattice formed of word lines 420a, 420b, 420c, alternating with the phase change memory elements 425, so that no two adjacent enclosures formed by word lines 420a, 420b, 420c have phase change memory elements 425 in them and no two adjacent enclosures formed by word lines 420a, 420b, 420c have bitline contacts 426 in them. The bitline contacts 426 may be individually addressed, or can be grounded or biased at the same voltage. For ease of illustration, not all bitlines are shown. [0083] To program a selected phase change memory element 425a, the three word lines 420a', 420b', 420c' enclosing the selected phase change memory element 425a are turned on. A top electrode select line 422a that is in contact with the selected phase change memory element 425a is also selected. The top electrode select lines 422, although shown here with 422a in a straight line, may have any configuration since no two phase change memory elements 425 are adjacent to each other. For ease of illustration, not all top electrode select lines are shown. All of the bitline contacts 426 are grounded or biased at the same voltage. The three transistors enclosing the phase change memory element 425a are turned on to supply programming current to the element 425a. Current flows from the selected top electrode select line 422a through the phase change memory element 425a into the three nearest bitline contacts 426a. [0084] The embodiment of FIG.
16 having word lines configured in a hexagonal lattice configuration with three enclosing transistors around the phase change memory elements can provide to each phase change memory element a current that is more than three times greater than a conventional planar transistor. At the same time, this array optimizes the silicon area by taking advantage of the symmetry of the array to minimize the unit cell area by sharing transistor source/drain regions with adjacent transistors. In the embodiment of FIG. 16, the unit cell area is 2√3F². [0085] Turning now to FIGS. 17A-21B, which illustrate the process by which the embodiment of FIG. 16 is formed, gate materials 427 are deposited over a silicon substrate 410, as shown in FIGS. 17A and 17B. FIG. 17A illustrates an expanded top view of an initial stage of fabrication and FIG. 17B is a cross-section taken across line 17B-17B of FIG. 17A. The silicon substrate 410 may be provided with ion implantation to define a desired dopant profile. Photolithography and dry etch processes may be used to etch a hexagonal array pattern into the silicon substrate 410, which is filled with high-density plasma (HDP) oxide to form shallow trench isolation (STI) regions 428. [0086] As shown in FIG. 18A, a resist pattern 437 is provided over the substrate 410, such that intersections are provided over the STI regions 428. FIG. 18B illustrates a cross-section taken across line 18B-18B in the expanded top view of FIG. 18A. [0087] A photolithography and dry etch process is performed to produce gate stacks of word lines 420a, 420b, 420c that intersect over the STI regions 428, as shown in the expanded top view of FIG. 19A. The photolithography and dry etch process is used to etch isolated gate stacks of word lines 420a, 420b, 420c, stopping above the silicon substrate 410, as shown in the cross-section illustrated in FIG. 19B, taken across line 19B-19B of FIG. 19A.
Nitride spacers are formed to complete the formation of the transistors and source/drain regions 423 are formed. A silicide metal (such as Co, Ni or Ti) is deposited for source/drain silicidation (or salicidation for polysilicon/TEOS gate stacks). [0088] Because the gate stacks of word lines 420a, 420b, 420c are isolated, they must be electrically connected in order to form word lines. FIG. 20A illustrates an expanded top view of this connection and FIG. 20B is a cross-section taken along line 20B-20B of FIG. 20A. As shown in FIG. 20B, contacts 430a are formed to electrically connect the gate stacks of the first array of word lines 420a to a first array of horizontally-aligned straps 429a. Contacts 430b are formed to electrically connect the gate stacks of the second array of word lines 420b with a second array of straps 429b, which are positioned along the second array of word lines 420b. Contacts 430c are formed to electrically connect the gate stacks of the third array of word lines 420c to a third array of straps 429c, which are positioned along the third array of word lines 420c. All three arrays of straps 429a, 429b, 429c are typically conductive metal lines having a nitride encapsulating layer 431a, 431b, 431c provided over them to electrically isolate the straps 429a, 429b, 429c. [0089] A plurality of top electrode select lines 422 are provided in contact with the top electrodes of the phase change memory elements 425; however, no two adjacent phase change memory elements 425 are connected to the same top electrode select line 422, as shown in FIGS. 21A and 21B. It should be understood that, for simplicity of illustration, the transistors and straps connecting them are represented as word lines 420a, 420b, 420c. [0090] The embodiment of FIG.
16 having word lines configured in a hexagonal lattice configuration may also be fabricated with one word line array using recessed transistors, while the other two word line arrays are conventional transistors, or with all three word line arrays having conventional transistors, as described above. Another method of forming the embodiment of FIG. 16 may be the third method described above with respect to FIGS. 9A, 9B and 11A-12B, which employs photo-patterning and dry etch techniques to form the enclosing gate stacks. [0091] FIG. 22 illustrates a fourth embodiment in which the word lines 520 have a "ladder-shaped" configuration, consisting of two parallel segments 520' and shorter segments 520" connecting the two parallel segments 520'. The two parallel segments 520' run on either side of a column of alternating phase change memory elements 525 and bitline contacts 526, while the shorter segments 520" are positioned between the phase change memory elements 525 and the bitline contacts 526. The bitline contacts 526 may all be grounded or biased at the same voltage. For ease of illustration, not all bitlines are shown. [0092] To program a selected phase change memory element 525a, the word line 520a enclosing the selected phase change memory element 525a is turned on. A top electrode select line 522a that is in contact with the selected phase change memory element 525a is also selected. For ease of illustration, not all top electrode select lines are shown. The four transistors of the selected word line 520a enclosing the phase change memory element 525a are turned on to supply programming current to the element 525a. Current flows from the selected top electrode select line 522a through the phase change memory element 525a to the common source/drain region of the transistors of the word line 520a and across the transistors to the common source/drain regions at the nearest bitline contacts 526a. [0093] The embodiment of FIG.
22 having word lines configured in a "ladder" lattice configuration with four enclosing transistors around the phase change memory elements can provide to each phase change memory element a current that is at least four times greater than a conventional planar transistor. At the same time, this array optimizes the silicon area by taking advantage of the symmetry of the array to minimize the unit cell area by sharing transistor source/drain regions with adjacent transistors. In the embodiment of FIG. 22, the unit cell area is less than 14F². [0094] FIG. 23 illustrates a fifth embodiment in which the word lines 620 have a "rounded ladder-shaped" configuration, consisting of rings 620' enclosing the phase change memory elements 625 that are connected by segments 620" that enclose bitline contacts 626. The bitline contacts 626 and phase change memory elements 625 are alternately positioned in columns and rows, with at least a ring 620' and/or segment 620" between them. Because the word lines 620 are curved, the transistor effective width of the word lines 620 is increased when compared to a straight word line in the same configuration. The unit cell area is less than 14F². [0095] FIG. 24 illustrates a sixth embodiment in which the word lines 720 have a ladder-shaped configuration, consisting of two parallel segments 720' and rung segments 720" connecting the two parallel segments 720'. The two parallel segments 720' run on either side of a column of phase change memory elements 725, with the rung segments 720" being positioned between the phase change memory elements 725. Bitline contacts 726 are positioned within the rows of phase change memory elements 725, alternating with the phase change memory elements 725, and placed between the word lines 720. The bitline contacts 726 may all be grounded or biased at the same voltage. For ease of illustration, not all bitlines are shown.
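The multiplied current drive cited for these embodiments follows from the enclosing transistors conducting in parallel: the programming current through the element divides among the transistors on its way to the surrounding bitline contacts, so the deliverable current scales with the number of enclosing transistors. A minimal sketch, under the simplifying assumption that each enclosing transistor matches the drive of one conventional planar access transistor:

```python
def max_programming_current(n_enclosing, i_per_transistor=1.0):
    """Enclosing transistors conduct in parallel, so the current the cell can
    sink is the sum of the individual transistor drive currents.
    i_per_transistor is an assumed normalized value, not a specified figure."""
    return n_enclosing * i_per_transistor

# Square, ladder, and diamond lattices enclose each element with four
# transistors; the hexagonal lattice of FIG. 16 encloses each with three.
assert max_programming_current(4) == 4.0
assert max_programming_current(3) == 3.0
```

This simple model reproduces the roughly four-times and three-times drive figures quoted for the four- and three-transistor enclosures, before accounting for the additional effective-width gains of curved word lines.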
[0096] To program a selected phase change memory element 725a, the word line 720a enclosing the selected phase change memory element 725a is turned on. A top electrode select line 722a that is in contact with the selected phase change memory element 725a is also selected. For ease of illustration, not all top electrode select lines are shown. The four transistors enclosing the phase change memory element 725a are turned on to supply programming current to the element 725a. Current flows from the selected top electrode select line 722a through the phase change memory element 725a across the transistors of the selected word line 720a to the adjacent bitline contacts 726a. Current also flows through the transistors 720" to the common source/drain region of the transistors 720" and a neighboring transistor 720' to the adjacent bitline contacts 726b. [0097] The embodiment of FIG. 24 having word lines configured in a ladder lattice configuration with four enclosing transistors around the phase change memory elements can provide to each phase change memory element a current that is at least three times greater than a conventional planar transistor. At the same time, this array optimizes the silicon area by taking advantage of the symmetry of the array to minimize the unit cell area by sharing transistor source/drain regions with adjacent transistors. In the embodiment of FIG. 24, the unit cell area is approximately 8F². [0098] FIGS. 25A-27B illustrate a first method of forming the phase change memory array of FIG. 24. FIG. 25A is an expanded top view of the memory array at an initial stage of fabrication. FIG. 25B is a cross-section of FIG. 25A, taken across line 25B-25B. An ion implantation process may be performed to define a desired dopant profile in the silicon substrate 710. An array of ladder-like word lines 720 is patterned on the silicon substrate 710 by photolithography and dry etch processes. [0099] Turning now to FIGS.
26A and 26B, nitride spacers may be formed on the word lines 720 before source/drain regions 723 are formed by one or more high-dose implants. A silicide metal such as Co, Ni, or Ti is deposited for silicidation (or salicidation if the gate stacks of the word lines are polysilicon/TEOS gate stacks) of the source/drain regions 723. [00100] Self-aligned metal contacts and bitline contacts 726 are formed over the source/drain regions 723, as shown in FIGS. 27A and 27B. Material for the bitlines 721 is deposited and patterned. The phase change memory elements 725 are formed in layers, as shown in FIGS. 1A and 1B, and the top electrode select lines 722a are formed with a contact to the top electrode 4 of the phase change memory elements 725. [00101] FIG. 28 illustrates a seventh embodiment in which the word lines 820 have a "diamond" lattice configuration, enclosing the phase change memory elements 825 with four transistors in a diamond-shaped configuration. Bitline contacts 826 are positioned between columns of diamond-shaped word lines 820. More or fewer bitline contacts 826 may be provided than are shown. The bitline contacts 826 may all be grounded or biased at the same voltage. [00102] To program a selected phase change memory element 825a, the word line 820a enclosing the selected phase change memory element 825a is turned on. A top electrode select line 822a that is in contact with the selected phase change memory element 825a is also selected. For ease of illustration, not all top electrode select lines are shown. The four transistors enclosing the phase change memory element 825a are turned on to supply programming current to the element 825a. Current flows from the selected top electrode select line 822a through the phase change memory element 825a into the common source/drain regions surrounding the enclosing transistors to the adjacent bitline contacts 826a. [00103] The embodiment of FIG.
28 having word lines configured in a diamond lattice configuration with four enclosing transistors around the phase change memory elements can provide to each phase change memory element a current that is at least four times greater than a conventional planar transistor. At the same time, this array optimizes the silicon area by taking advantage of the symmetry of the array to minimize the unit cell area by sharing transistor source/drain regions with adjacent transistors. In the embodiment of FIG. 28, the unit cell area is less than 9.5F². [00104] FIG. 29 illustrates an eighth embodiment, which is a variation on FIG. 28 having a phase change memory array with word lines 920a in a diamond lattice configuration. However, top electrode select line 922a is wavy, and runs in a perpendicular line across the plurality of word lines 920 and phase change memory elements 925, 925a. [00105] FIGS. 30A-32 illustrate a method of forming the phase change memory array of FIG. 28. FIG. 30A is an expanded top view of the memory array at an initial stage of fabrication. FIG. 30B is a cross-section of FIG. 30A, taken across line 30B-30B. An ion implantation process may be performed to define a desired dopant profile in the silicon substrate 810. An array of diamond-like word lines 820 is patterned on the silicon substrate 810 by photolithography and dry etch processes. [00106] Turning now to FIG. 31, nitride spacers may be formed on the word lines 820 before source/drain regions 823 are formed by one or more high-dose implants. A silicide metal such as Co, Ni, or Ti is deposited for silicidation (or salicidation if the gate stacks of the word lines are polysilicon/TEOS gate stacks) of the source/drain regions 823. [00107] Self-aligned metal contacts and bitline contacts 826 are formed over the source/drain regions 823, as shown in FIG. 32. Material for the bitlines 821 is deposited and patterned. The phase change memory element 825 is formed in layers, as shown in FIGS.
1A and 1B, and the top electrode select line 822 is formed with a contact to the top electrode 4 of the phase change memory elements 825. A similar method may be employed to form the phase change memory array of FIG. 29. [00108] FIG. 33 illustrates a ninth embodiment in which the word lines 1020 have a "triangular" lattice configuration, enclosing the phase change memory elements 1025 with three transistors in a triangle configuration. Bitline contacts 1026 may be positioned near the apex of the triangle-shaped word lines 1020 or other locations outside of the three enclosing transistors. The bitline contacts 1026 may all be grounded or biased at the same voltage. [00109] To program a selected phase change memory element 1025a, the word line 1020a enclosing the selected phase change memory element 1025a is turned on. A top electrode select line 1022a that is in contact with the selected phase change memory element 1025a is also selected. For ease of illustration, not all top electrode select lines are shown. The three transistors enclosing the phase change memory element 1025a are turned on to supply programming current to the element 1025a. Current flows from the selected top electrode select line 1022a through the phase change memory element 1025a across the transistors of the enclosing word lines 1020 and into the common source/drain regions to the adjacent bitline contacts 1026a. [00110] The embodiment of FIG. 33 having word lines 1020 configured in a triangular lattice configuration with three enclosing transistors around the phase change memory elements can provide to each phase change memory element a current that is about five times greater than a conventional planar transistor. At the same time, this array optimizes the silicon area by taking advantage of the symmetry of the array to minimize the unit cell area by sharing transistor source/drain regions with adjacent transistors. In the embodiment of FIG. 33, the unit cell area is less than 16F². [00111] FIG.
34 illustrates a tenth embodiment which is a variation on FIG. 33 having a phase change memory array with word lines 1120a in a triangular lattice configuration. However, top electrode select line 1122a is straight, and runs at an angle across the plurality of word lines 1120 and phase change memory elements 1125, 1125a. [00112] FIG. 35 illustrates a simplified processor system 100 which includes a memory circuit 106 having a phase change memory array constructed in accordance with the invention. [00113] The FIG. 35 processor system 100, which can be any system including one or more processors, for example, a computer, PDA, phone or other control system, generally comprises a central processing unit (CPU) 102, such as a microprocessor, a digital signal processor, or other programmable digital logic device, which communicates with an input/output (I/O) device 105 over a bus 101. The memory circuit 106 communicates with the CPU 102 over the bus 101, typically through a memory controller. The memory circuit 106 includes one or more of the phase change memory arrays depicted in FIGS. 2, 14, 16, 22-24, 28, 29, 33 and/or 34. [00114] In the case of a computer system, the processor system 100 may include peripheral devices such as a compact disc (CD) ROM drive 103 and hard drive 104, which also communicate with the CPU 102 over the bus 101. If desired, the memory circuit 106 may be combined with the processor, for example, CPU 102, in a single integrated circuit. [00115] While various embodiments have been described herein as relating to phase change memory arrays, it should be appreciated that the lattice arrays and transistor arrangements described herein may be used with other variable resistance memory technologies and other technologies that require high programming current. Examples of such memory technologies include MRAM, RRAM, STT (Spin-Torque-Transfer), and the like.
[00116] The above description and drawings are only to be considered illustrative of specific embodiments, which achieve the features and advantages described herein. Modifications and substitutions to specific process conditions and structures can be made. Accordingly, the embodiments of the invention are not to be considered as being limited by the foregoing description and drawings, but are only limited by the scope of the appended claims.
A method and an apparatus for selectively processing a layer of a workpiece based upon dependencies with other layers in the workpiece. A process step is performed upon the workpiece. Metrology data relating to the workpiece is acquired. A process adjustment relating to a first layer on the workpiece is calculated based upon the metrology data. A determination is made whether an error on a second layer on the workpiece would occur in response to an implementation of the process adjustment performed on the first layer. A magnitude of the calculated process adjustment is reduced in response to a determination that the second layer would be affected by the implementation of the process adjustment.
1. A method, comprising:performing a process step upon a workpiece; acquiring metrology data relating to said workpiece; calculating a process adjustment relating to a first layer on said workpiece based upon said metrology data; determining whether an error on a second layer on said workpiece would occur in response to an implementation of said process adjustment performed on said first layer; and reducing a magnitude of said calculated process adjustment in response to a determination that an error on said second layer would occur in response to said implementation of said process adjustment. 2. The method of claim 1, wherein performing said process step upon said workpiece further comprises performing said process step upon a semiconductor wafer.3. The method of claim 1, wherein determining whether an error on said second layer would occur in response to said implementation of said process adjustment upon said first layer further comprises determining whether an overlay misalignment between said first and second layer will occur in response to said implementation of said process adjustment.4. The method of claim 1, wherein reducing said magnitude of said calculated process adjustment further comprises reducing the amount of process control modification implemented upon said first layer.5. The method of claim 4, wherein reducing said magnitude of said calculated process adjustment further comprises reducing an amount of process control modification performed upon a first feature on said first layer that is aligned with a second feature upon said second layer.6. The method of claim 5, wherein reducing said amount of process control modification performed upon said first feature on said first layer further comprises reducing an amount of process control modification performed upon a trench formation on said first layer.7. 
The method of claim 1, further comprising implementing said calculated process adjustment upon said first layer in response to a determination that said second layer would not be adversely affected by said implementation of said process adjustment upon said first layer.8. The method of claim 7, wherein implementing said calculated process adjustment upon said first layer further comprises implementing said calculated process adjustment upon an implant site on said workpiece.9. The method of claim 1, wherein reducing said magnitude of said calculated process adjustment further comprises filtering said calculated process adjustment to reduce an impact of implementing a control modification based upon said calculated process adjustment.10. A method, comprising:performing a process step upon a first layer of a workpiece; acquiring metrology data relating to said workpiece; calculating a process adjustment relating to said first layer on said workpiece based upon said metrology data; determining whether an overlay misalignment between said first and a subsequent layer on said workpiece would occur in response to an implementation of said process adjustment performed on said first layer; and reducing a magnitude of said calculated process adjustment in response to a determination that said overlay misalignment would occur in response to said implementation of said process adjustment. 11. 
An apparatus, comprising:means for performing a process step upon a workpiece; means for acquiring metrology data relating to said workpiece; means for calculating a process adjustment relating to a first layer on said workpiece based upon said metrology data; means for determining whether an error on a second layer on said workpiece would occur in response to an implementation of said process adjustment performed on said first layer; and means for reducing a magnitude of said calculated process adjustment in response to a determination that an error on said second layer would occur in response to said implementation of said process adjustment. 12. A system, comprising:a processing tool to process a workpiece; and a process controller operatively coupled to said processing tool, said process controller to perform a control tuning function, said control tuning function comprising calculating a process adjustment relating to a first layer on said workpiece based upon metrology data, and reducing a magnitude of said calculated process adjustment in response to a determination that an overlay misalignment of said first and said subsequent layer would occur in response to an implementation of said calculated process adjustment performed on said first layer. 13. The system of claim 12, wherein said workpiece is a semiconductor wafer.14. The system of claim 12, further comprising:a metrology tool operatively coupled to said process controller and to said processing tool, said metrology tool to acquire metrology data relating to said processed workpiece; a fault detection and classification (FDC) unit operatively coupled to said process controller, said fault detection and classification unit to perform said fault detection process; and a control tuning unit operatively coupled to said process controller, said control tuning unit to determine whether said magnitude of said calculated process adjustment is to be reduced. 15. 
The system of claim 14, further comprising a database unit to store said at least one of metrology data, said tool state data, and said electrical test data.16. An apparatus, comprising:a process controller to perform a control tuning function for processing a workpiece, said control tuning function comprising calculating a process adjustment relating to a first layer on said workpiece based upon metrology data, and reducing a magnitude of said calculated process adjustment in response to a determination that an overlay misalignment of said first and said subsequent layer would occur in response to an implementation of said calculated process adjustment performed on said first layer. 17. The apparatus of claim 16, wherein said workpiece is a semiconductor wafer.18. The apparatus of claim 16, further comprising:a metrology tool operatively coupled to said process controller and to said processing tool, said metrology tool to acquire metrology data relating to said processed workpiece; a fault detection and classification (FDC) unit operatively coupled to said process controller, said fault detection and classification unit to perform said fault detection process; and a control tuning unit operatively coupled to said process controller, said control tuning unit to determine whether said magnitude of said calculated process adjustment is to be reduced. 19. 
A computer readable program storage device encoded with instructions that, when executed by a computer, performs a method, comprising:performing a process step upon a workpiece; acquiring metrology data relating to said workpiece; calculating a process adjustment relating to a first layer on said workpiece based upon said metrology data; determining whether an error on a second layer on said workpiece would occur in response to an implementation of said process adjustment performed on said first layer; and reducing a magnitude of said calculated process adjustment in response to a determination that an error on said second layer would occur in response to said implementation of said process adjustment. 20. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 19, wherein performing said process step upon said workpiece further comprises performing said process step upon a semiconductor wafer.21. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 19, wherein determining whether said second layer would be affected in response to said implementation of said process adjustment upon said first layer further comprises determining whether an overlay misalignment between said first and second layer will occur in response to said implementation of said process adjustment.22. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 19, wherein reducing a magnitude of said calculated process adjustment further comprises reducing the amount of process control modification implemented upon said first layer.23. 
The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 22, wherein reducing said magnitude of said calculated process adjustment further comprises reducing said amount of process control modification performed upon a first feature on said first layer that is aligned with a second feature upon said second layer.24. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 23, wherein reducing said amount of process control modification performed upon said first feature on said first layer further comprises reducing said amount of process control modification performed upon trench formation on said first layer.25. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 19, further comprising implementing said calculated process adjustment upon said first layer in response to a determination that said second layer would not be adversely affected by said implementation of said process adjustment upon said first layer.26. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 25, wherein implementing said calculated process adjustment upon said first layer further comprises implementing said calculated process adjustment upon an implant site on said workpiece.27. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method of claim 19, wherein reducing said magnitude of said calculated process adjustment further comprises filtering said calculated process adjustment to reduce an impact of implementing a control modification based upon said calculated process adjustment.
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to semiconductor manufacturing, and, more particularly, to a method and apparatus for selectively processing layers of a semiconductor wafer based upon layer dependencies.

2. Description of the Related Art

The technology explosion in the manufacturing industry has resulted in many new and innovative manufacturing processes. Today's manufacturing processes, particularly semiconductor manufacturing processes, call for a large number of important steps. These process steps are usually vital, and therefore, require a number of inputs that are generally fine-tuned to maintain proper manufacturing control.The manufacture of semiconductor devices requires a number of discrete process steps to create a packaged semiconductor device from raw semiconductor material. The various processes, from the initial growth of the semiconductor material, the slicing of the semiconductor crystal into individual wafers, the fabrication stages (etching, doping, ion implanting, or the like), to the packaging and final testing of the completed device, are so different from one another and specialized that the processes may be performed in different manufacturing locations that contain different control schemes.Generally, a set of processing steps is performed across a group of semiconductor wafers, sometimes referred to as a lot. For example, a process layer that may be composed of a variety of different materials may be formed across a semiconductor wafer. Thereafter, a patterned layer of photoresist may be formed across the process layer using known photolithography techniques. Typically, an etch process is then performed across the process layer using the patterned layer of photoresist as a mask. This etching process results in the formation of various features or objects in the process layer. Such features may be used as, for example, a gate electrode structure for transistors. 
Many times, trench isolation structures are also formed across the substrate of the semiconductor wafer to isolate electrical areas across a semiconductor wafer. One example of an isolation structure that can be used is a shallow trench isolation (STI) structure.The manufacturing tools within a semiconductor manufacturing facility typically communicate with a manufacturing framework or a network of processing modules. Each manufacturing tool is generally connected to an equipment interface. The equipment interface is connected to a machine interface to which a manufacturing network is connected, thereby facilitating communications between the manufacturing tool and the manufacturing framework. The machine interface can generally be part of an advanced process control (APC) system. The APC system initiates a control application, which can be a software program that automatically retrieves the data needed to execute a manufacturing process.FIG. 1 illustrates a typical semiconductor wafer 105. The semiconductor wafer 105 typically includes a plurality of individual semiconductor die 103 arranged in a grid 150. Using known photolithography processes and equipment, a patterned layer of photoresist may be formed across one or more process layers that are to be patterned. As part of the photolithography process, an exposure process is typically performed by a stepper on multiple die 103 locations at a time, depending on the specific photomask employed. The patterned photoresist layer can be used as a mask during etching processes, wet or dry, performed on the underlying layer or layers of material, e.g., a layer of polysilicon, metal, or insulating material, to transfer the desired pattern to the underlying layer. The patterned layer of photoresist is comprised of a plurality of features, e.g., line-type features or opening-type features that are to be replicated in an underlying process layer.Turning now to FIG. 
2, a typical flow of processes performed on a semiconductor wafer 105 by a semiconductor manufacturing system is illustrated. A manufacturing system processes semiconductor wafers 105 from a batch/lot (block 210). Upon processing semiconductor wafers 105, the manufacturing system may acquire metrology data relating to the processed semiconductor wafers 105 (block 220). Based upon the analysis of the metrology data, the manufacturing system may determine one or more process control modifications that may be implemented on subsequent processes performed on the semiconductor wafers 105 (block 230). The manufacturing system may then process subsequent semiconductor wafers 105 based upon the calculated process modifications (block 240). The process modification may include modifying several features on a layer on the wafer 105, which may result in overlay misalignment of that layer relative to other layers on the semiconductor wafers 105.One problem associated with the current methodology is that a modification made upon a target layer (i.e., a layer targeted for process control modifications) may cause adverse effects on subsequent layers formed on the semiconductor wafers 105. For example, process modifications may be made to a particular feature on a target layer, resulting in a change in the alignment of the feature relative to the alignment of corresponding features on subsequently formed layers. In the context of a photolithography process, control adjustments performed on one layer may adversely affect a plurality of other layers due to a shift in the alignment of features on various layers of the semiconductor wafers 105. Generally, process control adjustments are made to features on layers on a wafer 105 based upon control adjustments calculated to improve accuracy when processing semiconductor wafers 105. 
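The flow of blocks 210-240 amounts to a run-to-run feedback loop. A minimal sketch, assuming a simple integral control law for block 230 (the text does not specify one):

```python
# Minimal run-to-run controller mirroring blocks 210-240 of FIG. 2.
# The integral update used for block 230 is an assumed control law.

def run_to_run(targets, tool, gain=0.5):
    correction = 0.0
    results = []
    for target in targets:
        result = tool(target + correction)   # blocks 210/240: process wafer
        error = target - result              # block 220: acquire metrology
        correction += gain * error           # block 230: compute modification
        results.append(result)
    return results

# Demo: a tool with a constant -5.0 unit offset; feedback converges on target.
offset_tool = lambda setpoint: setpoint - 5.0
print(run_to_run([100.0] * 6, offset_tool)[-1])   # approaches 100.0
```

The problem described next arises because this loop optimizes each target layer in isolation: the correction it converges on says nothing about alignment with other layers.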
However, utilizing the current methodology, corrections made to one layer may cause misalignment problems such that the overall accuracy and reliability of the semiconductor wafer 105 may be compromised.The present invention is directed to overcoming, or at least reducing, the effects of one or more of the problems set forth above.

SUMMARY OF THE INVENTION

In one aspect of the present invention, a method is provided for selectively processing a layer of a workpiece based upon dependencies with other layers in the workpiece. A process step is performed upon a workpiece. Metrology data relating to the workpiece is acquired. A process adjustment relating to a first layer on the workpiece is calculated based upon the metrology data. A determination is made whether an error on a second layer on the workpiece would occur in response to an implementation of the process adjustment performed on the first layer. A magnitude of the calculated process adjustment is reduced in response to a determination that the second layer would be affected by the implementation of the process adjustment.In another aspect of the present invention, a system is provided for selectively processing a layer of a workpiece based upon dependencies with other layers in the workpiece. The system includes a processing tool to process a workpiece. The system also includes a process controller operatively coupled to the processing tool. The process controller is capable of performing a control tuning function. The control tuning function includes calculating a process adjustment relating to a first layer on the workpiece based upon metrology data. 
The control tuning function also includes reducing a magnitude of the calculated process adjustment in response to a determination that an overlay misalignment of the first and a subsequent layer would occur in response to an implementation of the calculated process adjustment upon the first layer.In another aspect of the present invention, an apparatus is provided for selectively processing a layer of a workpiece based upon dependencies with other layers in the workpiece. The apparatus includes a process controller adapted to perform a control tuning function for processing a workpiece. The control tuning function includes calculating a process adjustment relating to a first layer on the workpiece based upon metrology data. The control tuning function also includes reducing a magnitude of the calculated process adjustment in response to a determination that an overlay misalignment of the first and a subsequent layer would occur in response to an implementation of the calculated process adjustment upon the first layer.In yet another aspect of the present invention, a computer readable program storage device encoded with instructions is provided for selectively processing a layer of a workpiece based upon dependencies with other layers in the workpiece. 
The computer readable program storage device encoded with instructions that, when executed by a computer, performs a method, which comprises: performing a process step upon a workpiece; acquiring metrology data relating to the workpiece; calculating a process adjustment relating to a first layer on the workpiece based upon the metrology data; determining whether a second layer on the workpiece would be affected in response to an implementation of the process adjustment performed on the first layer; and reducing a magnitude of the calculated process adjustment in response to a determination that the second layer would be affected in response to the implementation of the process adjustment.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

FIG. 1 is a simplified diagram of a prior art semiconductor wafer being processed;

FIG. 2 illustrates a simplified flowchart depiction of a prior art process flow during manufacturing of semiconductor wafers;

FIG. 3 provides a block diagram representation of a system in accordance with one illustrative embodiment of the present invention;

FIG. 4 illustrates a more detailed block diagram representation of a control tuning unit of FIG. 3, in accordance with one illustrative embodiment of the present invention;

FIG. 5 illustrates a more detailed block diagram representation of the system shown in FIG. 3, in accordance with one illustrative embodiment of the present invention;

FIG. 6 illustrates a flowchart depiction of a method in accordance with one illustrative embodiment of the present invention; and

FIG. 7 illustrates a more detailed flowchart depiction of a method of performing a control tuning process, as indicated in FIG. 
6, in accordance with one illustrative embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.There are many discrete processes that are involved in semiconductor manufacturing. Many times, workpieces (e.g., semiconductor wafers 105, semiconductor devices, etc.) are stepped through multiple manufacturing process tools. Embodiments of the present invention provide for processing a plurality of layers on semiconductor wafers 105 and acquiring metrology data related to the processed layers. Varied control adjustments may be performed on selected layers, such that subsequently processed layers on the semiconductor wafers 105 are not adversely affected. 
A control tuning process may be performed such that the magnitude of a control adjustment that may normally be performed on a target layer of a wafer 105 is attenuated to reduce adverse effects upon subsequently processed layers.During photolithography processes, control operations performed on one layer may affect a plurality of other layers on the semiconductor wafers 105. For example, a target layer on the semiconductor wafers 105 may house a reference pattern or feature that may correspond to patterns or features formed on a plurality of other layers. Therefore, modifications performed on the reference pattern may affect a plurality of other layers. A control modification performed on a target layer, which may comprise a reference pattern, may cause overlay misalignment of the target layer relative to subsequent layers formed on the wafer 105. Therefore, a reduced control adjustment may be selectively performed upon a layer to reduce the possibility of overlay misalignment with other layers on the wafer 105. A layer may be examined for certain patterns. If a target layer predominantly does not have reference patterns or features that are related to features on other layers, more aggressive process control modifications may be implemented upon the target layer without substantially affecting alignment with subsequent layers. For layers comprising features (e.g., reference patterns) where alignment with other layers is more important, less aggressive process control adjustments may be implemented.Turning now to FIG. 3, a block diagram depiction of a system 300 in accordance with embodiments of the present invention is illustrated. A process controller 310 in the system 300 is capable of controlling various operations relating to a processing tool 510. The system 300 is capable of acquiring manufacturing related data, such as metrology data related to processed semiconductor wafers 105, tool state data, and the like. 
The system 300 may also comprise a metrology tool 550 to acquire metrology data related to the processed semiconductor wafers 105.

The system 300 may also comprise a database unit 340. The database unit 340 is provided for storing a plurality of types of data, such as manufacturing-related data, or data related to the operation of the system 300 (e.g., the status of the processing tool 510, the status of semiconductor wafers 105, etc.). The database unit 340 may store tool state data relating to a plurality of process runs performed by the processing tool 510. The database unit 340 may comprise a database server 342 for storing tool state data and/or other manufacturing data related to processing semiconductor wafers 105 into a database storage unit 345.

The system 300 may also comprise a fault detection and classification (FDC) unit 320. The fault detection and classification unit 320 is capable of providing data relating to faults during processing of semiconductor wafers 105. Fault detection analysis performed by the fault detection and classification unit 320 may include analysis of tool state data and/or metrology data. The FDC unit 320 may correlate particular tool state data to errors detected on the processed semiconductor wafers 105 by analyzing the metrology tool data. For example, particular errors, such as critical dimension errors discovered on the processed semiconductor wafers 105, may be correlated to particular gas flow rates or temperature data relating to tool state data. The fault detection performed by the FDC unit 320 may also include analyzing data from in situ sensors integrated into the processing tools 510.
Based upon the fault detection analysis provided by the FDC unit 320, the system 300 may perform a modification to a previously determined routing scheme.

Upon analysis of the metrology data and/or the fault detection data, the system 300 may determine that a process control adjustment is to be made upon a process layer formed on a semiconductor wafer 105. A control tuning unit 330 in the system 300 is capable of attenuating the calculated process adjustment on a layer and of discriminating on a layer-by-layer basis whether to implement the calculated control adjustment. In other words, the control tuning unit 330 may determine that a particular layer may not be suitable for a calculated control adjustment; therefore, a less aggressive, attenuated control adjustment may be performed on that layer. Additionally, the control tuning unit 330 may determine that the control adjustment to be performed upon a particular site on a layer should be attenuated to preserve proper alignment between various layers on the semiconductor wafer 105. In one embodiment, the attenuation of the calculated control adjustments may be performed to reduce misalignment of various layers on the semiconductor wafer 105.

The control tuning unit 330 may determine that a full-force control adjustment performed on a target layer may adversely affect the alignment of features on that layer with corresponding features on other layers on the semiconductor wafers 105. Therefore, a less aggressive control adjustment may be implemented. Corresponding features on layers may include various structures of a transistor, such as gate regions, source regions, drain regions, and the like. For features on the layer that generally do not have corresponding features on other layers, e.g., implant regions on a layer, control modifications as originally calculated are implemented, without attenuation of the control modifications.
The control tuning unit 330 may determine that another layer may be adjusted such that substantial adverse effects on subsequent layers do not take place; therefore, more aggressive control adjustments may be performed on those layers. Therefore, the control tuning unit 330 provides for discriminating on a layer-by-layer and/or a site-by-site basis whether to attenuate a control adjustment performed on particular layers. This may be particularly applicable to photolithography-type processes, where alignment of layers and/or alignment of features formed on the layers is relevant. A more detailed description of the control tuning unit 330 is provided in FIG. 4 and the accompanying description below.

The process controller 310, the FDC unit 320, and/or the control tuning unit 330 may be software, hardware, or firmware units that are standalone units or may be integrated into a computer system associated with the system 300. Furthermore, the various components represented by the blocks illustrated in FIG. 3 may communicate with one another via a system communications line 315. The system communications line 315 may be a computer bus link, a dedicated hardware communications link, a telephone system communications link, a wireless communications link, or other communication links that may be implemented by those skilled in the art having benefit of the present disclosure.

Generally, there are at least two types of masking processes performed on semiconductor wafers 105 during photolithography processes. One type of masking process provides for placing patterns or features on a first/target layer on the semiconductor wafers 105 where corresponding features on subsequent layers are not formed.
Therefore, subsequently processed layers may not need to be aligned substantially with the first layer. An example of such a process is a masking process that is used in an implant process, which generally does not act as a reference for features formed on other layers.

The second type of masking provides for placing patterns (i.e., reference patterns) that do affect alignment with other layers, for example, shallow isolation trenches; gate, source, or drain structures for a transistor; and the like. In other words, the target layer may comprise features that are associated with other corresponding features on subsequent layers formed on the semiconductor wafer 105. The target layer may thus house features that are to be substantially aligned with corresponding features on subsequently formed layers, so alignment of these layers is more relevant. Embodiments of the present invention provide for discriminating between layers that comprise patterns or features that do not affect other layers, and layers that comprise patterns/features that do affect other layers. Based upon such discrimination, aggressive, or alternatively, more passive control adjustments may be performed on a particular layer depending on the type of features formed on the layer.

Turning now to FIG. 4, a more detailed block diagram depiction of the control tuning unit 330 in accordance with one illustrative embodiment of the present invention is illustrated. Metrology data, fault data, and/or data relating to various layers formed on the semiconductor wafers 105 may be received by the control tuning unit 330. The control tuning unit 330 may comprise a control adjustment calculation unit 410 and a control adjustment filter unit 420.
Based upon the metrology data, the fault data, and/or the layer data, the control adjustment calculation unit 410 may calculate a process control adjustment to be performed on a layer to correct errors found on the semiconductor wafers 105.

The control adjustment filter unit 420 may filter certain control adjustment calculations based upon the nature of features on particular layers. The control adjustment filter unit 420 may filter out some or all of certain calculated control adjustments in order to make the control process less aggressive when implementing control adjustments on certain layers. In one embodiment, the magnitude of the modifications prescribed by the control adjustment calculation is reduced to implement smaller control adjustments, thereby decreasing the possibility of overlay misalignment when performing photolithography processes. This attenuation is performed on control adjustments made to layers where, if fully implemented, adverse effects (e.g., overlay misalignment with other layers) upon other layers may occur. For example, during an implant mask process performed on a semiconductor wafer 105, where part of the semiconductor wafer 105 is masked to implant dopants into the substrate, a non-attenuated control adjustment may be performed, since the implant process produces features that generally do not have to be substantially aligned with other features on other layers on the semiconductor wafer 105. Therefore, the system 300 may perform control adjustments that may be more aggressive, since downstream dependencies are not substantial.

For layers formed on the semiconductor wafers 105 that comprise features such as shallow isolation trenches, source/drain regions, active regions of a transistor, a contact metal via layer, etc., an attenuated version of the calculated control adjustment may be implemented, since a non-attenuated control adjustment performed on such layers may adversely affect alignment with other layers.
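The attenuation behavior attributed to the control adjustment filter unit 420 can be sketched as an iterative magnitude reduction. This is a hedged, minimal sketch, not the patented implementation: the overlay-impact predictor, the 0.8 step factor, and the iteration limit are all illustrative assumptions.

```python
def attenuate_adjustment(adjustment, predicted_overlay_impact, limit,
                         step=0.8, max_iterations=20):
    """Scale down `adjustment` until its predicted overlay impact on other
    layers falls within `limit`.

    `predicted_overlay_impact(adj)` is a hypothetical model assumed to be
    monotonic in the adjustment magnitude. If no acceptable magnitude is
    found within `max_iterations`, no adjustment is applied.
    """
    for _ in range(max_iterations):
        if predicted_overlay_impact(adjustment) <= limit:
            return adjustment   # acceptable: use the (possibly attenuated) value
        adjustment *= step      # attenuate and re-check
    return 0.0                  # give up: apply no adjustment at all

# Toy impact model: overlay impact proportional to adjustment magnitude.
impact = lambda adj: abs(adj) * 0.3
print(attenuate_adjustment(12.0, impact, limit=2.0))  # attenuated value
print(attenuate_adjustment(1.0, impact, limit=2.0))   # already acceptable
```

The re-check after each attenuation step mirrors the flow described later for FIG. 7, where the system verifies that the attenuated adjustment no longer excessively affects other layers before using it.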
Therefore, the control adjustment filter unit 420 may filter out certain control adjustment calculations to reduce overlay misalignment among various layers when controlling a target layer on the wafer 105. The control adjustment filter unit 420 generates attenuated control adjustment data, which may be used by the system 300 to perform attenuated control adjustments on alignment-sensitive layers on the semiconductor wafer 105. The control tuning unit 330 may provide attenuated and non-attenuated control adjustment data to be used by the process controller 310 to selectively implement control adjustments to certain layers on the semiconductor wafer 105.

Turning now to FIG. 5, a more detailed block diagram of the system 300 in accordance with one embodiment of the present invention is illustrated. Semiconductor wafers 105 are processed on processing tools 510a, 510b using a plurality of control input signals, or manufacturing parameters, provided via a line or network 523. The control input signals, or manufacturing parameters, on the line 523 are sent to the processing tools 510a, 510b from a computer system 530 via machine interfaces 515a, 515b. The first and second machine interfaces 515a, 515b are generally located outside the processing tools 510a, 510b. In an alternative embodiment, the first and second machine interfaces 515a, 515b are located within the processing tools 510a, 510b. The semiconductor wafers 105 are provided to and carried from a plurality of processing tools 510. In one embodiment, semiconductor wafers 105 may be provided to a processing tool 510 manually. In an alternative embodiment, semiconductor wafers 105 may be provided to a processing tool 510 in an automatic fashion (e.g., robotic movement of semiconductor wafers 105).
In one embodiment, a plurality of semiconductor wafers 105 is transported in lots (e.g., stacked in cassettes) to the processing tools 510.

In one embodiment, the computer system 530 sends control input signals, or manufacturing parameters, on the line 523 to the first and second machine interfaces 515a, 515b. The computer system 530 is capable of controlling processing operations. In one embodiment, the computer system 530 is a process controller. The computer system 530 is coupled to a computer storage unit 532 that may contain a plurality of software programs and data sets. The computer system 530 may contain one or more processors (not shown) that are capable of performing the operations described herein. The computer system 530 employs a manufacturing model 540 to generate control input signals on the line 523. In one embodiment, the manufacturing model 540 contains a manufacturing recipe that determines a plurality of control input parameters that are sent on the line 523 to the processing tools 510a, 510b. In one embodiment, the manufacturing model 540 defines a process script and input control that implement a particular manufacturing process. The control input signals (or control input parameters) on the line 523 that are intended for processing tool A 510a are received and processed by the first machine interface 515a. The control input signals on the line 523 that are intended for processing tool B 510b are received and processed by the second machine interface 515b. Examples of the processing tools 510a, 510b used in semiconductor manufacturing processes are steppers, etch process tools, deposition tools, and the like.

One or more of the semiconductor wafers 105 that are processed by the processing tools 510a, 510b can also be sent to a metrology tool 550 for acquisition of metrology data. The metrology tool 550 may be a scatterometry data acquisition tool, an overlay-error measurement tool, a critical dimension measurement tool, and the like.
In one embodiment, a metrology tool 550 examines one or more processed semiconductor wafers 105. The metrology data analysis unit 560 may collect, organize, and analyze data from the metrology tool 550. The metrology data is directed to a variety of physical or electrical characteristics of the devices formed across the semiconductor wafers 105. For example, metrology data may be obtained as to line width measurements, depth of trenches, sidewall angles, thickness, resistance, and the like. Metrology data may be used to determine faults that may be present across the processed semiconductor wafers 105, which may be used to quantify the performance of the processing tools 510.

As provided above, the control tuning unit 330 may receive metrology data from the metrology data analysis unit 560, fault detection data from the FDC unit 320, and/or stored manufacturing data from the database unit 340. The database unit 340 may provide data relating to features formed on particular layers on the semiconductor wafer 105. The control tuning unit 330 may then provide attenuated and non-attenuated control adjustment signals, based upon which layer is being targeted, to the computer system 530. The control tuning unit 330 may provide filtered feedback/feed-forward data that may be used to selectively and discriminatingly control some layers aggressively while more passively controlling other layers, such that acceptable alignment among various layers formed on the wafer 105 is maintained.

Turning now to FIG. 6, a flow chart depiction of a method in accordance with embodiments of the present invention is illustrated. The system 300 processes semiconductor wafers 105 that may be associated with a lot/batch (block 610). Upon processing the semiconductor wafers 105, the system 300 may acquire metrology data relating to the processed semiconductor wafers 105 (block 620).
The system 300 may perform a metrology data analysis to analyze errors that may occur on layers formed on the semiconductor wafers 105 (block 630). Furthermore, the system 300 may perform FDC analysis (block 640) to generate fault data associated with processing of the semiconductor wafers 105. The system 300 may then perform a control tuning process utilizing stored data, the metrology data analysis, and/or the fault detection data to selectively effect appropriate process control adjustments performed on layers formed on the semiconductor wafers 105 (block 650). The control tuning process provides attenuated and/or non-attenuated control adjustment data that is selectively used to aggressively adjust some layers while more passively adjusting other layers. The system 300 then implements the control adjustments on subsequent processes performed on the semiconductor wafers 105 based upon the attenuated and non-attenuated control adjustment calculations (block 660).

Turning now to FIG. 7, a more detailed flow chart depiction of the step of performing the control tuning process indicated in block 650 of FIG. 6 is illustrated. The system 300 determines the process control adjustments that are calculated (block 710). These process control adjustments call for performing control modifications such that errors detected on the layers of the semiconductor wafers 105 are diminished. Based upon the calculated adjustments, the system 300 may analyze a layer to determine which types of features formed on the layer are to be adjusted, and how these features affect other layers (block 720).
For example, the system 300 may check for certain features, such as shallow isolation trenches, gate structures of transistors, and the like, to determine whether a layer contains features that affect alignment with other layers.

The system 300 makes a determination whether the calculated adjustment, if implemented, would excessively affect other layers, based upon the features formed on the layers (block 730). In other words, the system 300 determines whether a control modification implemented on a target layer would affect overlay alignment with subsequently processed layers on the wafer 105. When the system 300 determines that the calculated adjustment does not excessively affect other layers, the unfiltered adjustment calculation data (i.e., the non-attenuated control adjustment data) is used to perform the control adjustments (block 740). However, when the system 300 determines that the calculated adjustment may excessively affect other layers on the semiconductor wafers 105, the system 300 attenuates the amount/magnitude of the control adjustments, such that the adjustment is less aggressive. After attenuating the process adjustment, the system 300 checks whether the attenuated adjustment would excessively affect other layers. When a determination is made that implementation of the control adjustment(s) on the target layer would not adversely affect subsequently processed layers, the attenuated control adjustment data is used to perform control adjustments on the target layer.

Utilizing embodiments of the present invention, discriminatingly selecting certain layers for aggressive control adjustments and other layers for more passive control adjustments provides for maintaining more accurate alignment among various layers formed on semiconductor wafers 105. During photolithography processes, these methods can be used to maintain proper overlay alignment such that more reliable structures are formed on layers of the semiconductor wafers 105.
This may result in increased yields and more accurately processed semiconductor wafers 105.

The principles taught by the present invention can be implemented in an Advanced Process Control (APC) Framework, such as the Catalyst system offered by KLA-Tencor, Inc. The Catalyst system uses Semiconductor Equipment and Materials International (SEMI) Computer Integrated Manufacturing (CIM) Framework compliant system technologies, and is based on the Advanced Process Control (APC) Framework. The CIM (SEMI E81-0699 - Provisional Specification for CIM Framework Domain Architecture) and APC (SEMI E93-0999 - Provisional Specification for CIM Framework Advanced Process Control Component) specifications are publicly available from SEMI. The APC framework is a preferred platform from which to implement the control strategy taught by the present invention. In some embodiments, the APC framework can be a factory-wide software system; therefore, the control strategies taught by the present invention can be applied to virtually any of the semiconductor manufacturing tools on the factory floor. The APC framework also allows for remote access and monitoring of the process performance. Furthermore, by utilizing the APC framework, data storage can be more convenient, more flexible, and less expensive than local drives. The APC framework allows for more sophisticated types of control because it provides a significant amount of flexibility in writing the necessary software code.

Deployment of the control strategy taught by the present invention onto the APC framework could require a number of software components. In addition to components within the APC framework, a computer script is written for each of the semiconductor manufacturing tools involved in the control system.
When a semiconductor manufacturing tool in the control system is started in the semiconductor manufacturing fab, it generally calls upon a script to initiate the action that is required by the process controller, such as the overlay controller. The control methods are generally defined and performed in these scripts. The development of these scripts can comprise a significant portion of the development of a control system. The principles taught by the present invention can be implemented into other types of manufacturing frameworks.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer (215) on the first electrode (210); sputtering metal (240) onto the chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming the doped chalcogenide layer (230), wherein the first plasma emits a UV component sufficient to induce diffusion of the sputtered metal into the chalcogenide layer; and sputtering metal (245) onto the doped chalcogenide layer using a second plasma containing at least one component gas having an atomic weight higher than an atomic weight of neon, thereby forming the second electrode (250).
1. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering metal onto the chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming the doped chalcogenide layer, wherein the first plasma emits a UV component sufficient to induce diffusion of the sputtered metal into the chalcogenide layer; and sputtering metal onto the doped chalcogenide layer using a second plasma containing at least one component gas having an atomic weight higher than an atomic weight of neon, thereby forming the second electrode.

2. The method of Claim 1, wherein the at least one component gas having an atomic weight higher than an atomic weight of neon is argon.

3. The method of Claim 1, wherein forming a chalcogenide layer further comprises forming a layer of germanium selenide material, wherein sputtering metal onto the chalcogenide layer using the first plasma further comprises sputtering silver, and wherein sputtering metal onto the doped chalcogenide layer using the second plasma also comprises sputtering silver.

4. The method of Claim 1, wherein the first plasma and the second plasma are the same plasma.

5. The method of Claim 1, wherein the second electrode has a different work function (φm) than the first electrode.

6. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering metal onto the chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming the doped chalcogenide layer; and sputtering metal onto the doped chalcogenide layer using a second plasma containing at least argon, thereby forming the second electrode.

7. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering metal onto the chalcogenide layer using a plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming the doped chalcogenide layer; and sputtering metal onto the doped chalcogenide layer using the plasma, thereby forming the second electrode.

8. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering metal onto the chalcogenide layer using a plasma initially generated from feed gas containing at least one component gas selected from the group consisting of neon and helium, thereby forming the doped chalcogenide layer; increasing an average atomic weight of the feed gas used to generate the plasma; and sputtering metal onto the doped chalcogenide layer using the plasma generated from the feed gas having the increased average atomic weight, thereby forming the second electrode.

9. The method of Claim 8, wherein increasing an average atomic weight of the feed gas used to generate the plasma further comprises evacuating the feed gas after forming the doped chalcogenide layer and generating the plasma used for forming the second electrode using the feed gas having the higher average atomic weight.

10. The method of Claim 8, wherein increasing an average atomic weight of the plasma further comprises modifying feed rates of component gases into the plasma while sputtering metal.

11. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering metal onto the chalcogenide layer using a first plasma in a deposition chamber to form the doped chalcogenide layer, wherein the first plasma is generated using at least one component gas selected from the group consisting of neon and helium; and sputtering metal onto the doped chalcogenide layer using a second plasma in the deposition chamber to form the second electrode, wherein the second plasma is generated using at least one component gas having an atomic weight higher than an atomic weight of neon.

12. The method of Claim 11, wherein the at least one component gas used in generating the first plasma consists essentially of neon.

13. The method of Claim 11, wherein the at least one component gas used in generating the second plasma consists essentially of argon.

14. The method of Claim 13, wherein the second plasma is generated using at least argon.

15. The method of Claim 11, wherein sputtering metal onto the doped chalcogenide layer to form the second electrode is performed in situ with sputtering metal onto the chalcogenide layer to form the doped chalcogenide layer.

16. The method of Claim 11, wherein sputtering metal onto the chalcogenide layer to form the doped chalcogenide layer further comprises sputtering from a metal target and wherein sputtering metal onto the doped chalcogenide layer to form the second electrode further comprises sputtering from the same metal target.

17. The method of Claim 16, wherein the metal target is a silver target and the chalcogenide layer contains a germanium selenide material.

18. The method of Claim 11, wherein the first plasma and the second plasma each contain at least one component gas selected from the group consisting of neon and helium and at least one component gas having an atomic weight higher than an atomic weight of neon.

19. The method of Claim 18, wherein the first plasma and the second plasma have the same composition.

20. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering silver onto the chalcogenide layer using a first plasma generated from a feed gas consisting essentially of neon, thereby forming the doped chalcogenide layer; and sputtering a metal onto the doped chalcogenide layer using a second plasma generated from a feed gas consisting essentially of argon, thereby forming the second electrode, wherein the second electrode has a different work function (φm) than the first electrode.

21. A method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode, the method comprising: forming a chalcogenide layer on the first electrode; sputtering silver onto the chalcogenide layer using a first plasma consisting essentially of neon, thereby forming the doped chalcogenide layer; and sputtering silver onto the doped chalcogenide layer using a second plasma consisting essentially of argon, thereby forming the second electrode.

22. The method of Claim 21, wherein the chalcogenide layer is a germanium selenide material.

23. A method of forming a non-volatile memory device, comprising: forming word lines; forming first electrodes coupled to the word lines, wherein each word line is coupled to more than one first electrode; forming a chalcogenide layer on each first electrode; sputtering metal onto each chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming doped chalcogenide layers; sputtering metal onto each doped chalcogenide layer using a second plasma containing at least one component gas having an atomic weight higher than an atomic weight of neon, thereby forming second electrodes; and forming bit lines coupled to the second electrodes, wherein each bit line is coupled to more than one second electrode.

24. The method of Claim 23, further comprising: forming diodes, wherein each diode is formed at a location selected from the group consisting of: interposed between a second electrode and a bit line, such that each second electrode is coupled to a bit line through a diode; and interposed between a first electrode and a word line, such that each first electrode is coupled to a word line through a diode.

25. A method of forming a non-volatile memory device, comprising: forming word lines; forming first electrodes coupled to the word lines, wherein each word line is coupled to more than one first electrode; forming a chalcogenide layer on each first electrode; sputtering metal onto each chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming doped chalcogenide layers; sputtering metal onto each doped chalcogenide layer using a second plasma containing at least one component gas having an atomic weight higher than an atomic weight of neon, thereby forming second electrodes; forming a diode coupled to each second electrode; and forming bit lines coupled to the diodes, wherein each bit line is coupled to more than one diode.

26. A method of forming a non-volatile memory device, comprising: forming word lines; forming diodes coupled to the word lines, wherein each word line is coupled to more than one diode; forming a first electrode coupled to each diode; forming a chalcogenide layer on each first electrode; sputtering metal onto each chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming doped chalcogenide layers; sputtering metal onto each doped chalcogenide layer using a second plasma containing at least one component gas having an atomic weight higher than an atomic weight of neon, thereby forming second electrodes; forming a diode coupled to each second electrode; and forming bit lines coupled to the second electrodes, wherein each bit line is coupled to more than one second electrode.

27. A method of forming a non-volatile memory device, comprising: forming word lines; forming first electrodes coupled to the word lines, wherein each word line is coupled to more than one first electrode; forming a chalcogenide layer on each first electrode; sputtering silver onto each chalcogenide layer using a first plasma consisting essentially of neon, thereby forming doped chalcogenide layers; sputtering a metal onto each doped chalcogenide layer using a second plasma consisting essentially of argon, thereby forming second electrodes, wherein the metal has a different work function (φm) than the first electrodes; and forming bit lines coupled to the second electrodes, wherein each bit line is coupled to more than one second electrode.

28. A method of forming a non-volatile memory device, comprising: forming word lines; forming first electrodes coupled to the word lines, wherein each word line is coupled to more than one first electrode; forming a chalcogenide layer on each first electrode; sputtering silver onto each chalcogenide layer using a first plasma consisting essentially of neon, thereby forming doped chalcogenide layers; sputtering silver onto each doped chalcogenide layer using a second plasma consisting essentially of argon, thereby forming second electrodes; and forming bit lines coupled to the second electrodes, wherein each bit line is coupled to more than one second electrode.

29. The method of Claim 28, further comprising: forming diodes, wherein each diode is formed at a location selected from the group consisting of: interposed between a second electrode and a bit line, such that each second electrode is coupled to a bit line through a diode; and interposed between a first electrode and a word line, such that each first electrode is coupled to a word line through a diode.
TECHNICAL FIELD OF THE INVENTION

The present invention relates generally to integrated circuit memory devices, and in particular to the metal doping of chalcogenide materials in the fabrication of chalcogenide memory elements and integrated circuit devices containing such memory elements.

BACKGROUND OF THE INVENTION

Electrically programmable and erasable materials, i.e., materials that can be electrically switched between a generally resistive state and a generally conductive state, are well known in the art. Chalcogenide materials are one class of such materials finding use in the semiconductor industry, particularly in the fabrication of non-volatile memory devices.

Chalcogenide materials are compounds made of one or more chalcogens and one or more elements that are more electropositive than the chalcogens. Chalcogens are the Group VIB elements of the traditional IUPAC version of the periodic table, i.e., oxygen (O), sulfur (S), selenium (Se), tellurium (Te) and polonium (Po). The more electropositive elements are generally selected from Groups IVB and VB. Typical combinations for non-volatile memory devices include selenium and/or tellurium with germanium (Ge) and/or antimony (Sb). However, other combinations are also known, such as combinations of arsenic (As) and sulfur.

To obtain the desired electrical characteristics, chalcogenide materials are often doped with a metal, such as copper (Cu), silver (Ag), gold (Au) or aluminum (Al). Figures 1A-1D depict the fabrication of a simple chalcogenide memory element 100. The basic structure of a chalcogenide memory element includes a first electrode, a second electrode and a chalcogenide material interposed between the first and second electrodes. Additional detail of chalcogenide memory devices, as well as examples of variations on the basic structure of a chalcogenide memory element, are given in U.S. Patent No. 5,998,244 issued December 7, 1999 to Wolstenholme et al., U.S. Patent No. 5,920,788 issued July 6, 1999 to Reinberg, and U.S. Patent No. 5,837,564 issued November 17, 1998 to Sandhu et al., each of which is commonly assigned with the assignee of the present disclosure. In general, chalcogenide memory elements are formed on a semiconductor wafer or other substrate as a portion of an integrated circuit device.

Chalcogenide memory elements typically store a single bit, e.g., a low resistivity (high conductivity) corresponding to a first logic state and a high resistivity (low conductivity) corresponding to a second logic state. Differing levels of resistivity of the chalcogenide memory elements are sensed using current sensing techniques well known in the art while applying a read potential of less than the threshold potential.

Chalcogenide memory elements can be electrically switched between conductivity states by applying varying electrical fields to the doped chalcogenide material. By applying a programming potential above some threshold potential, the metal dopant atoms are believed to align in a dendritic structure, thereby forming conductive channels and decreasing the resistivity of the chalcogenide material. This transition is reversible by applying a potential having an opposite polarity. A range of applied potentials having a magnitude of less than the threshold potential, i.e., read potentials, can be applied without altering the resistivity of the doped chalcogenide materials. These read potentials can be applied to the chalcogenide memory elements for sensing the resistivity of the doped chalcogenide material and, thus, the memory elements' data values.

Unlike dynamic random access memory (DRAM) devices, a non-volatile memory device does not require a periodic refresh to maintain its programmed state. Instead, non-volatile memory devices can be disconnected from a power source for extended periods of time, often measured in years, without the loss of the information stored in their memory cells.
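The program/erase/read behavior described above can be sketched as a toy state model. This is a minimal sketch: the threshold value, polarity convention, and class name are illustrative assumptions, not values from the specification.

```python
# Toy model of a chalcogenide memory element's threshold switching.
# The 0.25 V threshold and polarity convention are illustrative assumptions.
LOW_R, HIGH_R = "low-resistivity", "high-resistivity"

class ChalcogenideElement:
    def __init__(self, v_threshold=0.25):
        self.v_th = v_threshold
        self.state = HIGH_R              # start in the high-resistivity state

    def apply(self, volts):
        """Program on |V| >= Vth (polarity selects the state); read otherwise."""
        if volts >= self.v_th:
            self.state = LOW_R           # dopant dendrites form conductive channels
        elif volts <= -self.v_th:
            self.state = HIGH_R          # opposite polarity reverses the transition
        return self.state                # a sub-threshold read leaves the state intact

cell = ChalcogenideElement()
cell.apply(0.5)                          # program above the threshold potential
assert cell.apply(0.1) == LOW_R          # read potential does not disturb the state
cell.apply(-0.5)                         # reverse-polarity erase
assert cell.apply(0.1) == HIGH_R
```

The key property modeled is the one the background relies on: any potential of magnitude below the threshold senses the resistivity without altering it.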
Chalcogenide materials best suited for use in non-volatile memory devices will thus tend to maintain their degree of resistivity indefinitely if an applied voltage does not exceed the threshold potential.

In Figure 1A, a first electrode 110 is formed and a chalcogenide layer 115 is formed overlying the first electrode 110. As noted previously, the electrical characteristics of the chalcogenide layer 115 may be improved through doping of the chalcogenide material with metal. This is typically carried out through a process known as photo-doping, in which diffusion of metal atoms is photon-induced. In this process, a metal layer 120 is first formed on the chalcogenide layer 115 as shown in Figure 1A. The metal layer 120 typically contains copper, silver, gold, aluminum or another high-diffusing metal. Formation of the first electrode 110 and/or the metal layer 120 is typically performed in a vacuum chamber, e.g., using a vacuum sputtering process.

To continue the photo-doping process in Figure 1B, electromagnetic radiation 125 is directed at the metal layer 120, resulting in diffusion of metal atoms from the metal layer 120 into the chalcogenide layer 115. The electromagnetic radiation 125 is generally ultraviolet (UV) light. Driving metal atoms into the chalcogenide layer 115 results in a doped chalcogenide layer 130 containing the chalcogenide material and the diffused metal. The semiconductor wafer must generally be removed from the vacuum chamber to expose the wafer surface to the UV light source.

The photo-doping process is generally carried out until the metal layer 120 is completely diffused into the doped chalcogenide layer 130 as shown in Figure 1C. The thickness of the metal layer 120 should be chosen such that the desired doping level can be attained in the doped chalcogenide layer 130. However, the metal layer 120 must be thin enough, e.g., hundreds of angstroms, to allow transmission of the electromagnetic radiation 125 in order to produce the desired photon-induced diffusion of metal.

As shown in Figure 1D, a second electrode 150 is then formed overlying the doped chalcogenide layer 130 and any remaining portion of the metal layer 120 to produce the chalcogenide memory element 100. As with the first electrode 110 and/or the chalcogenide layer 115, formation of the second electrode 150 is also typically performed in a vacuum chamber. The second electrode 150 is preferably a material having a different work function (φm) than the first electrode 110. The work function is a measure of the energy required to remove an electron from a material's surface.

There are several disadvantages to the traditional photo-doping process. The process can be time-consuming, as the semiconductor wafers are moved in and out of a vacuum chamber during the various processing stages described above. This movement of the semiconductor wafers among various pieces of process equipment also increases the chance of contamination or other damage during transport. Also, because the metal layer must be thin for efficient photon-induced diffusion of metal, the desired doping level may not be efficiently attainable with a single photo-doping process, as the necessary thickness of the metal layer may result in excessive reflection of the electromagnetic radiation.

For the reasons stated above, and for other reasons stated below that will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for alternative methods for producing chalcogenide memory elements.

SUMMARY

Methods are described herein for forming metal-doped chalcogenide layers and devices containing such doped chalcogenide layers. The methods include using a plasma to induce diffusion of metal into a chalcogenide layer concurrently with metal deposition.
The plasma contains at least one noble gas of low atomic weight, such as neon or helium. The plasma has a sputter yield sufficient to sputter a metal target and a UV component of its emitted spectrum sufficient to induce diffusion of the sputtered metal into the chalcogenide layer. Using such methods, a conductive layer can be formed on the doped chalcogenide layer in situ.

In integrated circuit devices, such as non-volatile chalcogenide memory devices, doping of a chalcogenide layer concurrently with metal deposition, and formation of a conductive layer in situ with the doping of the chalcogenide layer, reduces contamination concerns and physical damage resulting from moving the device substrate from tool to tool, thus facilitating improved device reliability.

For another embodiment, the invention provides a method of forming a doped chalcogenide layer. The method includes sputtering metal using a plasma containing at least one component gas selected from the group consisting of neon and helium and driving the sputtered metal into a layer of chalcogenide material using the UV component generated by the plasma.

For a further embodiment, the invention provides a method of forming a doped chalcogenide layer. The method includes forming a layer of chalcogenide material and sputtering metal onto the layer of chalcogenide material using a plasma containing at least two noble gases. The plasma emits a spectrum having a UV component capable of driving the sputtered metal into the layer of chalcogenide material through UV-enhanced diffusion. For one embodiment, the composition of the plasma is chosen to have an average atomic weight sufficient to produce a desired sputtering efficiency. For another embodiment, the composition of the plasma is chosen to have a desired relative intensity of a UV component of the emitted spectrum of the plasma.
For yet another embodiment, the composition of the plasma is chosen to have a desired emitted spectrum of the plasma.

For one embodiment, the invention provides a method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode. The method includes forming a chalcogenide layer on the first electrode, sputtering metal onto the chalcogenide layer and diffusing metal into the chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming the doped chalcogenide layer, and sputtering metal onto the doped chalcogenide layer using a second plasma containing at least one component gas having an atomic weight higher than an atomic weight of neon, thereby forming the second electrode. For a further embodiment, the first plasma and the second plasma are the same plasma. For a still further embodiment, the composition of the first plasma is modified to generate the second plasma. Such modification of the composition may occur as a step change between sputtering stages or it may occur concurrently with sputtering of the metal.

For another embodiment, the invention provides a method of forming a chalcogenide memory element having a first electrode, a second electrode, and a doped chalcogenide layer interposed between the first electrode and the second electrode. The method includes forming a chalcogenide layer on the first electrode, sputtering silver onto the chalcogenide layer and diffusing silver into the chalcogenide layer using a first plasma generated from feed gas consisting essentially of neon, thereby forming the doped chalcogenide layer, and sputtering silver onto the doped chalcogenide layer using a second plasma generated from feed gas consisting essentially of argon, thereby forming the second electrode.

For yet another embodiment, the invention provides a method of forming a non-volatile memory device. The method includes forming word lines and forming first electrodes coupled to the word lines, wherein each word line is coupled to more than one first electrode. The method further includes forming a chalcogenide layer on each first electrode and sputtering metal onto each chalcogenide layer and diffusing metal into each chalcogenide layer using a first plasma containing at least one component gas selected from the group consisting of neon and helium, thereby forming doped chalcogenide layers. The method still further includes sputtering metal onto each doped chalcogenide layer using a second, different, plasma, thereby forming second electrodes. The second plasma may contain at least one component gas having an atomic weight higher than the atomic weight of neon. Alternatively or additionally, the second plasma may contain nitrogen (N2) such that the second electrode is formed of a metal-nitride material. The method still further includes forming bit lines coupled to the second electrodes, wherein each bit line is coupled to more than one second electrode. The method may also include forming diodes. Each diode may be formed interposed between a second electrode and a bit line, such that each second electrode is coupled to a bit line through a diode.
Alternatively, each diode may be formed interposed between a first electrode and a word line, such that each first electrode is coupled to a word line through a diode.

Further embodiments of the invention include methods of varying scope.

BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1A-1D are cross-sectional views of a chalcogenide memory element during various processing stages.

Figures 2A-2D are cross-sectional views of a chalcogenide memory element during various processing stages in accordance with an embodiment of the invention.

Figure 3 is a schematic illustration of one physical vapor deposition apparatus suitable for use with the embodiments of the invention.

Figure 4 is a schematic of a portion of a memory array in accordance with an embodiment of the invention.

Figure 5 is a simplified block diagram of an integrated circuit memory device in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

In the following detailed description of the present embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that process, electrical or mechanical changes may be made without departing from the scope of the present invention. The terms wafer or substrate used in the following description include any base semiconductor structure. Examples include silicon-on-sapphire (SOS) technology, silicon-on-insulator (SOI) technology, thin film transistor (TFT) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor structure, as well as other semiconductor structures well known to one skilled in the art. Furthermore, when reference is made to a wafer or substrate in the following description, previous process steps may have been utilized to form regions/junctions in the base semiconductor structure, and the terms wafer and substrate include the underlying layers containing such regions/junctions. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.

Figures 2A-2D depict fabrication of a chalcogenide memory element 200 as a portion of an integrated circuit device in accordance with one embodiment of the invention. Figures 2A-2D are cross-sectional views taken during various processing stages.

In Figure 2A, a lower or first electrode 210 is formed on a substrate (not shown). The first electrode 210 contains conductive material. Examples include conductively doped polysilicon, carbon (C), metals, metal alloys, metal silicides, conductive metal nitrides and conductive metal oxides. The first electrode 210 may further contain more than one conductive material. For example, the first electrode 210 may contain a layer of carbon overlying a layer of molybdenum (Mo) or a layer of tungsten (W) overlying a layer of titanium nitride (TiN). In addition, the first electrode 210 may include one or more adhesion or barrier layers adjacent underlying or overlying layers. Any adhesion or barrier layer should preferably be conductive so as not to interfere with programming of the chalcogenide memory element 200. For one embodiment, the first electrode 210 contains silver. For a further embodiment, the first electrode 210 is a layer of silver.

The first electrode 210 is preferably formed using a physical vapor deposition (PVD) process. Examples include vacuum or thermal evaporation, electron-beam evaporation and sputtering techniques well known in the art.
In a PVD process, a source or target containing the material to be deposited is evaporated, and the process may include ionization of some or all of the vaporized target material. The vaporized and/or ionized species impinging on the substrate can then deposit on the substrate. PVD processes are preferred for their general ability to form layers of high purity, limited only by the purity of the source or target used in the PVD process. However, other deposition techniques may be used, such as a chemical vapor deposition (CVD) process in which vaporized chemical precursors are adsorbed on the substrate surface and reacted to form the first electrode 210.

For one embodiment, the first electrode 210 has a thickness of approximately 500-1000Å. For a further embodiment, the first electrode 210 has a thickness of approximately 700Å.

Following formation of the first electrode 210, a chalcogenide layer 215 is formed on the first electrode 210. As with the first electrode 210, the chalcogenide layer 215 is preferably formed using a PVD process, but may be formed using other deposition techniques. For one embodiment, the chalcogenide layer 215 contains a chalcogenide material containing one or more Group VIB elements of the traditional IUPAC version of the periodic table, i.e., oxygen (O), sulfur (S), selenium (Se), tellurium (Te) and polonium (Po), and one or more Group IVB and VB elements of the traditional IUPAC version of the periodic table, i.e., carbon (C), silicon (Si), germanium (Ge), tin (Sn), lead (Pb), nitrogen (N), phosphorus (P), arsenic (As), antimony (Sb) and bismuth (Bi). More preferably, the chalcogenide layer 215 contains a chalcogenide material containing a combination of selenium and/or tellurium with germanium and/or antimony. For one embodiment, the chalcogenide layer 215 contains a germanium selenide material (GeSe or GeSe2).

For one embodiment, the chalcogenide layer 215 has a thickness of approximately 300-700Å. For a further embodiment, the chalcogenide layer 215 has a thickness of approximately 500Å.

As shown in Figure 2B, the chalcogenide layer 215 is doped with metal 240 using a sputtering process to produce a doped chalcogenide layer 230. The doped chalcogenide layer 230 is doped to a desired doping level. For one embodiment, the desired doping level produces a doped chalcogenide layer 230 saturated with the metal 240. For another embodiment, the desired doping level produces an oversaturated doped chalcogenide layer 230. For yet another embodiment, the desired doping level is approximately 15-30 wt% of the metal 240 in the doped chalcogenide layer 230.

One example of an apparatus for performing the sputtering is the ENDURA® system commercially available from Applied Materials, Santa Clara, California, USA. The plasma generated in such equipment will emit a UV component, thus providing photon-induced diffusion during the sputtering process.

Figure 3 is a schematic illustration of one PVD apparatus 310 suitable for use with the embodiments of the invention. Those familiar with PVD apparatus will recognize that it is a simplified schematic and that typical PVD apparatus may contain additional or alternate components.

A conductive pedestal 314 supporting the substrate 312 is located in a deposition chamber 316. The pedestal 314 is connected to a DC power source 324. A gas inlet 318 is provided for introduction of component gases into the chamber 316. The component gases make up the plasma 322. The component gases are generally fed to the deposition chamber 316 continuously during the operation of the apparatus 310. As used herein, component gases do not include any vaporized target material created during the sputter process.

A sputter target 326 connected to a DC power source 328 is located in the chamber 316. The target 326 may be a plate formed of the material to be sputtered.
Examples of materials to be sputtered in the doping of the chalcogenide layer 215 include high-diffusion metals such as copper, silver, gold and aluminum. Excess or spent gases are drawn from the deposition chamber 316 through a vent 329 by a vacuum pump (not shown).

The plasma 322 is formed by the application of a bias across the target 326 as a cathode and the substrate 312 as an anode. In the magnetron configuration, magnets 327, often placed behind the target 326, aid in the development of the plasma 322.

In order to increase the UV component emitted by the plasma, low-atomic-weight noble gases are added to the plasma. In particular, the plasma is formed at least in part using neon (Ne) and/or helium (He). The plasma may further contain other component gases. One example is argon (Ar), which is commonly used in sputtering processes. While argon's spectrum has a UV component as well, its relative intensity is low compared to that of neon or helium, thus resulting in lower rates of metal diffusion. For one embodiment, the plasma used during the doping process is generated from feed gas consisting essentially of neon. For another embodiment, the plasma used during the doping process contains helium. For yet another embodiment, the plasma used during the doping process contains at least argon and neon. The plasma could also be generated from feed gas consisting essentially of helium for its increased UV component, but such use can lead to undesirable reductions in sputtering efficiency. Use of lower atomic weight gases can result in much higher operating pressures than traditional PVD processes, e.g., 30-300 mTorr.

By adjusting the volume percentages of the gases used in generating the plasma, a plasma can be generated having an average atomic weight anywhere between the lowest atomic weight of the gases and the highest atomic weight of the gases. In this manner, a plasma can be created having an average atomic weight sufficient to facilitate a desired sputtering efficiency. Sputtering efficiency generally refers to the number of target atoms ejected per incident ion, typically in the range of about 0.5-1.5. Sputtering efficiency largely determines the rate of sputter implantation or deposition. Sputtering efficiency depends on a number of factors, including the direction of incident ions, target material, mass of bombarding ions, the energy of the bombarding ions, dose, crystal state and surface binding energy.

It is noted that where more than two gases make up the plasma, multiple combinations of these gases can produce the same average atomic weight. For example, a mixture of 5% argon, 78% neon and 17% helium by volume will have approximately the same average atomic weight as a mixture of 10% argon, 67% neon and 23% helium by volume.

By adjusting the volume percentages of the gases in the plasma, a plasma also can be generated having a UV component that is a composite of the spectra of the individual gases and having a relative intensity generally between that of the lowest relative intensity of the gases in the plasma and that of the highest relative intensity of the gases in the plasma. In this manner, a plasma can be created having a relative intensity of its composite UV component sufficient to produce a desired level of photon-induced diffusion of the sputtered metal. It is noted that where more than two gases make up the plasma, multiple combinations of these gases can emit UV components having the same relative intensity.

In view of the above, it is possible to choose a plasma having a desired relative intensity of its emitted UV component and a desired average atomic weight through the selection of two or more component gases and their relative volume percentages.
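The two purportedly equivalent mixtures given above can be checked numerically. This is a minimal sketch using standard atomic weights; the helper name is an illustrative choice, not terminology from the specification:

```python
# Average atomic weight of a feed-gas mixture from its volume fractions.
ATOMIC_WEIGHT = {"He": 4.003, "Ne": 20.18, "Ar": 39.95}

def average_atomic_weight(mix):
    """mix maps gas symbol -> volume fraction; fractions must sum to 1."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(ATOMIC_WEIGHT[gas] * frac for gas, frac in mix.items())

# The two example mixtures from the text.
mix_a = {"Ar": 0.05, "Ne": 0.78, "He": 0.17}
mix_b = {"Ar": 0.10, "Ne": 0.67, "He": 0.23}
print(round(average_atomic_weight(mix_a), 2))  # ≈ 18.42
print(round(average_atomic_weight(mix_b), 2))  # ≈ 18.44
```

Both mixtures land near an average atomic weight of 18.4, consistent with the statement that distinct compositions can yield approximately the same average atomic weight while emitting different spectra.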
However, it is recognized that these values, i.e., the desired relative intensity and the desired average atomic weight, may be mutually exclusive. In other words, attaining one value may require a compromise on the other. One method of compromise would be to determine the combinations of component gases producing a plasma having the desired relative intensity and then to choose one of these combinations of the component gases having an average atomic weight near the desired atomic weight. Another method would be to determine the combinations of component gases producing a plasma having the desired average atomic weight and then to choose one of these combinations of the component gases having a relative intensity of its UV component near the desired relative intensity.

The UV components of differing plasmas may have differing spectra, but the same relative intensity. Because the spectrum can also affect diffusion rates, it may be desirable to produce a specific emitted spectrum in a resulting plasma. Accordingly, for one embodiment, a mixture of component gases is chosen to produce a desired spectrum of the resulting plasma. For a further embodiment, a mixture of component gases is chosen to produce a desired spectrum of the resulting plasma having a higher level of visible components than a plasma consisting of neon. For another embodiment, a mixture of component gases capable of producing a desired spectrum in a resulting plasma is chosen to produce a target sputter efficiency. In general, the component gases of the plasma used in the sputtering process for doping of the chalcogenide layer 215 are selected to produce desired diffusion and sputtering rates.

As an example of how the plasma composition affects diffusion, an experiment was undertaken to sputter silver onto germanium selenide using different plasmas, but otherwise comparable processing conditions. Using a plasma generated from feed gas consisting essentially of neon, approximately 501.6Å of silver were sputtered onto approximately 503Å of germanium selenide (GeSe). It is presumed that approximately 300Å of the silver diffused into the germanium selenide layer. In contrast, using a plasma generated from feed gas consisting essentially of argon, and sputtering approximately 468.0Å of silver onto approximately 503Å of germanium selenide (GeSe), approximately 336.3Å of silver were detected on the surface of the germanium selenide. Thus, for argon, it is presumed that only approximately 131.7Å of the silver diffused into the germanium selenide layer.

Returning to Figure 2C, a top or second electrode 250 is formed on the doped chalcogenide layer 230. The second electrode 250 generally follows the same guidelines as the first electrode 210. Accordingly, the second electrode 250 contains conductive material. Examples include conductively doped polysilicon, carbon, metals (including refractory metals), metal alloys, metal silicides, conductive metal nitrides and conductive metal oxides. The second electrode 250 may further contain more than one conductive material. In addition, the second electrode 250 may include one or more adhesion or barrier layers adjacent underlying or overlying layers. Any adhesion or barrier layer should preferably be conductive so as not to interfere with programming of the chalcogenide memory element 200. For one embodiment, the second electrode 250 contains silver. For a further embodiment, the second electrode 250 is a layer of silver.

The second electrode 250 is preferably formed using a PVD process, but may be formed by other methods such as CVD techniques. The second electrode 250 is more preferably formed using the same PVD apparatus and target as used during the doping of the chalcogenide layer 215.
In this manner, the second electrode 250 may be formed in situ with the doping process, thus further reducing risks of contamination or damage associated with transport of the semiconductor substrate. Accordingly, for one embodiment, the second electrode 250 is formed by sputtering metal 245 onto the doped chalcogenide layer 230.

For one embodiment, the second electrode 250 has a thickness of approximately 800-1200Å. For a further embodiment, the second electrode 250 has a thickness of approximately 1000Å.

For one embodiment, the component gases used during doping of the chalcogenide layer 215 are evacuated from the deposition chamber 316 prior to formation of the second electrode 250. For such an embodiment, a new plasma 322 is formed with the new component gases for the deposition of the second electrode 250. For example, doping of the chalcogenide layer 215 can be performed using a plasma 322 generated using a feed gas consisting essentially of neon. The deposition chamber 316 is evacuated after the desired doping level is attained. Subsequently, formation of the second electrode can be performed using a plasma 322 generated using a feed gas consisting essentially of argon. Alternatively or additionally, the second plasma 322 may contain nitrogen or oxygen to form conductive metal nitrides or metal oxides, respectively.

Alternatively, the component gas feed composition could be changed without an evacuation of the deposition chamber 316. For example, doping of the chalcogenide layer 215 can be performed using a component gas and plasma 322 having a first composition, e.g., consisting essentially of neon. As the desired doping level is approached, the component gas feed could be changed to the second composition, e.g., consisting essentially of argon. For this example, the concentration of argon in the plasma 322 will thus gradually increase as argon is fed to the deposition chamber 316 and mixed gases are drawn off.
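The gradual displacement of neon by argon described above can be sketched with a well-stirred-chamber approximation. The exponential form and the residence time are modeling assumptions for illustration, not process parameters from the specification:

```python
import math

def argon_fraction(t, residence_time):
    """Ar volume fraction after switching a pure-Ne chamber to a pure-Ar feed,
    assuming ideal continuous mixing (first-order exponential approach)."""
    return 1.0 - math.exp(-t / residence_time)

tau = 2.0  # chamber residence time, arbitrary units (illustrative)
for t in (0.0, tau, 5 * tau):
    print(f"t = {t:4.1f}: Ar fraction = {argon_fraction(t, tau):.3f}")
```

Under this model the chamber is about 63% argon after one residence time and essentially pure argon after five; feeding the new gas in gradually, as suggested above, simply slows this approach and with it the shift from diffusion toward deposition.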
As the composition of the plasma 322 changes, driving to a higher average atomic weight and/or a lower UV component, the dynamics would shift away from diffusion and toward deposition. To decrease the rate of change in the composition of the plasma 322, the component gas feed composition could be changed gradually instead of making a step change.

For another embodiment, the processing described with reference to Figures 2B and 2C could be combined using a single composition for plasma 322. For such an embodiment, the component gases are chosen such that a desired combination of diffusion and deposition occurs. The rate of diffusion should be high enough relative to the rate of deposition that sufficient doping occurs before the second electrode 250 becomes thick enough to block further diffusion of metal into the doped chalcogenide layer 230.

Figure 2D shows the chalcogenide memory element 200 upon formation of the second electrode 250. The chalcogenide memory element 200 has a doped chalcogenide layer interposed between the first electrode 210 and the second electrode 250. The chalcogenide memory element 200 can be used to form a chalcogenide memory cell where the state of the doped chalcogenide layer 230 is indicative of the data value stored by the memory cell.

Figure 4 is a schematic showing a portion of a memory array 400 containing chalcogenide memory elements 200 as described herein. The memory array 400 includes a number of memory cells 405 arranged generally in rows and columns. Typical memory arrays 400 contain millions of these memory cells 405. Each memory cell 405 includes a chalcogenide memory element 200 coupled between a first conductive line, such as word line 410, and a diode 415. The diode 415 is further coupled between a second conductive line, such as bit line 420, and the chalcogenide memory element 200. Alternatively, the diode 415 could be coupled between the first conductive line and the chalcogenide memory element 200.
The diode 415 serves as the access device to the memory cell 405. A grouping of memory cells 405 coupled to the same word line 410 is typically referred to as a row of memory cells. Likewise, a grouping of memory cells 405 coupled to the same bit line 420 is typically referred to as a column of memory cells.

Figure 5 is a simplified block diagram of an integrated circuit memory device 500 in accordance with an embodiment of the invention. The memory device 500 is a non-volatile memory device containing chalcogenide memory elements in accordance with the invention. The memory device 500 includes an array of memory cells 502 including the non-volatile chalcogenide memory elements. The memory array 502 is arranged in a plurality of addressable banks. In one embodiment, the memory contains four memory banks 504, 506, 508 and 510. Each memory bank contains addressable rows and columns of memory cells.

The data stored in the memory array 502 can be accessed using externally provided location addresses received by address register 512 via address signal connections 528. The addresses are decoded using bank decode logic 516 to select a target memory bank. The addresses are also decoded using row decode circuitry 514 to select the target rows. The addresses are further decoded using column decode circuitry 518 to select one or more target columns.

Data is input and output through I/O circuit 520 via data connections 530. I/O circuit 520 includes data output registers, output drivers and output buffers. Command execution logic 522 is provided to control the basic operations of the memory device 500 in response to control signals received via control signal connections 526. A state machine 524 may also be provided to control specific operations performed on the memory array and cells. The command execution logic 522 and/or state machine 524 can be generally referred to as control circuitry to control read, write, erase and other memory operations.
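As a rough illustration of the decode path just described, an externally supplied address can be viewed as a concatenation of bank, row, and column fields. The field widths below are assumed for the example and do not come from the patent; only the four-bank count mirrors the embodiment above.

```python
# Assumed geometry: 4 banks (2 bits), with example row/column field widths.
BANK_BITS, ROW_BITS, COL_BITS = 2, 12, 10

def decode_address(addr):
    """Split a flat address into (bank, row, column) fields, loosely
    mimicking bank decode logic 516, row decode circuitry 514 and
    column decode circuitry 518 described above."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (addr >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, col

# Example: bank 3, row 5, column 7 packed into one flat address.
addr = (3 << (COL_BITS + ROW_BITS)) | (5 << COL_BITS) | 7
# decode_address(addr) == (3, 5, 7)
```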
The data connections 530 are typically used for bi-directional data communication. The memory can be coupled to an external processor 550 for operation or testing.

It will be appreciated by those skilled in the art that additional circuitry and control signals can be provided, and that the memory device of Figure 5 has been simplified to help focus on the invention. It will be understood that the above description of a memory device is intended to provide a general understanding of the memory and is not a complete description of all the elements and features of a typical memory device.

As recognized by those skilled in the art, memory devices of the type described herein are generally fabricated as an integrated circuit containing a variety of semiconductor devices. The integrated circuit is supported by a substrate. Integrated circuits are typically repeated multiple times on each substrate. The substrate is further processed to separate the integrated circuits into dies as is well known in the art.

The foregoing figures were used to aid the understanding of the accompanying text. However, the figures are not drawn to scale and the relative sizing of individual features and layers is not necessarily indicative of the relative dimensions of such individual features or layers in application. Accordingly, the drawings are not to be used for dimensional characterization.

Although dimensional characteristics were provided herein for information purposes, it is recognized that there is a continuing drive to reduce integrated circuit device dimensions for increased performance and reduced fabrication costs. In addition, the concepts described herein are not fundamentally limited by absolute dimensions. Accordingly, improvements in fabrication and sensing technologies are expected to facilitate reduced dimensional characteristics of the chalcogenide memory elements described herein, particularly as they relate to layer thickness.
CONCLUSION

Methods have been described for forming metal-doped chalcogenide layers and devices containing such doped chalcogenide layers. The methods include using a plasma to induce diffusion of metal into a chalcogenide layer concurrently with metal deposition. The plasma contains at least one noble gas of low atomic weight, such as neon or helium. The plasma has a sputter yield sufficient to sputter a metal target and a UV component of its emitted spectrum sufficient to induce diffusion of the sputtered metal into the chalcogenide layer. Using such methods, a conductive layer can be formed on the doped chalcogenide layer in situ. In integrated circuit devices, such as non-volatile chalcogenide memory devices, doping of a chalcogenide layer concurrently with metal deposition and formation of a conductive layer in situ with the doping of the chalcogenide layer reduces contamination concerns and physical damage resulting from moving the device substrate from tool to tool, thus facilitating improved device reliability.

Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations of the invention will be apparent to those of ordinary skill in the art. Accordingly, this application is intended to cover any adaptations or variations of the invention. It is manifestly intended that this invention be limited only by the following claims and equivalents thereof.
An embodiment of an integrated circuit may comprise a front end unit, and circuitry coupled to the front end unit, the circuitry to provide a high confidence, multiple branch offset predictor. For example, the circuitry may be configured to identify an entry in a multiple-taken-branch prediction table that corresponds to a conditional branch instruction, determine if a confidence level of the entry exceeds a threshold confidence level, and, if so determined, provide multiple taken branch predictions that stem from the conditional branch instruction from the entry in the multiple-taken-branch prediction table. Other embodiments are disclosed and claimed.
1. An integrated circuit (10), comprising:
a front end unit (11); and
circuitry (13) coupled to the front end unit, the circuitry is configured to:
identify an entry in an exclusively multiple-taken-branch prediction table that corresponds to a conditional branch instruction,
determine if a confidence level of the entry exceeds a threshold confidence level, and, if so determined,
provide exclusively multiple taken branch predictions that stem from the conditional branch instruction from the entry in the multiple-taken-branch prediction table, and
cancel a main Branch Prediction Unit, BPU, lookup and cancel a branch target buffer, BTB, lookup.

2. The integrated circuit of claim 1, wherein the circuitry is further to:
generate tag information for the conditional branch instruction based on a last taken branch and a branch history; and
identify the entry in the multiple-taken-branch prediction table based on the generated tag information.

3. The integrated circuit of any of claims 1 to 2, wherein the circuitry is further to:
jump to a target of a last predicted taken branch.

4. The integrated circuit of claim 3, wherein the circuitry is further to:
generate pointers to a next N branches which are predicted to be taken based on a current program counter and a branch history, where N is an integer value greater than 1.

5. The integrated circuit of claim 4, wherein the circuitry is further to:
identify a program counter of a taken branch and a target of the taken branch based on the generated pointers.

6. The integrated circuit of claim 5, wherein the circuitry is further to:
construct an entire control flow from a current point until the Nth taken branch based on the generated pointers.

7. The integrated circuit of claim 6, wherein the circuitry is further to:
redirect the main branch prediction unit to start from the target of the last taken branch.

8. A method, comprising:
identifying an entry in an exclusively multiple-taken-branch prediction table that corresponds to a conditional branch instruction;
determining if a confidence level of the entry exceeds a threshold confidence level; and, if so determined,
providing exclusively multiple taken branch predictions that stem from the conditional branch instruction from the entry in the multiple-taken-branch prediction table, and
cancelling a main Branch Prediction Unit, BPU, lookup and cancelling a branch target buffer, BTB, lookup.

9. The method of claim 8, further comprising:
jumping to a target of a last predicted taken branch.

10. The method of claim 9, further comprising:
generating pointers to a next N branches which are predicted to be taken based on a current program counter and a branch history, where N is an integer value greater than 1.

11. The method of claim 10, further comprising:
identifying a program counter of a taken branch and a target of the taken branch based on the generated pointers.

12. The method of claim 11, further comprising:
constructing an entire control flow from a current point until the Nth taken branch based on the generated pointers.
CLAIM FOR PRIORITY

This application claims priority to India Provisional Patent Application No. 202041046222, filed October 23, 2020 and titled HIGH CONFIDENCE MULTIPLE BRANCH OFFSET PREDICTOR.

BACKGROUND

1. Technical Field

This disclosure generally relates to processor technology, branch prediction technology, and branch offset prediction technology.

2. Background Art

Some central processor unit (CPU) cores may utilize speculative execution to avoid pipeline stalls and achieve better performance, which allows execution to continue without having to wait for the architectural resolution of a branch target. Branch prediction technology utilizes a digital circuit that guesses which way a branch will go before the branch instruction is executed. Correct predictions/guesses improve the flow in the instruction pipeline.

In general, there are two kinds of branch predictions: branch prediction for conditional branches, which may be understood as a prediction for the branch as "taken" vs. "not-taken"; and branch target prediction for unconditional branches, including both direct and indirect branches. Indirect branch prediction is an important part of the overall branch prediction, because an indirect branch typically involves higher latency in its target resolution, especially for a memory indirect branch the target of which needs to be fetched from a specific memory location. A branch prediction unit (BPU) may support speculative execution by providing a predicted target to the front-end (FE) of a CPU based on the branch instruction pointer (IP), branch type, and the control flow history (also referred as branch history) prior to the prediction point.
US 2005/268075 discloses a multiple branch predictor which stores information for both taken branch-type instructions and not-taken branch-type instructions.

BRIEF DESCRIPTION OF THE DRAWINGS

The various embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:

FIG. 1A is a block diagram of an example of an integrated circuit according to an embodiment;
FIG. 1B is a block diagram of an example of an electronic apparatus according to an embodiment;
FIG. 2A is an illustrative diagram of an example of a fetched instruction stream according to an embodiment;
FIG. 2B is an illustrative diagram of an example format of a table entry according to an embodiment;
FIGs. 3A to 3B are flow diagrams of an example of a method according to an embodiment;
FIG. 4 is a block diagram of an example of an electronic apparatus according to an embodiment;
FIGs. 5A to 5B are flow diagrams of another example of a method according to an embodiment;
FIG. 6A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;
FIG. 6B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;
FIGs. 7A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
FIG. 8 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention;
FIGs. 9-12 are block diagrams of exemplary computer architectures; and
FIG.
13 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

DETAILED DESCRIPTION

Embodiments discussed herein variously provide techniques and mechanisms for branch prediction and/or branch target prediction. The technologies described herein may be implemented in one or more electronic devices. Non-limiting examples of electronic devices that may utilize the technologies described herein include any kind of mobile device and/or stationary device, such as cameras, cell phones, computer terminals, desktop computers, electronic readers, facsimile machines, kiosks, laptop computers, netbook computers, notebook computers, internet devices, payment terminals, personal digital assistants, media players and/or recorders, servers (e.g., blade server, rack mount server, combinations thereof, etc.), set-top boxes, smart phones, tablet personal computers, ultra-mobile personal computers, wired telephones, combinations thereof, and the like. More generally, the technologies described herein may be employed in any of a variety of electronic devices including integrated circuitry which is operable to predict a branch target or whether a branch instruction is taken or not taken.

In the following description, numerous details are discussed to provide a more thorough explanation of the embodiments of the present disclosure. It will be apparent to one skilled in the art, however, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure.

Note that in the corresponding drawings of the embodiments, signals are represented with lines.
Some lines may be thicker, to indicate a greater number of constituent signal paths, and/or have arrows at one or more ends, to indicate a direction of information flow. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme.

Throughout the specification, and in the claims, the term "connected" means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."

The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system.
The plane of the device may also be the plane of an apparatus which comprises the device.

The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term "scaling" generally also refers to downsizing layout and devices within the same technology node. The term "scaling" may also refer to adjusting (e.g., slowing down or speeding up - i.e. scaling down, or scaling up respectively) of a signal frequency relative to another parameter, for example, power supply level.

The terms "substantially," "close," "approximately," "near," and "about," generally refer to being within +/- 10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal" and "approximately equal" mean that there is no more than incidental variation among things so described. In the art, such variation is typically no more than +/-10% of a predetermined target value.

It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.

Unless otherwise specified the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object, merely indicates that different instances of like objects are being referred to and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner.

The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions.
For example, the terms "over," "under," "front side," "back side," "top," "bottom," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.

The term "between" may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material.
A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices.

As used throughout this description, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms. For example, the phrase "at least one of A, B or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. It is pointed out that those elements of a figure having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such.

In addition, the various elements of combinatorial logic and sequential logic discussed in the present disclosure may pertain both to physical structures (such as AND gates, OR gates, or XOR gates), or to synthesized or otherwise optimized collections of devices implementing the logical structures that are Boolean equivalents of the logic under discussion.

Some embodiments advantageously provide technology for a high confidence, multiple branch offset predictor (HCoMB). For example, the offset may refer to relative locations in a cache line. Modern superscalar processors achieve higher performance by extracting more instruction level parallelism (ILP) from the workloads. To facilitate this, superscalar processors employ ever growing Out-of-Order (OOO) instruction windows to identify more and more independent instructions. To support such wide and deep machines, the Front-End of the processor needs to provide a very high sustained instruction bandwidth to the OOO.

A major limiter of Front-End bandwidth is the Branch Prediction Unit (BPU). To better understand this, consider the operation of a conventional BPU.
A conventional BPU uses the Program Counter (PC) and Branch History (Stew) to predict each branch in a cache-line and then determines the first taken branch out of all the branches. After that, the BPU discards all instructions following the first taken branch. In the next cycle, the BPU operation restarts from the target of the branch instruction. Accordingly, every taken branch causes a BPU re-steering event which involves discarding unused fetched bytes and a cycle change. This limits the overall bandwidth of the Front-End and the performance of the processor.

To solve the above problem, some embodiments provide technology for a HCoMB offset predictor which may provide a very high sustained BPU bandwidth. Embodiments of the HCoMB offset predictor may utilize the PC and Stew (e.g., a current program state) and identify a next N taken branches in the program flow and their targets. In a next cycle, the HCoMB offset predictor directly jumps to the target of the Nth taken branch.

Where a conventional predictor predicts each branch in a cache-line and then picks the first taken branch amongst them (if any), embodiments of the HCoMB offset predictor may directly produce the relative positions of the next N taken branches from the current PC and the targets of the next N taken branches. This is a major micro-architectural benefit of some embodiments. Additionally, in contrast to a conventional predictor which is re-steered after every taken branch, embodiments of the HCoMB offset predictor may be re-steered only after N taken branches, effectively making a bandwidth of the HCoMB offset predictor N times that of a conventional predictor. Accordingly, some embodiments may provide a much higher BPU bandwidth using a very simple microarchitecture and low storage.

Some predictors may utilize Path-based Next Trace prediction (PNT), where a Next-Trace predictor may predict units of traces.
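A toy cycle count makes the re-steering contrast above concrete. Under the simplifying assumption that each BPU re-steer costs one prediction cycle, a conventional predictor spends one cycle per taken branch, while an N-wide multiple-taken-branch predictor is re-steered only once per N taken branches. The function below is an illustration, not the patented hardware.

```python
def prediction_cycles(num_taken_branches, branches_per_cycle=1):
    """Cycles spent on BPU re-steers to cover a run of taken branches,
    assuming one re-steer per prediction cycle (ceiling division)."""
    return -(-num_taken_branches // branches_per_cycle)

conventional = prediction_cycles(12)                      # one taken branch per cycle
hcomb_like = prediction_cycles(12, branches_per_cycle=4)  # N = 4 taken branches per cycle
# conventional == 12, hcomb_like == 3: an N-fold bandwidth improvement.
```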
Compared to a conventional branch predictor which predicts every branch, the PNT predictor predicts an entire trace in one shot. The PNT predictor records sequences of traces as the path history of the program and uses the recorded sequence to predict the next trace.

Decoded stream buffer (DSB) Simple-stream (DSS) technology identifies extremely stable code regions in which the control flow is always constant. Such control flows are generally a result of always-taken or always-not-taken branches in the program. For such code regions, DSS records the DSB pointers to all micro-ops belonging to this region. Next time the same code region is encountered, DSS provides all the pointers to the DSB from where a stream of micro-ops is read out and supplied to the next pipeline stages. The main BPU is not consulted during this time. Accordingly, DSS can supply a stream of instructions spanning multiple taken branches in a single cycle without any BPU re-steering operation, opportunistically increasing the Front-End bandwidth.

The PNT predictor only supports a limited trace size (e.g., 16 instructions) or a limited number of branches (taken or not-taken), which may be too small and not suitable to support the bandwidth requirements of very wide, deep OOO cores. In contrast, embodiments of the HCoMB offset predictor may provide information on the next N taken branches, which may constitute an arbitrarily long trace if the N taken branches are far apart. Also, the PNT predictor does not check if the branches are taken or not-taken. If a certain program region has many consecutive not-taken branches, the PNT predictor will break the entire region into multiple traces of six (6) branches each and take multiple cycles to predict this entire region. In contrast, embodiments of the HCoMB offset predictor only respect taken branches because not-taken branches do not change the natural control flow of a program and hence, do not need prediction.
By implicitly predicting not-taken branches, a single HCoMB prediction spans a much larger code region than that covered by a single PNT prediction. Therefore, HCoMB can provide a much higher throughput at much lower storage than the PNT predictor.

DSS relies completely on the DSB implementation. It only records DSB pointers whereas the actual micro-ops must be supplied by the DSB itself. Therefore, DSS requires inclusivity in the DSB. If the micro-ops are not present in the DSB, DSS cannot give out a stream-prediction. Embodiments of the HCoMB offset predictor do not have any dependency on the DSB. HCoMB may work as a standalone branch predictor. In terms of branch stability versus prediction stability, DSS relies very much on the stability of a given branch (e.g., DSS only works when branches are always-taken or always-not-taken). If a branch has a flaky behavior, DSS cannot handle it. Embodiments of the HCoMB offset predictor, on the other hand, rely on prediction stability, which means HCoMB offset predictors also work very well with branches that change behavior over time if the change can be accurately predicted. For example, embodiments of the HCoMB offset predictor incorporate the branch history (Stew) in their prediction to work better with branches that change behavior over time. The branch history allows embodiments of the HCoMB offset predictor to distinguish between each taken or not-taken instance of the same branch and therefore, accurately predict each instance separately. This contrast between branch stability and prediction stability gives embodiments of the HCoMB offset predictor superior coverage and performance over DSS.

Some embodiments of a HCoMB offset predictor may predict multiple taken branches per cycle and then jump to the target of the last predicted taken branch. Given the current PC and Stew, embodiments of the HCoMB offset predictor generate pointers to the next N branches which are predicted to be taken.
Using these pointers, the PC of the taken branch and its target may be accurately identified and thus, the entire control flow may be constructed from a current point until the Nth taken branch. During this operation, the main BPU predictions are discarded. After a HCoMB prediction, the BPU is redirected to start from the target of the last taken branch. Thus, by predicting multiple taken branches at once, the BPU need not be re-steered after every taken branch, and this significantly increases the bandwidth of the Front-End (FE). Advantageously, some embodiments provide a mechanism to enhance the bandwidth of the FE of the processor, which is a critical limitation when scaling the depth-width of processor cores. Further, some embodiments may be highly area efficient and leverage existing hardware structures in the FE for most of the work. Thus, some embodiments may provide a simple way to support an important requirement of a wide variety of processors.

With reference to FIG. 1A, an embodiment of an integrated circuit 10 may comprise a front end unit 11 and circuitry 13 coupled to the front end unit 11, where the circuitry 13 is configured to provide a HCoMB offset predictor. For example, the circuitry 13 may be configured to identify an entry in a multiple-taken-branch (MTB) prediction table that corresponds to a conditional branch instruction, determine if a confidence level of the entry exceeds a threshold confidence level, and, if so determined, provide multiple taken branch predictions that stem from the conditional branch instruction from the entry in the MTB prediction table. In some embodiments, the circuitry 13 may be further configured to generate tag information for the conditional branch instruction based on a last taken branch and a branch history, and identify the entry in the MTB table based on the generated tag information.
For example, the circuitry 13 may be configured to predict multiple taken branches per cycle and then jump to the target of the last predicted taken branch. In some embodiments, the circuitry 13 may be further configured to generate pointers to the next N branches which are predicted to be taken based on a current PC and branch history (stew), where N is an integer value greater than 1 (N > 1). In some embodiments, the circuitry 13 may be further configured to identify the PC of the taken branch and its target based on the generated pointers. For example, the circuitry 13 may be configured to construct the entire control flow from a current point until the Nth taken branch based on the generated pointers. In some embodiments, the circuitry may be configured to discard the main BPU prediction and redirect the BPU to start from the target of the last taken branch.

Embodiments of the front end unit 11 and/or the circuitry 13 may be incorporated in a processor including, for example, the core 990 (FIG. 6B), the cores 1102A-N (FIGs. 8, 12), the processor 1210 (FIG. 9), the co-processor 1245 (FIG. 9), the processor 1370 (FIGs. 10-11), the processor/coprocessor 1380 (FIGs. 10-11), the coprocessor 1338 (FIGs. 10-11), the coprocessor 1520 (FIG. 12), and/or the processors 1614, 1616 (FIG. 13). In particular, embodiments of the circuitry 13 may be incorporated in the front end unit 930 (FIG. 6B).

With reference to FIG. 1B, an embodiment of an electronic apparatus 20 may comprise a front end unit 21 to decode one or more instructions, and an execution unit 22 communicatively coupled to the front end unit 21 to execute the decoded one or more instructions.
The front end unit 21 may include a BPU 23 to provide branch prediction information for the one or more instructions, and a HCoMB offset predictor 24 communicatively coupled to the branch prediction unit 23, the HCoMB offset predictor 24 including circuitry to predict multiple taken branches per cycle and then jump to the target of the last predicted taken branch. For example, the circuitry may be configured to identify an entry in a MTB prediction table that corresponds to a conditional branch instruction, determine if a confidence level of the entry exceeds a threshold confidence level, and, if so determined, provide multiple taken branch predictions that stem from the conditional branch instruction from the entry in the MTB prediction table. In some embodiments, the circuitry may be further configured to generate tag information for the conditional branch instruction based on a last taken branch and a branch history, and identify the entry in the MTB table based on the generated tag information. In some embodiments, the circuitry may be further configured to generate pointers to the next N branches which are predicted to be taken, based on a current PC and branch history (stew). In some embodiments, the circuitry may be further configured to identify the PC of the taken branch and its target based on the generated pointers. For example, the circuitry may be configured to construct the entire control flow from a current point until the Nth taken branch based on the generated pointers. In some embodiments, the circuitry may be configured to discard the prediction of the BPU 23 and redirect the BPU 23 to start from the target of the last taken branch.

Embodiments of the front end unit 21, the execution unit 22, the BPU 23 and/or the HCoMB offset predictor 24 may be incorporated in a processor including, for example, the core 990 (FIG. 6B), the cores 1102A-N (FIGs. 8, 12), the processor 1210 (FIG. 9), the co-processor 1245 (FIG.
9), the processor 1370 (FIGs. 10-11), the processor/coprocessor 1380 (FIGs. 10-11), the coprocessor 1338 (FIGs. 10-11), the coprocessor 1520 (FIG. 12), and/or the processors 1614, 1616 (FIG. 13). In particular, embodiments of the HCoMB offset predictor 24 may be incorporated in the front end unit 930 (FIG. 6B) and communicatively coupled to the branch prediction unit 932 (FIG. 6B).

FIG. 2A is an illustrative diagram of traces in a program. FIG. 2A shows a program broken into multiple traces comprising four (4) taken branches each. The fetched instructions stream denotes the output of the main branch predictor. Embodiments of the HCoMB offset predictor snoop this instruction stream and record the taken branches in an N-entry buffer (N=4 in the figure). When the buffer is full, all the information (now referred to as a Trace) is copied into a HCoMB table entry. The table entry is identified using a hash of the Trace entry PC (the first valid instruction in the Trace) and the branch history of the program up to that point. After copying the data, the buffer is cleared and training for the next Trace commences. Thus, for learning a HCoMB Trace, in the first step, the HCoMB offset predictor identifies Traces of taken branches which occur back-to-back in the program flow.

FIG. 2B is an illustrative diagram of an example format of a HCoMB table entry. For predictor lookup, embodiments of the HCoMB offset predictor may be composed of a single set-associative table which stores all information required to predict branch traces. Each entry in the table holds data regarding one (1) trace in the program. An example HCoMB table entry appears as shown in FIG. 2B.

During lookup, an index and a tag are generated from the target of the last taken branch and the branch history. The index and tag are used to identify the Set and Way, respectively, of the trace in question. After the Set-Way is identified, the confidence of the entry may be checked.
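The lookup described above can be sketched in software. Note that the table geometry, hash function, and field names below are illustrative assumptions for the sketch, not the patent's actual encoding:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

NUM_SETS = 64        # assumed table geometry
NUM_WAYS = 4
CONF_THRESHOLD = 3   # assumed high-confidence threshold

@dataclass
class HcombEntry:
    tag: int
    btb_pointers: List[Tuple[int, int]]  # (set, way) pointers into the BTB
    confidence: int = 0
    utility: int = 0

def index_and_tag(last_taken_target: int, stew: int) -> Tuple[int, int]:
    """Hash the target of the last taken branch with the branch history (stew)."""
    h = (last_taken_target ^ stew) & 0xFFFFFFFF
    return h % NUM_SETS, h // NUM_SETS

def lookup(table: List[List[Optional[HcombEntry]]],
           last_taken_target: int, stew: int) -> Optional[List[Tuple[int, int]]]:
    """Return the entry's BTB pointers on a high-confidence hit, else None."""
    idx, tag = index_and_tag(last_taken_target, stew)  # identify the Set and Way
    for entry in table[idx]:
        if entry is not None and entry.tag == tag:
            if entry.confidence >= CONF_THRESHOLD:
                return entry.btb_pointers  # predict the whole trace at once
            return None                    # hit, but still training
    return None                            # miss: fall back to the main BPU
```

As in the description, a lookup that misses or hits a low-confidence entry yields nothing, leaving the main BPU's prediction in force.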
If the confidence exceeds a threshold or is saturated (e.g., equals a maximum value for the confidence field), this trace can be predicted; otherwise, training must continue to build confidence in this trace. Overall, embodiments of the HCoMB offset predictor lookup may be similar to the main BPU lookup operation and may work seamlessly with the existing structures and information available.

With reference to FIGs. 3A to 3B, an embodiment of a method 30 shows a sequence of HCoMB operations during prediction (FIG. 3A) and training (FIG. 3B). The method 30 includes fetching a cacheline at box 31, performing a main BPU lookup on the cacheline at box 32, and providing a final prediction from a branch target buffer (BTB) lookup at box 33. At the same time, the method 30 also includes performing a HCoMB lookup on the cacheline at box 34, and determining if there is a HCoMB hit at box 35. If not, the method 30 may proceed to enabling HCoMB training at box 36 and then proceed to the training (FIG. 3B). If there is a hit at box 35, the method 30 may proceed to determining if there is high confidence at box 37. If not, the method 30 may proceed to enabling HCoMB training at box 36 and then proceed to the training.

If there is high confidence at box 37, the method 30 may proceed to canceling the main BPU lookup and canceling the BTB lookup at box 38, and instead reading the BTB set and way pointers from the HCoMB data structure at box 39 and providing the BTB read-out entries at box 40. The method 30 may then proceed to providing the final prediction from either the HCoMB predictor or the main BPU to the instruction cache (Icache) and/or decoders at box 41.

When training is enabled, the method 30 may include determining if the cacheline includes a taken branch or if the cacheline is crossing at box 42 and, if so, incrementing a prediction count at box 43.
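Returning to the prediction path of FIG. 3A (boxes 35 through 41), the choice between the two predictors reduces to a simple selection; the function and argument names below are assumptions for illustration:

```python
def select_prediction(hcomb_hit: bool, high_confidence: bool,
                      hcomb_pointers, main_bpu_prediction):
    """On a HCoMB hit with high confidence, the main BPU/BTB lookups are
    cancelled and the entry's BTB set/way pointers are used instead;
    otherwise the main BPU prediction stands and HCoMB training is enabled."""
    if hcomb_hit and high_confidence:
        return ("hcomb", hcomb_pointers)
    return ("main_bpu", main_bpu_prediction)
```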
The method 30 may then include determining if the next prediction hits the HCoMB at box 44 and, if so, determining if the HCoMB information matches the main BPU information at box 45. If the information matches at box 45, the method 30 may include incrementing a confidence count for the entry and incrementing a utility count for the entry at box 46. If the information does not match at box 45, the method 30 may include resetting the confidence count and the utility count for the entry to zero at box 47. If the next prediction does not hit the HCoMB at box 44, the method 30 may proceed to writing the prediction to a pre-allocate buffer at box 48. After boxes 46, 47, and 48, the method 30 may proceed to determining if the prediction count is full at box 49 and, if so, writing the information to the HCoMB table and/or switch cluster at box 50.

During training (FIG. 3B), embodiments of the HCoMB offset predictor may rely on inputs from the main branch predictor. If a trace is not present in the HCoMB table (a miss during lookup), HCoMB records the predictions coming out of the main branch predictor in a pre-allocate buffer. When this buffer is full, all the information is written to an empty entry in the HCoMB table. If no empty entry is available, the utilities of all entries in that set are decremented. When the utility of an entry becomes 0, that entry is replaced with the new data.

If a trace has a valid entry in the HCoMB table but it has low confidence, embodiments of the HCoMB offset predictor may snoop the main BPU predictions and match each prediction against the data stored in the HCoMB table entry. If the HCoMB data is consistent with the main BPU predictions, the confidence is incremented.
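The training and allocation behavior just described can be sketched as follows; counter widths, saturation values, and structure names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional

CONF_MAX = 3   # assumed saturation value for the confidence counter
UTIL_MAX = 3   # assumed saturation value for the utility counter

@dataclass
class TraceEntry:
    trace: tuple          # recorded taken-branch information for one trace
    confidence: int = 0
    utility: int = 0

def train_on_prediction(entry: TraceEntry, main_bpu_pred, stored_pred) -> None:
    """Compare a snooped main-BPU prediction against the stored trace data:
    a match builds confidence and utility; a mismatch resets both."""
    if main_bpu_pred == stored_pred:
        entry.confidence = min(entry.confidence + 1, CONF_MAX)
        entry.utility = min(entry.utility + 1, UTIL_MAX)
    else:
        entry.confidence = 0
        entry.utility = 0

def allocate(table_set: List[Optional[TraceEntry]], new_trace: tuple) -> bool:
    """Write a full pre-allocate buffer into an empty entry; if none is empty,
    decrement all utilities and replace an entry whose utility reaches 0."""
    for i, e in enumerate(table_set):
        if e is None:
            table_set[i] = TraceEntry(new_trace)
            return True
    for e in table_set:
        e.utility = max(0, e.utility - 1)
    for i, e in enumerate(table_set):
        if e.utility == 0:
            table_set[i] = TraceEntry(new_trace)
            return True
    return False  # no victim yet; utilities keep decaying on later misses
```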
If there is a mismatch, the confidence and utility of the entire entry are reset. When the confidence of the entry exceeds a threshold or saturates (e.g., a counter or field value for the confidence reaches its maximum value), embodiments of the HCoMB offset predictor may perform actual predictions by overriding the main BPU. The HCoMB offset predictor may produce N predictions, one for each taken branch in the trace. Note that when the HCoMB offset predictor performs an actual prediction, the prediction is compared against the output of branch execution. If the prediction is wrong, a pipeline flush occurs. In addition, the HCoMB table entry is invalidated.

With reference to FIG. 4, an embodiment of an electronic apparatus depicts an example of interfacing of HCoMB in a branch prediction pipeline. The entire BPU complex spans multiple pipeline stages in the processor, represented as N, N+1 and so on. The Main Branch Predictor operates in stage N. Embodiments of the HCoMB offset predictor also operate alongside the Main Branch Predictor (MBP) in stage N. The N-1 stage provides the lookup PC (the last available taken branch target) and the last available branch history to both predictors, MBP and HCoMB. In stage N, some embodiments may check if the lookup results in a HCoMB hit and a high confidence trace in the given entry. When this happens, embodiments of the HCoMB offset predictor may issue a cancellation for the MBP prediction.

In stage N, along with checking the hit/miss and trace confidence, some embodiments may also read out the contents of the HCoMB table entry. Note that, as shown in FIG. 2B, an HCoMB entry does not hold the actual prediction. Rather, it only records Set-Way pointers to the branch target buffer (BTB) entries which hold the actual prediction. These pointers are sent to the BTB in the next stage. In N+1, all prediction information (branch PCs, targets) has been obtained using the BTB and sent to the next pipeline stages.
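The pointer-based readout described above amounts to dereferencing Set-Way pointers into the BTB in stage N+1; the BTB layout below is an assumption for illustration:

```python
from typing import Dict, List, Tuple

# Assumed BTB model: a (set, way) pointer maps to a (branch PC, target) pair.
BTB = Dict[Tuple[int, int], Tuple[int, int]]

def read_predictions(btb: BTB, pointers: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Dereference each Set-Way pointer into the (branch PC, target) pair
    held by the corresponding BTB entry."""
    return [btb[ptr] for ptr in pointers]
```

Because the HCoMB entry stores only these pointers rather than the predictions themselves, its storage cost stays small, which is the advantage noted below.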
The further pipeline stages issue the redirection to the BPU based on the predictions from HCoMB as well as update the taken branch history, similar to a standard BPU operation. Accordingly, by recording only BTB pointers, embodiments of the HCoMB offset predictor may essentially act as a control unit during the entire prediction operation. This also highlights a major advantage of some embodiments, because it greatly reduces the storage cost of the predictor and further helps in its adoption in processor designs.

Advantageously, embodiments of the HCoMB offset predictor may provide instructions-per-cycle (IPC) improvement over a baseline while predicting a significant percentage (e.g., about 30%) of all dynamic branches in the program. Embodiments of the HCoMB offset predictor may particularly benefit those benchmarks which have a high fraction of branches and a small branch-to-branch distance. The HCoMB offset predictor's performance may be further increased with larger tables.

With reference to FIGs. 5A to 5B, an embodiment of a method 55 may include identifying an entry in a MTB prediction table that corresponds to a conditional branch instruction at box 56, determining if a confidence level of the entry exceeds a threshold confidence level at box 57, and, if so determined, providing multiple taken branch predictions that stem from the conditional branch instruction from the entry in the MTB prediction table at box 58. For example, the method 55 may also include generating tag information for the conditional branch instruction based on a last taken branch and a branch history at box 59, and identifying the entry in the MTB prediction table based on the generated tag information at box 60.

Some embodiments of the method 55 may further include jumping to a target of a last predicted taken branch at box 61.
For example, the method 55 may include generating pointers to a next N branches which are predicted to be taken based on a current PC and a branch history at box 62, where N is an integer value greater than 1, identifying a PC of a taken branch and a target of the taken branch based on the generated pointers at box 63, and constructing an entire control flow from a current point until the Nth taken branch based on the generated pointers at box 64. Some embodiments of the method 55 may further include discarding a prediction of a main BPU and redirecting the main BPU to start from the target of the last taken branch at box 65.

Those skilled in the art will appreciate that a wide variety of devices may benefit from the foregoing embodiments. The following exemplary core architectures, processors, and computer architectures are non-limiting examples of devices that may beneficially incorporate embodiments of the technology described herein.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

FIG. 6A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 6B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGs. 6A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG.
6A, a processor pipeline 900 includes a fetch stage 902, a length decode stage 904, a decode stage 906, an allocation stage 908, a renaming stage 910, a scheduling (also known as a dispatch or issue) stage 912, a register read/memory read stage 914, an execute stage 916, a write back/memory write stage 918, an exception handling stage 922, and a commit stage 924.

FIG. 6B shows processor core 990 including a front end unit 930 coupled to an execution engine unit 950, and both are coupled to a memory unit 970. The core 990 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 990 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 930 includes a branch prediction unit 932 coupled to an instruction cache unit 934, which is coupled to an instruction translation lookaside buffer (TLB) 936, which is coupled to an instruction fetch unit 938, which is coupled to a decode unit 940. The decode unit 940 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 940 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 990 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 940 or otherwise within the front end unit 930).
The decode unit 940 is coupled to a rename/allocator unit 952 in the execution engine unit 950.

The execution engine unit 950 includes the rename/allocator unit 952 coupled to a retirement unit 954 and a set of one or more scheduler unit(s) 956. The scheduler unit(s) 956 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 956 is coupled to the physical register file(s) unit(s) 958. Each of the physical register file(s) units 958 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 958 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 958 is overlapped by the retirement unit 954 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 954 and the physical register file(s) unit(s) 958 are coupled to the execution cluster(s) 960. The execution cluster(s) 960 includes a set of one or more execution units 962 and a set of one or more memory access units 964.
The execution units 962 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 956, physical register file(s) unit(s) 958, and execution cluster(s) 960 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 964). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 964 is coupled to the memory unit 970, which includes a data TLB unit 972 coupled to a data cache unit 974 coupled to a level 2 (L2) cache unit 976. In one exemplary embodiment, the memory access units 964 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 972 in the memory unit 970. The instruction cache unit 934 is further coupled to the level 2 (L2) cache unit 976 in the memory unit 970.
The L2 cache unit 976 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 900 as follows: 1) the instruction fetch 938 performs the fetch and length decoding stages 902 and 904; 2) the decode unit 940 performs the decode stage 906; 3) the rename/allocator unit 952 performs the allocation stage 908 and renaming stage 910; 4) the scheduler unit(s) 956 performs the schedule stage 912; 5) the physical register file(s) unit(s) 958 and the memory unit 970 perform the register read/memory read stage 914; the execution cluster 960 performs the execute stage 916; 6) the memory unit 970 and the physical register file(s) unit(s) 958 perform the write back/memory write stage 918; 7) various units may be involved in the exception handling stage 922; and 8) the retirement unit 954 and the physical register file(s) unit(s) 958 perform the commit stage 924.

The core 990 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 990 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 934/974 and a shared L2 cache unit 976, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

FIGs. 7A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG.
7A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1002 and with its local subset of the Level 2 (L2) cache 1004, according to embodiments of the invention. In one embodiment, an instruction decoder 1000 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1006 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1008 and a vector unit 1010 use separate register sets (respectively, scalar registers 1012 and vector registers 1014) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1006, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1004 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1004. Data read by a processor core is stored in its L2 cache subset 1004 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1004 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012 bits wide per direction.

FIG. 7B is an expanded view of part of the processor core in FIG. 7A according to embodiments of the invention. FIG.
7B includes an L1 data cache 1006A, part of the L1 cache 1006, as well as more detail regarding the vector unit 1010 and the vector registers 1014. Specifically, the vector unit 1010 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1028), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1020, numeric conversion with numeric convert units 1022A-B, and replication with replication unit 1024 on the memory input. Write mask registers 1026 allow predicating resulting vector writes.

FIG. 8 is a block diagram of a processor 1100 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG. 8 illustrate a processor 1100 with a single core 1102A, a system agent 1110, and a set of one or more bus controller units 1116, while the optional addition of the dashed lined boxes illustrates an alternative processor 1100 with multiple cores 1102A-N, a set of one or more integrated memory controller unit(s) 1114 in the system agent unit 1110, and special purpose logic 1108.

Thus, different implementations of the processor 1100 may include: 1) a CPU with the special purpose logic 1108 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1102A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1102A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1102A-N being a large number of general purpose in-order cores.
Thus, the processor 1100 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1100 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of respective caches 1104A-N within the cores 1102A-N, a set of one or more shared cache units 1106, and external memory (not shown) coupled to the set of integrated memory controller units 1114. The set of shared cache units 1106 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1112 interconnects the integrated graphics logic 1108, the set of shared cache units 1106, and the system agent unit 1110/integrated memory controller unit(s) 1114, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1106 and cores 1102A-N.

In some embodiments, one or more of the cores 1102A-N are capable of multithreading. The system agent 1110 includes those components coordinating and operating cores 1102A-N. The system agent unit 1110 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1102A-N and the integrated graphics logic 1108.
The display unit is for driving one or more externally connected displays.

The cores 1102A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1102A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

FIGs. 9-12 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 9, shown is a block diagram of a system 1200 in accordance with one embodiment of the present invention. The system 1200 may include one or more processors 1210, 1215, which are coupled to a controller hub 1220. In one embodiment the controller hub 1220 includes a graphics memory controller hub (GMCH) 1290 and an Input/Output Hub (IOH) 1250 (which may be on separate chips); the GMCH 1290 includes memory and graphics controllers to which are coupled memory 1240 and a coprocessor 1245; the IOH 1250 couples input/output (I/O) devices 1260 to the GMCH 1290.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1240 and the coprocessor 1245 are coupled directly to the processor 1210, and the controller hub 1220 is in a single chip with the IOH 1250.

The optional nature of additional processors 1215 is denoted in FIG. 9 with broken lines. Each processor 1210, 1215 may include one or more of the processing cores described herein and may be some version of the processor 1100.

The memory 1240 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1220 communicates with the processor(s) 1210, 1215 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1295.

In one example, not being part of the invention, the coprocessor 1245 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1220 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1210, 1215 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one example, not being part of the invention, the processor 1210 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1210 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1245.
Accordingly, the processor 1210 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1245. Coprocessor(s) 1245 accept and execute the received coprocessor instructions. Referring now to FIG. 10, shown is a block diagram of a first more specific exemplary system 1300 in accordance with an example, not being part of the invention, of the present invention. As shown in FIG. 10, multiprocessor system 1300 is a point-to-point interconnect system, and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350. Each of processors 1370 and 1380 may be some version of the processor 1100. In one embodiment of the invention, processors 1370 and 1380 are respectively processors 1210 and 1215, while coprocessor 1338 is coprocessor 1245. In another embodiment, processors 1370 and 1380 are respectively processor 1210 and coprocessor 1245. Processors 1370 and 1380 are shown including integrated memory controller (IMC) units 1372 and 1382, respectively. Processor 1370 also includes as part of its bus controller units point-to-point (P-P) interfaces 1376 and 1378; similarly, second processor 1380 includes P-P interfaces 1386 and 1388. Processors 1370, 1380 may exchange information via a point-to-point (P-P) interface 1350 using P-P interface circuits 1378, 1388. As shown in FIG. 10, IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors. Processors 1370, 1380 may each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point to point interface circuits 1376, 1394, 1386, 1398. Chipset 1390 may optionally exchange information with the coprocessor 1338 via a high-performance interface 1339 and an interface 1392.
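The host/coprocessor split described above — the processor recognizing coprocessor-type instructions and issuing them over a coprocessor bus while executing the rest itself — can be sketched in a few lines of Python. The `Instruction` class, the decode flag, and the `dispatch` routine below are purely illustrative assumptions, not part of the disclosed system:

```python
# Hypothetical sketch of coprocessor dispatch: the host executes ordinary
# instructions itself and forwards coprocessor-type instructions over an
# interconnect. All names here are illustrative, not from the disclosure.
from dataclasses import dataclass


@dataclass
class Instruction:
    opcode: str
    is_coprocessor_type: bool  # as recognized by the host's decoder


def dispatch(stream):
    """Split an instruction stream between the host and the coprocessor."""
    host_issued, coproc_issued = [], []
    for insn in stream:
        if insn.is_coprocessor_type:
            coproc_issued.append(insn.opcode)  # issued on the coprocessor bus
        else:
            host_issued.append(insn.opcode)    # executed by the host core
    return host_issued, coproc_issued


host, coproc = dispatch([
    Instruction("add", False),
    Instruction("matmul", True),
    Instruction("mov", False),
])
```

The same routing idea applies whether the forwarded items are the instructions themselves or control signals representing them.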
In one embodiment, the coprocessor 1338 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.Chipset 1390 may be coupled to a first bus 1316 via an interface 1396. In one example, not being part of the invention, first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.As shown in FIG. 10 , various I/O devices 1314 may be coupled to first bus 1316, along with a bus bridge 1318 which couples first bus 1316 to a second bus 1320. In one example, not being part of the invention, one or more additional processor(s) 1315, such as coprocessors, high-throughput MIC processors, GPGPU's, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1316. In one embodiment, second bus 1320 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1320 including, for example, a keyboard and/or mouse 1322, communication devices 1327 and a storage unit 1328 such as a disk drive or other mass storage device which may include instructions/code and data 1330, in one embodiment. Further, an audio I/O 1324 may be coupled to the second bus 1320. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 10 , a system may implement a multi-drop bus or other such architecture.Referring now to FIG. 
11, shown is a block diagram of a second more specific exemplary system 1400 in accordance with an embodiment of the present invention. Like elements in FIGs. 10 and 11 bear like reference numerals, and certain aspects of FIG. 10 have been omitted from FIG. 11 in order to avoid obscuring other aspects of FIG. 11. FIG. 11 illustrates that the processors 1370, 1380 may include integrated memory and I/O control logic ("CL") 1472 and 1482, respectively. Thus, the CL 1472, 1482 include integrated memory controller units and include I/O control logic. FIG. 11 illustrates that not only are the memories 1332, 1334 coupled to the CL 1472, 1482, but also that I/O devices 1414 are also coupled to the control logic 1472, 1482. Legacy I/O devices 1415 are coupled to the chipset 1390. Referring now to FIG. 12, shown is a block diagram of a SoC 1500 in accordance with an embodiment of the present invention. Similar elements in FIG. 8 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 12, an interconnect unit(s) 1502 is coupled to: an application processor 1510 which includes a set of one or more cores 1102A-N and shared cache unit(s) 1106; a system agent unit 1110; a bus controller unit(s) 1116; an integrated memory controller unit(s) 1114; a set of one or more coprocessors 1520 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1530; a direct memory access (DMA) unit 1532; and a display unit 1540 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1520 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as code 1330 illustrated in FIG. 10, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores", may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)
In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
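The core idea of static binary translation described above — rewriting each source-ISA instruction as one or more target-ISA instructions — can be illustrated with a toy sketch. The opcode names and the mapping table are hypothetical; a real converter must also handle operands, control flow, and untranslatable cases:

```python
# Toy illustration (not the disclosed converter) of static binary translation:
# each source-ISA instruction maps to one or more target-ISA instructions.
# The opcode names and the translation table are hypothetical.
TRANSLATION_TABLE = {
    "x86.push": ["alt.sub sp, 4", "alt.store sp"],  # one-to-many expansion
    "x86.mov":  ["alt.mov"],
    "x86.add":  ["alt.add"],
}


def translate(source_instructions):
    """Convert a source-ISA instruction list into a target-ISA list."""
    target = []
    for insn in source_instructions:
        target.extend(TRANSLATION_TABLE[insn])
    return target


converted = translate(["x86.push", "x86.add"])
```

Dynamic binary translation applies the same mapping at run time, typically caching the converted code, which is why the converted output need not match code compiled natively for the target instruction set.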
The instruction converter may be on processor, off processor, or part on and part off processor.FIG. 13 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 13 shows a program in a high level language 1602 may be compiled using an x86 compiler 1604 to generate x86 binary code 1606 that may be natively executed by a processor with at least one x86 instruction set core 1616. The processor with at least one x86 instruction set core 1616 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1604 represents a compiler that is operable to generate x86 binary code 1606 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1616. Similarly, FIG. 
13 shows the program in the high level language 1602 may be compiled using an alternative instruction set compiler 1608 to generate alternative instruction set binary code 1610 that may be natively executed by a processor without at least one x86 instruction set core 1614 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1612 is used to convert the x86 binary code 1606 into code that may be natively executed by the processor without an x86 instruction set core 1614. This converted code is not likely to be the same as the alternative instruction set binary code 1610 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1612 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1606.Techniques and architectures for branch prediction and/or branch target prediction are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. 
The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. 
Unless specifically stated otherwise as apparent from the discussion herein, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. In addition, certain embodiments are not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow.
A method that is implemented in a graphics processing pipeline is presented. The method comprises processing a graphics workload with a first graphics processing engine by retrieving vertex data associated with graphics primitives, performing lighting operations on the vertex data, performing transformation operations on the vertex data, and performing foveated render work. Additionally, at least a portion of the graphics workload is offloaded to a second graphics processing engine.
1. A method implemented in a graphics processing pipeline comprising:
processing a graphics workload with a first graphics processing engine by:
retrieving vertex data associated with graphics primitives;
performing lighting operations on the vertex data;
performing transformation operations on the vertex data; and
performing foveated render work; and
offloading at least a portion of the graphics workload to a second graphics processing engine.
2. The method of claim 1, wherein offloading the portion of the graphics workload to the second graphics processing engine is done at least in part through a wireless connection.
3. The method of claim 1 or 2, wherein the first graphics processing engine is housed in a first host side computing system and the second graphics processing engine is housed in a head mounted display.
4. The method of claim 3 further comprising receiving eye tracking data in real-time from the head mounted display.
5. The method of any one of claims 1 to 4, wherein offloading at least a portion of the graphics workload comprises performing time warp operations with barrel distortion correction.
6. The method of any one of claims 3 to 5 further comprising presenting visual content on a display attached to the head mounted display.
7. The method of any one of claims 1 to 6, wherein processing the graphics workload with the first graphics processing engine comprises determining lighting data associated with the graphics workload and wherein offloading the portion of the graphics workload to the second graphics processing engine comprises providing the lighting data to the second graphics processing engine.
8.
A computer program comprising a set of commands for being implemented in a graphics processing pipeline, which when executed by a computing system, cause the computing system to:
process a graphics workload with a first graphics processing engine by:
retrieving vertex data associated with graphics primitives;
performing lighting operations on the vertex data;
performing transformation operations on the vertex data; and
performing foveated render work; and
offload at least a portion of the graphics workload to a second graphics processing engine.
9. The computer program of claim 8, wherein the set of commands further cause the computing system to offload the portion of the graphics workload to the second graphics processing engine at least in part through a wireless connection.
10. The computer program of claim 8 or 9, wherein the first graphics processing engine is housed in a host side computing system and the second graphics processing engine is housed in a head mounted display.
11. The computer program of claim 10, wherein the set of commands, which when executed, further cause the computing system to receive eye tracking data in real-time from the head mounted display.
12. The computer program of any one of claims 8 to 11, wherein the set of commands, when executed, further cause the computing system to perform time warp operations with barrel distortion correction.
13. The computer program of any one of claims 10 to 12, wherein the set of commands, when executed, further cause the computing system to provide visual content on a display attached to the head mounted display.
14. The computer program of any one of claims 8 to 13, wherein the set of commands, when executed, further cause the computing system to determine lighting data associated with the graphics workload by the first graphics processing engine and to provide the lighting data to the second graphics processing engine.
15.
A computer readable storage medium having stored thereon the computer program of any one of claims 8 to 14.
TECHNICAL FIELD
Embodiments generally relate to data processing and to graphics processing via a graphics processing unit. More particularly, embodiments relate to a graphics system with additional context.

BACKGROUND OF THE DESCRIPTION
Current parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data; however, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data. Various graphics operations may be divided into workloads.

BRIEF DESCRIPTION OF THE DRAWINGS
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;
FIGs. 2A-2D illustrate parallel processor components, according to an embodiment;
FIGs. 3A-3B are block diagrams of graphics multiprocessors, according to embodiments;
FIGs. 4A-4F illustrate an exemplary architecture in which a plurality of GPUs are communicatively coupled to a plurality of multi-core processors;
FIG. 5 illustrates a graphics processing pipeline, according to an embodiment;
FIG. 6 is a block diagram of an example of an electronic processing system according to an embodiment;
FIG. 7 is a block diagram of an example of a graphics apparatus according to an embodiment;
FIGs. 8A to 8C are flowcharts of an example of a method of processing a graphics workload according to an embodiment;
FIG. 9 is a block diagram of an example of a graphics system according to an embodiment;
FIG.
10 is a block diagram of another example of a graphics system according to an embodiment;
FIG. 11 is an illustration of an example of a head mounted display (HMD) system according to an embodiment;
FIG. 12 is a block diagram of an example of the functional components included in the HMD system of FIG. 11 according to an embodiment;
FIG. 13 is a block diagram of an example of a general processing cluster included in a parallel processing unit according to an embodiment;
FIG. 14 is a conceptual illustration of an example of a graphics processing pipeline that may be implemented within a parallel processing unit, according to an embodiment;
FIG. 15 is a block diagram of an example of a streaming multi-processor according to an embodiment;
FIGs. 16-18 are block diagrams of an example of an overview of a data processing system according to an embodiment;
FIG. 19 is a block diagram of an example of a graphics processing engine according to an embodiment;
FIGs. 20-22 are block diagrams of examples of execution units according to an embodiment;
FIG. 23 is a block diagram of an example of a graphics pipeline according to an embodiment;
FIGs. 24A-24B are block diagrams of examples of graphics pipeline programming according to an embodiment;
FIG. 25 is a block diagram of an example of a graphics software architecture according to an embodiment;
FIG. 26 is a block diagram of an example of an intellectual property (IP) core development system according to an embodiment; and
FIG. 27 is a block diagram of an example of a system on a chip integrated circuit according to an embodiment.

DETAILED DESCRIPTION
In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details.
In other instances, well-known features have not been described in order to avoid obscuring the present invention.

System Overview
FIG. 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs to one or more display device(s) 110A. In one embodiment the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device. In one embodiment the processing subsystem 101 includes one or more parallel processor(s) 112 coupled to memory hub 105 via a bus or other communication link 113. The communication link 113 may be one of any number of standards based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. In one embodiment the one or more parallel processor(s) 112 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor.
In one embodiment the one or more parallel processor(s) 112 form a graphics processing subsystem that can output pixels to one of the one or more display device(s) 110A coupled via the I/O Hub 107. The one or more parallel processor(s) 112 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 110B. Within the I/O subsystem 111, a system storage unit 114 can connect to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s) 120. The network adapter 118 can be an Ethernet adapter or another wired network adapter. The wireless network adapter 119 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios. The computing system 100 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to the I/O hub 107. Communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NVLink high-speed interconnect, or interconnect protocols known in the art. In one embodiment, the one or more parallel processor(s) 112 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU).
In another embodiment, the one or more parallel processor(s) 112 incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s) 112, memory hub 105, processor(s) 102, and I/O hub 107 can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system 100 can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment at least a portion of the components of the computing system 100 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system. It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For instance, in some embodiments, system memory 104 is connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processor(s) 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and memory hub 105 may be integrated into a single chip.
Some embodiments may include two or more sets of processor(s) 102 attached via multiple sockets, which can couple with two or more instances of the parallel processor(s) 112.Some of the particular components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated in FIG. 1 . For example, the memory hub 105 may be referred to as a Northbridge in some architectures, while the I/O hub 107 may be referred to as a Southbridge.FIG. 2A illustrates a parallel processor 200, according to an embodiment. The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor 200 is a variant of the one or more parallel processor(s) 112 shown in FIG. 1 , according to an embodiment.In one embodiment the parallel processor 200 includes a parallel processing unit 202. The parallel processing unit includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. In one embodiment the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as memory hub 105. The connections between the memory hub 105 and the I/O unit 204 form a communication link 113. 
Within the parallel processing unit 202, the I/O unit 204 connects with a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations. When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations to perform those commands to a front end 208. In one embodiment the front end 208 couples with a scheduler 210, which is configured to distribute commands or other work items to a processing cluster array 212. In one embodiment the scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212. In one embodiment the scheduler 210 is implemented via firmware logic executing on a microcontroller. The microcontroller implemented scheduler 210 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing array 212. In one embodiment, the host software can post workloads for scheduling on the processing array 212 via one of multiple graphics processing doorbells. The workloads can then be automatically distributed across the processing array 212 by the scheduler 210 logic within the scheduler microcontroller. The processing cluster array 212 can include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads.
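The doorbell mechanism described above — host software posting workloads on one of several doorbell queues, with scheduler logic draining them for distribution to the processing array — can be modeled with a minimal, hedged sketch. The queue count, method names, and workload format below are illustrative assumptions, not the disclosed hardware interface:

```python
# Hypothetical model of the doorbell mechanism: the host side rings a
# doorbell to post a workload; the scheduler side drains all doorbells.
# Queue count and workload representation are illustrative assumptions.
from collections import deque


class DoorbellScheduler:
    def __init__(self, num_doorbells=4):
        self.doorbells = [deque() for _ in range(num_doorbells)]

    def ring(self, doorbell_id, workload):
        """Host side: post a workload on one of the doorbell queues."""
        self.doorbells[doorbell_id].append(workload)

    def drain(self):
        """Scheduler side: collect all pending workloads for distribution."""
        pending = []
        for q in self.doorbells:
            while q:
                pending.append(q.popleft())
        return pending


sched = DoorbellScheduler()
sched.ring(0, "vertex-shading")
sched.ring(2, "compute-kernel")
work = sched.drain()
```

Decoupling posting from draining in this way is what allows the scheduler microcontroller to distribute the collected workloads across the processing array without stalling the host.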
The scheduler 210 can allocate work to the clusters 214A-214N of the processing cluster array 212 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler 210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212. In one embodiment, different clusters 214A-214N of the processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.

The processing cluster array 212 can be configured to perform various types of parallel processing operations. In one embodiment the processing cluster array 212 is configured to perform general-purpose parallel compute operations. For example, the processing cluster array 212 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.

In one embodiment the processing cluster array 212 is configured to perform parallel graphics processing operations. In embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 can include additional logic to support the execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array 212 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 can transfer data from system memory via the I/O unit 204 for processing.
During processing, the transferred data can be stored to on-chip memory (e.g., parallel processor memory 222), then written back to system memory.

In one embodiment, when the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 can be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters 214A-214N of the processing cluster array 212. In some embodiments, portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.

During operation, the processing cluster array 212 can receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from front end 208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 can be configured to ensure the processing cluster array 212 is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.)
is initiated.

Each of the one or more instances of the parallel processing unit 202 can couple with parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 as well as the I/O unit 204. The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 can include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 222. In one implementation the number of partition units 220A-220N is configured to be equal to the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding memory unit 224B, and an Nth partition unit 220N has a corresponding Nth memory unit 224N. In other embodiments, the number of partition units 220A-220N may not be equal to the number of memory devices.

In various embodiments, the memory units 224A-224N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps, may be stored across the memory units 224A-224N, allowing partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222.
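The idea of storing a render target across all memory units so that the partition units can write it in parallel can be illustrated with a simple address-interleaving function. This is a hypothetical sketch; the tile granularity and the modulo mapping are assumptions, not details given by the text.

```python
# Hypothetical sketch of interleaving a render target across partition
# units 220A-220N: consecutive fixed-size tiles of the surface map to
# successive partition/memory units, so writes can proceed in parallel.

TILE_BYTES = 256  # assumed tile granularity, not specified by the text

def partition_for(address, num_partitions):
    """Return the partition unit index that owns the given byte address."""
    return (address // TILE_BYTES) % num_partitions
```

With four partition units, a frame buffer written sequentially touches partitions 0, 1, 2, 3, 0, 1, ... so no single memory unit becomes a bandwidth bottleneck.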
In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.

In one embodiment, any one of the clusters 214A-214N of the processing cluster array 212 can process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. In one embodiment the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. In one embodiment the memory crossbar 216 can use virtual channels to separate traffic streams between the clusters 214A-214N and the partition units 220A-220N.

While a single instance of the parallel processing unit 202 is illustrated within the parallel processor 200, any number of instances of the parallel processing unit 202 can be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. The different instances of the parallel processing unit 202 can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences.
For example and in one embodiment, some instances of the parallel processing unit 202 can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.

FIG. 2B is a block diagram of a partition unit 220, according to an embodiment. In one embodiment the partition unit 220 is an instance of one of the partition units 220A-220N of FIG. 2A. As illustrated, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache that is configured to perform load and store operations received from the memory crossbar 216 and ROP 226. Read misses and urgent write-back requests are output by L2 cache 221 to frame buffer interface 225 for processing. Updates can also be sent to the frame buffer via the frame buffer interface 225 for processing. In one embodiment the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of FIG. 2A (e.g., within parallel processor memory 222).

In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP 226 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms.
The type of compression that is performed by the ROP 226 can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis.

In some embodiments, the ROP 226 is included within each processing cluster (e.g., cluster 214A-214N of FIG. 2A) instead of within the partition unit 220. In such an embodiment, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data. The processed graphics data may be displayed on a display device, such as one of the one or more display device(s) 110 of FIG. 1, routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of FIG. 2A.

FIG. 2C is a block diagram of a processing cluster 214 within a parallel processing unit, according to an embodiment. In one embodiment the processing cluster is an instance of one of the processing clusters 214A-214N of FIG. 2A. The processing cluster 214 can be configured to execute many threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters.
Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.

Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of FIG. 2A and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within a processing cluster 214. The graphics multiprocessor 234 can process data and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar 240.

Each graphics multiprocessor 234 within the processing cluster 214 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions.
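The divergent-path behavior that distinguishes SIMT from pure SIMD, described above, can be modeled with a per-lane active mask: all lanes share one instruction stream, but a divergent branch executes each path with only the matching lanes enabled before the lanes reconverge. This is a simplified illustrative model, not the hardware mechanism of graphics multiprocessor 234.

```python
# Sketch of SIMT execution with an active mask (hypothetical model):
# each list element is one lane; a divergent branch runs both paths,
# enabling only the lanes whose condition matches, then reconverges.

def simt_branch(values):
    """Each lane doubles even inputs and negates odd ones, SIMT-style."""
    taken = [v % 2 == 0 for v in values]   # per-lane branch condition
    out = list(values)
    # "if" path: only lanes with taken == True are active
    out = [v * 2 if t else v for v, t in zip(out, taken)]
    # "else" path: the remaining lanes are active
    out = [v if t else -v for v, t in zip(out, taken)]
    return out  # lanes reconverge here
```

Note that both paths are stepped through regardless of how many lanes take each one, which is why heavily divergent code loses efficiency on SIMT hardware.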
In one embodiment the same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.

The instructions transmitted to the processing cluster 214 constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. In one embodiment multiple thread groups can be executed concurrently on a graphics multiprocessor 234.

In one embodiment the graphics multiprocessor 234 includes an internal cache memory to perform load and store operations. In one embodiment, the graphics multiprocessor 234 can forego an internal cache and use a cache memory (e.g., L1 cache 308) within the processing cluster 214. Each graphics multiprocessor 234 also has access to L2 caches within the partition units (e.g., partition units 220A-220N of FIG. 2A) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory.
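The relationship between thread group size and processing engine count described above reduces to simple arithmetic: a group wider than the engine count runs over consecutive cycles, while a narrower group leaves engines idle. The following is purely illustrative bookkeeping, not a hardware description.

```python
# Sketch of the cycle count implied by the thread-group discussion: a
# group wider than the number of processing engines needs extra cycles;
# a narrower group leaves some engine slots idle in its final cycle.

def group_schedule(threads, engines):
    """Return (cycles needed, idle engine-slots in the final cycle)."""
    cycles = -(-threads // engines)       # ceiling division
    idle = cycles * engines - threads
    return cycles, idle
```

For example, a 48-thread group on 32 engines takes two cycles with 16 engine slots idle in the second, while a 24-thread group finishes in one cycle with 8 engines idle.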
In embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234, the instances can share common instructions and data, which may be stored in the L1 cache 308.

Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of FIG. 2A. The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU 245 may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor 234 or the L1 cache or processing cluster 214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss.

In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor 234 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar 216.
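The PTE-based translation performed by the MMU 245 can be sketched as a page-table lookup that also derives a cache line index from the resulting physical address. The page size, line size, and the exact meaning of the cache line index here are assumptions for illustration; the text does not specify them.

```python
# Hypothetical sketch of an MMU translation: a page table maps a virtual
# page to a physical page, and low-order bits of the physical address
# supply a cache line index usable for hit/miss checks. Sizes assumed.

PAGE = 4096   # assumed page/tile size in bytes
LINE = 64     # assumed cache line size in bytes

def translate(vaddr, page_table):
    """Translate a virtual address; return (physical addr, line index)."""
    vpage, offset = divmod(vaddr, PAGE)
    ppage = page_table[vpage]             # raises KeyError on a fault
    paddr = ppage * PAGE + offset
    return paddr, (paddr // LINE) % (PAGE // LINE)
```

A TLB, as mentioned above, would simply be a small cache keyed by `vpage` that short-circuits the `page_table` lookup on repeated accesses to the same page.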
A preROP 242 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 234 and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 220A-220N of FIG. 2A). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.

It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214. Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. In one embodiment, each processing cluster 214 can be configured to operate independently of other processing clusters 214 using separate and distinct processing units, L1 caches, etc.

FIG. 2D shows a graphics multiprocessor 234, according to one embodiment. In such an embodiment the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268.

In one embodiment, the instruction cache 252 receives a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254.
The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core 262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units 266.

The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 234. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores 262, load/store units 266) of the graphics multiprocessor 234. In one embodiment, the register file 258 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 258. In one embodiment, the register file 258 is divided between the different warps being executed by the graphics multiprocessor 234.

The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 234. The GPGPU cores 262 can be similar in architecture or can differ in architecture, according to embodiments. For example and in one embodiment, a first portion of the GPGPU cores 262 include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. In one embodiment the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor 234 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations.
In one embodiment one or more of the GPGPU cores can also include fixed or special function logic.

In one embodiment the GPGPU cores 262 include SIMD logic capable of performing a single instruction on multiple sets of data. In one embodiment GPGPU cores 262 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.

The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 234 to the register file 258 and to the shared memory 270. In one embodiment, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, thus data transfer between the GPGPU cores 262 and the register file 258 is very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program managed cache.
Threads executing on the GPGPU cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.

FIGs. 3A-3B illustrate additional graphics multiprocessors, according to embodiments. The illustrated graphics multiprocessors 325, 350 are variants of the graphics multiprocessor 234 of FIG. 2C. The illustrated graphics multiprocessors 325, 350 can be configured as a streaming multiprocessor (SM) capable of simultaneous execution of a large number of execution threads.

FIG. 3A shows a graphics multiprocessor 325 according to an additional embodiment. The graphics multiprocessor 325 includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of FIG. 2D. For example, the graphics multiprocessor 325 can include multiple instances of the instruction unit 332A-332B, register file 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, GPGPU core 337A-337B, GPGPU core 338A-338B) and multiple sets of load/store units 340A-340B. In one embodiment the execution resource units have a common instruction cache 330, texture and/or data cache memory 342, and shared memory 346.

The various components can communicate via an interconnect fabric 327. In one embodiment the interconnect fabric 327 includes one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325. In one embodiment the interconnect fabric 327 is a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor 325 is stacked. The components of the graphics multiprocessor 325 communicate with remote components via the interconnect fabric 327.
For example, the GPGPU cores 336A-336B, 337A-337B, and 338A-338B can each communicate with shared memory 346 via the interconnect fabric 327. The interconnect fabric 327 can arbitrate communication within the graphics multiprocessor 325 to ensure a fair bandwidth allocation between components.

FIG. 3B shows a graphics multiprocessor 350 according to an additional embodiment. The graphics processor includes multiple sets of execution resources 356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load store units, as illustrated in FIG. 2D and FIG. 3A. The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354 and shared memory 362. In one embodiment the execution resources 356A-356D can share an instruction cache 354 and shared memory 362, as well as multiple instances of a texture and/or data cache memory 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of FIG. 3A.

Persons skilled in the art will understand that the architecture described in FIGS. 1, 2A-2D, and 3A-3B is descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of FIG.
2A, as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein.

In some embodiments a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

Techniques for GPU to Host Processor Interconnection

FIG. 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413 are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440-443 (e.g., buses, point-to-point interconnects, etc.). In one embodiment, the high-speed links 440-443 support a communication throughput of 4GB/s, 30GB/s, 80GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles of the invention are not limited to any particular communication protocol or throughput.

In addition, in one embodiment, two or more of the GPUs 410-413 are interconnected over high-speed links 444-445, which may be implemented using the same or different protocols/links than those used for high-speed links 440-443.
Similarly, two or more of the multi-core processors 405-406 may be connected over a high speed link 433, which may be a symmetric multi-processor (SMP) bus operating at 20GB/s, 30GB/s, 120GB/s or higher. Alternatively, all communication between the various system components shown in FIG. 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles of the invention are not limited to any particular type of interconnect technology.

In one embodiment, each multi-core processor 405-406 is communicatively coupled to a processor memory 401-402, via memory interconnects 430-431, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450-453, respectively. The memory interconnects 430-431 and 450-453 may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In one embodiment, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).

As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to a particular memory 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories.
For example, processor memories 401-402 may each comprise 64GB of the system memory address space and GPU memories 420-423 may each comprise 32GB of the system memory address space (resulting in a total of 256GB addressable memory in this example).

FIG. 4B illustrates additional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446 in accordance with one embodiment. The graphics acceleration module 446 may include one or more GPU chips integrated on a line card which is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.

The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the invention (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 426 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, each of the L2 and L3 caches is shared by two adjacent cores. The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include processor memories 401-402.

Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherence bus 464.
For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 464 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles of the invention.

In one embodiment, a proxy circuit 425 communicatively couples the graphics acceleration module 446 to the coherence bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over high-speed link 440 (e.g., a PCIe bus, NVLink, etc.) and an interface 437 connects the graphics acceleration module 446 to the link 440.

In one implementation, an accelerator integration circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines.
In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N, or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.

In one embodiment, the accelerator integration circuit 436 includes a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431-432, N. In one embodiment, the data stored in cache 438 and graphics memories 433-434, N is kept coherent with the core caches 462A-462D, 456 and system memory 411. As mentioned, this may be accomplished via proxy circuit 425 which takes part in the cache coherency mechanism on behalf of cache 438 and memories 433-434, N (e.g., sending updates to the cache 438 related to modifications/accesses of cache lines on processor caches 462A-462D, 456 and receiving updates from the cache 438).

A set of registers 445 store context data for threads executed by the graphics processing engines 431-432, N and a context management circuit 448 manages the thread contexts. For example, the context management circuit 448 may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store current register values to a designated region in memory (e.g., identified by a context pointer).
It may then restore the register values when returning to the context. In one embodiment, an interrupt management circuit 447 receives and processes interrupts received from system devices.

In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 411 by the MMU 439. One embodiment of the accelerator integration circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. In one embodiment, a virtualized graphics execution environment is presented in which the resources of the graphics processing engines 431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications.

Thus, the accelerator integration circuit acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services. In addition, the accelerator integration circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.

Because hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value.
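The save/restore behavior described above, where register values are written to a region identified by a context pointer on a switch and reloaded on resume, can be modeled minimally in software. The register names and dictionary-based "memory" are invented for illustration; the actual context management circuit 448 is hardware:

```python
# Hypothetical sketch of context save/restore: on a context switch the
# current register values are written to a region identified by a
# context pointer, and restored when the thread is resumed.
context_memory = {}  # context pointer -> saved register values

def save_context(context_ptr, registers):
    # Store current register values to the designated memory region.
    context_memory[context_ptr] = dict(registers)

def restore_context(context_ptr):
    # Reload the register values when returning to the context.
    return dict(context_memory[context_ptr])

# Switch from thread A to thread B, then back to A.
regs_a = {"pc": 0x1000, "wd": 0xBEEF}
save_context("ctx_a", regs_a)
regs_b = {"pc": 0x2000, "wd": 0xCAFE}   # second thread now executes
restored = restore_context("ctx_a")     # thread A resumes where it left off
```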
One function of the accelerator integration circuit 436, in one embodiment, is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.

As mentioned, in the illustrated embodiment, one or more graphics memories 433-434, M are coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.

In one embodiment, to reduce data traffic over link 440, biasing techniques are used to ensure that the data stored in graphics memories 433-434, M is data which will be used most frequently by the graphics processing engines 431-432, N and preferably not used by the cores 460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines 431-432, N) within the caches 462A-462D, 456 of the cores and system memory 411.

FIG. 4C illustrates another embodiment in which the accelerator integration circuit 436 is integrated within the processor 407. In this embodiment, the graphics processing engines 431-432, N communicate directly over the high-speed link 440 to the accelerator integration circuit 436 via interface 437 and interface 435 (which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit 436 may perform the same operations as those described with respect to FIG.
4B, but potentially at a higher throughput given its close proximity to the coherence bus 464 and caches 462A-462D, 426.

One embodiment supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit 436 and programming models which are controlled by the graphics acceleration module 446.

In one embodiment of the dedicated process model, graphics processing engines 431-432, N are dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431-432, N, providing virtualization within a VM/partition.

In the shared programming models, the graphics processing engines 431-432, N may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.

For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. In one embodiment, process elements are stored in system memory 411 and are addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, calling system software to add the process element to the process element linked list).
The lower 16-bits of the process handle may be the offset of the process element within the process element linked list.

FIG. 4D illustrates an exemplary accelerator integration slice 490. As used herein, a "slice" comprises a specified portion of the processing resources of the accelerator integration circuit 436. Application effective address space 482 within system memory 411 stores process elements 483. In one embodiment, the process elements 483 are stored in response to GPU invocations 481 from applications 480 executed on the processor 407. A process element 483 contains the process state for the corresponding application 480. A work descriptor (WD) 484 contained in the process element 483 can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD 484 is a pointer to the job request queue in the application's address space 482.

The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. Embodiments of the invention include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.

In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431.
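Since the lower 16 bits of the process handle may carry the offset of the process element within the linked list, extracting that offset is a simple mask. The handle value below and the meaning of its upper bits are hypothetical, chosen only to illustrate the stated 16-bit field:

```python
def process_element_offset(process_handle):
    # The lower 16 bits of the handle give the offset of the process
    # element within the process element linked list.
    return process_handle & 0xFFFF

# Example: a handle whose upper bits carry implementation-specific data
# and whose low 16 bits locate the element.
handle = (0x12345 << 16) | 0x00C0
offset = process_element_offset(handle)  # 0xC0
```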
Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integration circuit 436 for the owning partition and the operating system initializes the accelerator integration circuit 436 for the owning process at the time when the graphics acceleration module 446 is assigned.

In operation, a WD fetch unit 491 in the accelerator integration slice 490 fetches the next WD 484 which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 445 and used by the MMU 439, interrupt management circuit 447 and/or context management circuit 448 as illustrated. For example, one embodiment of the MMU 439 includes segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.

In one embodiment, the same set of registers 445 are duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 490.
Exemplary registers that may be initialized by the hypervisor are shown in Table 1.

Table 1 - Hypervisor Initialized Registers
1  Slice Control Register
2  Real Address (RA) Scheduled Processes Area Pointer
3  Authority Mask Override Register
4  Interrupt Vector Table Entry Offset
5  Interrupt Vector Table Entry Limit
6  State Register
7  Logical Partition ID
8  Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9  Storage Description Register

Exemplary registers that may be initialized by the operating system are shown in Table 2.

Table 2 - Operating System Initialized Registers
1  Process and Thread Identification
2  Effective Address (EA) Context Save/Restore Pointer
3  Virtual Address (VA) Accelerator Utilization Record Pointer
4  Virtual Address (VA) Storage Segment Table Pointer
5  Authority Mask
6  Work Descriptor

In one embodiment, each WD 484 is specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information a graphics processing engine 431-432, N requires to do its work, or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.

FIG. 4E illustrates additional details for one embodiment of a shared model. This embodiment includes a hypervisor real address space 498 in which a process element list 499 is stored. The hypervisor real address space 498 is accessible via a hypervisor 496 which virtualizes the graphics acceleration module engines for the operating system 495.

The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models where the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced shared and graphics directed shared.

In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495.
For a graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism. 2) An application's job request is guaranteed by the graphics acceleration module 446 to complete in a specified amount of time, including any translation faults, or the graphics acceleration module 446 provides the ability to preempt the processing of the job. 3) The graphics acceleration module 446 must be guaranteed fairness between processes when operating in the directed shared programming model.

In one embodiment, for the shared model, the application 480 is required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module 446 and can be in the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module 446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR.
If the accelerator integration circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. In one embodiment, the CSRP is one of the registers 445 containing the effective address of an area in the application's address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.

Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446. The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.

Table 3 - OS to Hypervisor Call Parameters
1  A work descriptor (WD)
2  An Authority Mask Register (AMR) value (potentially masked)
3  An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4  A process ID (PID) and optional thread ID (TID)
5  A virtual address (VA) accelerator utilization record pointer (AURP)
6  The virtual address of the storage segment table pointer (SSTP)
7  A logical interrupt service number (LISN)

Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type.
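One plausible reading of the masking sequence above, assuming the override registers act as bitwise masks on the AMR (the actual register semantics are implementation-specific, and the values below are invented), is:

```python
def os_prepares_amr(amr, uamor):
    # The OS applies the current UAMOR value to the AMR value
    # before passing the AMR in the hypervisor call (assumed AND).
    return amr & uamor

def hypervisor_places_amr(amr, amor):
    # The hypervisor may optionally apply the current AMOR value
    # before placing the AMR into the process element (assumed AND).
    return amr & amor

amr_from_app = 0b1111_0000
masked = os_prepares_amr(amr_from_app, uamor=0b1010_1010)
in_process_element = hypervisor_places_amr(masked, amor=0b1110_0000)
```

Under this reading, each privilege level can only narrow, never widen, the authority bits an application requested.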
The process element may include the information shown in Table 4.

Table 4 - Process Element Information
1   A work descriptor (WD)
2   An Authority Mask Register (AMR) value (potentially masked)
3   An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4   A process ID (PID) and optional thread ID (TID)
5   A virtual address (VA) accelerator utilization record pointer (AURP)
6   The virtual address of the storage segment table pointer (SSTP)
7   A logical interrupt service number (LISN)
8   Interrupt vector table, derived from the hypervisor call parameters
9   A state register (SR) value
10  A logical partition ID (LPID)
11  A real address (RA) hypervisor accelerator utilization record pointer
12  The Storage Descriptor Register (SDR)

In one embodiment, the hypervisor initializes a plurality of accelerator integration slice 490 registers 445.

As illustrated in FIG. 4F, one embodiment of the invention employs a unified memory addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processor memories 401-402 and vice versa, thereby simplifying programmability. In one embodiment, a first portion of the virtual/effective address space is allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on.
The entire virtual/effective memory space (sometimes referred to as the effective address space) is thereby distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.

In one embodiment, bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E ensures cache coherence between the caches of the host processors (e.g., 405) and the GPUs 410-413 and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 494A-494E are illustrated in FIG. 4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436.

One embodiment allows GPU-attached memory 420-423 to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability for GPU-attached memory 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to set up operands and access computation results, without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU attached memory 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU 410-413.
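The biasing techniques mentioned in this section track, per GPU-attached memory page, whether the page is in GPU bias or host bias and route local GPU requests accordingly. The sketch below is a hedged software model only; the page size, table encoding, and routing labels are assumptions, while the real bias table may live in a stolen memory range of the GPU-attached memory with a bias cache in the GPU:

```python
GPU_BIAS, HOST_BIAS = 1, 0
PAGE_SIZE = 4096  # assumed page granularity

# One entry (1-2 bits in hardware) per GPU-attached memory page.
bias_table = {}

def route_gpu_request(address):
    """Decide where a local GPU request is serviced."""
    page = address // PAGE_SIZE
    if bias_table.get(page, HOST_BIAS) == GPU_BIAS:
        return "gpu_memory"        # forwarded directly to GPU memory
    return "host_processor"        # forwarded over the high-speed link

bias_table[5] = GPU_BIAS
hit = route_gpu_request(5 * PAGE_SIZE + 128)   # lands in GPU-biased page
miss = route_gpu_request(9 * PAGE_SIZE)        # host-biased by default
```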
The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.

In one implementation, the selection between GPU bias and host processor bias is driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPU 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.

In one implementation, the bias table entry associated with each access to the GPU-attached memory 420-423 is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU 410-413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high-speed link as discussed above). In one embodiment, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.

The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

One mechanism for changing the bias state employs an API call (e.g.
OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.

In one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. To access these pages, the processor 405 may request access from the GPU 410, which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the processor 405 and GPU 410 it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405 and vice versa.

Graphics Processing Pipeline

FIG. 5 illustrates a graphics processing pipeline 500, according to an embodiment. In one embodiment a graphics processor can implement the illustrated graphics processing pipeline 500. The graphics processor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of FIG. 2, which, in one embodiment, is a variant of the parallel processor(s) 112 of FIG. 1. The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., parallel processing unit 202 of FIG. 2) as described herein. For example, a shader unit (e.g., graphics multiprocessor 234 of FIG. 3) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524.
The functions of data assembler 502, primitive assemblers 506, 514, 518, tessellation unit 510, rasterizer 522, and raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., processing cluster 214 of FIG. 3) and a corresponding partition unit (e.g., partition unit 220A-220N of FIG. 2). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. In one embodiment, one or more portions of the graphics processing pipeline 500 can be performed by parallel processing logic within a general purpose processor (e.g., CPU). In one embodiment, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., parallel processor memory 222 as in FIG. 2) via a memory interface 528, which may be an instance of the memory interface 218 of FIG. 2.

In one embodiment the data assembler 502 is a processing unit that collects vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space.

A first instance of a primitive assembler 506 receives vertex attributes from the vertex processing unit 504. The primitive assembler 506 reads stored vertex attributes as needed and constructs graphics primitives for processing by tessellation control processing unit 508.
The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).

The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch. The control points are transformed from an input representation from the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit 510 is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit 512. The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.

A second instance of a primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler 514 as specified by the geometry shader programs.
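As a simplified illustration of how a per-edge tessellation factor quantifies subdivision (a uniform-parameterization sketch, not the hardware tessellation algorithm), an edge with factor N can be split into N segments:

```python
def tessellate_edge(p0, p1, factor):
    """Subdivide the edge p0-p1 into `factor` segments, returning the
    vertices produced (simplified, uniform parameterization)."""
    return [tuple(a + (b - a) * i / factor for a, b in zip(p0, p1))
            for i in range(factor + 1)]

# A view-dependent level of detail of 4 yields 5 vertices along the edge;
# a higher factor would yield a finer subdivision of the same edge.
verts = tessellate_edge((0.0, 0.0), (1.0, 0.0), 4)
```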
In one embodiment the geometry processing unit 516 is programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives.

In some embodiments the geometry processing unit 516 can add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520. The geometry processing unit 516 reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit 520 performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer 522.

The rasterizer 522 can perform depth culling and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and output those fragments and associated coverage data to the fragment/pixel processing unit 524. The fragment/pixel processing unit 524 is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit 524 transforms fragments or pixels received from rasterizer 522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit 524 may be programmed to perform operations including but not limited to texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit 526.
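The depth culling mentioned above, like the z test performed by the raster operations unit, amounts to a per-pixel depth-buffer compare. A minimal sketch, assuming a smaller-is-closer depth convention (the convention and data structure are illustrative only):

```python
import math

# Hypothetical depth-buffer test: keep a fragment only if it is closer
# than what is already stored at its pixel (smaller z = closer here).
depth_buffer = {}  # (x, y) -> nearest depth seen so far

def z_test(x, y, z):
    """Return True (and update the buffer) if the fragment survives."""
    if z < depth_buffer.get((x, y), math.inf):
        depth_buffer[(x, y)] = z
        return True
    return False  # fragment is occluded and can be culled

first = z_test(10, 20, 0.8)    # survives: nothing stored yet
nearer = z_test(10, 20, 0.3)   # survives: closer fragment replaces it
farther = z_test(10, 20, 0.5)  # culled: occluded by depth 0.3
```

Performing this test early, in the rasterizer, avoids shading fragments that the raster operations unit would discard anyway.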
The fragment/pixel processing unit 524 can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.

The raster operations unit 526 is a processing unit that performs raster operations including, but not limited to stencil, z test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory 222 as in FIG. 2, and/or system memory 104 as in FIG. 1), to be displayed on the one or more display device(s) 110 or for further processing by one of the one or more processor(s) 102 or parallel processor(s) 112. In some embodiments the raster operations unit 526 is configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.

Graphics System With Additional Context Examples

Turning now to FIG. 6, an embodiment of an electronic processing system 600 may include an application processor 611, persistent storage media 612 communicatively coupled to the application processor 611, and a graphics subsystem 613 communicatively coupled to the application processor 611. The graphics subsystem 613 may include a first graphics engine 614 to process a graphics workload, and a second graphics engine 615 to offload at least a portion of the graphics workload from the first graphics engine 614. For example, the second graphics engine 615 may include a low precision compute engine (e.g., as described in more detail below). In some embodiments, the system 600 may include a wearable device to house the second graphics engine 615 (e.g.
as described in more detail below).

Embodiments of each of the above application processor 611, persistent storage media 612, graphics subsystem 613, first graphics engine 614, second graphics engine 615, and other system components may be implemented in hardware, software, or any suitable combination thereof. For example, hardware implementations may include configurable logic such as, for example, programmable logic arrays (PLAs), FPGAs, complex programmable logic devices (CPLDs), or in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. Alternatively, or additionally, these components may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., to be executed by a processor or computing device. For example, computer program code to carry out the operations of the components may be written in any combination of one or more operating system applicable/appropriate programming languages, including an object-oriented programming language such as PYTHON, PERL, JAVA, SMALLTALK, C++, C# or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.

For example, the system 600 may include similar components and/or features as system 100, further configured to offload graphics work to a second graphics engine. For example, the graphics subsystem 613 may include similar components and/or features as the parallel processor 200, further configured with a second graphics engine as described herein. The system 600 may also be adapted to work with a stereo head mounted system such as, for example, the system described in connection with FIGs.
11-15 below.

Turning now to FIG. 7 , an embodiment of a graphics apparatus 700 may include a first graphics engine 721 to process a graphics workload, and a second graphics engine 722 to offload at least a portion of the graphics workload from the first graphics engine 721. For example, the second graphics engine may include a low precision compute engine 723. In some embodiments, the low precision compute engine may be configured to perform at least one of time warp (e.g. which may also be referred to as reprojection in some embodiments), space warp (e.g. or frame rate up conversion in some embodiments), and machine learning. The apparatus 700 may also include a second context for the second graphics engine 722 which is independent of a first context for the first graphics engine 721.

In some embodiments, the apparatus 700 may further include a wearable device to house the second graphics engine 722. The second graphics engine 722 may be further configured to offload render work from the first graphics engine 721. For example, the wearable device may include a head mounted display and the second graphics engine 722 may be configured to offload foveated render work from the first graphics engine 721.

Embodiments of each of the above first graphics engine 721, second graphics engine 722, low precision compute engine 723, and other components of the apparatus 700 may be implemented in hardware, software, or any combination thereof. For example, portions or all of the apparatus 700 may be implemented as part of the parallel processor 200, further configured with a second graphics engine as described herein. The apparatus 700 may also be adapted to work with a stereo head mounted system such as, for example, the system described in connection with FIGs. 11-15 below.
For example, hardware implementations may include configurable logic such as, for example, PLAs, FPGAs, CPLDs, or in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS, or TTL technology, or any combination thereof. Alternatively, or additionally, these components may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., to be executed by a processor or computing device. For example, computer program code to carry out the operations of the components may be written in any combination of one or more operating system applicable/appropriate programming languages, including an object-oriented programming language such as PYTHON, PERL, JAVA, SMALLTALK, C++, C# or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.

Turning now to FIGs. 8A to 8C , an embodiment of a method 800 may include processing a graphics workload with a first graphics engine at block 831, and offloading at least a portion of the graphics workload from the first graphics engine to a second graphics engine at block 832. The method 800 may also include providing a low precision compute engine for the second graphics engine at block 833. For example, the method 800 may include performing at least one of time warp, space warp, and machine learning with the low precision compute engine at block 834 and/or providing a second context for the second graphics engine which is independent of a first context for the first graphics engine at block 835.

In some embodiments, the method 800 may further include providing a wearable device to house the second graphics engine at block 836. The method 800 may also include offloading render work from the first graphics engine to the second graphics engine at block 837.
For example, the method 800 may include providing a head mounted display to house the second graphics engine at block 838, and offloading foveated render work from the first graphics engine to the second graphics engine at block 839.

Embodiments of the method 800 may be implemented in a system, apparatus, GPU, PPU, or a graphics processor pipeline apparatus such as, for example, those described herein. More particularly, hardware implementations of the method 800 may include configurable logic such as, for example, PLAs, FPGAs, CPLDs, or in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS, or TTL technology, or any combination thereof. Alternatively, or additionally, the method 800 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., to be executed by a processor or computing device. For example, computer program code to carry out the operations of the components may be written in any combination of one or more operating system applicable/appropriate programming languages, including an object-oriented programming language such as PYTHON, PERL, JAVA, SMALLTALK, C++, C# or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. For example, the method 800 may be implemented on a computer readable medium as described in connection with Examples 18 to 24 below.

For example, embodiments or portions of the method 800 may be implemented in applications (e.g. through an API) or driver software. Other embodiments or portions of the method 800 may be implemented in specialized code (e.g. shaders) to be executed on a GPU. Other embodiments or portions of the method 800 may be implemented in fixed function logic or specialized hardware (e.g.
in the GPU).

Low Precision Compute Engine Examples

Advantageously, some embodiments may provide a fixed-function warping engine for virtual reality (VR) to avoid a 3D pipeline context switch. For example, some embodiments may reduce an overhead of execution unit (EU) load balancing by using fixed function logic to perform time-warping when using VR. Advantageously, some embodiments may not require any context switch. Providing an additional asynchronous engine may also eliminate an otherwise complex asynchronous compute for the time warp operation when a context switch is involved. For example, some embodiments may provide a hardware block to implement time warp apart from the regular 3D hardware to avoid the 3D pipeline context switch. The hardware block may include a dedicated cache for time warp data.

Turning now to FIG. 9 , a graphics system 900 may include a 3D pipeline 901 to process a graphics workload to produce frame information for an n-th frame (3DFrame[n]). The system 900 may offload a time warp operation 903 to a dedicated low precision compute engine 905 (e.g. the processing cost and hardware is dedicated to the time warp operation 903). The low precision compute engine 905 may share a memory interface 907 with the 3D pipeline 901. The time warp operation 903 may produce an alternate frame (TWFrame[n+1]) for a next frame. A selector 909 may select between the next regular 3D frame (3DFrame[n+1]) and the alternate frame (TWFrame[n+1]) for the next frame based on a detected condition. For example, if the next regular 3D frame is not ready after a pre-determined amount of time (e.g. 20 ms) the selector 909 may select the alternate frame.

Some embodiments may take all or most of the function for the time warp operation which might otherwise be done on GPU/API etc. and provide a dedicated asynchronous compute mechanism to perform the time warp operation.
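The deadline-based selection by selector 909 may be modeled with a short Python sketch. This is an illustrative model only, not the hardware: the name select_frame and the blocking render call are assumptions, and a real selector would poll the 3D pipeline asynchronously rather than wait on it.

```python
import time

FRAME_DEADLINE_S = 0.020  # e.g. the 20 ms readiness condition described above

def select_frame(render_3d_frame, time_warp_frame, deadline_s=FRAME_DEADLINE_S):
    """Model of selector 909: prefer 3DFrame[n+1], fall back to TWFrame[n+1].

    render_3d_frame is a callable producing the next regular 3D frame;
    time_warp_frame is the alternate frame from the warp engine.
    """
    start = time.monotonic()
    frame = render_3d_frame()  # a real selector would poll, not block
    if time.monotonic() - start <= deadline_s:
        return frame  # the regular 3D frame was ready in time
    return time_warp_frame  # 3D pipeline missed the deadline
```

In this model, a renderer that finishes within the budget supplies the regular frame, while a late renderer causes the pre-computed warped frame to be displayed instead.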
Advantageously, the GPU does not need to be disturbed for time warp because there is a dedicated engine for time warp. For example, the time warp engine may advantageously include a low-power, fixed point image processing/compute engine because floating point may not be needed for the time warp operation. Some embodiments may advantageously use less power by performing time warp operations on a low precision compute engine instead of the more fully featured 3D pipeline. The low precision compute engine may be a fixed function engine, but more preferably may be a programmable fixed point general purpose unit that may also be used for space warp, machine learning, or other processing suitable for a low precision compute engine.

The architecture for the low precision compute engine may be similar to the GPU architecture, but with fewer compute units and all integer arithmetic (various fixed function stages of the GPU may be omitted). The low precision compute engine may include its own cache, etc., and may have a separate command stream but share a same memory interface with the GPU. For example, the low precision compute engine may have its own command stream such that time warp may be invoked by a separate task queue. The separate queue could be provided to the separate engine. Some embodiments may include a separate low precision compute engine per GPU and may scale with the number of GPUs. The command streamer may be outside the GPUs so it can stream to any resource inside the GPUs. Advantageously, some embodiments may reduce motion to photon latency.

Wearable Device Examples

Some embodiments may advantageously provide an improved or optimized second phase time warp on a head-mounted display (HMD) with gaze information. Some embodiments may improve or optimize time warp on the HMD to reduce processing on the HMD to provide power and/or latency benefits.
For example, portions of the GPU or graphics pipeline may be replicated on the HMD to handle more complex processing. In some embodiments, foveated render tasks may be offloaded to the HMD. In some embodiments, a wearable device may alternatively, or additionally, include a shoulder mounted display and/or a neck mounted display.

For time warp, foveated rendering may not be as efficient due to the need for a larger fovea radius (e.g. to cover potential head movement). By performing foveated rendering on the HMD side, some embodiments may improve or optimize the foveated rendering. Performing the foveated rendering on the HMD side may account for the larger radius for time warp. A first phase may refer to a shifting aspect of time warp, while the second phase may refer to foveated rendering on the HMD. The first phase may not be as efficient on the host side, because the host may maintain a higher precision and larger buffer for the time warp operation than needed.

Turning now to FIG. 10 , a graphics system 1000 may include a host side 1011 communicatively coupled to an HMD side 1012 (e.g. communication may be wired or wireless). The graphics system 1000 may offload foveated render tasks 1013 to the HMD side 1012. The HMD side 1012 may improve or optimize the foveated render tasks 1013a for time warp. For example, when combining foveated render tasks with time warp, some embodiments may define an elliptical fovea aligned with a direction of motion. The host side 1011 may still perform some time warp and/or foveated render operations. But by boosting the processing power on the HMD side 1012, some embodiments may be able to offload more and/or balance the graphics workload.

Some other HMD architectures may just include a display port and some translation hardware to send the translated info to the display panels. Some embodiments may advantageously include a graphics data interface (e.g.
more like universal serial bus (USB)), where graphics may be transmitted as data instead of a stream. Some embodiments may include additional compute capability on the HMD side 1012 to process/post-process the data before sending it to the display(s). For example, the HMD side 1012 may include media decode/encode capabilities. For time warp operations in the HMD side 1012, further capabilities may include barrel distortion correction, chromatic aberration correction, and/or other compositor capability shifted to the HMD side 1012. The host side may retain those functions as well, but the system 1000 may offload some more work to the HMD side 1012 if the HMD side 1012 is capable. Some rasterization capability may also be shifted to the HMD side 1012. The particular partition between what is done on the host side 1011 and what is done on the HMD side 1012 may depend on the particular workload and/or processing ability of the HMD side 1012. With more processing capability on the HMD side 1012, the system 1000 may only need to transmit delta information for some frames (e.g. transmit only the modified pixel blocks from the host side 1011 to the HMD side 1012).

Head-Mounted Display System Overview

FIG. 11 shows a head mounted display (HMD) system 1100 that is being worn by a user while experiencing an immersive environment such as, for example, a virtual reality (VR) environment, an augmented reality (AR) environment, a multi-player three-dimensional (3D) game, and so forth. In the illustrated example, one or more straps 1120 hold a frame 1102 of the HMD system 1100 in front of the eyes of the user. Accordingly, a left-eye display 1104 may be positioned to be viewed by the left eye of the user and a right-eye display 1106 may be positioned to be viewed by the right eye of the user. The left-eye display 1104 and the right-eye display 1106 may alternatively be integrated into a single display in certain examples such as, for example, a smart phone being worn by the user.
In the case of AR, the displays 1104, 1106 may be view-through displays that permit the user to view the physical surroundings, with other rendered content (e.g., virtual characters, informational annotations, heads up display/HUD) being presented on top of a live feed of the physical surroundings.

In one example, the frame 1102 includes a left look-down camera 1108 to capture images from an area generally in front of the user and beneath the left eye (e.g., left hand gestures). Additionally, a right look-down camera 1110 may capture images from an area generally in front of the user and beneath the right eye (e.g., right hand gestures). The illustrated frame 1102 also includes a left look-front camera 1112 and a right look-front camera 1114 to capture images in front of the left and right eyes, respectively, of the user. The frame 1102 may also include a left look-side camera 1116 to capture images from an area to the left of the user and a right look-side camera 1118 to capture images from an area to the right of the user.

The images captured by the cameras 1108, 1110, 1112, 1114, 1116, 1118, which may have overlapping fields of view, may be used to detect gestures made by the user as well as to analyze and/or reproduce the external environment on the displays 1104, 1106. In one example, the detected gestures are used by a graphics processing architecture (e.g., internal and/or external) to render and/or control a virtual representation of the user in a 3D game. Indeed, the overlapping fields of view may enable the capture of gestures made by other individuals (e.g., in a multi-player game), where the gestures of other individuals may be further used to render/control the immersive experience. The overlapping fields of view may also enable the HMD system 1100 to automatically detect obstructions or other hazards near the user.
Such an approach may be particularly advantageous in advanced driver assistance system (ADAS) applications.

In one example, providing the left look-down camera 1108 and the right look-down camera 1110 with overlapping fields of view provides a stereoscopic view having an increased resolution. The increased resolution may in turn enable very similar user movements to be distinguished from one another (e.g., at sub-millimeter accuracy). The result may be an enhanced performance of the HMD system 1100 with respect to reliability. Indeed, the illustrated solution may be useful in a wide variety of applications such as, for example, coloring information in AR settings, exchanging virtual tools/devices between users in a multi-user environment, rendering virtual items (e.g., weapons, swords, staffs), and so forth. Gestures of other objects, limbs and/or body parts may also be detected and used to render/control the virtual environment. For example, myelographic signals, electroencephalographic signals, eye tracking, breathing or puffing, hand motions, etc., may be tracked in real-time, whether from the wearer or another individual in a shared environment. The images captured by the cameras 1108, 1110, 1112, 1114, 1116, 1118, may also serve as contextual input. For example, it might be determined that the user is indicating a particular word to edit or key to press in a word processing application, a particular weapon to deploy or a travel direction in a game, and so forth.

Additionally, the images captured by the cameras 1108, 1110, 1112, 1114, 1116, 1118, may be used to conduct shared communication or networked interactivity in equipment operation, medical training, and/or remote/tele-operation guidance applications. Task specific gesture libraries or neural network machine learning could enable tool identification and feedback for a task. For example, a virtual tool that translates into remote, real actions may be enabled.
In yet another example, the HMD system 1100 translates the manipulation of a virtual drill within a virtual scene to the remote operation of a drill on a robotic device deployed to search a collapsed building. Moreover, the HMD system 1100 may be programmable to the extent that it includes, for example, a protocol that enables the user to add a new gesture to a list of identifiable gestures associated with user actions.

In addition, the various cameras in the HMD 1100 may be configurable to detect spectrum frequencies in addition to the visible wavelengths of the spectrum. Multi-spectral imaging capabilities in the input cameras allow position tracking of the user and/or objects by eliminating nonessential image features (e.g., background noise). For example, in augmented reality (AR) applications such as surgery, instruments and equipment may be tracked by their infrared reflectivity without the need for additional tracking aids. Moreover, the HMD 1100 could be employed in situations of low visibility where a "live feed" from the various cameras could be enhanced or augmented through computer analysis and displayed to the user as visual or audio cues.

The HMD system 1100 may also forego performing any type of data communication with a remote computing system and may forego the need for power cables (e.g., an independent mode of operation). In this regard, the HMD system 1100 may be a "cordless" device having a power unit that enables the HMD system 1100 to operate independently of external power systems. Accordingly, the user might play a full featured game without being tethered to another device (e.g., game console) or power supply. In a word processing example, the HMD system 1100 might present a virtual keyboard and/or virtual mouse on the displays 1104 and 1106 to provide a virtual desktop or word processing scene. Thus, gesture recognition data captured by one or more of the cameras may represent user typing activities on the virtual keyboard or movements of the virtual mouse.
Advantages include, but are not limited to, ease of portability and privacy of the virtual desktop from nearby individuals. The underlying graphics processing architecture may support compression and/or decompression of video and audio signals. Moreover, providing separate images to the left eye and right eye of the user may facilitate the rendering, generation and/or perception of 3D scenes. The relative positions of the left-eye display 1104 and the right-eye display 1106 may also be adjustable to match variations in eye separation between different users.

The number of cameras illustrated in FIG. 11 is to facilitate discussion only. Indeed, the HMD system 1100 may include fewer than six or more than six cameras, depending on the circumstances.

Functional Components of the HMD System

FIG. 12 shows the HMD system in greater detail. In the illustrated example, the frame 1102 includes a power unit 1200 (e.g., battery power, adapter) to provide power to the HMD system. The illustrated frame 1102 also includes a motion tracking module 1220 (e.g., accelerometers, gyroscopes), wherein the motion tracking module 1220 provides motion tracking data, orientation data and/or position data to a processor system 1204. The processor system 1204 may include a network adapter 1224 that is coupled to an I/O bridge 1206. The I/O bridge 1206 may enable communications between the network adapter 1224 and various components such as, for example, audio input modules 1210, audio output modules 1208, a display device 1207, input cameras 1202, and so forth.

In the illustrated example, the audio input modules 1210 include a right-audio input 1218 and a left-audio input 1216, which detect sound that may be processed in order to recognize voice commands of the user as well as nearby individuals. The voice commands recognized in the captured audio signals may augment gesture recognition during modality switching and other applications.
Moreover, the captured audio signals may provide 3D information that is used to enhance the immersive experience.

The audio output modules 1208 may include a right-audio output 1214 and a left-audio output 1212. The audio output modules 1208 may deliver sound to the ears of the user and/or other nearby individuals. The audio output modules 1208, which may be in the form of earbuds, on-ear speakers, over the ear speakers, loudspeakers, etc., or any combination thereof, may deliver stereo and/or 3D audio content to the user (e.g., spatial localization). The illustrated frame 1102 also includes a wireless module 1222, which may facilitate communications between the HMD system and various other systems (e.g., computers, wearable devices, game consoles). In one example, the wireless module 1222 communicates with the processor system 1204 via the network adapter 1224.

The illustrated display device 1207 includes the left-eye display 1104 and the right-eye display 1106, wherein the visual content presented on the displays 1104, 1106 may be obtained from the processor system 1204 via the I/O bridge 1206. The input cameras 1202 may include the left look-side camera 1116, the right look-side camera 1118, the left look-down camera 1108, the left look-front camera 1112, the right look-front camera 1114 and the right look-down camera 1110, already discussed.

Turning now to FIG. 13 , a general processing cluster (GPC) 1300 is shown. The illustrated GPC 1300 may be incorporated into a processing system such as, for example, the processor system 1204 ( FIG. 12 ), already discussed. The GPC 1300 may include a pipeline manager 1302 that communicates with a scheduler. In one example, the pipeline manager 1302 receives tasks from the scheduler and distributes the tasks to one or more streaming multi-processors (SM's) 1304.
Each SM 1304 may be configured to process thread groups, wherein a thread group may be considered a plurality of related threads that execute the same or similar operations on different input data. Thus, each thread in the thread group may be assigned to a particular SM 1304. In another example, the number of threads may be greater than the number of execution units in the SM 1304. In this regard, the threads of a thread group may operate in parallel. The pipeline manager 1302 may also specify processed data destinations to a work distribution crossbar 1308, which communicates with a memory crossbar.

Thus, as each SM 1304 transmits a processed task to the work distribution crossbar 1308, the processed task may be provided to another GPC 1300 for further processing. The output of the SM 1304 may also be sent to a pre-raster operations (preROP) unit 1314, which in turn directs data to one or more raster operations units, or performs other operations (e.g., performing address translations, organizing picture color data, blending color, and so forth). The SM 1304 may include an internal level one (L1) cache (not shown) to which the SM 1304 may store data. The SM 1304 may also have access to a level two (L2) cache (not shown) via a memory management unit (MMU) 1310 and a level one point five (L1.5) cache 1306. The MMU 1310 may map virtual addresses to physical addresses. In this regard, the MMU 1310 may include page table entries (PTE's) that are used to map virtual addresses to physical addresses of a tile, memory page and/or cache line index. The illustrated GPC 1300 also includes a texture unit 1312.

Graphics Pipeline Architecture

Turning now to FIG. 14 , a graphics pipeline 1400 is shown. In the illustrated example, a world space pipeline 1420 includes a primitive distributor (PD) 1402. The PD 1402 may collect vertex data associated with high-order surfaces, graphics primitives, triangles, etc., and transmit the vertex data to a vertex attribute fetch unit (VAF) 1404.
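The thread-group model described above in connection with FIG. 13, in which a group may contain more threads than the SM 1304 has execution units, can be illustrated with a small scheduling sketch. The wave batching and the name run_thread_group below are assumptions for exposition, not the SM's actual issue logic.

```python
def run_thread_group(thread_fn, group_size, num_lanes):
    """Execute a thread group of `group_size` threads on `num_lanes` units.

    When the group has more threads than execution units, the SM issues it
    in successive waves of at most `num_lanes` threads; threads within a
    wave run in parallel (modeled here as a simple batch).
    """
    results = {}
    for wave_start in range(0, group_size, num_lanes):
        wave = range(wave_start, min(wave_start + num_lanes, group_size))
        for tid in wave:  # all lanes of one wave execute the same operation
            results[tid] = thread_fn(tid)
    return results
```

In this model a group of ten threads on a four-lane unit completes in three waves, with every thread applying the same operation to its own input.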
The VAF 1404 may retrieve vertex attributes associated with each of the incoming vertices from shared memory and store the vertex data, along with the associated vertex attributes, into shared memory.

The illustrated world space pipeline 1420 also includes a vertex, tessellation, geometry processing unit (VTG) 1406. The VTG 1406 may include, for example, a vertex processing unit, a tessellation initialization processing unit, a task distributor, a task generation unit, a topology generation unit, a geometry processing unit, a tessellation processing unit, etc., or any combination thereof. In one example, the VTG 1406 is a programmable execution unit that is configured to execute geometry programs, tessellation programs, and vertex shader programs. The programs executed by the VTG 1406 may process the vertex data and vertex attributes received from the VAF 1404. Moreover, the programs executed by the VTG 1406 may produce graphics primitives, color values, surface normal factors and transparency values at each vertex for the graphics primitives for further processing within the graphics processing pipeline 1400.

The vertex processing unit of the VTG 1406 may be a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. For example, the vertex processing unit might be programmed to transform the vertex data from an object-based coordinate representation (e.g. object space) to an alternative coordinate system such as world space or normalized device coordinates (NDC) space. Additionally, the vertex processing unit may read vertex data and vertex attributes that are stored in shared memory by the VAF 1404 and process the vertex data and vertex attributes.
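The object-space-to-NDC transformation performed by the vertex processing unit may be sketched as a 4x4 matrix multiply followed by a perspective divide. This is a minimal illustration; the row-major matrix layout and the function name are assumptions.

```python
def transform_vertex(mvp, v):
    """Apply a 4x4 model-view-projection matrix to (x, y, z) and return NDC.

    mvp is a row-major 4x4 matrix as nested lists; the divide by clip-space w
    maps the result into normalized device coordinates.
    """
    x, y, z = v
    clip = [sum(row[i] * c for i, c in enumerate((x, y, z, 1.0)))
            for row in mvp]  # homogeneous clip-space position
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)
```

With an identity matrix the vertex passes through unchanged; a projection matrix producing w != 1 scales the result during the divide.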
In one example, the vertex processing unit stores processed vertices in shared memory.

The tessellation initialization processing unit (e.g., hull shader, tessellation control shader) may execute tessellation initialization shader programs. In one example, the tessellation initialization processing unit processes vertices produced by the vertex processing unit and generates graphics primitives sometimes referred to as "patches". The tessellation initialization processing unit may also generate various patch attributes, wherein the patch data and the patch attributes are stored to shared memory. The task generation unit of the VTG 1406 may retrieve data and attributes for vertices and patches from shared memory. In one example, the task generation unit generates tasks for processing the vertices and patches for processing by the later stages in the graphics processing pipeline 1400.

The tasks produced by the task generation unit may be redistributed by the task distributor of the VTG 1406. For example, the tasks produced by the various instances of the vertex shader program and the tessellation initialization program may vary significantly between one graphics processing pipeline 1400 and another. Accordingly, the task distributor may redistribute these tasks such that each graphics processing pipeline 1400 has approximately the same workload during later pipeline stages.

As already noted, the VTG 1406 may also include a topology generation unit. In one example, the topology generation unit retrieves tasks distributed by the task distributor, indexes the vertices, including vertices associated with patches, and computes coordinates (UV) for tessellation vertices and the indices that connect the tessellation vertices to form graphics primitives. The indexed vertices may be stored by the topology generation unit in shared memory.
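The workload balancing goal of the task distributor described above may be approximated with a greedy longest-processing-time heuristic. This is an illustrative sketch only; the actual redistribution algorithm is not specified here, and the task costs are abstract numbers.

```python
def redistribute(tasks, num_pipelines):
    """Greedy rebalancing: assign each task (a cost) to the least-loaded bin.

    Sorting by descending cost first (the LPT heuristic) keeps the
    per-pipeline workloads approximately equal for later pipeline stages.
    """
    bins = [[] for _ in range(num_pipelines)]
    loads = [0] * num_pipelines
    for cost in sorted(tasks, reverse=True):
        i = loads.index(min(loads))  # pick the least-loaded pipeline
        bins[i].append(cost)
        loads[i] += cost
    return bins
```

For example, five tasks of costs 5, 4, 3, 3 and 3 spread over two pipelines end up as loads of 8 and 10 rather than an arbitrary split.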
The tessellation processing unit of the VTG 1406 may be configured to execute tessellation shader programs (e.g., domain shaders, tessellation evaluation shaders). The tessellation processing unit may read input data from shared memory and write output data to shared memory. The output data may be passed from the shared memory to the geometry processing unit (e.g., the next shader stage) as input data.

The geometry processing unit of the VTG 1406 may execute geometry shader programs to transform graphics primitives (e.g., triangles, line segments, points, etc.). In one example, vertices are grouped to construct graphics primitives, wherein the geometry processing unit subdivides the graphics primitives into one or more new graphics primitives. The geometry processing unit may also calculate parameters such as, for example, plane equation coefficients, that may be used to rasterize the new graphics primitives.

The illustrated world space pipeline 1420 also includes a viewport scale, cull, and clip unit (VPC) 1408 that receives the parameters and vertices specifying new graphics primitives from the VTG 1406. In one example, the VPC 1408 performs clipping, culling, perspective correction, and viewport transformation to identify the graphics primitives that are potentially viewable in the final rendered image. The VPC 1408 may also identify the graphics primitives that may not be viewable.

The graphics processing pipeline 1400 may also include a tiling unit 1410 coupled to the world space pipeline 1420. The tiling unit 1410 may be a graphics primitive sorting engine, wherein graphics primitives are processed in the world space pipeline 1420 and then transmitted to the tiling unit 1410. In this regard, the graphics processing pipeline 1400 may also include a screen space pipeline 1422, wherein the screen space may be divided into cache tiles. Each cache tile may therefore be associated with a portion of the screen space.
For each graphics primitive, the tiling unit 1410 may identify the set of cache tiles that intersect with the graphics primitive (e.g., "tiling"). After tiling a number of graphics primitives, the tiling unit 1410 may process the graphics primitives on a cache tile basis. In one example, graphics primitives associated with a particular cache tile are transmitted to a setup unit 1412 in the screen space pipeline 1422 one tile at a time. Graphics primitives that intersect with multiple cache tiles may be processed once in the world space pipeline 1420, while being transmitted multiple times to the screen space pipeline 1422. In one example, the setup unit 1412 receives vertex data from the VPC 1408 via the tiling unit 1410 and calculates parameters associated with the graphics primitives. The parameters may include, for example, edge equations, partial plane equations, and depth plane equations. The screen space pipeline 1422 may also include a rasterizer 1414 coupled to the setup unit 1412. The rasterizer may scan convert the new graphics primitives and transmit fragments and coverage data to a pixel shading unit (PS) 1416. The rasterizer 1414 may also perform Z culling and other Z-based optimizations. The PS 1416, which may access shared memory, may execute fragment shader programs that transform fragments received from the rasterizer 1414. More particularly, the fragment shader programs may shade fragments at pixel-level granularity (e.g., functioning as pixel shader programs). In another example, the fragment shader programs shade fragments at sample-level granularity, where each pixel includes multiple samples, and each sample represents a portion of a pixel. Moreover, the fragment shader programs may shade fragments at any other granularity, depending on the circumstances (e.g., sampling rate).
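The tile-intersection step can be approximated by rasterizing a primitive's screen-space bounding box against the tile grid: any tile the box overlaps is a candidate. This is a conservative sketch under assumed names (real tiling engines may test the primitive's actual edges to avoid false positives):

```python
def tiles_for_primitive(verts, tile_w, tile_h):
    """Return the set of (tile_x, tile_y) cache tiles whose area
    overlaps the screen-space bounding box of a primitive's vertices.
    Conservative: a tile touched only by the bounding box, not the
    primitive itself, is still included."""
    xs = [x for x, _ in verts]
    ys = [y for _, y in verts]
    x0, x1 = int(min(xs)) // tile_w, int(max(xs)) // tile_w
    y0, y1 = int(min(ys)) // tile_h, int(max(ys)) // tile_h
    return {(tx, ty) for ty in range(y0, y1 + 1)
                     for tx in range(x0, x1 + 1)}
```

A triangle spanning two 64x64 tile columns and two tile rows is binned into all four tiles, which matches the text's note that a primitive intersecting multiple cache tiles is transmitted to the screen space pipeline once per tile.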
The PS 1416 may perform blending, shading, perspective correction, texture mapping, etc., to generate shaded fragments. The illustrated screen space pipeline 1422 also includes a raster operations unit (ROP) 1418, which may perform raster operations such as, for example, stenciling, Z-testing, blending, and so forth. The ROP 1418 may then transmit pixel data as processed graphics data to one or more render targets (e.g., graphics memory). The ROP 1418 may be configured to compress Z or color data that is written to memory and decompress Z or color data that is read from memory. The location of the ROP 1418 may vary depending on the circumstances. The graphics processing pipeline 1400 may be implemented by one or more processing elements. For example, the VTG 1406 and/or the PS 1416 may be implemented in one or more SM's, while the PD 1402, the VAF 1404, the VPC 1408, the tiling unit 1410, the setup unit 1412, the rasterizer 1414 and/or the ROP 1418 might be implemented in processing elements of a particular GPC in conjunction with a corresponding partition unit. The graphics processing pipeline 1400 may also be implemented in fixed-functionality hardware logic. Indeed, the graphics processing pipeline 1400 may be implemented in a PPU. Thus, the illustrated world space pipeline 1420 processes graphics objects in 3D space, where the position of each graphics object is known relative to other graphics objects and relative to a 3D coordinate system. By contrast, the screen space pipeline 1422 may process graphics objects that have been projected from the 3D coordinate system onto a 2D planar surface that represents the surface of the display device. Additionally, the world space pipeline 1420 may be divided into an alpha phase pipeline and a beta phase pipeline, wherein the alpha phase pipeline includes pipeline stages from the PD 1402 through the task generation unit.
The beta phase pipeline might include pipeline stages from the topology generation unit through the VPC 1408. In such a case, the graphics processing pipeline 1400 may perform a first set of operations (e.g., a single thread, a thread group, multiple thread groups acting in unison) in the alpha phase pipeline and a second set of operations (e.g., a single thread, a thread group, multiple thread groups acting in unison) in the beta phase pipeline. If multiple graphics processing pipelines 1400 are in use, the vertex data and vertex attributes associated with a set of graphics objects may be divided so that each graphics processing pipeline 1400 has a similar workload through the alpha phase. Accordingly, alpha phase processing may substantially expand the amount of vertex data and vertex attributes, such that the amount of vertex data and vertex attributes produced by the task generation unit is significantly larger than the amount of vertex data and vertex attributes processed by the PD 1402 and the VAF 1404. Moreover, the task generation units associated with different graphics processing pipelines 1400 may produce vertex data and vertex attributes having different levels of quality, even when beginning the alpha phase with the same quantity of attributes. In such cases, the task distributor may redistribute the attributes produced by the alpha phase pipeline so that each graphics processing pipeline 1400 has approximately the same workload at the beginning of the beta phase pipeline. Turning now to FIG. 15, a streaming multi-processor (SM) 1500 is shown. The illustrated SM 1500 includes K scheduler units 1504 coupled to an instruction cache 1502, wherein each scheduler unit 1504 receives a thread block array from a pipeline manager (not shown) and manages instruction scheduling for one or more thread blocks of each active thread block array.
The scheduler unit 1504 may schedule threads for execution in groups of parallel threads, where each group may be referred to as a "warp". Thus, each warp might include, for example, sixty-four threads. Additionally, the scheduler unit 1504 may manage a plurality of different thread blocks, allocating the thread blocks to warps for execution. The scheduler unit may then schedule instructions from the plurality of different warps on various functional units during each clock cycle. Each scheduler unit 1504 may include one or more instruction dispatch units 1522, wherein each dispatch unit 1522 transmits instructions to one or more of the functional units. The number of dispatch units 1522 may vary depending on the circumstances. In the illustrated example, the scheduler unit 1504 includes two dispatch units 1522 that enable two different instructions from the same warp to be dispatched during each clock cycle. The SM 1500 may also include a register file 1506. The register file 1506 may include a set of registers that are divided between the functional units such that each functional unit is allocated a dedicated portion of the register file 1506. The register file 1506 may also be divided between different warps being executed by the SM 1500. In one example, the register file 1506 provides temporary storage for operands connected to the data paths of the functional units. The illustrated SM 1500 also includes L processing cores 1508, wherein L may be a relatively large number (e.g., 192). Each core 1508 may be a pipelined, single-precision processing unit that includes a floating point arithmetic logic unit (e.g., IEEE 754-2008) as well as an integer arithmetic logic unit. The illustrated SM 1500 also includes M double precision units (DPU's) 1510, N special function units (SFU's) 1512 and P load/store units (LSU's) 1514.
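The grouping of a thread block into warps can be sketched as a fixed-size partition of the block's thread IDs, using the sixty-four-thread warp size given as an example above (the function name is illustrative):

```python
WARP_SIZE = 64  # threads per warp, per the example above

def to_warps(thread_ids):
    """Partition a thread block's thread IDs into fixed-size warps;
    the last warp may be only partially filled.  Illustrative model
    of how a scheduler unit groups threads for SIMT execution."""
    return [thread_ids[i:i + WARP_SIZE]
            for i in range(0, len(thread_ids), WARP_SIZE)]

# A 150-thread block yields two full warps and one partial warp.
warps = to_warps(list(range(150)))
```

Each warp then becomes the unit from which the dispatch units 1522 issue instructions to the functional units.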
Each DPU 1510 may implement double-precision floating point arithmetic and each SFU 1512 may perform special functions such as, for example, rectangle copying, pixel blending, etc. Additionally, each LSU 1514 may conduct load and store operations between a shared memory 1518 and the register file 1506. In one example, the load and store operations are conducted through J texture unit/L1 caches 1520 and an interconnect network 1516. In one example, the J texture unit/L1 caches 1520 are also coupled to a crossbar (not shown). Thus, the interconnect network 1516 may connect each of the functional units to the register file 1506 and to the shared memory 1518. In one example, the interconnect network 1516 functions as a crossbar that connects any of the functional units to any of the registers in the register file 1506. The SM 1500 may be implemented within a graphics processor (e.g., graphics processing unit/GPU), wherein the texture unit/L1 caches 1520 may access texture maps from memory and sample the texture maps to produce sampled texture values for use in shader programs. Texture operations performed by the texture unit/L1 caches 1520 include, but are not limited to, antialiasing based on mipmaps.

Additional System Overview Example

FIG. 16 is a block diagram of a processing system 1600, according to an embodiment. In various embodiments the system 1600 includes one or more processors 1602 and one or more graphics processors 1608, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1602 or processor cores 1607.
In one embodiment, the system 1600 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices. An embodiment of system 1600 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments system 1600 is a mobile phone, smart phone, tablet computing device or mobile Internet device. Data processing system 1600 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, data processing system 1600 is a television or set top box device having one or more processors 1602 and a graphical interface generated by one or more graphics processors 1608. In some embodiments, the one or more processors 1602 each include one or more processor cores 1607 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 1607 is configured to process a specific instruction set 1609. In some embodiments, instruction set 1609 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores 1607 may each process a different instruction set 1609, which may include instructions to facilitate the emulation of other instruction sets. Processor core 1607 may also include other processing devices, such as a Digital Signal Processor (DSP). In some embodiments, the processor 1602 includes cache memory 1604. Depending on the architecture, the processor 1602 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1602.
In some embodiments, the processor 1602 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 1607 using known cache coherency techniques. A register file 1606 is additionally included in processor 1602 which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1602. In some embodiments, processor 1602 is coupled to a processor bus 1610 to transmit communication signals such as address, data, or control signals between processor 1602 and other components in system 1600. In one embodiment the system 1600 uses an exemplary 'hub' system architecture, including a memory controller hub 1616 and an Input Output (I/O) controller hub 1630. A memory controller hub 1616 facilitates communication between a memory device and other components of system 1600, while an I/O Controller Hub (ICH) 1630 provides connections to I/O devices via a local I/O bus. In one embodiment, the logic of the memory controller hub 1616 is integrated within the processor. Memory device 1620 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 1620 can operate as system memory for the system 1600, to store data 1622 and instructions 1621 for use when the one or more processors 1602 executes an application or process.
Memory controller hub 1616 also couples with an optional external graphics processor 1612, which may communicate with the one or more graphics processors 1608 in processors 1602 to perform graphics and media operations. In some embodiments, ICH 1630 enables peripherals to connect to memory device 1620 and processor 1602 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 1646, a firmware interface 1628, a wireless transceiver 1626 (e.g., Wi-Fi, Bluetooth), a data storage device 1624 (e.g., hard disk drive, flash memory, etc.), and a legacy I/O controller 1640 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. One or more Universal Serial Bus (USB) controllers 1642 connect input devices, such as keyboard and mouse 1644 combinations. A network controller 1634 may also couple to ICH 1630. In some embodiments, a high-performance network controller (not shown) couples to processor bus 1610. It will be appreciated that the system 1600 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, the I/O controller hub 1630 may be integrated within the one or more processors 1602, or the memory controller hub 1616 and I/O controller hub 1630 may be integrated into a discrete external graphics processor, such as the external graphics processor 1612. FIG. 17 is a block diagram of an embodiment of a processor 1700 having one or more processor cores 1702A-1702N, an integrated memory controller 1714, and an integrated graphics processor 1708. Those elements of FIG. 17 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 1700 can include additional cores up to and including additional core 1702N represented by the dashed lined boxes.
Each of processor cores 1702A-1702N includes one or more internal cache units 1704A-1704N. In some embodiments each processor core also has access to one or more shared cache units 1706. The internal cache units 1704A-1704N and shared cache units 1706 represent a cache memory hierarchy within the processor 1700. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1706 and 1704A-1704N. In some embodiments, processor 1700 may also include a set of one or more bus controller units 1716 and a system agent core 1710. The one or more bus controller units 1716 manage a set of peripheral buses, such as one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express). System agent core 1710 provides management functionality for the various processor components. In some embodiments, system agent core 1710 includes one or more integrated memory controllers 1714 to manage access to various external memory devices (not shown). In some embodiments, one or more of the processor cores 1702A-1702N include support for simultaneous multi-threading. In such an embodiment, the system agent core 1710 includes components for coordinating and operating cores 1702A-1702N during multi-threaded processing. System agent core 1710 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 1702A-1702N and graphics processor 1708. In some embodiments, processor 1700 additionally includes graphics processor 1708 to execute graphics processing operations.
In some embodiments, the graphics processor 1708 couples with the set of shared cache units 1706, and the system agent core 1710, including the one or more integrated memory controllers 1714. In some embodiments, a display controller 1711 is coupled with the graphics processor 1708 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 1711 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1708 or system agent core 1710. In some embodiments, a ring based interconnect unit 1712 is used to couple the internal components of the processor 1700. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 1708 couples with the ring interconnect 1712 via an I/O link 1713. The exemplary I/O link 1713 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1718, such as an eDRAM module. In some embodiments, each of the processor cores 1702A-1702N and graphics processor 1708 use embedded memory modules 1718 as a shared Last Level Cache. In some embodiments, processor cores 1702A-1702N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores 1702A-1702N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 1702A-1702N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set.
In one embodiment processor cores 1702A-1702N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor 1700 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components. FIG. 18 is a block diagram of a graphics processor 1800, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 1800 includes a memory interface 1814 to access memory. Memory interface 1814 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. In some embodiments, graphics processor 1800 also includes a display controller 1802 to drive display output data to a display device 1820. Display controller 1802 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements.
In some embodiments, graphics processor 1800 includes a video codec engine 1806 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats. In some embodiments, graphics processor 1800 includes a block image transfer (BLIT) engine 1804 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 1810. In some embodiments, graphics processing engine 1810 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. In some embodiments, GPE 1810 includes a 3D pipeline 1812 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 1812 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 1815. While 3D pipeline 1812 can be used to perform media operations, an embodiment of GPE 1810 also includes a media pipeline 1816 that is specifically used to perform media operations, such as video post-processing and image enhancement. In some embodiments, media pipeline 1816 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 1806.
In some embodiments, media pipeline 1816 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 1815. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 1815. In some embodiments, 3D/Media subsystem 1815 includes logic for executing threads spawned by 3D pipeline 1812 and media pipeline 1816. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 1815, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem 1815 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

3D/Media Processing

FIG. 19 is a block diagram of a graphics processing engine 1910 of a graphics processor in accordance with some embodiments. In one embodiment, the GPE 1910 is a version of the GPE 1810 shown in FIG. 18. Elements of FIG. 19 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, GPE 1910 couples with a command streamer 1903, which provides a command stream to the GPE 3D and media pipelines 1912, 1916. In some embodiments, command streamer 1903 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 1903 receives commands from the memory and sends the commands to 3D pipeline 1912 and/or media pipeline 1916.
The commands are directives fetched from a ring buffer, which stores commands for the 3D and media pipelines 1912, 1916. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The 3D and media pipelines 1912, 1916 process the commands by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to an execution unit array 1914. In some embodiments, execution unit array 1914 is scalable, such that the array includes a variable number of execution units based on the target power and performance level of GPE 1910. In some embodiments, a sampling engine 1930 couples with memory (e.g., cache memory or system memory) and execution unit array 1914. In some embodiments, sampling engine 1930 provides a memory access mechanism for execution unit array 1914 that allows execution unit array 1914 to read graphics and media data from memory. In some embodiments, sampling engine 1930 includes logic to perform specialized image sampling operations for media. In some embodiments, the specialized media sampling logic in sampling engine 1930 includes a de-noise/de-interlace module 1932, a motion estimation module 1934, and an image scaling and filtering module 1936. In some embodiments, de-noise/de-interlace module 1932 includes logic to perform one or more of a de-noise or a de-interlace algorithm on decoded video data. The de-interlace logic combines alternating fields of interlaced video content into a single frame of video. The de-noise logic reduces or removes data noise from video and image data. In some embodiments, the de-noise logic and de-interlace logic are motion adaptive and use spatial or temporal filtering based on the amount of motion detected in the video data.
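The simplest de-interlace strategy consistent with the description above is a "weave": the even-line field and the odd-line field of one interlaced frame are interleaved row by row into a single progressive frame. This sketch assumes fields represented as lists of pixel rows (the function name is illustrative; motion-adaptive de-interlacing, as the text notes, is more involved):

```python
def weave(top_field, bottom_field):
    """Combine the alternating fields of interlaced video into one
    progressive frame: top-field rows become the even lines, bottom-
    field rows the odd lines.  Minimal weave sketch; motion-adaptive
    logic would blend or interpolate instead when motion is detected."""
    frame = []
    for top_row, bottom_row in zip(top_field, bottom_field):
        frame.append(top_row)
        frame.append(bottom_row)
    return frame
```

Weaving preserves full vertical resolution for static content, which is why motion-adaptive logic falls back to it when little motion is detected.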
In some embodiments, the de-noise/de-interlace module 1932 includes dedicated motion detection logic (e.g., within the motion estimation engine 1934). In some embodiments, motion estimation engine 1934 provides hardware acceleration for video operations by performing video acceleration functions such as motion vector estimation and prediction on video data. The motion estimation engine determines motion vectors that describe the transformation of image data between successive video frames. In some embodiments, a graphics processor media codec uses video motion estimation engine 1934 to perform operations on video at the macro-block level that may otherwise be too computationally intensive to perform with a general-purpose processor. In some embodiments, motion estimation engine 1934 is generally available to graphics processor components to assist with video decode and processing functions that are sensitive or adaptive to the direction or magnitude of the motion within video data. In some embodiments, image scaling and filtering module 1936 performs image-processing operations to enhance the visual quality of generated images and video. In some embodiments, scaling and filtering module 1936 processes image and video data during the sampling operation before providing the data to execution unit array 1914. In some embodiments, the GPE 1910 includes a data port 1944, which provides an additional mechanism for graphics subsystems to access memory. In some embodiments, data port 1944 facilitates memory access for operations including render target writes, constant buffer reads, scratch memory space reads/writes, and media surface accesses. In some embodiments, data port 1944 includes cache memory space to cache accesses to memory. The cache memory can be a single data cache or separated into multiple caches for the multiple subsystems that access memory via the data port (e.g., a render buffer cache, a constant buffer cache, etc.).
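The block-level motion vector search that such an engine accelerates can be modeled in software as an exhaustive match: for a block in the current frame, try every displacement within a search window in the reference frame and keep the one minimizing the sum of absolute differences (SAD). All names and the brute-force policy are assumptions for illustration; hardware engines use far cheaper search strategies:

```python
def best_motion_vector(ref, cur, bx, by, bs, search):
    """Exhaustive block match: return the (dx, dy) displacement that
    minimises the sum of absolute differences between the bs-by-bs
    block of `cur` at (bx, by) and candidate blocks of `ref`.
    Software model of motion vector estimation; real engines prune
    the search."""
    def sad(dx, dy):
        return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
                   for y in range(bs) for x in range(bs))
    candidates = [(dx, dy)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= bx + dx and bx + dx + bs <= len(ref[0])
                  and 0 <= by + dy and by + dy + bs <= len(ref)]
    return min(candidates, key=lambda v: sad(*v))

# A bright 2x2 patch moves one pixel to the right between frames.
ref = [[0] * 6 for _ in range(6)]
for y, x in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    ref[y][x] = 9
cur = [[0] * 6 for _ in range(6)]
for y, x in [(2, 3), (2, 4), (3, 3), (3, 4)]:
    cur[y][x] = 9
mv = best_motion_vector(ref, cur, bx=3, by=2, bs=2, search=1)
```

The per-block SAD loop over a full search window is exactly the macro-block-level workload the text describes as too computationally intensive for a general-purpose processor at video rates.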
In some embodiments, threads executing on an execution unit in execution unit array 1914 communicate with the data port by exchanging messages via a data distribution interconnect that couples each of the sub-systems of GPE 1910.

Execution Units

FIG. 20 is a block diagram of another embodiment of a graphics processor 2000. Elements of FIG. 20 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, graphics processor 2000 includes a ring interconnect 2002, a pipeline front-end 2004, a media engine 2037, and graphics cores 2080A-2080N. In some embodiments, ring interconnect 2002 couples the graphics processor to other processing units, including other graphics processors or one or more general-purpose processor cores. In some embodiments, the graphics processor is one of many processors integrated within a multi-core processing system. In some embodiments, graphics processor 2000 receives batches of commands via ring interconnect 2002. The incoming commands are interpreted by a command streamer 2003 in the pipeline front-end 2004. In some embodiments, graphics processor 2000 includes scalable execution logic to perform 3D geometry processing and media processing via the graphics core(s) 2080A-2080N. For 3D geometry processing commands, command streamer 2003 supplies commands to geometry pipeline 2036. For at least some media processing commands, command streamer 2003 supplies the commands to a video front end 2034, which couples with a media engine 2037. In some embodiments, media engine 2037 includes a Video Quality Engine (VQE) 2030 for video and image post-processing and a multi-format encode/decode (MFX) 2033 engine to provide hardware-accelerated media data encode and decode.
In some embodiments, geometry pipeline 2036 and media engine 2037 each generate execution threads for the thread execution resources provided by at least one graphics core 2080A. In some embodiments, graphics processor 2000 includes scalable thread execution resources featuring modular cores 2080A-2080N (sometimes referred to as core slices), each having multiple sub-cores 2050A-2050N, 2060A-2060N (sometimes referred to as core sub-slices). In some embodiments, graphics processor 2000 can have any number of graphics cores 2080A through 2080N. In some embodiments, graphics processor 2000 includes a graphics core 2080A having at least a first sub-core 2050A and a second sub-core 2060A. In other embodiments, the graphics processor is a low power processor with a single sub-core (e.g., 2050A). In some embodiments, graphics processor 2000 includes multiple graphics cores 2080A-2080N, each including a set of first sub-cores 2050A-2050N and a set of second sub-cores 2060A-2060N. Each sub-core in the set of first sub-cores 2050A-2050N includes at least a first set of execution units 2052A-2052N and media/texture samplers 2054A-2054N. Each sub-core in the set of second sub-cores 2060A-2060N includes at least a second set of execution units 2062A-2062N and samplers 2064A-2064N. In some embodiments, each sub-core 2050A-2050N, 2060A-2060N shares a set of shared resources 2070A-2070N. In some embodiments, the shared resources include shared cache memory and pixel operation logic. Other shared resources may also be included in the various embodiments of the graphics processor. FIG. 21 illustrates thread execution logic 2100 including an array of processing elements employed in some embodiments of a GPE. Elements of FIG.
21 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, thread execution logic 2100 includes a pixel shader 2102, a thread dispatcher 2104, instruction cache 2106, a scalable execution unit array including a plurality of execution units 2108A-2108N, a sampler 2110, a data cache 2112, and a data port 2114. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 2100 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 2106, data port 2114, sampler 2110, and execution unit array 2108A-2108N. In some embodiments, each execution unit (e.g., 2108A) is an individual vector processor capable of executing multiple simultaneous threads and processing multiple data elements in parallel for each thread. In some embodiments, execution unit array 2108A-2108N includes any number of individual execution units. In some embodiments, execution unit array 2108A-2108N is primarily used to execute "shader" programs. In some embodiments, the execution units in array 2108A-2108N execute an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each execution unit in execution unit array 2108A-2108N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction.
An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 2108A-2108N support integer and floating-point data types.

The execution unit instruction set includes single instruction multiple data (SIMD) instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.

One or more internal instruction caches (e.g., 2106) are included in the thread execution logic 2100 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 2112) are included to cache thread data during thread execution. In some embodiments, sampler 2110 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 2110 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 2100 via thread spawning and dispatch logic.
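The packed-data interpretation described above can be sketched in C. This is an illustrative model only: `vec256_t` and `simd_add_dw` are hypothetical names rather than hardware or driver interfaces, and a real execution unit operates on its register file, not on memory.

```c
#include <stdint.h>

/* Hypothetical 256-bit SIMD register. The element counts per view follow
 * the text: 4 x QW, 8 x DW, 16 x W, 32 x B. */
typedef union {
    uint64_t qw[4];   /* Quad-Word (64-bit) data elements */
    uint32_t dw[8];   /* Double Word (32-bit) data elements */
    uint16_t w[16];   /* Word (16-bit) data elements */
    uint8_t  b[32];   /* byte (8-bit) data elements */
} vec256_t;

/* Channel-parallel add in the DW view: one add per 32-bit channel, the
 * software analogue of a SIMD add across eight channels. */
static void simd_add_dw(vec256_t *dst, const vec256_t *a, const vec256_t *b) {
    for (int i = 0; i < 8; ++i)
        dst->dw[i] = a->dw[i] + b->dw[i];
}
```

The same 32 bytes reinterpreted through the `w` or `b` members would give sixteen or thirty-two channels, mirroring how the execution unit re-partitions the register based on element size.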
In some embodiments, thread execution logic 2100 includes a local thread dispatcher 2104 that arbitrates thread initiation requests from the graphics and media pipelines and instantiates the requested threads on one or more execution units 2108A-2108N. For example, the geometry pipeline (e.g., 2036 of FIG. 20) dispatches vertex processing, tessellation, or geometry processing threads to thread execution logic 2100 (FIG. 21). In some embodiments, thread dispatcher 2104 can also process runtime thread spawning requests from the executing shader programs.

Once a group of geometric objects has been processed and rasterized into pixel data, pixel shader 2102 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, pixel shader 2102 calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel shader 2102 then executes an application programming interface (API)-supplied pixel shader program. To execute the pixel shader program, pixel shader 2102 dispatches threads to an execution unit (e.g., 2108A) via thread dispatcher 2104. In some embodiments, pixel shader 2102 uses texture sampling logic in sampler 2110 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In some embodiments, the data port 2114 provides a memory access mechanism for thread execution logic 2100 to output processed data to memory for processing on a graphics processor output pipeline. In some embodiments, the data port 2114 includes or couples to one or more cache memories (e.g., data cache 2112) to cache data for memory access via the data port.

FIG.
22 is a block diagram illustrating graphics processor instruction formats 2200 according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction formats 2200 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit format 2210. A 64-bit compacted instruction format 2230 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit format 2210 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 2230. The native instructions available in the 64-bit format 2230 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 2213. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit format 2210.

For each format, instruction opcode 2212 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element.
By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 2214 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For 128-bit instructions 2210, an exec-size field 2216 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 2216 is not available for use in the 64-bit compact instruction format 2230.

Some execution unit instructions have up to three operands, including two source operands, src0 2220 and src1 2222, and one destination 2218. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 2224), where the instruction opcode 2212 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 2210 includes access/address mode information 2226 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction 2210.

In some embodiments, the 128-bit instruction format 2210 includes an access/address mode field 2226, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode defines a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands.
For example, when in a first mode, the instruction 2210 may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction 2210 may use 16-byte-aligned addressing for all source and destination operands.

In one embodiment, the address mode portion of the access/address mode field 2226 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction 2210 directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.

In some embodiments, instructions are grouped based on opcode 2212 bit-fields to simplify opcode decode 2240. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 2242 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 2242 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 2244 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 2246 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 2248 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 2248 performs the arithmetic operations in parallel across data channels. The vector math group 2250 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50).
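The opcode grouping above lends itself to a small decode sketch. The enum values mirror the bit patterns quoted in the text (0000xxxxb through 0101xxxxb); `decode_group` is a hypothetical helper for illustration, not part of any documented instruction-set interface.

```c
#include <stdint.h>

/* Opcode groups 2242-2250, keyed by the high bits of an 8-bit opcode.
 * With bit 7 zero, bits 4-6 (read here as the high nibble) select the group. */
typedef enum {
    GRP_MOVE          = 0x0, /* 0000xxxxb: move (mov) */
    GRP_LOGIC         = 0x1, /* 0001xxxxb: logic (cmp, ...) */
    GRP_FLOW_CONTROL  = 0x2, /* 0010xxxxb: call, jmp (e.g., 0x20) */
    GRP_MISC          = 0x3, /* 0011xxxxb: wait, send (e.g., 0x30) */
    GRP_PARALLEL_MATH = 0x4, /* 0100xxxxb: add, mul (e.g., 0x40) */
    GRP_VECTOR_MATH   = 0x5  /* 0101xxxxb: dp4 (e.g., 0x50) */
} opcode_group_t;

/* Classify an opcode by discarding the low four bits. */
static opcode_group_t decode_group(uint8_t opcode) {
    return (opcode_group_t)(opcode >> 4);
}
```

Because the group lives entirely in the high bits, hardware can steer an instruction to the right functional unit before the low opcode bits are decoded.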
The vector math group performs arithmetic such as dot product calculations on vector operands.

Graphics Pipeline

FIG. 23 is a block diagram of another embodiment of a graphics processor 2300. Elements of FIG. 23 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, graphics processor 2300 includes a graphics pipeline 2320, a media pipeline 2330, a display engine 2340, thread execution logic 2350, and a render output pipeline 2370. In some embodiments, graphics processor 2300 is a graphics processor within a multi-core processing system that includes one or more general purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 2300 via a ring interconnect 2302. In some embodiments, ring interconnect 2302 couples graphics processor 2300 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 2302 are interpreted by a command streamer 2303, which supplies instructions to individual components of graphics pipeline 2320 or media pipeline 2330.

In some embodiments, command streamer 2303 directs the operation of a vertex fetcher 2305 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 2303. In some embodiments, vertex fetcher 2305 provides vertex data to a vertex shader 2307, which performs coordinate space transformation and lighting operations on each vertex.
In some embodiments, vertex fetcher 2305 and vertex shader 2307 execute vertex-processing instructions by dispatching execution threads to execution units 2352A, 2352B via a thread dispatcher 2331.

In some embodiments, execution units 2352A, 2352B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 2352A, 2352B have an attached L1 cache 2351 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

In some embodiments, graphics pipeline 2320 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 2311 configures the tessellation operations. A programmable domain shader 2317 provides back-end evaluation of tessellation output. A tessellator 2313 operates at the direction of hull shader 2311 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to graphics pipeline 2320. In some embodiments, if tessellation is not used, tessellation components 2311, 2313, 2317 can be bypassed.

In some embodiments, complete geometric objects can be processed by a geometry shader 2319 via one or more threads dispatched to execution units 2352A, 2352B, or can proceed directly to the clipper 2329. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 2319 receives input from the vertex shader 2307.
In some embodiments, geometry shader 2319 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.

Before rasterization, a clipper 2329 processes vertex data. The clipper 2329 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer 2373 (e.g., depth test component) in the render output pipeline 2370 dispatches pixel shaders to convert the geometric objects into their per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 2350. In some embodiments, an application can bypass the rasterizer 2373 and access un-rasterized vertex data via a stream out unit 2323.

The graphics processor 2300 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 2352A, 2352B and associated cache(s) 2351, texture and media sampler 2354, and texture/sampler cache 2358 interconnect via a data port 2356 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 2354, caches 2351, 2358 and execution units 2352A, 2352B each have separate memory access paths.

In some embodiments, render output pipeline 2370 contains a rasterizer 2373 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 2378 and depth cache 2379 are also available in some embodiments. A pixel operations component 2377 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g.
bit block image transfers with blending) are performed by the 2D engine 2341, or substituted at display time by the display controller 2343 using overlay display planes. In some embodiments, a shared L3 cache 2375 is available to all graphics components, allowing the sharing of data without the use of main system memory.

In some embodiments, graphics processor media pipeline 2330 includes a media engine 2337 and a video front end 2334. In some embodiments, video front end 2334 receives pipeline commands from the command streamer 2303. In some embodiments, media pipeline 2330 includes a separate command streamer. In some embodiments, video front-end 2334 processes media commands before sending the command to the media engine 2337. In some embodiments, media engine 2337 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 2350 via thread dispatcher 2331.

In some embodiments, graphics processor 2300 includes a display engine 2340. In some embodiments, display engine 2340 is external to processor 2300 and couples with the graphics processor via the ring interconnect 2302, or some other interconnect bus or fabric. In some embodiments, display engine 2340 includes a 2D engine 2341 and a display controller 2343. In some embodiments, display engine 2340 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 2343 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.

In some embodiments, graphics pipeline 2320 and media pipeline 2330 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API).
In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL) and Open Computing Language (OpenCL) from the Khronos Group, the Direct3D library from the Microsoft Corporation, or support may be provided for both OpenGL and D3D. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

Graphics Pipeline Programming

FIG. 24A is a block diagram illustrating a graphics processor command format 2400 according to some embodiments. FIG. 24B is a block diagram illustrating a graphics processor command sequence 2410 according to an embodiment. The solid lined boxes in FIG. 24A illustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 2400 of FIG. 24A includes data fields to identify a target client 2402 of the command, a command operation code (opcode) 2404, and the relevant data 2406 for the command. A sub-opcode 2405 and a command size 2408 are also included in some commands.

In some embodiments, client 2402 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit.
Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 2404 and, if present, sub-opcode 2405 to determine the operation to perform. The client unit performs the command using information in data field 2406. For some commands an explicit command size 2408 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.

The flow diagram in FIG. 24B shows an exemplary graphics processor command sequence 2410. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands with at least partial concurrency.

In some embodiments, the graphics processor command sequence 2410 may begin with a pipeline flush command 2412 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 2422 and the media pipeline 2424 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated.
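As one way to picture the command parsing described above, the client/opcode/sub-opcode/data fields of command format 2400 can be modeled as a plain struct. The field widths and the helper names (`gfx_command_t`, `align_to_dwords`, `route_command`) are assumptions for illustration; the real encoding is fixed by the hardware command format, not by this sketch.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical in-memory view of the command format of FIG. 24A:
 * client 2402, opcode 2404, optional sub-opcode 2405, data 2406,
 * and an explicit size 2408 (field widths are illustrative only). */
typedef struct {
    uint8_t client;          /* client unit that processes the command (2402) */
    uint8_t opcode;          /* command operation code (2404) */
    uint8_t sub_opcode;      /* optional sub-opcode (2405) */
    uint8_t has_sub_opcode;  /* whether sub-opcode 2405 is present */
    uint32_t size_dwords;    /* explicit command size 2408, in double words */
    const uint32_t *data;    /* relevant data 2406 for the command */
} gfx_command_t;

/* Commands are aligned via multiples of a double word (4 bytes). */
static uint32_t align_to_dwords(uint32_t size_bytes) {
    return (size_bytes + 3u) / 4u;
}

/* A parser first routes on the client field, then dispatches on the
 * opcode (and sub-opcode, if present) within that client unit. */
static uint8_t route_command(const gfx_command_t *cmd) {
    return cmd->client; /* index of the target client unit */
}
```

The two-stage lookup (client first, then opcode within the client) matches the parser behavior described in the text: the client field conditions further processing before the opcode is interpreted.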
Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. In some embodiments, pipeline flush command 2412 can be used for pipeline synchronization or before placing the graphics processor into a low power state.

In some embodiments, a pipeline select command 2413 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 2413 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 2412 is required immediately before a pipeline switch via the pipeline select command 2413.

In some embodiments, a pipeline control command 2414 configures a graphics pipeline for operation and is used to program the 3D pipeline 2422 and the media pipeline 2424. In some embodiments, pipeline control command 2414 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 2414 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

In some embodiments, return buffer state commands 2416 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 2416 includes selecting the size and number of return buffers to use for a set of pipeline operations.

The remaining commands in the command sequence differ based on the active pipeline for operations.
Based on a pipeline determination 2420, the command sequence is tailored to the 3D pipeline 2422 beginning with the 3D pipeline state 2430, or the media pipeline 2424 beginning at the media pipeline state 2440.

The commands for the 3D pipeline state 2430 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 2430 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.

In some embodiments, the 3D primitive 2432 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 2432 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 2432 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 2432 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 2422 dispatches shader execution threads to graphics processor execution units.

In some embodiments, 3D pipeline 2422 is triggered via an execute 2434 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives.
Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.

In some embodiments, the graphics processor command sequence 2410 follows the media pipeline 2424 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 2424 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.

In some embodiments, media pipeline 2424 is configured in a similar manner as the 3D pipeline 2422. A set of media pipeline state commands 2440 are dispatched or placed into a command queue before the media object commands 2442. In some embodiments, media pipeline state commands 2440 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, media pipeline state commands 2440 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.

In some embodiments, media object commands 2442 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed.
In some embodiments, all media pipeline states must be valid before issuing a media object command 2442. Once the pipeline state is configured and media object commands 2442 are queued, the media pipeline 2424 is triggered via an execute command 2444 or an equivalent execute event (e.g., register write). Output from media pipeline 2424 may then be post processed by operations provided by the 3D pipeline 2422 or the media pipeline 2424. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.

Graphics Software Architecture

FIG. 25 illustrates an exemplary graphics software architecture for a data processing system 2500 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 2510, an operating system 2520, and at least one processor 2530. In some embodiments, processor 2530 includes a graphics processor 2532 and one or more general-purpose processor core(s) 2534. The graphics application 2510 and operating system 2520 each execute in the system memory 2550 of the data processing system.

In some embodiments, 3D graphics application 2510 contains one or more shader programs including shader instructions 2512. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 2514 in a machine language suitable for execution by the general-purpose processor core 2534. The application also includes graphics objects 2516 defined by vertex data.

In some embodiments, operating system 2520 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel.
When the Direct3D API is in use, the operating system 2520 uses a front-end shader compiler 2524 to compile any shader instructions 2512 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 2510.

In some embodiments, user mode graphics driver 2526 contains a back-end shader compiler 2527 to convert the shader instructions 2512 into a hardware specific representation. When the OpenGL API is in use, shader instructions 2512 in the GLSL high-level language are passed to a user mode graphics driver 2526 for compilation. In some embodiments, user mode graphics driver 2526 uses operating system kernel mode functions 2528 to communicate with a kernel mode graphics driver 2529. In some embodiments, kernel mode graphics driver 2529 communicates with graphics processor 2532 to dispatch commands and instructions.

IP Core Implementations

One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit.
The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.

FIG. 26 is a block diagram illustrating an IP core development system 2600 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 2600 may be used to generate modular, reusable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 2630 can generate a software simulation 2610 of an IP core design in a high level programming language (e.g., C/C++). The software simulation 2610 can be used to design, test, and verify the behavior of the IP core. A register transfer level (RTL) design can then be created or synthesized from the simulation model 2610. The RTL design 2615 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 2615, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.

The RTL design 2615 or equivalent may be further synthesized by the design facility into a hardware model 2620, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 2665 using non-volatile memory 2640 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 2650 or wireless connection 2660.
The fabrication facility 2665 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.

FIG. 27 is a block diagram illustrating an exemplary system on a chip integrated circuit 2700 that may be fabricated using one or more IP cores, according to an embodiment. The exemplary integrated circuit includes one or more application processors 2705 (e.g., CPUs), at least one graphics processor 2710, and may additionally include an image processor 2715 and/or a video processor 2720, any of which may be a modular IP core from the same or multiple different design facilities. The integrated circuit includes peripheral or bus logic including a USB controller 2725, UART controller 2730, an SPI/SDIO controller 2735, and an I2S/I2C controller 2740. Additionally, the integrated circuit can include a display device 2745 coupled to one or more of a high-definition multimedia interface (HDMI) controller 2750 and a mobile industry processor interface (MIPI) display interface 2755. Storage may be provided by a flash memory subsystem 2760 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 2765 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 2770.

Additionally, other logic and circuits may be included in the processor of integrated circuit 2700, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.

Advantageously, any of the above systems, processors, graphics processors, apparatuses, and/or methods may be integrated or configured with any of the various embodiments described herein (e.g.
or portions thereof), including, for example, those described in the following Additional Notes and Examples.

Additional Notes and Examples

Example 1 may include an electronic processing system, comprising an application processor, persistent storage media communicatively coupled to the application processor, and a graphics subsystem communicatively coupled to the application processor, wherein the graphics subsystem includes a first graphics engine to process a graphics workload, and a second graphics engine to offload at least a portion of the graphics workload from the first graphics engine.

Example 2 may include the system of Example 1, wherein the second graphics engine comprises a low precision compute engine.

Example 3 may include the system of any of Examples 1 to 2, further comprising a wearable device to house the second graphics engine.

Example 4 may include a graphics apparatus, comprising a first graphics engine to process a graphics workload, and a second graphics engine to offload at least a portion of the graphics workload from the first graphics engine.

Example 5 may include the apparatus of Example 4, wherein the second graphics engine comprises a low precision compute engine.

Example 6 may include the apparatus of Example 5, wherein the low precision compute engine is to perform at least one of time warp, space warp, and machine learning.

Example 7 may include the apparatus of Example 6, further comprising a second context for the second graphics engine which is independent of a first context for the first graphics engine.

Example 8 may include the apparatus of Example 4, further comprising a wearable device to house the second graphics engine.

Example 9 may include the apparatus of Example 8, wherein the second graphics engine is further to offload render work from the first graphics engine.

Example 10 may include the apparatus of Example 9, wherein the wearable device comprises a head mounted display and wherein the second graphics engine is further to
offload foveated render work from the first graphics engine.

Example 11 may include a method of processing a graphics workload, comprising processing a graphics workload with a first graphics engine, and offloading at least a portion of the graphics workload from the first graphics engine to a second graphics engine.

Example 12 may include the method of Example 11, further comprising providing a low precision compute engine for the second graphics engine.

Example 13 may include the method of Example 12, further comprising performing at least one of time warp, space warp, and machine learning with the low precision compute engine.

Example 14 may include the method of Example 13, further comprising providing a second context for the second graphics engine which is independent of a first context for the first graphics engine.

Example 15 may include the method of Example 11, further comprising providing a wearable device to house the second graphics engine.

Example 16 may include the method of Example 15, further comprising offloading render work from the first graphics engine to the second graphics engine.

Example 17 may include the method of Example 16, further comprising providing a head mounted display to house the second graphics engine, and offloading foveated render work from the first graphics engine to the second graphics engine.

Example 18 may include at least one computer readable medium, comprising a set of instructions, which when executed by a computing device cause the computing device to process a graphics workload with a first graphics engine, and offload at least a portion of the graphics workload from the first graphics engine to a second graphics engine.

Example 19 may include the at least one computer readable medium of Example 18, comprising a further set of instructions, which when executed by a computing device cause the computing device to provide a low precision compute engine for the second graphics engine.

Example 20 may include the at least one
computer readable medium of Example 19, comprising a further set of instructions, which when executed by a computing device cause the computing device to perform at least one of time warp, space warp, and machine learning with the low precision compute engine.

Example 21 may include the at least one computer readable medium of Example 20, comprising a further set of instructions, which when executed by a computing device cause the computing device to provide a second context for the second graphics engine which is independent of a first context for the first graphics engine.

Example 22 may include the at least one computer readable medium of Example 18, comprising a further set of instructions, which when executed by a computing device cause the computing device to provide a wearable device to house the second graphics engine.

Example 23 may include the at least one computer readable medium of Example 22, comprising a further set of instructions, which when executed by a computing device cause the computing device to offload render work from the first graphics engine to the second graphics engine.

Example 24 may include the at least one computer readable medium of Example 23, comprising a further set of instructions, which when executed by a computing device cause the computing device to provide a head mounted display to house the second graphics engine, and offload foveated render work from the first graphics engine to the second graphics engine.

Example 25 may include a graphics apparatus, comprising means for processing a graphics workload with a first graphics engine, and means for offloading at least a portion of the graphics workload from the first graphics engine to a second graphics engine.

Example 26 may include the apparatus of Example 25, further comprising means for providing a low precision compute engine for the second graphics engine.

Example 27 may include the apparatus of Example 26, further comprising means for performing at least one of time warp,
space warp, and machine learning with the low precision compute engine.

Example 28 may include the apparatus of Example 27, further comprising means for providing a second context for the second graphics engine which is independent of a first context for the first graphics engine.

Example 29 may include the apparatus of Example 25, further comprising means for providing a wearable device to house the second graphics engine.

Example 30 may include the apparatus of Example 29, further comprising means for offloading render work from the first graphics engine to the second graphics engine.

Example 31 may include the apparatus of Example 30, further comprising means for providing a head mounted display to house the second graphics engine, and means for offloading foveated render work from the first graphics engine to the second graphics engine.

Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit.
Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
Additionally, it is understood that the indefinite articles "a" or "an" carry the meaning of "one or more" or "at least one".

As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrase "one or more of A, B or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C.

The embodiments have been described above with reference to specific embodiments. Persons skilled in the art, however, will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the embodiments as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Semiconductor devices include one or more transistors having a floating gate and a control gate. In at least one embodiment, the floating gate comprises an intermediate portion extending between two end portions. The intermediate portion has an average cross-sectional area less than one or both of the end portions. In some embodiments, the intermediate portion may comprise a single nanowire. In additional embodiments, semiconductor devices have one or more transistors having a control gate and a floating gate in which a surface of the control gate opposes a lateral side surface of a floating gate that defines a recess in the floating gate. Electronic systems include such semiconductor devices. Methods of forming semiconductor devices include, for example, forming a floating gate having an intermediate portion extending between two end portions, and configuring the intermediate portion to have an average cross-sectional area less than one or both of the end portions.
CLAIMS What is claimed is: 1. A semiconductor device having at least one transistor comprising: a source; a drain; a control gate; and a floating gate comprising: a first end portion proximate the source and the drain; a second end portion proximate the control gate; and an intermediate portion extending between the first end portion and the second end portion, the intermediate portion having an average cross-sectional area less than at least one of an average cross-sectional area of the first end portion and an average cross-sectional area of the second end portion. 2. The semiconductor device of claim 1, wherein the floating gate comprises a polysilicon material doped with a dopant. 3. The semiconductor device of claim 2, wherein the intermediate portion of the floating gate has an average concentration of the dopant differing from an average concentration of the dopant in the first end portion and an average concentration of the dopant in the second end portion. 4. The semiconductor device of claim 3, wherein a concentration of the dopant varies substantially continuously through the floating gate between the first end portion and the second end portion. 5. The semiconductor device of claim 1, wherein at least one surface of the floating gate defines a recess in the floating gate, and at least one surface of the control gate comprises a protrusion on the control gate, the protrusion at least partially disposed within the recess. 6. The semiconductor device of claim 5, wherein the at least one surface of the floating gate defining the recess comprises a lateral side surface of the floating gate. 7. The semiconductor device of claim 6, wherein the recess extends substantially entirely around the floating gate. 8. The semiconductor device of claim 5, wherein the protrusion substantially fills the recess. 9.
The semiconductor device of claim 1, wherein at least a portion of the control gate has a shape substantially complementary to a shape of at least a portion of the second end portion of the floating gate. 10. The semiconductor device of claim 9, wherein the control gate comprises at least one surface opposing an upper surface of the second end portion of the floating gate and at least one surface opposing a lateral side surface of the second end portion of the floating gate. 11. The semiconductor device of claim 10, wherein the control gate further comprises at least one surface opposing at least a portion of a lateral side surface of the intermediate portion of the floating gate. 12. The semiconductor device of claim 11, wherein the at least one surface of the control gate opposing the lateral side surface of the intermediate portion of the floating gate is disposed on a protrusion of the control gate, the protrusion at least partially disposed in a recess of the floating gate at least partially defined by the lateral side surface of the intermediate portion of the floating gate. 13. The semiconductor device of claim 1, wherein the floating gate has a dumbbell shape. 14. The semiconductor device of claim 1 , wherein the intermediate portion of the floating gate comprises a single nanowire. 15. A semiconductor device having at least one transistor comprising an electrically isolated floating gate and a control gate capacitively coupled with the floating gate, the control gate having at least one surface opposing an upper surface of an end portion of the floating gate and at least one surface opposing a lateral side surface of the floating gate, the lateral side surface of the floating gate defining a recess in the floating gate. 16. 
The semiconductor device of claim 15, wherein the at least one surface of the control gate opposing the lateral side surface of the floating gate at least partially comprises a protrusion of the control gate at least partially disposed within the recess of the floating gate. 17. The semiconductor device of claim 16, wherein the protrusion of the control gate substantially fills the recess of the floating gate. 18. The semiconductor device of claim 15, wherein the floating gate comprises a polysilicon material doped with a dopant. 19. The semiconductor device of claim 18, wherein an intermediate portion of the floating gate has an average concentration of the dopant differing from an average concentration of the dopant in a first end portion of the floating gate and an average concentration of the dopant in a second end portion of the floating gate. 20. A semiconductor device having at least one transistor comprising an electrically isolated floating gate and a control gate capacitively coupled with the floating gate, the floating gate comprising a single nanowire extending between a first end portion of the floating gate and a second end portion of the floating gate, the control gate disposed at least partially over and around the second end portion of the floating gate and at least partially around a portion of the single nanowire. 21. The semiconductor device of claim 20, wherein at least a portion of the control gate has a shape substantially complementary to a shape of the second end portion of the floating gate. 22. The semiconductor device of claim 21 , wherein the control gate comprises at least one surface opposing an upper surface of the second end portion of the floating gate and at least one surface opposing a lateral side surface of the second end portion of the floating gate. 23. 
The semiconductor device of claim 22, wherein the control gate further comprises at least one surface opposing at least a portion of a lateral side surface of the single nanowire. 24. The semiconductor device of claim 23, wherein the at least one surface of the control gate opposing the lateral side surface of the single nanowire is disposed on a protrusion of the control gate. 25. An electronic system comprising: at least one electronic signal processor; at least one semiconductor device configured to communicate electrically with the at least one electronic signal processor; and at least one of an input device and an output device configured to communicate electrically with the at least one electronic signal processor, at least one of the at least one electronic signal processor and the at least one semiconductor device having at least one transistor comprising an electrically isolated floating gate and a control gate capacitively coupled with the floating gate, the control gate having at least one surface opposing an upper surface of an end portion of the floating gate and at least one surface opposing a lateral side surface of the floating gate, the lateral side surface of the floating gate defining a recess in the floating gate. 26. The electronic system of claim 25, wherein the electronic system comprises one of a computer, a computer hardware component, a server, a networking hardware component, a cellular telephone, a digital camera, a personal digital assistant, and a portable media player. 27. The electronic system of claim 26, wherein the input device comprises at least one of a pointing device, a keyboard, a touchpad, a touchscreen, and a button, and wherein the output device comprises at least one of a monitor, a display, a touchscreen, a printer, an audio output jack, and a speaker. 28.
The electronic system of claim 25, wherein the floating gate of the at least one transistor further comprises: an additional end portion proximate a source and a drain; and an intermediate portion disposed between the end portion and the additional end portion, the intermediate portion having an average cross-sectional area less than at least one of an average cross-sectional area of the end portion and an average cross-sectional area of the additional end portion. 29. The electronic system of claim 28, wherein the intermediate portion comprises a single nanowire. 30. The electronic system of claim 25, wherein the at least one surface of the control gate opposing the lateral side surface of the floating gate at least partially comprises a protrusion of the control gate at least partially disposed within the recess of the floating gate. 31. The electronic system of claim 25, wherein the floating gate comprises a polysilicon material doped with a dopant. 32. The electronic system of claim 31, wherein an intermediate portion of the floating gate has an average concentration of the dopant differing from an average concentration of the dopant in a first end portion of the floating gate and an average concentration of the dopant in a second end portion of the floating gate. 33.
An electronic system comprising: at least one electronic signal processor; at least one semiconductor device configured to communicate electrically with the at least one electronic signal processor; and at least one of an input device and an output device configured to communicate electrically with the at least one electronic signal processor, at least one of the at least one electronic signal processor and the at least one semiconductor device having at least one transistor comprising an electrically isolated floating gate and a control gate capacitively coupled with the floating gate, the floating gate comprising a single nanowire extending between a first end portion of the floating gate and a second end portion of the floating gate, the control gate disposed at least partially over and around the second end portion of the floating gate and at least partially around a portion of the single nanowire. 34. The electronic system of claim 33, wherein the control gate comprises at least one surface opposing an upper surface of the second end portion of the floating gate and at least one surface opposing a lateral side surface of the second end portion of the floating gate. 35. The electronic system of claim 34, wherein the control gate further comprises at least one surface opposing at least a portion of a lateral side surface of the single nanowire. 36. A method of forming a semiconductor device having at least one transistor, comprising: forming a floating gate having a first end portion, a second end portion, and an intermediate portion extending between the first end portion and the second end portion, wherein the intermediate portion has an average cross-sectional area less than at least one of an average cross-sectional area of the first end portion and an average cross-sectional area of the second end portion; and forming a control gate at least partially over and around at least the second end portion of the floating gate. 37. 
The method of claim 36, wherein forming a floating gate comprises: forming the first end portion of the floating gate; forming the second end portion of the floating gate over the first end portion of the floating gate; forming an opening extending through the second end portion of the floating gate to the first end portion of the floating gate; and filling the opening with a conductive material to form an intermediate portion of the floating gate extending between the first end portion and the second end portion. 38. The method of claim 37, further comprising: forming a first portion of the control gate over the first end portion of the floating gate, wherein the second end portion of the floating gate is formed over the first portion of the control gate, and wherein forming the opening through the second end portion of the floating gate further comprises forming the opening through the first portion of the control gate; and forming a second end portion of the control gate at least partially over and around at least the second end portion of the floating gate. 39. The method of claim 38, further comprising forming an inter-gate dielectric material on at least one surface of the first portion of the control gate within the opening prior to filling the opening with the conductive material. 40. The method of claim 36, wherein forming a floating gate comprises: forming a conductive structure comprising polysilicon material doped with a dopant; and doping an intermediate portion of the conductive structure with an average concentration of the dopant differing from an average concentration of the dopant in a first end portion of the conductive structure and an average concentration of the dopant in a second end portion of the conductive structure; and etching the conductive structure with an etchant at a rate at least partially dependent on the concentration of the dopant in the conductive structure. 41. 
The method of claim 36, wherein forming a floating gate having an intermediate portion comprises forming a single nanowire extending between the first end portion and the second end portion. 42. The method of claim 41, further comprising forming the first end portion of the floating gate, and wherein forming the single nanowire comprises: forming the single nanowire on a surface of the first end portion of the floating gate and establishing electrical contact between a first end of the single nanowire and the first end portion of the floating gate; surrounding at least a portion of the single nanowire with a dielectric material; and forming the second end portion of the floating gate over a second end of the single nanowire and establishing electrical contact between the second end of the single nanowire and the second end portion of the floating gate.
SEMICONDUCTOR DEVICES AND ELECTRONIC SYSTEMS COMPRISING FLOATING GATE TRANSISTORS AND METHODS OF FORMING THE SAME

PRIORITY CLAIM

This application claims the benefit of the filing date of United States Patent Application Serial No. 11/763,335, filed June 14, 2007, for "Semiconductor Devices and Electronic Systems Comprising Floating Gate Transistors and Methods of Forming the Same."

TECHNICAL FIELD

Embodiments of the present invention relate to semiconductor devices having one or more floating gate transistors, to electronic systems including such semiconductor devices, and to methods of forming such semiconductor devices.

BACKGROUND OF THE INVENTION

Semiconductor devices include one or more integrated circuits that can be used to store data, process electronic signals, etc. Such semiconductor devices are used in virtually all modern electronic devices. There are several different types of semiconductor devices used in modern electronics including, for example, memory devices, electronic signal processors, devices for capturing or acquiring images, etc. Each of these semiconductor devices typically comprises a plurality of transistors, which can be used as gates or switches for electrical signals.

FIG. 1 is a schematic cross-sectional view of a conventional transistor 10 that may be used in a memory cell of a non-volatile memory device. The transistor 10 may be fabricated on or in a substrate 11, which may comprise a doped semiconductor material. The transistor 10 shown in FIG. 1 has a dual gate structure and includes a control gate 12, a floating gate 14, a source 16, and a drain 18. The source 16 and drain 18 may comprise, for example, doped regions in or on the substrate 11, which itself may be doped of opposite polarity relative to the source 16 and the drain 18.
By way of example and not limitation, the source 16 and drain 18 may comprise n-doped regions in or on the substrate 11, and the substrate 11 may be p-doped at least in the region thereof between the source 16 and the drain 18 so as to provide an npn-type structure in the substrate 11 below the floating gate 14. The floating gate 14 is electrically isolated from the control gate 12 by the so-called "inter-gate dielectric" material 20, and from the underlying substrate 11 (including the source 16 and the drain 18) by another dielectric material, which is often referred to as the "tunnel dielectric" material 22 or the "tunnel oxide." The floating gate 14 also may be further electrically isolated from surrounding structures by a passivation layer 24. The control gate 12 and the floating gate 14 are capacitively coupled to one another (i.e., positioned such that an electrical capacitance may be generated therebetween), and the control gate 12 is used to selectively charge the floating gate 14. In other words, when a sufficient voltage is applied to the control gate 12, electrons may be caused to "tunnel" through the tunnel dielectric 22 from the substrate 11 to the floating gate 14, where they may remain even after the voltage applied to the control gate 12 is interrupted, since the floating gate 14 is electrically isolated by the inter-gate dielectric material 20, the tunnel dielectric material 22, and the passivation layer 24. When a given reading voltage is applied between the source 16 and the drain 18, the presence of electrons on the floating gate 14 may cause a relatively lower current to flow between the source 16 and the drain 18 (and the memory cell may be characterized as representing a "0"), while the absence of electrons on the floating gate 14 may allow a relatively higher current to flow between the source 16 and the drain 18 (and the memory cell may be characterized as representing a "1").
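The read behavior described above reduces to a threshold-voltage toy model: trapped electrons raise the effective threshold of the channel, so a fixed read voltage produces low current (read as "0") when the floating gate is charged and high current (read as "1") when it is not. The voltage values below are arbitrary illustrations, not device parameters from this description.

```python
# Toy model (illustrative only) of reading a floating-gate memory cell.
# Stored charge raises the effective threshold voltage, so a fixed read
# voltage turns the channel on only when the gate holds no charge.

V_READ = 3.0     # read voltage applied via the control gate (arbitrary units)
VT_ERASED = 1.5  # threshold voltage with no charge on the floating gate
VT_SHIFT = 2.5   # threshold shift caused by trapped electrons (assumed)

def read_cell(charged: bool) -> str:
    """Return the stored bit inferred from whether the channel conducts."""
    vt = VT_ERASED + (VT_SHIFT if charged else 0.0)
    conducts = V_READ > vt  # channel turns on only if read voltage exceeds Vt
    return "1" if conducts else "0"
```

Because the charge persists without power, the same read yields the same bit after a power cycle, which is the non-volatility property discussed next.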
By utilizing a floating gate 14 that is electrically isolated by the inter-gate dielectric material 20, the tunnel dielectric material 22, and the passivation layer 24, any electrons present on the floating gate 14 may remain thereon even after power to the memory device is interrupted. As a result, memory devices having transistors that include such dual-gate structures are considered non-volatile. Other types of semiconductor devices, including, for example, electronic signal processors and devices for acquiring or capturing images (often referred to as "imagers"), also may include a plurality of transistors for storing data therein. In other words, such semiconductor devices may have subsystems of components that comprise memory. As a result, such semiconductor devices also may comprise transistors such as that described above.

As integrated circuit fabrication processes improve, the feature sizes of the various elements in the integrated circuits are reduced so as to enable the fabrication of smaller semiconductor devices and/or semiconductor devices having increased cell densities, and, hence, higher data storage capacities. As previously mentioned, a capacitance is generated between the floating gate and the control gate in transistors having a dual gate structure. Such transistors are conventionally fabricated side-by-side in an array on a substrate. As a result, a capacitance also may be generated between the floating gates of adjacent transistors in the array. Such inter-transistor capacitances can negatively affect the operation of the semiconductor device.
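These competing capacitances can be illustrated with a back-of-the-envelope parallel-plate estimate (C = εA/d). In the sketch below, the dimensions, permittivities, and scaling assumptions are all made up for illustration: the gate-to-gate overlap area is taken to shrink with the square of the scale factor while the inter-gate dielectric thickness stays fixed, so the floating-gate-to-control-gate capacitance falls faster than the floating-gate-to-floating-gate capacitance and their ratio degrades as features scale down.

```python
# Parallel-plate estimate of how uniform feature scaling degrades the ratio
# of FG-to-CG capacitance to FG-to-FG capacitance. All numeric values are
# illustrative assumptions, not figures from this description.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_cap(eps_r: float, area: float, gap: float) -> float:
    """Ideal parallel-plate capacitance C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area / gap

def capacitance_ratio(scale: float) -> float:
    # FG-CG: overlap area shrinks as scale**2; dielectric thickness held fixed.
    c_fg_cg = plate_cap(eps_r=7.0, area=(100e-9 * scale) ** 2, gap=10e-9)
    # FG-FG: facing sidewall area shrinks as scale**2, but so does the spacing.
    c_fg_fg = plate_cap(eps_r=3.9, area=(100e-9 * scale) ** 2, gap=40e-9 * scale)
    return c_fg_cg / c_fg_fg
```

Under these assumptions the ratio falls linearly with the scale factor, matching the trend the text describes for scaled-down arrays.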
The coupling ratio (CR) of a semiconductor device (e.g., a memory device) may be defined as the ratio of the capacitance CFG-CG between the floating gate and the control gate in each transistor to the capacitance CFG-FG between the floating gates of adjacent transistors (i.e., CR = CFG-CG/CFG-FG). It is typically desirable to maximize the coupling ratio (when the coupling ratio is defined in this manner) to enhance the reliability and performance of the semiconductor device. As the feature size of the various elements (e.g., the size of the various elements of the transistors, as well as the spacings therebetween) in the integrated circuits of such semiconductor devices are scaled downward, it may be more difficult to maintain a high coupling ratio due, at least in part, to the decreasing surface area between opposing surfaces of the control gate and the floating gate and the decreasing spacing or distance between the floating gates in adjacent transistors. The decreasing surface area between opposing surfaces of the control gate and the floating gate may cause a decrease in the capacitance CFG-CG between the floating gate and the control gate in each transistor, and the decreasing spacing or distance between the floating gates in adjacent transistors may cause an increase in the capacitance CFG-FG between the floating gates of adjacent transistors. For the reasons stated above, and for other reasons which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for improved floating gate transistors, such as those that exhibit relatively high coupling ratios, and that can be scaled to smaller feature sizes without decreasing the coupling ratio to an unacceptable level.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG.
2A is a partial cross-sectional view of an embodiment of a semiconductor device of the present invention. FIG. 2B is a partial cross-sectional view of the semiconductor device shown in FIG. 2A taken along section line 2B-2B therein. FIG. 3A is an enlarged view of a control gate and a floating gate of the semiconductor device shown in FIG. 2A. FIG. 3B is an enlarged view of the control gate and the floating gate as illustrated in FIG. 2B. FIGS. 4 through 20 are partial cross-sectional side views of a workpiece and illustrate an embodiment of a method of the present invention that may be used to form a semiconductor device like that shown in FIGS. 2A-2B. FIG. 21 is a partial cross-sectional view of another embodiment of a semiconductor device of the present invention. FIG. 22 is an enlarged view of a control gate and a floating gate of the semiconductor device shown in FIG. 21. FIGS. 23 through 29 are partial cross-sectional side views of a workpiece and illustrate an embodiment of a method of the present invention that may be used to form a semiconductor device like that shown in FIG. 21. FIG. 30 is a schematic block diagram illustrating one embodiment of an electronic system of the present invention that includes a semiconductor device as described herein below.

MODE(S) FOR CARRYING OUT THE INVENTION

As discussed in further detail below, in some embodiments the present invention includes semiconductor devices, such as, for example, memory devices, electronic signal processors (often referred to as "microprocessors"), and imagers having one or more floating gate transistors. These semiconductor devices include one or more transistors having a floating gate and a control gate. The floating gate comprises two end portions and an intermediate portion extending between the end portions. The intermediate portion may have an average cross-sectional area less than one or both of the end portions.
In some embodiments, the intermediate portion may comprise a single nanowire. In additional embodiments, a surface of the control gate may oppose a lateral side surface of the floating gate that defines a recess in the floating gate. In additional embodiments, the present invention includes electronic systems that comprise such semiconductor devices. In yet additional embodiments, the present invention includes methods of forming such semiconductor devices.

As used herein, the term "nanowire" means any elongated structure having a length and an average width, the average width being less than about 50 nanometers. As used herein, the term "III-V type semiconductor material" means any material predominantly comprised of one or more elements from group IIIB of the periodic table (B, Al, Ga, In, and Tl) and one or more elements from group VB of the periodic table (N, P, As, Sb, and Bi). As used herein, the term "II-VI type semiconductor material" means any material predominantly comprised of one or more elements from group IIB of the periodic table (Zn, Cd, and Hg) and one or more elements from group VIB of the periodic table (O, S, Se, Te, and Po). As used herein, the term "wafer" means any structure that includes a layer of semiconductor type material including, for example, silicon, germanium, gallium arsenide, indium phosphide, and other III-V or II-VI type semiconductor materials. Wafers include, for example, not only conventional wafers formed completely of a semiconductor material, but other substrates such as silicon-on-insulator (SOI) type substrates, silicon-on-sapphire (SOS) type substrates, and epitaxial layers of silicon supported by a layer of base material. Semiconductor type materials may be doped or undoped. Furthermore, when reference is made to a "wafer" in the following description, previous process steps may have been utilized to at least partially form elements or components of a device, such as a circuit, in or over a surface of the wafer.
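The group definitions above can be captured in a short sketch. The element sets below are taken directly from the definitions in this paragraph; the `classify` helper itself is an illustrative assumption, not part of the disclosure:

```python
# Element sets from the definitions above (group IIIB/VB and IIB/VIB
# as used in this specification). The classifier is a hypothetical
# helper for illustration only.
GROUP_III = {"B", "Al", "Ga", "In", "Tl"}
GROUP_V = {"N", "P", "As", "Sb", "Bi"}
GROUP_II = {"Zn", "Cd", "Hg"}
GROUP_VI = {"O", "S", "Se", "Te", "Po"}

def classify(elements):
    """Return 'III-V', 'II-VI', or 'other' for a set of element symbols."""
    s = set(elements)
    if s and s <= (GROUP_III | GROUP_V) and s & GROUP_III and s & GROUP_V:
        return "III-V"
    if s and s <= (GROUP_II | GROUP_VI) and s & GROUP_II and s & GROUP_VI:
        return "II-VI"
    return "other"

print(classify({"Ga", "As"}))  # gallium arsenide -> III-V
print(classify({"Zn", "O"}))   # zinc oxide -> II-VI
print(classify({"Si"}))        # elemental silicon -> other
```

Under these definitions, gallium arsenide and indium phosphide (both mentioned above) classify as III-V materials, while elemental silicon or germanium falls into neither category.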
The illustrations presented herein are not meant to be actual views of any particular semiconductor device, transistor, workpiece, or system, but are merely idealized representations which are employed to describe the present invention. Additionally, elements common between figures may retain the same numerical designation.

FIGS. 2A and 2B are cross-sectional views of a portion of an embodiment of a semiconductor device 30 of the present invention taken substantially transverse to one another, and illustrate a transistor having a dual-gate structure. In other words, FIG. 2B is a cross-sectional view of the semiconductor device 30 taken along section line 2B-2B shown in FIG. 2A, and FIG. 2A is a cross-sectional view of the semiconductor device 30 taken along section line 2A-2A shown in FIG. 2B. As shown in FIG. 2A, the transistor may comprise a control gate 32, a floating gate 34, a source 36, and a drain 38. The transistor may comprise, for example, at least a portion of a memory cell in an array of memory cells of the semiconductor device 30. In some embodiments, the semiconductor device 30 may comprise a memory device (e.g., a flash memory device) having an array of memory cells, each of which may comprise a transistor as shown in FIGS. 2A and 2B. The transistor may be fabricated on or in a substrate 31, which may comprise a doped semiconductor material. The source 36 and the drain 38 may comprise, for example, doped regions in or on the substrate 31, and the substrate 31 itself may be doped with opposite polarity relative to the source 36 and the drain 38. By way of example and not limitation, the source 36 and drain 38 may comprise n-doped regions in or on the substrate 31, and the substrate 31 may be p-doped at least in the region thereof between the source 36 and the drain 38 so as to provide an npn type structure in the substrate 31 below the floating gate 34.
The floating gate 34 is electrically isolated from the control gate 32 by an inter-gate dielectric material 40, and from the underlying substrate 31 (including the source 36 and the drain 38) by another region (which may comprise a layer) of dielectric material, which is referred to herein as a "tunnel dielectric" material 42. The control gate 32 and the floating gate 34 also may be electrically isolated from surrounding structures by yet another region of dielectric material, which is referred to herein as a passivation layer 44 (although the passivation layer 44 may comprise what is often referred to as an interlayer dielectric). By way of example and not limitation, the inter-gate dielectric material 40 and the tunnel dielectric material 42 may comprise an oxide material (e.g., SiO2), a nitride material (e.g., Si3N4), or a combination of oxide and nitride materials such as, for example, an oxynitride material, a re-oxidized oxynitride material, or a so-called "oxide-nitride-oxide" (ONO) structure.

The control gate 32 and the floating gate 34 are capacitively coupled to one another (i.e., sized, shaped, and positioned such that an electrical capacitance may be generated therebetween), and the control gate 32 may be used to selectively charge the floating gate 34. When a sufficient voltage is applied to the control gate 32, electrons may be caused to "tunnel" through the tunnel dielectric 42 from the substrate 31 to the floating gate 34, where they may remain even after the voltage applied to the control gate 32 is interrupted, since the floating gate 34 is electrically isolated from surrounding conductive structures by the inter-gate dielectric material 40, the tunnel dielectric material 42, and the passivation layer 44. As shown in FIG. 2B, the source 36 and the drain 38 (which are not visible in FIG. 2B because the source 36 is positioned in front of the plane of FIG. 2B and the drain 38 is positioned behind the plane of FIG.
2B) may be laterally separated from surrounding structures (e.g., elements of adjacent transistors, conductive lines, etc.) by isolation regions 46 (e.g., shallow trench isolation (STI) regions), which may comprise a dielectric material such as, for example, an oxide (e.g., silica (SiO2)). As shown in FIGS. 2A and 2B, at least a portion of the floating gate 34 may have a dumbbell-shaped cross-section, and at least a portion of the control gate 32 may have a shape that is complementary to that of the floating gate 34. For example, at least a portion of the control gate 32 may have a shape that is complementary to that of at least about one-half (e.g., the upper half shown in FIGS. 2A and 2B) of the floating gate 34. The shape of the floating gate 34 and the complementary shape of the control gate 32 are described in further detail below with reference to FIGS. 3A-3B.

FIG. 3A is an enlarged view of the control gate 32 and the floating gate 34, as shown in FIG. 2A. Similarly, FIG. 3B is an enlarged view of the control gate 32 and the floating gate 34, as shown in FIG. 2B. FIG. 3B is a cross-sectional view of the structure shown in FIG. 3A taken along section line 3B-3B shown in FIG. 3A, and FIG. 3A is a cross-sectional view of the structure shown in FIG. 3B taken along section line 3A-3A shown in FIG. 3B. The other elements of the semiconductor device 30 are not illustrated in FIGS. 3A and 3B to simplify the figures and facilitate description of the control gate 32 and the floating gate 34. Referring to FIG. 3A, the floating gate 34 may include a first end portion 50, a second end portion 52, and an intermediate portion 54 extending between the first end portion 50 and the second end portion 52. The first end portion 50 may be located proximate the source 36 and the drain 38 (FIG. 2A), and the second end portion 52 may be located proximate the control gate 32.
The intermediate portion 54 may have an average transverse cross-sectional area (i.e., in a plane extending into the plane of FIG. 3A perpendicular to section line 3B-3B shown therein) that is less than that of each of the first end portion 50 and the second end portion 52. In other words, the end portions 50, 52 may be enlarged relative to the intermediate portion 54. In some embodiments, the first end portion 50, the second end portion 52, and the intermediate portion 54 of the floating gate 34 each may have either a substantially circular or a substantially rectangular transverse cross-sectional shape (i.e., in a plane extending into the plane of FIG. 3A perpendicular to section line 3B-3B shown therein). As a non-limiting example, the first end portion 50 and the second end portion 52 each may have a substantially rectangular (e.g., square) transverse cross-sectional shape, and the intermediate portion 54 may have a generally cylindrical shape and may have a generally circular transverse cross-sectional shape. By way of example and not limitation, the first end portion 50 and the second end portion 52 each may have a length L (FIG. 3A) and a width W (FIG. 3B) of less than about one hundred nanometers (100 nm) (e.g., about seventy nanometers (70 nm) or about thirty-five nanometers (35 nm)), and a thickness T of less than about seventy nanometers (70 nm) (e.g., between about ten nanometers (10 nm) and about thirty nanometers (30 nm)). The intermediate portion 54 may have a height H of between about fifty nanometers (50 nm) and about three hundred nanometers (300 nm) (e.g., about one hundred nanometers (100 nm)), and an average diameter D of less than about seventy nanometers (70 nm) (e.g., about twenty nanometers (20 nm)).
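A rough sense of how the dumbbell geometry increases the surface area available for gate-to-gate coupling can be obtained with back-of-envelope arithmetic. The sketch below is not from the disclosure; it assumes a control gate that fully wraps the upper half of the floating gate and picks one set of dimensions from the example ranges quoted above (L = W = 70 nm, T = 20 nm, H = 100 nm, D = 20 nm):

```python
from math import pi

# Hypothetical back-of-envelope estimate; the specification does not
# perform this calculation. Dimensions are example values from the text.
L = W = 70.0   # nm, length and width of each end portion
T = 20.0       # nm, assumed end-portion thickness (10-30 nm range)
H = 100.0      # nm, height of the intermediate portion
D = 20.0       # nm, average diameter of the intermediate portion

# Floating-gate surfaces the control gate may oppose (cf. surfaces
# 70-78 described for FIG. 3B): top of the second end portion, its
# four lateral sides, its underside within the recess, the side wall
# of the cylindrical intermediate portion, and the exposed top of the
# first end portion.
top = L * W
sides = 2 * (L + W) * T
underside = L * W - pi * (D / 2) ** 2
pillar = pi * D * H
first_top = L * W - pi * (D / 2) ** 2

wrapped_area = top + sides + underside + pillar + first_top
flat_area = L * W  # planar stacked gate, as in the FIG. 1 prior art

print(f"flat: {flat_area:.0f} nm^2, wrapped: {wrapped_area:.0f} nm^2, "
      f"gain: {wrapped_area / flat_area:.1f}x")
```

With these assumed numbers the wrapped geometry offers roughly five times the opposing area of a planar gate of the same footprint, which is the mechanism by which the recess-and-protrusion structure supports a higher CFG-CG.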
In other embodiments of the present invention, the first end portion 50, the second end portion 52, and the intermediate portion 54 of the floating gate 34 may have any other size and shape in which at least a portion of the intermediate portion 54 has an average transverse cross-sectional area that is less than that of each of the first end portion 50 and the second end portion 52. Furthermore, the first end portion 50 and the second end portion 52 need not be identical and may have differing sizes, differing shapes, or both differing sizes and shapes.

As previously mentioned, at least a portion of the control gate 32 may have a shape that is complementary to that of at least a portion of the floating gate 34. For example, the exterior surfaces of the floating gate 34 may define at least one recess 48 in the lateral sides of the floating gate 34 (FIG. 3A), and the exterior surfaces of the control gate 32 may define at least one protrusion 49 (FIG. 3B), which may be disposed at least partially within the at least one recess 48 of the floating gate 34, as discussed in further detail below. Referring to FIG. 3B, the control gate 32 may have at least one surface 70 opposing an upper surface 56 of the second end portion 52 of the floating gate 34, and at least one surface 72 opposing a lateral side surface 58 of the second end portion 52 of the floating gate 34. In some embodiments, the control gate 32 also may have at least one surface 74 opposing at least a portion of a lower surface 60 of the second end portion 52 of the floating gate 34 within the recess 48, at least one surface 76 opposing at least a portion of a lateral side surface 62 of the intermediate portion 54 of the floating gate 34, and at least one surface 78 opposing at least a portion of an upper surface 64 of the first end portion 50 of the floating gate 34. In some embodiments, the thickness of the inter-gate dielectric material 40 (FIGS.
2A and 2B) between the control gate 32 and the floating gate 34 may be substantially uniform. Furthermore, the average thickness of the inter-gate dielectric material 40 between the control gate 32 and the floating gate 34 may be less than about twenty nanometers (20 nm) (e.g., about twelve nanometers (12 nm)). In such configurations, the average distance separating opposing surfaces of the control gate 32 and the floating gate 34 may be substantially uniform. By way of example and not limitation, the average distance separating opposing surfaces of the control gate 32 and the floating gate 34 may be less than about twenty nanometers (20 nm) (e.g., about twelve nanometers (12 nm)). As shown in FIG. 3A, in some embodiments, the control gate 32 may not entirely surround the floating gate 34, and the at least one protrusion 49 (FIG. 3B) of the control gate 32 may not substantially fill the recess 48 of the floating gate 34. In other embodiments, however, the control gate 32 may substantially entirely surround the floating gate 34, and the at least one protrusion 49 (FIG. 3B) of the control gate 32 may substantially fill the recess 48 of the floating gate 34 (other than the volume of the recess 48 occupied by the inter-gate dielectric material 40, as shown in FIGS. 2A and 2B). In such embodiments, the cross-sectional view shown in FIG. 3A may appear substantially identical to the cross-sectional view shown in FIG. 3B.

The control gate 32 and the floating gate 34 (including the end portions 50, 52 and the intermediate portion 54) may comprise a conductive or semiconductor material such as, for example, polysilicon (doped or undoped), a doped or undoped semiconductor material (e.g., silicon, germanium, a III-V type semiconductor material, a II-VI type semiconductor material), a conductive metal (e.g., copper, aluminum, tungsten, platinum), or a conductive metal silicide (e.g., tungsten silicide, nickel silicide, cobalt silicide, titanium silicide).
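For scale, a simple parallel-plate approximation, C = ε0·εr·A/d, can be applied to the roughly 12 nm control-gate-to-floating-gate separation quoted above. The relative permittivity and the opposing areas below are illustrative assumptions, not values from the disclosure:

```python
# Rough parallel-plate sketch (hypothetical; the specification gives
# only the ~12 nm separation, not this calculation).
EPS0 = 8.854e-12   # F/m, vacuum permittivity
EPS_R = 6.0        # assumed relative permittivity of an ONO-like stack
D_SEP = 12e-9      # m, average opposing-surface separation from the text

def gate_capacitance(area_nm2):
    """Capacitance in farads for an opposing area given in nm^2."""
    return EPS0 * EPS_R * (area_nm2 * 1e-18) / D_SEP

c_flat = gate_capacitance(4900)      # planar 70 nm x 70 nm overlap
c_wrapped = gate_capacitance(26000)  # assumed wrap-around overlap

print(f"flat: {c_flat * 1e18:.1f} aF, wrapped: {c_wrapped * 1e18:.1f} aF")
```

Because capacitance scales linearly with opposing area at fixed separation, any increase in the area that the control gate presents to the floating gate translates directly into a larger CFG-CG, and hence a larger coupling ratio for a given inter-floating-gate capacitance.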
In some embodiments, the intermediate portion 54 of the floating gate 34 may comprise a single nanowire having a first end in electrical contact with the first end portion 50 of the floating gate and a second end in electrical contact with the second end portion 52 of the floating gate. By way of example and not limitation, such a nanowire may comprise a nanotube, such as a single wall carbon nanotube (SWCNT) or a multi-walled carbon nanotube (MWCNT). In additional embodiments, such a nanowire may comprise a substantially solid nanowire substantially comprised of a semiconductor material such as, for example, silicon, germanium, gallium, a III-V type semiconductor material, or a II-VI type semiconductor material. Furthermore, each nanowire may comprise a single crystal. In yet other embodiments, such a nanowire may comprise a substantially solid nanowire substantially comprised of a metal such as, for example, cobalt, copper, gold, nickel, platinum, or silver. Any type of nanowire may be used as long as the nanowire exhibits sufficient electrical conductivity and can be formed, grown, placed, or otherwise provided within the transistor during fabrication thereof, as discussed in further detail below.

One embodiment of a method of the present invention that may be used to manufacture a semiconductor device comprising one or more transistors like that shown in FIGS. 2A-2B is described below with reference to FIGS. 4-19. Referring to FIG. 4, methods known in the art may be used to provide (e.g., form) a workpiece 100 that includes a substrate 31. The substrate 31 may comprise a full or partial semiconductor wafer, and may comprise a doped semiconductor material. Only a portion of the substrate 31 that is to comprise a single transistor is shown in the figures to facilitate illustration and description.
It is contemplated, however, that the substrate 31 may be used to form one or more semiconductor devices (not shown), each of which may comprise a plurality of transistors like that shown in the figures. The workpiece 100 may comprise a plurality of isolation regions 46, as well as a source 36 and a drain 38 (FIG. 2A) for each transistor being formed on the workpiece 100 (although in some devices, such as NAND flash memory devices, at least some transistors may be connected in series, the drain 38 of one transistor being continuous with the source 36 of an adjacent transistor). As also shown in FIG. 4, a layer of tunnel dielectric material 42 may be deposited at least over the regions of the workpiece 100 on which a control gate 32 and floating gate 34 (FIGS. 2A-2B) are to be fabricated.

Referring to FIG. 5, a first end portion 50 of a floating gate 34 (FIGS. 3A-3B) may be formed over the tunnel dielectric material 42 that is positioned generally vertically above and horizontally between a source 36 and a drain 38 (FIG. 2A). A dielectric material 102 may be provided around the first end portion 50 of the floating gate 34. The first end portion 50 and the surrounding dielectric material 102 may be formed using conventional lithographic or sublithographic processes (e.g., photolithography (with or without a so-called "pitch-doubling" process) or nanoimprint lithography). In some embodiments, for example, a layer of conductive material (not shown) may be deposited over the workpiece 100 and patterned using, for example, a masking and etching process to form the first end portion 50 of the floating gate 34. A layer of dielectric material 102 then may be deposited over the workpiece 100 and the first end portion 50. The layer of dielectric material 102 then may be planarized using, for example, a chemical-mechanical polishing (CMP) process to expose the first end portion 50 through the layer of dielectric material 102.
In additional embodiments, the layer of dielectric material 102 may be deposited over the substrate 31 and patterned using, for example, a masking and etching process to form a recess (not shown) therein exposing the underlying tunnel dielectric material 42 at the location at which it is desired to form the first end portion 50. A layer of conductive material (not shown) then may be deposited over the layer of dielectric material 102 and within the recess, after which the layer of conductive material may be planarized using, for example, a chemical-mechanical polishing (CMP) process to expose the underlying layer of dielectric material 102 and to form the first end portion 50 of the floating gate 34.

Referring to FIG. 6, a layer of inter-gate dielectric material 40A may be deposited, epitaxially grown, or otherwise formed at least over the regions of the workpiece 100 comprising the first end portion 50 of the floating gate 34. By way of example and not limitation, the inter-gate dielectric material 40A may comprise an oxynitride material and may be deposited using a chemical vapor deposition (CVD) process. As shown in FIG. 7, a conductive structure 104 may be formed over the inter-gate dielectric material 40A. The conductive structure 104 may be vertically aligned with the first end portion 50 of the floating gate 34. A portion of the conductive structure 104 will be used to form at least a portion of the control gate 32 that includes the previously described at least one protrusion 49 (FIG. 3B) thereof, as described in further detail below. The conductive structure 104 may be formed using conventional lithographic or sublithographic methods as previously described in relation to the first end portion 50 of the floating gate 34 and FIG. 5. Referring to FIG. 8, an opening 106 may be formed through the conductive structure 104 at a selected location at which it is desired to form an intermediate portion 54 of the floating gate 34 (FIGS. 3A-3B).
The opening 106 may be formed by, for example, depositing a mask layer over the workpiece 100 and forming an aperture in the mask layer (not shown) at the location at which it is desired to form the opening 106 in the underlying conductive structure 104. An etching process (e.g., an anisotropic dry reactive ion or plasma etching process) then may be used to etch through the portion of the conductive structure 104 that is exposed through the mask layer. Referring to FIG. 9, another layer of inter-gate dielectric material 40B may be deposited, epitaxially grown, or otherwise provided at least over the regions of the workpiece 100 comprising the conductive structure 104 and within the opening 106. By way of example and not limitation, the inter-gate dielectric material 40B may comprise an oxynitride material and may be deposited using a chemical vapor deposition (CVD) process. As shown in FIG. 10, an anisotropic etching process (e.g., a dry reactive ion or plasma etching process) may be used to etch the layer of inter-gate dielectric material 40B from the horizontally extending surfaces of the workpiece 100, leaving behind a layer of the inter-gate dielectric material 40B on the vertically extending sidewalls of the conductive structure 104 within the opening 106. As shown in FIG. 11, a conductive material 108 may be provided within the opening 106 to form the intermediate portion 54 of the floating gate 34 (FIGS. 3A-3B). By way of example and not limitation, a layer of conductive material 108 may be provided over the workpiece 100 to a thickness sufficient to substantially fill the opening 106, as shown in FIG. 11. The layer of conductive material 108 then may be planarized, as shown in FIG. 12, using, for example, a chemical-mechanical polishing (CMP) process, until the underlying dielectric material 102 is exposed.
At this point the intermediate portion 54 of the floating gate 34 is separated from the remaining portion of the conductive structure 104 by the layer of the inter-gate dielectric material 40B on the vertically extending sidewalls of the conductive structure 104 previously formed within the opening 106. In some embodiments, the end portions 50, 52 and the intermediate portion 54 of the floating gate 34 may comprise a polysilicon material (doped or undoped). In such embodiments, the intermediate portion 54 of the floating gate 34 may be formed using a selective epitaxial chemical vapor deposition (CVD) process in which polysilicon is selectively deposited only on exposed surfaces of previously formed polysilicon, such as the exposed surface of the first end portion 50 of the floating gate 34 within the opening 106.

In embodiments in which the intermediate portion 54 of the floating gate 34 is to comprise a single nanowire, the single nanowire may be grown or otherwise formed in situ on an exposed upper surface of the first end portion 50 of the floating gate 34 within the opening 106. Optionally, a catalytic material or structure configured to catalyze formation of the nanowire may be provided on the exposed upper surface of the first end portion 50 of the floating gate 34 within the opening 106 prior to growing or otherwise forming the nanowire therein. Various methods of forming nanowires using corresponding catalyst materials are known in the art and may be used to form a single nanowire over the first end portion 50 of the floating gate 34. Some of such methods are described in, for example, Younan Xia et al., One-Dimensional Nanostructures: Synthesis, Characterization and Applications, 15 Advanced Materials 353-389 (March 2003).
By way of example and not limitation, chemical vapor deposition (CVD) processes, which optionally may employ the so-called vapor-liquid-solid (VLS) mechanism, may be used to grow a nanowire using a catalytic structure, as known in the art. As one non-limiting example, such a catalytic structure may comprise a gold nanoparticle, and the nanowire may comprise a doped silicon (Si) material. Such a doped silicon nanowire may be formed using a chemical vapor deposition (CVD) process and the vapor-liquid-solid (VLS) mechanism, as known in the art. As another non-limiting example, such a catalytic structure may comprise at least one of Ti, Co, Ni, Au, Ta, polysilicon, silicon-germanium, platinum, iridium, titanium nitride, or tantalum nitride, and the nanowire may comprise iridium oxide (IrOx), as described in United States Patent Publication No. 2006/0086314 A1 to Zhang et al. Furthermore, as previously discussed, the nanowire may comprise a III-V type semiconductor material or a II-VI type semiconductor material. Various types of semiconductor materials that may be used to form nanowires, as well as the reactant precursor materials and catalyst materials that may be used to catalyze formation of such nanowires, are disclosed in United States Patent Publication No. 2004/0028812 A1 to Wessels et al. In additional embodiments, such a nanowire may be fabricated elsewhere rather than in situ, and may be positioned within the opening 106 using, for example, a selectively oriented electrical field.

If the intermediate portion 54 of the floating gate 34 is to comprise a single nanowire, as discussed above, after forming the single nanowire on the first end portion 50 of the floating gate 34, the single nanowire may be surrounded with a dielectric material (e.g., to fill any remaining voids within the opening 106), such as an additional layer of inter-gate dielectric material (not shown), and the resulting structure then may be planarized as previously described in relation to FIG.
12 to expose an end of the nanowire through the dielectric material for subsequently forming the second end portion 52 of the floating gate 34 thereover and establishing electrical contact between the nanowire and the second end portion 52.

Referring to FIG. 13, another layer of inter-gate dielectric material 40C may be deposited, epitaxially grown, or otherwise provided at least over the regions of the workpiece 100 comprising the intermediate portion 54 of the floating gate 34 and the remaining portion of the conductive structure 104. By way of example and not limitation, the inter-gate dielectric material 40C may comprise an oxynitride material and may be deposited using a chemical vapor deposition (CVD) process. As shown in FIG. 14, the layer of inter-gate dielectric material 40C may be patterned to provide discrete regions of the inter-gate dielectric material 40C that will be disposed between the remaining portion of the conductive structure 104 and a second end portion 52 of the floating gate 34 (FIGS. 3A-3B) that will be formed thereover. A second end portion 52 of the floating gate 34 (FIGS. 3A-3B) may be formed over the intermediate portion 54 of the floating gate 34 and the discrete regions of inter-gate dielectric material 40C using conventional lithographic or sublithographic techniques. As a non-limiting example, another layer of conductive material 110 may be provided over at least the portions of the workpiece 100 comprising the discrete regions of inter-gate dielectric material 40C and the intermediate portion 54 of the floating gate 34, as shown in FIG. 15. Referring to FIG. 16, the layer of conductive material 110 (FIG. 15) then may be patterned to form a second end portion 52 of the floating gate 34 (FIGS. 3A-3B). By way of example and not limitation, the layer of conductive material 110 (FIG.
15) may be patterned by providing a mask layer (not shown) over the layer of conductive material 110 and removing portions of the mask layer overlying regions of the layer of conductive material 110 that are to be removed (e.g., regions of the conductive material 110 that do not overlie the discrete regions of inter-gate dielectric material 40C and the intermediate portion 54 of the floating gate 34). An anisotropic etching process (e.g., a dry reactive ion or plasma etching process) then may be used to etch the regions of the layer of conductive material 110 (FIG. 15) that are exposed through the mask layer.

Referring to FIG. 17, an inter-gate dielectric material 40D may be provided over the exposed surfaces of the second end portion 52 of the floating gate 34. By way of example and not limitation, a layer of the inter-gate dielectric material 40D (e.g., an oxynitride material deposited using a chemical vapor deposition (CVD) process) may be provided over the workpiece 100, and a mask layer (not shown) may be provided over the exposed horizontally-extending surface of the portion of the layer of the inter-gate dielectric material 40D that overlies the second end portion 52 of the floating gate 34. An anisotropic etching process (e.g., a dry reactive ion or plasma etching process) then may be used to etch regions of the layer of inter-gate dielectric material 40D that are exposed through the mask layer to form the structure shown in FIG. 17. After forming the floating gate 34 (FIGS. 3A-3B) and providing the inter-gate dielectric material 40D over the second end portion 52 thereof, as discussed above, the remaining portion of the control gate 32 (FIGS. 3A-3B) may be formed over and around the second end portion 52 of the floating gate 34.
By way of example and not limitation, another layer of conductive material 112 may be deposited over at least the portion of the workpiece 100 comprising the second end portion 52 of the floating gate 34 and the remaining portion of the conductive structure 104, as shown in FIG. 18. Referring to FIG. 19, the layer of conductive material 112 then may be patterned to complete formation of the control gate 32 (FIGS. 3A-3B). By way of example and not limitation, the layer of conductive material 112 may be patterned by providing a mask layer (not shown) over the layer of conductive material 112 and removing portions of the mask layer overlying regions of the layer of conductive material 112 that are to be removed (e.g., regions of the conductive material 112 that do not overlie the second end portion 52 of the floating gate 34 and the remaining portion of the conductive structure 104). An anisotropic etching process (e.g., a dry reactive ion or plasma etching process) then may be used to etch the regions of the layer of conductive material 112 that are exposed through the mask layer. Referring to FIG. 20, additional dielectric material may be provided over and around the control gate 32 as necessary or desired so as to complete formation of the passivation layer 44. As can be seen by comparison of FIGS. 20 and 2B, in some embodiments, the passivation layer 44 may comprise various regions of dielectric material, each deposited or otherwise provided at different times during fabrication of the control gate 32 and the floating gate 34. Similarly, as can be seen by comparison of FIGS. 19 and 20, the inter-gate dielectric material 40 previously described with reference to FIGS. 2A-2B and FIGS. 3A-3B may comprise the various regions of inter-gate dielectric material 40A, 40B, 40C, and 40D that are also deposited or otherwise provided at different times during fabrication of the control gate 32 and the floating gate 34.

FIG.
21 is a partial cross-sectional view of a portion of another embodiment of a semiconductor device 130 of the present invention, and illustrates a transistor having a dual gate structure. The transistor shown in FIG. 21, like that previously described with reference to FIGS. 2A-2B and 3A-3B, may comprise a control gate 132, a floating gate 134, a source 36, and a drain 38. The transistor may comprise, for example, at least a portion of a memory cell in an array of memory cells of the semiconductor device 130. In some embodiments, the semiconductor device 130 may comprise a memory device having an array of memory cells, each of which may comprise a transistor as shown in FIG. 21. The floating gate 134 is electrically isolated from the control gate 132 by an inter-gate dielectric material 40, and from the underlying substrate 31 (including the source 36 and the drain 38) by tunnel dielectric material 42. The control gate 132 and the floating gate 134 also may be electrically isolated from surrounding structures by a passivation layer 44. As shown in FIG. 21, at least a portion of the floating gate 134 may have a shape generally similar to the floating gate 34 shown in FIGS. 3A-3B and has two enlarged ends separated by a relatively smaller (e.g., narrower) intermediate section therebetween, as described in further detail below. At least a portion of the control gate 132 may have a shape that is complementary to that of the floating gate 134. For example, at least a portion of the control gate 132 may have a shape that is complementary to that of at least about one-half (e.g., the upper half shown in FIG. 21) of the floating gate 134. The shape of the floating gate 134 and the complementary shape of the control gate 132 are described in further detail below with reference to FIG. 22. FIG. 22 is an enlarged view of the control gate 132 and the floating gate 134, as shown in FIG. 21. The other elements of the semiconductor device 130 are not illustrated in FIG. 
22 to simplify the figure and facilitate description of the control gate 132 and the floating gate 134. As shown in FIG. 22, the floating gate 134 may include a first end portion 150, a second end portion 152, and an intermediate portion 154 extending between the first end portion 150 and the second end portion 152. The first end portion 150 may be located proximate the source 36 and the drain 38 (FIG. 21), and the second end portion 152 may be located proximate the control gate 132. The intermediate portion 154 may have an average transverse cross-sectional area that is less than that of each of the first end portion 150 and the second end portion 152. In other words, the end portions 150, 152 may be enlarged relative to the intermediate portion 154. In some embodiments, the first end portion 150, second end portion 152, and intermediate portion 154 of the floating gate 134 each may have a substantially circular or a substantially rectangular cross-sectional shape. As a non-limiting example, the first end portion 150, the second end portion 152, and the intermediate portion 154 each may have a substantially rectangular (e.g., square) cross-sectional shape. In contrast to the previously described floating gate 34, there may be no readily identifiable boundary between the first end portion 150, second end portion 152, and the intermediate portion 154 of the floating gate 134, as shown in FIGS. 21 and 22. In some embodiments of the present invention, the floating gate 134 may comprise a doped polysilicon material, and the concentration of at least one dopant in the intermediate portion 154 of the floating gate 134 may differ from the concentration of the dopant in each of the first end portion 150 and the second end portion 152 of the floating gate 134. Such a configuration may facilitate fabrication of the floating gate 134, as discussed in further detail below.
As non-limiting examples, the first end portion 150, second end portion 152, and the intermediate portion 154 of the floating gate 134 may have average dimensions similar to those previously described in relation to the first end portion 50, second end portion 52, and the intermediate portion 54, respectively, of the floating gate 34 shown in FIGS. 3A-3B. As previously mentioned, at least a portion of the control gate 132 may have a shape that is complementary to that of the floating gate 134. For example, the exterior surfaces of the floating gate 134 may define at least one recess 148 in the lateral sides of the floating gate 134, and the exterior surfaces of the control gate 132 may define at least one protrusion 149, which may be disposed at least partially within the at least one recess 148 of the floating gate 134, as discussed in further detail below. As shown in FIG. 22, the control gate 132 may comprise at least one surface 133 opposing a lateral side surface 135 of the floating gate 134. The lateral side surface 135 of the floating gate 134 may at least partially define the recess 148 in the lateral sides of the floating gate 134. In some embodiments, the control gate 132 may entirely surround the floating gate 134, and the at least one protrusion 149 (FIG. 22) of the control gate 132 may substantially fill the recess 148 of the floating gate 134. In other embodiments, however, the control gate 132 may not substantially entirely surround the floating gate 134, and the at least one protrusion 149 (FIG. 22) of the control gate 132 may not substantially fill the recess 148 of the floating gate 134. One example of an embodiment of a method of the present invention that may be used to manufacture a semiconductor device comprising one or more transistors like that shown in FIGS. 21-22 is described below with reference to FIGS. 23-29. Referring to FIG. 23, a workpiece 200 may be provided that includes a substrate 31 using methods known in the art.
The substrate 31 may comprise a full or partial semiconductor wafer, and may comprise a doped semiconductor material. Only a portion of the substrate 31 that is to comprise a single transistor is shown in the figures to facilitate illustration and description. It is contemplated, however, that the substrate 31 may be used to form one or more semiconductor devices (not shown), each of which may comprise a plurality of transistors like that shown in the figures. The workpiece 200 may comprise a source 36 and a drain 38 for each transistor being formed on the workpiece 200 (although in some devices, such as NAND flash memory devices, at least some transistors may be connected in series, the drain 38 of one transistor being continuous with the source 36 of an adjacent transistor), as well as a plurality of isolation regions (not shown in the figures) similar to the isolation regions 46 shown in FIG. 4. As also shown in FIG. 23, a layer of tunnel dielectric material 42 may be deposited at least over the regions of the workpiece 200 on which a control gate 132 and floating gate 134 (FIGS. 21 and 22) are to be fabricated. As shown in FIG. 24, a conductive structure 112 that may be used to form a floating gate 134 (FIGS. 21 and 22) may be formed vertically over the tunnel dielectric material 42 and generally horizontally between a source 36 and a drain 38 (FIG. 21). The conductive structure 112 may be formed using conventional lithographic or sublithographic processes (e.g., photolithography (with or without a so-called "pitch doubling" process) or nanoimprint lithography). In some embodiments, for example, a layer of conductive material (not shown) may be deposited or otherwise provided over the substrate 31 and patterned using, for example, a masking and etching process to form the conductive structure 112.
In other words, a layer of mask material (not shown) may be provided over the layer of conductive material, and the mask layer may be patterned to form a discrete region of mask material 114 overlying the layer of conductive material at a location at which it is desired to form the conductive structure 112. The layer of conductive material surrounding the discrete region of mask material 114 then may be selectively etched using, for example, an anisotropic etching process (e.g., an anisotropic dry reactive ion or plasma etching process). The layer of conductive material (not shown) and the resulting conductive structure 112 may comprise a material composition that, when subsequently etched with a selected etchant, oxidized, or otherwise processed, will form a floating gate 134 having the general structure shown in FIG. 25. By way of example and not limitation, it is known in the art that the concentration of a dopant in a doped polysilicon material can affect the rate at which the doped polysilicon material is etched with particular etchants, and can also affect the rate at which the doped polysilicon material is oxidized when exposed to an oxidant. Referring again to FIG. 24, in some embodiments, the conductive structure 112 may have a first lower region 120, a second upper region 122, and a third intermediate region 124 disposed between the first lower region 120 and the second upper region 122. The first lower region 120 is roughly illustrated in FIG. 24 as the portion of the conductive structure 112 below the dashed line 121, the second upper region 122 is roughly illustrated as the portion of the conductive structure 112 above the dashed line 123, and the third intermediate region 124 is roughly illustrated as the portion of the conductive structure 112 between the dashed line 121 and the dashed line 123. 
In actuality, there may be no readily identifiable boundary between the first lower region 120, the second upper region 122, and the third intermediate region 124 other than the concentration of the dopant therein. The third intermediate region 124 may have an average dopant concentration that differs from both the average dopant concentration in the first lower region 120 and the average dopant concentration in the second upper region 122. For example, the conductive structure 112 may comprise polysilicon that is doped with an n-type dopant (e.g., phosphorus or arsenic). The concentration of the dopant in the third intermediate region 124 of the conductive structure 112 may be relatively higher than the concentrations of the dopant in each of the first lower region 120 and the second upper region 122. In some embodiments, the dopant concentration may continuously vary between the first lower region 120 and the second upper region 122, while in other embodiments, the dopant concentration may vary in a step-wise manner between the first lower region 120 and the third intermediate region 124 and between the third intermediate region 124 and the second upper region 122. After forming a conductive structure 112 as shown in FIG. 24, the conductive structure 112 may be exposed to an etchant to form the structure shown in FIG. 25. By way of example and not limitation, the conductive structure 112 may be exposed to a wet chemical etching process or to a dry reactive ion or plasma etching process. In additional embodiments, the conductive structure 112 may be exposed to an oxidant to form an oxide layer (not shown) in the exposed surfaces of the conductive structure 112. After such an oxidation process, the un-oxidized portion of the conductive structure 112 may have a shape as shown in FIG. 25.
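The dopant-dependent etch behavior described above is what allows a single timed etch to carve the narrowed intermediate portion: the heavily doped waist recedes faster than the lightly doped ends. As a rough illustration only, the sketch below assumes a linear etch-rate-versus-concentration model; the rate constants and concentrations are hypothetical, not process data from this disclosure:

```python
# Illustrative model: lateral etch depth of a doped polysilicon structure as a
# function of local dopant concentration. The linear rate law and all numbers
# are assumptions for illustration, not process data from the disclosure.

def lateral_etch_depth(dopant_cm3, time_s, base_rate_nm_s=0.05,
                       rate_per_dopant=1.0e-21):
    """Etch depth (nm) for a region with the given dopant concentration,
    assuming the etch rate grows linearly with concentration."""
    rate = base_rate_nm_s + rate_per_dopant * dopant_cm3  # nm/s
    return rate * time_s

t = 60.0  # seconds of a single timed etch
# The heavily doped intermediate region etches faster than the lightly doped
# end portions, so the same etch time removes more material from the waist.
ends = lateral_etch_depth(1e19, t)    # first lower and second upper regions
waist = lateral_etch_depth(1e20, t)   # third intermediate region
print(f"ends recede {ends:.1f} nm, waist recedes {waist:.1f} nm")
```

With these assumed numbers the waist recedes about 2.5 times farther than the ends in one etch step, yielding the dumbbell profile of FIG. 25 without any additional masking.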
After oxidation, the oxide layer optionally may be removed from the un-oxidized portion of the conductive structure 112 (using, for example, an etchant selective to the oxide layer) such that the resulting structure has a general shape as illustrated in FIG. 25. The discrete region of mask material 114 optionally may be left over the conductive structure 112 as shown in FIG. 25 to protect the upper surface of the conductive structure 112 from the etchant or the oxidant. The discrete region of mask material 114 may be removed from the conductive structure 112 after the etching process, as shown in FIG. 26. As also shown in FIG. 26, a layer of inter-gate dielectric material 40 may be deposited, epitaxially grown, or otherwise provided on the workpiece 200 over at least the exposed surfaces of the floating gate 134. Optionally, any inter-gate dielectric material 40 provided over surfaces of the workpiece 200 other than the exposed surfaces of the floating gate 134 may be selectively removed from the workpiece 200 as necessary or desired. Referring to FIG. 27, a layer of dielectric material 116 may be provided over the workpiece 200 around the floating gate 134 to a thickness selected to provide a desired distance between the control gate 132 (FIGS. 21 and 22) to be formed thereover and the underlying source 36 and drain 38. By way of example and not limitation, a substantially conformal layer of dielectric material 116 may be deposited or otherwise provided over the workpiece 200. A masking and etching process then may be used to remove any dielectric material 116 undesirably deposited on surfaces of the inter-gate dielectric material 40 on the floating gate 134.
In additional embodiments, a layer of lift-off material (not shown) may be selectively provided over at least a portion of the exposed surfaces of the inter-gate dielectric material 40 on the floating gate 134, after which the layer of dielectric material 116 may be deposited or otherwise provided over the lift-off layer. The lift-off layer then may be stripped away from the workpiece 200, and the overlying dielectric material 116 may be removed from the workpiece 200 together with the underlying lift-off layer. Referring to FIG. 28, a control gate 132 may be formed over and around the floating gate 134 (and the inter-gate dielectric material 40 thereon). By way of example and not limitation, another layer of conductive material (not shown) may be deposited over at least the portion of the workpiece 200 comprising the floating gate 134, and the layer of conductive material may be patterned to form the control gate 132. The layer of conductive material may be patterned by providing a mask layer (not shown) over the layer of conductive material and removing portions of the mask layer overlying regions of the layer of conductive material that are to be removed. An anisotropic etching process (e.g., a dry reactive ion or plasma etching process) then may be used to etch the regions of the layer of conductive material that are exposed through the mask layer to form the control gate 132. Referring to FIG. 29, additional dielectric material may be provided over and around the control gate 132 as necessary or desired so as to complete formation of the passivation layer 44. At least one transistor having a floating gate and a control gate as described herein may be used in any type of semiconductor device including, for example, flash memory devices (e.g., NOR flash memory devices and NAND flash memory devices) and electrically erasable programmable read-only memory (EEPROM) devices. 
Embodiments of semiconductor devices of the present invention that comprise floating gate transistors as described above may exhibit relatively higher coupling ratios than semiconductor devices presently known in the art, and may be scaled to smaller feature sizes without decreasing the coupling ratio to an unacceptable level. In particular, by increasing the surface area of the opposing surfaces between the floating gate and the control gate of a transistor having a dual-gate structure, the capacitance CFG-CG between the floating gate and the control gate in each transistor may be increased, which may increase the coupling ratio (CR) of the semiconductor device when the coupling ratio is defined as the ratio of the capacitance CFG-CG between the floating gate and the control gate in each transistor to the capacitance CFG-FG between the floating gates of adjacent transistors (i.e., CR = CFG-CG/CFG-FG). Semiconductor devices like those previously described herein may be used in embodiments of electronic systems of the present invention. For example, FIG. 30 is a block diagram of an illustrative electronic system 300 according to the present invention. The electronic system 300 may comprise, for example, a computer or computer hardware component, a server or other networking hardware component, a cellular telephone, a digital camera, a personal digital assistant (PDA), a portable media (e.g., music) player, etc. The electronic system 300 includes at least one memory device 301. The system 300 further may include at least one electronic signal processor device 302 (often referred to as a "microprocessor"). At least one of the electronic signal processor device 302 and the at least one memory device 301 may comprise, for example, an embodiment of the semiconductor device 30 shown in FIGS. 2A and 2B or an embodiment of the semiconductor device 130 shown in FIG. 21.
In other words, at least one of the electronic signal processor device 302 and the at least one memory device 301 may comprise an embodiment of a transistor having a dual-gate structure as previously described in relation to either the semiconductor device 30 shown in FIGS. 2A and 2B or the semiconductor device 130 shown in FIG. 21. The electronic system 300 may further include one or more input devices 304 for inputting information into the electronic system 300 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 300 may further include one or more output devices 306 for outputting information (e.g., visual or audio output) to a user such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc. In some embodiments, the input device 304 and the output device 306 may comprise a single touchscreen device that can be used both to input information to the system 300 and to output visual information to a user. The one or more input devices 304 and output devices 306 may communicate electrically with at least one of the memory device 301 and the electronic signal processor device 302. While the present invention has been described in terms of certain illustrated embodiments and variations thereof, it will be understood and appreciated by those of ordinary skill in the art that the invention is not so limited. Rather, additions, deletions and modifications to the illustrated embodiments may be effected without departing from the spirit and scope of the invention as defined by the claims that follow.
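The coupling-ratio relation CR = CFG-CG/CFG-FG discussed above can be made concrete with a simple parallel-plate estimate, C = ε0·εr·A/d. In the sketch below, the dielectric constants, areas, and gaps are illustrative assumptions rather than values from the disclosure; the point it shows is only that enlarging the floating-gate/control-gate facing area raises CR while the gate-to-gate capacitance is unchanged:

```python
# Parallel-plate estimate of the coupling ratio CR = C_fg_cg / C_fg_fg.
# All geometry numbers are illustrative assumptions, not from the disclosure.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def cap(eps_r, area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def coupling_ratio(fg_cg_area_m2):
    # Capacitance across the inter-gate dielectric (assumed oxynitride-like).
    c_fg_cg = cap(eps_r=7.0, area_m2=fg_cg_area_m2, gap_m=10e-9)
    # Parasitic capacitance to a neighboring floating gate (assumed oxide gap).
    c_fg_fg = cap(eps_r=3.9, area_m2=1.0e-15, gap_m=40e-9)
    return c_fg_cg / c_fg_fg

flat_area = 2.0e-15     # m^2, control gate faces only the planar top
wrapped_area = 5.0e-15  # m^2, control gate also lines the recessed sidewalls
print(coupling_ratio(flat_area), coupling_ratio(wrapped_area))
```

With these assumed numbers, wrapping the control gate into the recesses multiplies the facing area by 2.5 and the coupling ratio by the same factor, which is the qualitative benefit claimed for the dual-gate geometry.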
A stack of heat generating integrated circuit chips may be provided with intervening cooling integrated circuit chips. The cooling integrated circuit chips may include microchannels for the flow of a cooling fluid. The cooling fluid may be pumped using integrated electroosmotic pumps. Removal of cooling fluid gases may be accomplished using integrated re-combiners in some embodiments.
1. A method comprising: forming a stack including at least two cooling integrated circuit chips sandwiching a heat generating integrated circuit chip, said cooling integrated circuit chips including microchannels for the circulation of a cooling fluid; and securing a second heat generating integrated circuit chip on one of said cooling chips.
2. The method of claim 1 including forming electroosmotic pumps in said integrated circuit cooling chips.
3. The method of claim 1 including forming recombiners integrated in said cooling integrated circuit chips.
4. The method of claim 1 including sealing the edges of said stack except for ports to access said microchannels.
5. The method of claim 4 including providing a fluid inlet reservoir and a fluid outlet reservoir in communication with said microchannels.
6. The method of claim 5 including forming said reservoirs in a package including said stack.
7. The method of claim 6 including isolating said inlet and outlet reservoirs in said package.
8. The method of claim 7 including coupling said inlet and outlet reservoirs exteriorly of said package.
9. The method of claim 1 including providing electrical connections between said cooling integrated circuit chips and said heat generating integrated circuit chips.
10. The method of claim 9 including using vias to provide said electrical connections.
11. A packaged integrated circuit structure comprising: a pair of integrated circuit chips; a cooling integrated circuit chip between said pair of integrated circuit chips, said cooling integrated circuit chip including microchannels for the circulation of a cooling fluid; and a package containing said integrated circuit chips.
12. The structure of claim 11 including a first trench for containing a fluid so as to communicate from the exterior of said cooling integrated circuit chip with said channels.
13. The structure of claim 12 including a second trench isolated from said first trench and abutting said cooling integrated circuit chip in said package.
14. The structure of claim 13 wherein said second trench to contain fluid and to fluidically communicate with said microchannels.
15. The structure of claim 14 including ports to communicate with said first trench and said second trench from the exterior of said package.
16. The structure of claim 11 including an integrated electroosmotic pump in said integrated circuit cooling chip.
17. The structure of claim 11 including integrated recombiners in said cooling integrated circuit chip.
18. The structure of claim 11 wherein the edges of said heat generating integrated circuit chips are sealed.
19. The structure of claim 15 wherein said ports are connected exteriorly of said package.
20. The structure of claim 11 including electrical vias coupling said integrated circuit chips.
21. The structure of claim 11 including a controller, electroosmotic pumps, and temperature sensors within said integrated circuit chips to selectively operate said electroosmotic pumps to cool particular regions of said heat generating integrated circuit chips.
22. A packaged integrated circuit structure comprising: a stack including a pair of integrated circuit chips and a cooling integrated circuit chip between said pair of integrated circuit chips, said cooling integrated circuit chip including microchannels for the circulation of a cooling fluid; a package receiving said stack, said package having formed therein an inlet fluid reservoir and an outlet fluid reservoir to communicate with said microchannels; and a path to recycle fluid from said outlet fluid reservoir to said inlet fluid reservoir.
23. The structure of claim 22 including a second cooling integrated circuit chip on one of said integrated circuit chips.
24. The structure of claim 22 including a path on the exterior of said package.
25. The structure of claim 22 wherein the edges of said integrated circuit chips are sealed.
26. The structure of claim 22 wherein said stack is in contact with said fluid reservoirs.
27. The structure of claim 26 wherein said microchannels communicate with the edges of said cooling integrated circuit chip.
28. The structure of claim 22 including electroosmotic pumps in said cooling integrated circuit chip.
29. The structure of claim 28 including a re-combiner coupled to each of said electroosmotic pumps.
30. The structure of claim 27 wherein said cooling electroosmotic pumps may be selectively operated to provide localized cooling.
31. The structure of claim 22 including a plurality of temperature sensors to enable temperature controlled cooling.
ELECTROOSMOTIC PUMPS USING POROUS FRITS FOR COOLING INTEGRATED CIRCUIT STACKS
Background
This invention relates generally to cooling stacks of integrated circuits. Stacking of multiple integrated circuit chips may improve integrated circuit functionality, while at the same time reducing space requirements. As transistor dimensions shrink, the stacking of heat producing integrated circuits will increase heat dissipation problems. Conventional integrated circuit technologies may not be able to adequately remove the amount of heat generated from stacking a series of heat producing chips. Thus, there is a need for better ways of cooling stacks of integrated circuits.
Brief Description of the Drawings
Figure 1 is a schematic depiction of the operation of an electroosmotic pump in accordance with one embodiment of the present invention;
Figure 2 is an enlarged cross-sectional view of one embodiment of the present invention at an early stage of manufacture;
Figure 3 is an enlarged cross-sectional view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 4 is an enlarged cross-sectional view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 5 is an enlarged cross-sectional view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 6 is an enlarged cross-sectional view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 7 is an enlarged cross-sectional view taken along the lines 7-7 in Figure 8 at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 8 is a top plan view of the embodiment shown in Figure 7 in accordance with one embodiment of the present invention;
Figure 9 is an enlarged cross-sectional view of a completed structure in accordance with one embodiment of the present
invention;
Figure 10 is a depiction of a re-combiner at an early stage of manufacture;
Figure 11 is an enlarged cross-sectional view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 12 is an enlarged top plan view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 13 is a cross-sectional view taken generally along the line 13-13 in Figure 12 in accordance with one embodiment of the present invention;
Figure 14 is an enlarged cross-sectional view at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 15 is a top plan view of the embodiment shown in Figure 14 at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 16 is a cross-sectional view taken generally along the line 16-16 in Figure 15 in accordance with one embodiment of the present invention;
Figure 17 is a cross-sectional view corresponding to Figure 16 at a subsequent stage of manufacture in accordance with one embodiment of the present invention;
Figure 17A is a side-elevational view of a re-combiner in accordance with one embodiment of the present invention;
Figure 18 is a cross-sectional view of a system in accordance with one embodiment of the present invention;
Figure 19 is a schematic view of a packaged system in accordance with one embodiment of the present invention;
Figure 20 is a cross-sectional view of a packaged system in accordance with another embodiment of the present invention;
Figure 21 is a cross-sectional view of a packaged system in accordance with another embodiment of the present invention;
Figure 22 is a schematic view of a cooling system in accordance with another embodiment of the present invention;
Figure 23 is a schematic view of still another embodiment of the present invention;
Figure 24 is a schematic view of still another embodiment of the present
invention;
Figure 25 is a schematic view of still another embodiment of the present invention;
Figure 26 is a schematic view of still another embodiment of the present invention;
Figure 27 is a schematic view of still another embodiment of the present invention;
Figure 28 is an enlarged, cross-sectional view through one embodiment of the present invention taken generally along the line 28-28 in Figure 29; and
Figure 29 is a cross-sectional view taken generally along the line 29-29 in Figure 28 in accordance with one embodiment of the present invention.
Detailed Description
Referring to Figure 1, an electroosmotic pump 28 fabricated in silicon is capable of pumping a fluid, such as a cooling fluid, through a frit 18. The frit 18 may be coupled on opposed ends to electrodes 29 that generate an electric field that results in the transport of a liquid through the frit 18. This process is known as the electroosmotic effect. The liquid may be, for example, water, and the frit may be composed of silicon dioxide in one embodiment. In this case, hydroxyl groups on the wall of the frit deprotonate, resulting in an excess of mobile protons adjacent the wall. The hydrogen ions move in response to the electric field applied by the electrodes 29 in the direction of the arrows A. The non-charged water molecules also move in response to the applied electric field because of drag forces that exist between the ions and the water molecules. As a result, a pumping effect may be achieved without any moving parts. In addition, the structure may be fabricated in silicon at extremely small sizes, making such devices applicable as pumps for cooling integrated circuits. In accordance with one embodiment of the present invention, the frit 18 may be made of an open and connected cell dielectric thin film having open nanopores.
By the term "nanopores," it is intended to refer to films having pores on the order of 10 to 1000 nanometers. In one embodiment, the open cell porosity may be introduced using the sol-gel process. In this embodiment, the open cell porosity may be introduced by burning out the porogen phase. However, any process that forms a dielectric film having interconnected or open pores on the order of 10 to 1000 nanometers may be suitable in some embodiments of the present invention. For example, suitable materials may be formed using organosilicate resins, chemically induced phase separation, or sol-gel processing, to mention a few examples. Commercially available sources of such products are available from a large number of manufacturers who provide those films for extremely low dielectric constant dielectric film semiconductor applications. In one embodiment, an open cell xerogel can be fabricated with 20 nanometer open pore geometries that increase maximum pumping pressure by a few orders of magnitude. The xerogel may be formed with a less polar solvent, such as ethanol, to avoid any issues of water surface tension attacking the xerogel. Also, the pump may be primed with a gradual mix of hexamethyldisilazane (HMDS), ethanol, and water to reduce the surface tension forces. Once the pump is in operation with water, there may be no net forces on the pump sidewalls due to surface tension. Referring to Figures 2-9, the fabrication of an integrated electroosmotic pump 28 using a nanoporous open cell dielectric frit 18 begins by patterning and etching to define an electroosmotic trench. Referring to Figure 2, a thin dielectric layer 16 may be grown over the trench in one embodiment. Alternatively, a thin etch or polish-stop layer 16, such as a silicon nitride, may be formed by chemical vapor deposition. Other techniques may also be used to form the thin dielectric layer 16. The nanoporous dielectric layer 18 may then be formed, for example, by spin-on deposition.
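The statement that 20 nanometer pores can raise the maximum pumping pressure by a few orders of magnitude follows from the standard electroosmotic result for a cylindrical pore, where the zero-flow pressure that balances electroosmosis against Poiseuille back-flow scales as ΔPmax = 8·ε·ζ·V/a². A hedged sketch of that scaling is below; the fluid properties are textbook water/silica values, and the pore radii and applied voltage are assumptions for illustration, not figures from this text:

```python
# Zero-flow (maximum) electroosmotic pressure for a cylindrical pore:
#   dP_max = 8 * eps * zeta * V / a**2
# Water/silica properties are textbook values; pore radii and voltage are
# illustrative assumptions, not values from the disclosure.

EPS = 80 * 8.854e-12  # F/m, permittivity of water
ZETA = -0.05          # V, typical silica/water zeta potential
V = 50.0              # V, assumed voltage applied across the frit

def max_pressure_pa(pore_radius_m):
    """Maximum pumping pressure (Pa) for one cylindrical pore of radius a."""
    return abs(8 * EPS * ZETA * V / pore_radius_m**2)

coarse = max_pressure_pa(1e-6)   # 1 um pore radius (micron-scale frit)
fine = max_pressure_pa(10e-9)    # 10 nm pore radius (20 nm diameter xerogel)
print(f"{coarse:.0f} Pa -> {fine:.0f} Pa ({fine / coarse:.0f}x)")
```

Shrinking the pore radius from 1 micron to 10 nanometers raises the attainable pressure by a factor of 10^4, consistent with the "few orders of magnitude" statement; this simple model ignores electric double-layer overlap, which becomes significant at pore sizes of tens of nanometers.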
In one embodiment, the dielectric layer 18 may be in the form of a sol-gel. The deposited dielectric layer 18 may be allowed to cure. Then, referring to Figure 3, the structure of Figure 2 may be polished or etched back to the stop layer 16. As a result, a nanoporous dielectric frit 18 may be defined within the layer 16, filling the substrate trench. Referring next to Figure 4, openings 24 may be defined in a resist layer 22 in one embodiment of the present invention. The openings 24 may be effective to enable electrical connections to be formed to the ends of the frit 18. Thus, the openings 24 may be formed down to a deposited oxide layer 20 that may encapsulate the underlying frit 18. In some embodiments, the deposited oxide layer 20 may not be needed. The resist 22 is patterned as shown in Figure 4, the exposed areas are etched, and the patterned resist is then used as a mask to form the trenches 26 alongside the nanoporous dielectric layer 18, as shown in Figure 5. Once the trenches 26 have been formed, a metal 29 may be deposited on top of the wafer. In one embodiment, sputtering can be used to deposit the metal. The metal 29 can be removed by etching or lift-off techniques in such a manner as to leave metal only at the bottom of the trenches 26, as shown in Figure 6. The metal 29 is advantageously made as thin as possible to avoid occluding liquid access to the exposed edge regions of the frit 18, which will ultimately act as the entrance and exit openings to the pump 28. The metal 29 may be thick enough, however, to assure adequate current flow without damage to the electrodes. Additionally, it is advantageous if the metal 29 also is deposited along the edges of the frit to a thickness which does not block the pore openings. This assures a uniform electric field along the entire depth of the frit.
Referring to Figure 7, a chemical vapor deposition material 34 may be formed over the frit 18 and may be patterned with photoresist and etched, as indicated at 32, to provide for the formation of the microchannels 38 shown in Figure 8. The microchannels 38 act as conduits to convey liquid to and from the rest of the pump 41. Also, electrical interconnections 36 may be fabricated by depositing metal (for example by sputtering), and removing the metal in selected areas (for example by lithographic patterning and etching) across the wafer to enable electrical current to be supplied to the electrodes 29. This current sets up the electric field that is used to draw the fluid through the pump 28.

Referring to Figure 9, the fluid may pass through the microchannels 38 and enter the frit 18 by passing over the first electrode 29. The fluid is drawn through the frit 18 by the electric field and the disassociation process described previously. As a result, the fluid, which may be water, is pumped through the pump 28.

Referring now to Figures 10 through 17, one embodiment of a fabrication technique for making an integrated re-combiner is illustrated. Initially, a semiconductor substrate 60, such as a silicon wafer, may have a trench 62 formed therein by patterning and etching techniques, for example. Thereafter, a catalyst material 64, such as platinum or lead, is sputter deposited as shown in Figure 10. The catalyst material 64 is polished off the top of the wafer substrate 60 so that only the portion 66 remains, as shown in Figure 11. A resist may be spun on and patterned to form microchannels 68a and 68b, shown in Figures 12 and 13. The microchannels 68a and 68b may be etched to the depth of the top of the catalyst material 66, and the resist used for the etching may be cleaned off. Then a resist 70 may be spun on and ashed to clear the top of the wafer substrate 60, as shown in Figure 14.
A barrier, such as Ti/TiN, and copper 72 may be sputtered on top of the wafer substrate 60. A resist lift-off may be used to remove the copper from the top of the catalyst material 66 and the microchannels 68a and 68b, as shown in Figure 17. A porous Teflon layer (not shown) may be deposited over the wafer surface and either etched back or polished so that the Teflon covers the catalyst material 66 while leaving the copper 72 exposed. The Teflon layer protects the catalyst material 66 if re-combined gas turns into water.

A pair of identical substrates 60, processed as described above, may then be combined in face-to-face abutment to form a re-combiner 30, as shown in Figure 17A. The substrates 60 may be joined by copper-to-copper bonding where there is no trench 16 or channel 68. Other bonding techniques, such as eutectic or direct bonding, may also be used to join the two wafers together. The trenches 16 and channels 68 may be aligned to form a passage for cooling fluid circulation over the catalyst material 66. The re-combiner 30 may be used to reduce the buildup of gas in the cooling fluid pumped by the pump 28. Exposure of the gases to the catalytic material 66 results in gas recombination. The re-combiner 30 may be made deep enough to avoid being covered with water formed from recombined gas.

Referring to Figure 18, a stack 110 may include alternating integrated circuits 112 and cooling chips 124. In particular, starting from the bottom, the integrated circuit 112a is coupled by surface mount connections 118 to a structure 114 such as a printed circuit board. Over the integrated circuit 112a is a cooling chip 124 which may include microchannels 122 for the circulation of cooling fluid. Above the cooling chip 124 is another integrated circuit 112b, followed by another cooling chip 124 and another integrated circuit 112c under another cooling chip 124.
The exact number of alternating layers is subject to considerable variability. Each integrated circuit 112 may be coupled to an overlying cooling chip 124 using a variety of bonding techniques in the wafer bonding layer 120. The wafer bonding layer 120 may be a copper or oxide wafer bonding layer in some embodiments of the present invention. Eutectic bonding may also be employed. Each integrated circuit 112 may have a specialized function, and all the integrated circuits 112 may have the same function in one embodiment. Each integrated circuit 112 may include an active layer 116a and a bulk semiconductor substrate 116b.

Electrical connections may be made between the integrated circuits 112. For example, the electrical via 126a may couple the integrated circuit 112a and the integrated circuit 112c. The via 126b may couple the integrated circuits 112c and 112b. The electrical via 126c may couple the integrated circuit 112b with the integrated circuit 112a.

In some cases, some integrated electronics may be included on the cooling chips 124. For example, an integrated temperature sensor such as a thermistor may be formed on the circuit, as well as other control elements. The microchannels 122 may be formed by techniques described previously in connection with the formation of the microchannels 68 and the trenches 16. Basically, the microchannels 122 circulate cooling fluid through the cooling chips 124 for cooling the proximate integrated circuits 112. The number and placement of these cooling channels 122, as well as their orientation, are subject to considerable variability.

Referring to Figure 19, the stack 110 from Figure 18 may be entirely contained within a package 138 mounted on a support structure 118. External to the package 138 is a pump and re-combiner unit 130. In one embodiment, the re-combiner can be made as described previously and may be formed on an integrated circuit.
Lines 136 and 134 couple the pump and re-combiner 130 to the stack 110 and a radiator 132 for heat dissipation. As shown in Figure 19, the lines 134 and 136 may be tubing such as plastic or metal tubes.

Referring to Figure 20, in accordance with another embodiment, the pump may be integrated within the stack as well. In this case, the stack 110 may be coupled to a bonding layer 120, such as a copper bonding layer. The bonding layer 120 may be coupled to a glass layer 140 used to insulate the upper portion of the stack from the overlying structure 142. The structure 142 may include electroosmotic pumps 28 formed therein in order to supply the cooling fluid to the microchannels 122. The copper heat sink 146 may be located over the pumps 28. The copper heat sink 146 may work to provide for heat dissipation through a thinned heat sink 132. Fluid flow from the pumps 28 may be conveyed by vertical channels formed as vias through the structure 142 to communicate with the underlying microchannels 122.

Turning next to Figure 21, in this case the stack 110 is contained entirely within the package 138, together with the pump/re-combiner 130a. Again, the pump/re-combiner 130a may be formed using the techniques illustrated in Figures 1-17A and may be an integrated circuit coupled by channels 136 and 150 to the microchannels 122 within the stack 110. A fluid pipe 136 goes from the stack 110 to the pump/re-combiner 130a. Another pipe 135 leads from the pump/re-combiner 130a to the radiator 132. Still another pipe 134 leads from the radiator back to the stack 110. The fluid circulates in a loop from the pump 130a to the stack 110 to the radiator 132 to dissipate heat. Also, having the heat sink 132 separated from the stack 110 may achieve a greater difference in temperature between the heat sink 132 and the stack 110, resulting in more heat dissipation. In one embodiment, the layer 148 may be a series of build-up layers formed in silicon.
Referring to Figure 22, the flow of fluid through the channels 122 may be subject to considerable variability. For example, each of the channels 122 may receive a fluid input 134 and may pass a fluid output 132 back to a pump or re-combiner. Thus, in such case, the fluid flow through each cooling chip 124 may be substantially parallel.

Referring to Figure 23, the fluid flow through the cooling chips 124 may be arranged in a serial fashion where the flow proceeds from one cooling chip 124 to another through connecting elements 135. While the connecting elements 135 are shown as being external to the stack 110, they may also be internal, formed as vias connecting the channels in one chip 124 to the channels 122 in a lower chip 124, in one embodiment of the present invention.

Referring to Figure 24, in accordance with one embodiment of the present invention, a series of channels 122a through 122d in one cooling chip 124 may be arranged over a series of channels 122e through 122h in another chip 124, in turn arranged over a series of channels 122i through 122l in still another chip 124.

Referring to Figure 25, each of the sets of channels 122 in any given chip 124, such as the channels 122a through 122d, may be arranged to provide for serial flow as indicated. The serial flow may be simply formed by channels formed within the chip 124 itself. Alternatively, the flow through any given layer, such as the layer including the channels 122a through 122d, may be parallel as suggested in Figure 26. Thus, the flow within any given chip 124 may be serial or parallel, and the flow from chip 124 to chip 124 may be serial or parallel in some embodiments of the present invention.

Referring to Figure 27, in accordance with one embodiment of the present invention, a controller 154 may be integrated into one of the cooling chips 124.
The controller 154 electrically communicates with temperature sensors 152 contained in each cooling chip 124. The temperature sensors 152 sense the local temperature and indicate whether cooling is needed. When cooling is needed, the flow of fluid may be provided. For example, fluid flow may be provided through one layer and not another layer in one embodiment of the present invention. In the case where only one integrated circuit 112 is in need of cooling, the cooling flow may be controlled to pass the cooling fluid through the cooling chip 124 associated with the hot integrated circuit 112.

This temperature responsive cooling control may be provided in a number of ways. One way to do so is to provide a number of electroosmotic pumps 28, each associated with one or more cooling channels. Those electroosmotic pumps may be either operated or not operated based on signals from the controller 154. Thus, relatively fine control of how much cooling is provided, and where that cooling is provided, may be facilitated in some embodiments of the present invention.

In accordance with some embodiments of the present invention, the stack 110 may be effectively edge sealed so that the stack 110 may be partially immersed in a liquid. However, because of the hermetic sealing of the edge regions of the stack 110, the liquid may only enter the stack 110 through ports which communicate with the microchannels 122 formed in the cooling chips 124. For example, referring to Figure 28, a package 156 may have a first trench 154 and a second trench 160 which are isolated from one another. Interior edges of the trenches 154 and 160 are defined by the stack 110, which is inserted into the package 156. The trenches 154 and 160 may communicate with ports 158 and 162 which allow fluid to be added or exhausted from the package exterior. The edges of the stack 110 are in communication with the fluid filled trench 154.
Fluid from the fluid filled trench 154 may enter the stack 110 and may leave through the fluid filled trench 160. Fluid may be recirculated by tubing 168 which connects the ports 162 and 158. Referring to Figure 29, the fluid filled trench 154 may fluidically communicate with one or more microchannels 122, which in turn communicate with one or more electroosmotic pumps 28 and re-combiners 30. In this way, fluid may be pumped by the electroosmotic pump 28 for selective cooling of hot areas of the multichip stack 110. Upper and lower covers 164 and 166 may be included on the package in one embodiment of the present invention.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention.
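The temperature-responsive control described in connection with Figure 27 amounts to a simple selection policy: read one sensor per cooling chip and run only the pumps serving chips that are too hot. The sketch below is a hypothetical software illustration of that policy, not the actual hardware; the function name, the threshold value and the list-of-readings interface are all assumptions.

```python
# Hypothetical sketch of the temperature-responsive cooling control: the
# controller 154 reads one temperature sensor 152 per cooling chip 124 and
# enables only the electroosmotic pumps for chips above a trip point.
# THRESHOLD_C and all names here are illustrative assumptions.

THRESHOLD_C = 85.0  # assumed trip point, degrees Celsius

def select_active_pumps(sensor_readings):
    """Return indices of cooling chips whose associated pumps should run.

    sensor_readings: per-chip temperatures in degrees Celsius.
    """
    return [chip for chip, temp in enumerate(sensor_readings)
            if temp >= THRESHOLD_C]

readings = [72.0, 91.5, 88.0, 60.2]
print(select_active_pumps(readings))  # chips 1 and 2 need cooling: [1, 2]
```

Because each pump is associated with one or more cooling channels, enabling only the selected pumps routes fluid through the hot layers while leaving the others idle.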
A transfer request bus and transfer request bus node are described which are suitable for use in a data transfer controller processing multiple concurrent transfer requests despite the collisions which result when conflicting transfer requests occur. Transfer requests are passed from an upstream transfer request node to a downstream transfer request node and then to a transfer request controller with queue. At each node a local transfer request can also be inserted to be passed on to the transfer controller queue. Collisions at each transfer request node are resolved using a token passing scheme wherein a transfer request node possessing the token allows a local request to be inserted in preference to the upstream request.
What is claimed is:

1. A method for scheduling service requests from a plurality of nodes, each capable of generating a service request, said method comprising the steps of:
disposing the plurality of nodes in a chain having an upstream most node and a downstream most node, said downstream most node connected to an application device capable of servicing the service requests;
sequentially passing a token among the plurality of nodes from the upstream most node to the downstream most node following the chain, said token passing from the downstream most node to the upstream most node in a loop;
determining at each node whether a service request is received from a next upstream node;
determining at each node whether that node generates a service request;
determining at each node whether that node holds the token;
passing a service request received at one of the plurality of nodes from a next upstream node to a next downstream node if that node does not generate a service request, the downstream most node passing the service request to the application device;
generating at the application device a service request acknowledgment upon receipt and acceptance of a service request by the application device, the service request acknowledgment indicating the node generating the accepted service request;
passing a service request received at one of the plurality of nodes from the next upstream node to the next downstream node if that node generates a service request and that node does not hold the token;
passing a service request generated by a node to the next downstream node if that node generates a service request and does not receive a service request from the next upstream node;
passing a service request generated by a node to the next downstream node if that node generates a service request and that node holds the token;
passing a service request acknowledgment received at one of the plurality of nodes from a next downstream node to said next upstream node, said downstream most node receiving said service request acknowledgment from the application device and said upstream most node discarding a received service request acknowledgment.

2. The method of claim 1, further comprising:
at each node determining if a received service request acknowledgment indicates that node;
at each node if a received service request acknowledgment indicates that node changing status at that node.

3. The method of claim 1, further comprising:
generating at the application device a service request completion signal upon completion of a service request by the application device, the service request completion signal indicating the node generating the accepted service request;
passing a service request completion signal received at one of the plurality of nodes from a next downstream node to said next upstream node, said downstream most node receiving said service request completion signal from the application device and said upstream most node discarding a received service request completion signal.

4. The method of claim 3, further comprising:
at each node determining if a received service request completion signal indicates that node;
at each node if a received service request completion signal indicates that node changing status at that node.

5. The method of claim 1, wherein:
said service requests are data transfer requests for transfer of data; and
transferring data under control of said application device in response to receipt of a data transfer request.

6. A data processing apparatus comprising:
an application device capable of servicing requested data processing operations in response to corresponding service requests and generating a service request acknowledgment indicating a requesting node upon receipt and acceptance of a service request;
a plurality of nodes disposed in a chain having an upstream most node and a downstream most node, each of said plurality of nodes having
an operation unit capable of generating service requests,
a token input for receiving a token from a next upstream node in said chain, said token input of said upstream most node receiving said token from said downstream most node,
an upstream service request input for receiving a service request from a next upstream node in said chain, said upstream most node not receiving any signal on said upstream service request input,
a local service request input for receiving a service request from said operation unit,
a downstream service request acknowledgment input for receiving a service request acknowledgment from a next downstream node in said chain, said downstream most node receiving said service request acknowledgment from said application device,
a token output for supplying said token to a next downstream node in said chain, said downstream most node supplying said token to said token input of said upstream most node,
a downstream service request output for supplying a service request to a next downstream node in said chain, said downstream most node supplying said service request to said application device,
an upstream service request acknowledgment output for supplying a service request acknowledgment to a next upstream node, said upstream most node not supplying said service request acknowledgment to any node,
a control block connected to said token input, said token output, said upstream service request input, said local service request input and said downstream service request output, said control block operative to
pass said token from said token input to said token output,
pass a service request received at said upstream service request input to said downstream service request output if that node does not generate a local service request, the downstream most node passing the service request to the application device,
pass a service request received at said upstream service request input to said downstream service request output if that node generates a local service request and that node does not hold the token,
pass a local service request to said downstream service request output if that node generates a local service request and does not receive a service request from said upstream service request input,
pass a local service request to said downstream service request output if that node generates a local service request and that node holds the token, and
pass a service request acknowledgment received at said downstream service request acknowledgment input to said upstream service request acknowledgment output.

7. The data processing system of claim 6, wherein:
said control block is further operative to
determine if a received service request acknowledgment indicates that node, and
change status at that node if a received service request acknowledgment indicates that node.

8. The data processing system of claim 6, wherein:
said application device generates a service request completion signal upon completion of a service request by the application device, the service request completion signal indicating the node generating the accepted service request;
each of said plurality of nodes includes
a downstream service completion signal input for receiving a service request completion signal from a next downstream node, said downstream most node receiving said service request completion signal from said application device,
an upstream service completion signal output for supplying a service request completion signal to a next upstream node, said upstream most node not supplying said service request completion signal to any node,
said control block further operative to
pass a service request completion signal received at said downstream service completion signal input to said upstream service completion signal output.

9. The data processing system of claim 8, wherein:
said control block is further operative to
determine if a received service request completion signal indicates that node, and
change status at that node if a received service request completion signal indicates that node.

10. The data processing system of claim 6, further comprising:
a system memory connected to said application device;
wherein said operation unit of each node is capable of generating data transfer service requests; and
wherein said application device is capable of transferring data with said system memory in response to data transfer service requests.
This application claims priority under 35 U.S.C. §119(e)(1) of Provisional Application No. 60/173,763, filed Dec. 30, 1999.

TECHNICAL FIELD OF THE INVENTION

The technical field of this invention is digital device functional blocks, which relates generally to the area of microprocessor design and relates more specifically to the area of digital signal processor devices. In particular, this invention relates to distributed service request busses such as data transfer request busses.

BACKGROUND OF THE INVENTION

The present invention deals with the data transfer bus connecting various memory port nodes as applied to the transfer controller with hub and ports, which is the subject of U.S. Pat. No. 6,496,740, claiming priority from U.K. Patent Application Number 9909196.9, filed Apr. 10, 1999. The transfer controller with hub and ports is a significant basic improvement in data transfer techniques in complex digital systems and provides many useful features, one of which is the internal memory port which allows connection of a virtually unlimited number of processor/memory nodes to a centralized transfer controller. The centralized transfer controller must be able to transfer data from node to node with performance relatively independent of how near or remote a node might be from the transfer controller itself. To clarify the problem solved by the present invention, it is helpful to review the characteristics, architecture, and functional building blocks of the transfer controller with hub and ports.

The system problem addressed by this invention is that of sending service transaction requests from many sources. The many sources may be on a single silicon chip. The transaction requests are sent to a common central resource such as a conventional transfer controller. In the preferred embodiment, the transfer controller with hub and ports is the subject of the above named patent application.
The service requests are contained in transaction request packets composed of words, each of which may be many bits wide.

The conventional approach would be to provide dedicated buses from each potential requester to the controller. This construction has several disadvantages. It is inherently complex and requires costly hardware because the transaction requests must be serviced in parallel. The more potential requesters, the more complex such a system must be.

Non-parallel transaction processing is an alternative. This requires a centralized arbiter to determine the order of servicing on service request collisions. This alternative must also force each non-serviced source to re-submit requests until acknowledged and handled. With either parallel or non-parallel transaction processing, the transaction processor would require extensive modifications for each new design adding or removing requesters. This results in poor re-usability of chip module designs, making poor use of the scarce resource of design engineers. Additionally, requesters distant from the centralized transaction processor would have longer buses. This requires extra design attention or hardware to ensure that signal paths would not be slow.

These basic limitations to conventional data transfer techniques led to the initial development of the transfer controller with hub and ports. This transfer controller is a unique mechanism that consolidates the functions of a transfer controller and other data movement engines in a digital signal processor system (for example, cache controllers) into a single module.

Consolidation of such functions has both advantages and disadvantages.
The most important advantage of consolidation is that it will, in general, save hardware, since multiple instantiations of the same type of address generation hardware will not have to be implemented.

On a higher level, it is also advantageous to consolidate address generation since it inherently makes the design simpler to modify from a memory-map point of view. For example, if a peripheral is added or removed from the system, a consolidated module will be the only portion of the design requiring change. In a distributed address system (a multi-channel transfer controller, for example), all instances of the controller channels would change, as would the digital signal processor memory controllers.

Fundamental disadvantages of the consolidated model, however, are its inherent bottlenecking, resulting from conflicting multiple requests, and its challenge to higher clock rates. Additionally, there is in general an added complexity associated with moving to a consolidated address model, simply because the single module is larger than any of the individual parts it replaces.

The transfer controller with hub and ports, to which this invention relates, is a highly parallel and highly pipelined memory transaction processor. This transfer controller with hub and ports serves as a backplane to which many peripheral and/or memory ports may be attached.

Systems which contain a central mechanism for processing multiple transfer requests from multiple transfer request nodes face an immediate challenge: how are conflicting transfers, i.e. transfer collisions, to be arbitrated?

In networking applications, as an example, some systems use a technique of collision detection and random backoff to provide fair access to the network. Any station can start transmitting when it sees no activity on the network. However, in the unarbitrated state, it is possible for multiple stations to start transmitting simultaneously. Stations do not negotiate for ownership of the network.
Instead, stations check for the conflicting condition by receiving back what was transmitted and checking to see if it has been corrupted (indicating a collision with another station). If this happens, all stations that started transmission simultaneously will detect the collision and abort their transmission. These stations then wait a random amount of time before attempting to start transmitting again. As each station will pick a random delay, each station eventually gets to transmit its data. Over time this system can provide fair access to all stations.

Other networking systems use a technique of passing a token between the stations. A station can start transmitting only if it has the token. When it has finished, it passes the token to the next station, which can either take it and transmit data, or pass the token on again if it is not ready to transmit. This system is very fair, but is somewhat more complex and costly to implement.

A centralized data transfer controller handling multiple simultaneous data transfer requests must be designed to manage the independent data transfer requests in a manner which resolves these collision incidents unequivocally, and any such design faces obvious compromises.

In DSP processors there is typically a DMA mechanism to do explicit data moves from one memory space to another in the address map. There are also typically multiple requestors seeking DMA transfers. In a uni-processor there are multiple requestors such as the CPU, the memory system (L2), an autonomous DMA controller (XDMA) and external bus mastering peripherals (host port interface devices, HPI). Moreover, in a multi-processor configuration on a single chip, it would be desirable to seamlessly share the DMA mechanism between the multiple processors. So the DMA request mechanism needs to be scalable to accommodate different numbers of request nodes.
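The collision-detect and random-backoff scheme described above can be illustrated with a small slotted-time simulation. This is purely illustrative: the slot model, station count, and backoff range are assumptions, not parameters from the source, and real networks use more refined policies (e.g. exponential backoff).

```python
import random

# Illustrative simulation of collision detection with random backoff.
# Stations that transmit in the same slot detect a collision and each
# wait a random number of slots before retrying; a sole transmitter
# succeeds. Eventually every station gets to transmit.

def simulate(num_stations=4, max_backoff=8, seed=1):
    rng = random.Random(seed)
    ready_at = [0] * num_stations        # slot at which each station may try
    done = [False] * num_stations
    slot = 0
    order = []                           # order in which stations succeed
    while not all(done):
        contenders = [s for s in range(num_stations)
                      if not done[s] and ready_at[s] <= slot]
        if len(contenders) == 1:         # sole transmitter: success
            done[contenders[0]] = True
            order.append(contenders[0])
        else:                            # collision (or idle): back off randomly
            for s in contenders:
                ready_at[s] = slot + 1 + rng.randrange(max_backoff)
        slot += 1
    return order

print(simulate())  # some permutation of [0, 1, 2, 3]
```

The random delays break the symmetry between colliding stations, which is exactly the fairness-over-time property the text attributes to this approach.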
Also, it is desired to have a seamless interface to the requesting mechanism, and the details of the request transfer protocol should be hidden from the requesting node. The request interface also needs to be simple, to make it possible to integrate different kinds of request nodes.

SUMMARY OF THE INVENTION

This invention provides the solution to collision arbitration with fairness on a network of transfer request nodes. The network consists of one transfer request node per transfer requester, arranged in a transfer request bus. The transfer request bus starts at an upstream node and terminates downstream at a receiver node referred to as the request bus master input.

At each node, on a given clock cycle, only one of two possible transfer requests can be transmitted. First, the previous upstream node can transmit a transfer request to the present node, which the present node retransmits downstream. Secondly, the requester attached to the present node itself can transmit a request to the next downstream node. Arbitration of which is to occur is done by a token passing scheme.

A token signal is active at only one node on the transfer request bus. This token is passed in a downstream direction around the transfer request nodes of the bus on each clock cycle. Thus one and only one transfer request node holds the token at any given time. The token is passed from the extreme downstream request node to the extreme upstream request node to form a token loop.

Arbitration of requests takes place as follows. If the present node is not ready to insert a transfer request from its transfer requester, then any upstream request is retransmitted by the present node. This happens independent of whether the present node has the token. If the present node is ready to insert a request, it can do so only under certain conditions. If there is no request from an upstream node, then the present node may transmit its request downstream regardless of whether it has the token.
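The per-node arbitration just described, including the collision case where the token decides the winner, reduces to a small decision function. The sketch below is an illustrative software model of that decision (the real mechanism is hardware); the function and signal names are assumptions.

```python
# Minimal sketch of one transfer request node's per-cycle arbitration:
# forward either the upstream request or the local request, and assert a
# stall upstream only when the token lets the local request win a collision.

def arbitrate(upstream_req, local_req, has_token):
    """Return (forwarded_request, stall_upstream) for one TR node cycle.

    upstream_req / local_req: the request payload, or None if absent.
    """
    if local_req is None:        # nothing local: pass any upstream request on
        return upstream_req, False
    if upstream_req is None:     # no collision: local goes out, token or not
        return local_req, False
    if has_token:                # collision, token held: local wins and the
        return local_req, True   # upstream request is stalled, not aborted
    return upstream_req, False   # collision, no token: upstream wins

# The two collision cases:
print(arbitrate("up", "local", has_token=True))   # ('local', True)
print(arbitrate("up", "local", has_token=False))  # ('up', False)
```

Because a stalled upstream request is held rather than dropped, it proceeds as soon as the token moves on, which is what guarantees that no requests are aborted.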
If the present node receives a request from the immediate upstream node, then its action depends upon whether it holds the token. If the present node does not hold the token, then it must retransmit the request signal from the upstream node. If the present node holds the token, then it can transmit its own request. In this case the present node sends a stall signal to the next upstream node, stalling its request. No requests are aborted. Any previously stalled upstream requests may proceed as soon as the token passes from the present node.The solution to the above problem involves integrating all the DMA requesting nodes in a ring topology. Also each DMA requestor in the chain instances a Transfer Request (TR) node. The TR node is the controller which handles all the transfer protocol and buffering. The bus which is comprised of a chain of all these transfer nodes is referred to as Transfer Request (TR) bus.The Transfer Request (TR) Bus is a pipelined bus with TR nodes prioritizing and forwarding either their local request or the incoming upstream request. In case of both upstream and local request, the priority is based on a token passing scheme that allows a local request to be passed onto the TR bus only if the TR node has the token; otherwise the TR node will pass along the upstream request. In case where there is no upstream request, the TR node can pass the local request. The TR node will test for collisions on each subsequent local transfer request and follow the same protocol as above if a collision is detected. A pipelined stall will cause upstream requests to be held while the local transfer request is injected into the stream. FIG. 1 (Transfer Request Bus) is a block diagram of the transfer request bus.BRIEF DESCRIPTION OF THE DRAWINGSThese and other aspects of this invention are illustrated in the drawings, in which:FIG. 1 illustrates a block diagram of the basic principal features of a transfer controller with hub and ports architecture;FIG. 
2 illustrates the multi-processor machine with transfer controller with hub and ports architecture and associated functional units;

FIG. 3 illustrates the functional block diagram of the transfer request data bus;

FIG. 4 illustrates a detailed block diagram of the transfer request node;

FIG. 5 illustrates a waveform timing diagram for a local/upstream request with downstream stall (no token at the local transfer request node); and

FIG. 6 illustrates a waveform timing diagram for a local/upstream request with downstream stall (active token present at the local transfer request node).

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The transfer controller with hub and ports architecture is optimized for efficient passage of data throughout a digital signal processor chip. FIG. 1 illustrates a block diagram of the principal features of the transfer controller with hub and ports. It consists of a single hub 100 and multiple ports 111 through 115. At the heart of the hub is the hub control unit 109, which acts upon request and status information to direct the overall actions of the transfer controller.

The transfer controller with hub and ports functions in conjunction with, first, a transfer request bus having a set of nodes 117 which bring in transfer request packets at input 103. These transfer request bus nodes (TR nodes) individually receive transfer request packets from transfer requesters 116, which are processor-memory nodes or other on-chip functions which send and receive data.

Secondly, the transfer controller uses an additional bus, the data transfer bus having a set of nodes 118, to read or write the actual data at the requester nodes 116.
The data transfer bus carries commands, write data and read data from a special internal memory port 115 and returns read data to the transfer controller hub via the data router 150 function at inputs 104.

The transfer controller has, at its front-end portion, a request queue controller 101 (also referred to as the queue manager in this invention) receiving transfer requests in the form of transfer request packets at its input 103. The queue manager prioritizes, stores, and dispatches these as required.

The queue manager connects within the transfer controller hub unit 100 to the channel request registers 120, which receive the data transfer request packets and process them. In this process, it first prioritizes them and assigns them to one of the N channel request registers 120, each of which represents a priority level.

If no channel is available for direct processing of a transfer request packet, the packet is stored in the queue manager memory (usually a RAM) 102 and assigned at a later time when a channel becomes available. The channel registers interface with the source 130 and destination 140 control pipelines, which effectively are address calculation units for source (read) and destination (write) operations.

Outputs from these pipelines are broadcast to M ports through the transfer controller ports I/O subsystem 110, which includes a set of hub interface units that drive the M possible external port units (four such external ports are shown in FIG. 1 as 111 through 114). The external port units (also referred to as application units) are clocked either at the main processor clock frequency or at a lower (or higher) external device clock frequency. If a port operates at its own frequency, synchronization to the core clock is required.

As an example of read-write operations at the ports, consider a read from external port node 112 followed by a write to external port node 114.
First the source pipeline addresses port 112 for a read. The data is returned to the transfer controller hub through the data router unit 150. On a later cycle the destination control pipeline addresses port 114 and writes the data at port 114. External ports as described here do not initiate transfer requests but merely participate in reads and writes requested elsewhere on the chip.

Read and write operations involving the processor-memory nodes (transfer requesters) 116 are initiated as transfer request packets on the transfer request bus 117. The queue manager 101 processes these as described above, and on a later cycle a source pipeline output (read command/address) is generated and passed at the internal memory port to the data transfer bus 118 in the form of a read. This command proceeds from one node to the next in pipeline fashion on the data transfer bus. When the addressed processor node is reached, the read request causes the processor-memory node to place the read data on the bus for return to the data router 150.

On a later cycle, a destination pipeline output passes the corresponding write command and data to the internal memory port and on to the data transfer bus for writing at the addressed processor node.

The channel parameter registers 105 and port parameter registers 106 hold all the necessary parametric data, as well as status information, for the transfer controller hub pipelines to process the given transfer. Both pipelines share some of the stored information; other portions relate specifically to one pipeline or the other.

The transfer controller with hub and ports introduced several new ideas supplanting the previous transfer controller technology. First, it is uniformly pipelined. In the previous transfer controller designs, the pipeline was heavily coupled to the external memory type supported by the device.
In the preferred embodiment, the transfer controller with hub and ports contains multiple external ports, all of which look identical to the hub. Thus peripherals and memory may be freely interchanged without affecting the transfer controller with hub and ports. Secondly, the transfer controller with hub and ports executes transfers concurrently. That is, up to N transfers may occur in parallel on the multiple ports of the device, where N is the number of channels in the transfer controller with hub and ports core. Each channel in the core is functionally just a set of registers. These registers track the current source and destination addresses, the word counts and other parameters for the transfer. Each channel is identical, and thus the number of channels supported is highly scalable. Thirdly, the transfer controller with hub and ports includes a mechanism for queuing transfers in a dedicated queue RAM.

FIG. 2 illustrates, from a higher level, an overview of a multiprocessor integrated circuit employing the transfer controller with hub and ports of this invention. There are four main functional blocks. The transfer controller with hub and ports 220 and the ports, including external port interface units 230 to 233 and internal memory port 260, are the first two main functional blocks. Though four external port interface units 230, 231, 232 and 233 are illustrated, this is an example only and more or fewer could be employed. The other two main functional blocks are the transfer request bus 245 and the data transfer bus (DTB) 255. These are closely associated functional units that are not a part of the transfer controller with hub and ports 220. Transfer request bus 245 is coupled to plural internal memory port nodes 270, 271 and 272. Though three internal port nodes 270, 271 and 272 are illustrated, this is an example only and more or fewer could be employed.
Each of these internal memory port nodes preferably includes an independently programmable data processor, which may be a digital signal processor, and corresponding cache memory or other local memory. The internal construction of these internal memory port nodes 270, 271 and 272 is not important for this invention. For the purpose of this invention it is sufficient that each of the internal memory port nodes 270, 271 and 272 can submit transfer requests via transfer request bus 245 and has memory that can be a source or destination for data. Transfer request bus 245 prioritizes these packet transfer requests. Transfers originating from or destined for internal memory port nodes 270, 271 or 272 are coupled to transfer controller with hub and ports 220 via data transfer bus 255 and internal memory port master 260. FIG. 2 highlights the possible connection of data transfer bus 255 to multiple internal memory port nodes 270, 271 and 272 and the possible connection of multiple transfer request nodes to transfer request bus 245.

FIG. 3 illustrates the transfer request bus at the major block level. The processor-cache internal memory port nodes of FIG. 2 are shown as requester nodes 270, 271 and 272 of FIG. 3. Additional requester nodes 313 through 319 are also shown in FIG. 3. Upstream request signals 320, 322, 325 and 326, local request signals 334 and 335, stall signals 330 and 337, and token signals 323, 327 and 329 are identified in FIG. 3 and will now be described.

Transfer Requests

A transfer request (e.g. 320, 322, 325, 334, or 335) consists of one or more n-bit-word transfer request packets. These transfer request packets are always originated and propagated back to back on the TR bus. In other words, a local request 334 can stall and preempt an upstream request 325 only when the first upstream packet arrives.
After the first packet has gone through a TR node, the local request can be injected only at the end of the upstream packet transfer.

Transfer Request Node

The TR node, in its simplest form, multiplexes between dispatching one of the local or upstream requests and stalling the other. The frequency and scaling requirements of the architecture require the stall signal to be pipelined from one TR node to another. This requires the TR nodes to have local storage so that upstream requests arriving during stall propagation are not lost.

The stall to the local requester is also pipelined, requiring local storage for these requests as well. A collision, in which a node has both an upstream request and a local request, causes stalls. On a collision, if the TR node does not have the token, it will pass the upstream request and stall the local request. If the TR node has the token, it will pass the local request and stall the upstream request. The upstream stall ripples up until it hits a TR node with no upstream request at that node.

Token Passing Scheme

To guarantee that a local request is never starved of access to the TR bus, a token is passed downstream from node to node to give priority to the next local request of the token-holding node over the incoming upstream request. When a node receives the token, it can stall and buffer the incoming upstream request and pass its local request to the downstream node. The token passing protocol is detailed below for all possible operating scenarios.

Operating Scenario 1: No Local Request, Yes/No Upstream Request, Token In

If a TR node 302 has no local request pending or arriving in the same clock as the token arrives, then the token (see active token 323) is passed on to the next downstream node 301 in the very next clock.

Operating Scenario 2: Yes Local Request, No Upstream Request, Token In

Assume a TR node 302 has a local request pending or arriving in the same clock as the actual token 323 arrives, and there is no upstream request 343.
Then the token is passed on to the next downstream node 301 in the same cycle as the first transfer request packet of the local request. Note that if there is a downstream stall 331 coming back, then the token is held at the TR node until the stall goes away and the transfer of the first local transfer request packet can be initiated.

Operating Scenario 3: Yes Local Request, First Transfer Request Packet of Upstream Request, Token In

Assume a TR node 302 has a local request 342 pending or arriving in the same clock as the actual token 323 arrives, and the first transfer request packet of upstream request 343 also arrives in the same clock. Then the token is passed on to the next downstream node 301 in the same cycle as the first transfer request packet of local request 342, and the upstream request 343 is stalled.

Operating Scenario 4: Yes Local Request, Second Transfer Request Packet of Upstream Request, Token In

Assume a TR node 302 has a local request 342 pending or arriving in the same clock as the actual token 323 arrives, and the second transfer request packet of the upstream request 343 arrives in the same clock. Then the token is held until the upstream request passes through, and is then passed on to the next downstream node 301 in the same cycle as the first transfer request packet of local request 342.

To summarize, the transfer request node implements the operations illustrated in Table 1.

                          TABLE 1
  Inputs                         Outputs
  Upstream  Local                Downstream  Upstream  Local
  Request   Request   Token      Request     Stall     Stall
  No        No        -          None        No        No
  Yes       No        -          Upstream    No        No
                                 Request
  Yes       Yes       Absent     Upstream    No        Yes
                                 Request
  Yes       Yes       Present    Local       Yes       No
                                 Request
  No        Yes       -          Local       No        No
                                 Request

Transfer Request Node Detailed Diagram

Refer to FIG.
4 for the detailed diagram of the transfer request bus node. The implementation shown is the heart of this invention. Before describing how the elements of the transfer request bus node accomplish the desired behavior of Table 1 and the four operating scenarios described above, it should be noted that there are two additional buses not shown in FIG. 3.

Request Acknowledgment/Completion

The first of the two additional buses runs upstream and parallel to the TR bus and is the requester acknowledge bus Qack, shown in FIG. 4 as 413. This bus sends the requester ID of those transfer requests (mapped to the priority bits of the request and the requester ID of the unit submitting the request) which have been accepted by the transfer controller for servicing. This allows the local node to increment its counter of reserved space, so it may issue more transfer requests. The Qack bus is simply passed on to the local node so that it may decide, based on the counter value, priority information, and requester ID information, what operations will proceed next. In the preferred embodiment, the format of Qack is: Qack: <Valid Bits><Requestor_ID><Requestor_priority>

The second additional bus also runs upstream and parallel to the TR bus and is referred to as the request completed bus, Qcomp (see 414 of FIG. 4). This request completed bus sends the report code (as specified in the TR parameters) with a valid bit on completion of the request by the I/O subsystem. The Qcomp bus is simply passed on to the local node so it may test the information contained and take the appropriate action. The report word portion of Qcomp can be encoded to carry relevant information about a transfer request completion. In the preferred embodiment, the format of Qcomp is: Qcomp: <Valid><Requestor_ID><Report word>

Acknowledgement of completion of the transfer request may be used in control of local processor functions.
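The Qack and Qcomp formats above can be sketched as bit-packed words. This is an illustrative model only: the description fixes the field order but not the field widths, so the widths below (a 1-bit valid flag, a 4-bit requester ID, and an 8-bit report word) are assumptions.

```python
# Illustrative packing of the Qcomp format <Valid><Requestor_ID><Report word>.
# Field widths are assumptions for this sketch; the description specifies
# only the order of the fields, not their sizes.
VALID_SHIFT, ID_SHIFT = 12, 8

def pack_qcomp(valid, requestor_id, report_word):
    """Pack a Qcomp word: 1-bit valid, 4-bit requester ID, 8-bit report word."""
    assert 0 <= requestor_id < 16 and 0 <= report_word < 256
    return ((valid & 1) << VALID_SHIFT) | (requestor_id << ID_SHIFT) | report_word

def unpack_qcomp(word):
    """Recover (valid, requestor_id, report_word) from a packed Qcomp word."""
    return (word >> VALID_SHIFT) & 1, (word >> ID_SHIFT) & 0xF, word & 0xFF
```

A local node would compare the unpacked requester ID against its own to decide whether a completion report applies to one of its outstanding requests.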
The report word may indicate any exceptions, special conditions or the like.

TR Protocol

The basic TR protocol involves sending requests and responding to stalls without losing any of the data. The basic mechanism of the local node interface to the TR node (to the TR bus) is to set Local Request 406 'high' whenever data is sent, and hold the same data if a stall is received on the next cycle. If there is no stall, then Local Request remains 'high' and the next data is sent to the TR node, until the entire transfer request has been sent; Local Request is then set 'low'.

Some requirements of the local node interface are: (1) Local Request 406 must be 'high' when data is being sent to the TR node; (2) once a request (local or upstream) is initiated, the entire transfer request data (two 68-bit data words) must be sent to the TR node in successive cycles (disregarding stalls); (3) when a node receives a stall, the data sent last cycle must be resent that cycle as well; (4) there is no guarantee about when a stall may come, so it must be handled whether it occurs before, between, or after the two 68-bit words are transferred; (5) there are no restrictions on how many transfer requests can be sent successively, although they may be stalled.

TR Node Control Logic

The heart of the TR node control is the finite state machine which accepts the upstream token input 404, the upstream request input 402, and the downstream stall input 410. Each of these signals is registered: the upstream token input in register 431, the upstream request input in register 432 and the stall input in register 438.
The finite state machine control block 400 keeps track of the number of each type of input in its counters and generates the control signals for the multiplexers and registers of the TR node datapath.

TR Node Datapath

The datapath in the TR node is primarily devoted to multiplexing and holding the incoming upstream transfer request packets 405 and local transfer request packets 401, and also holding outgoing downstream transfer request packets 411 in case of a downstream stall.

The transfer request packets are 68-bit wide data words. Register 433 registers the incoming local request packet 401 and drives it through the output multiplexer 423 as downstream data 411 to the downstream node. In case of a stall, register 433 recirculates and holds the local request packet 401 which has arrived. Similarly, register 434 keeps track of the upstream transfer request packets 405. Register 437 holds and recirculates the outgoing transfer request packets 411 in case of a downstream stall. The other paths simply involve registering and forwarding the Qack (register 435) and Qcomp (register 436) buses. The downstream Qack input is labeled 413 and the upstream Qack output is labeled 415. The downstream Qcomp input is labeled 414 and the upstream Qcomp output is labeled 416.

FIG. 5 illustrates the waveform diagram showing a simple request with a stall and no token present at the local TR node. During time interval 500 both a local transfer request packet 401 and an upstream transfer request packet 405 are present, but a downstream stall input 410 has also been received.

During time interval 501 the downstream stall input is registered in the finite state machine block (400 in FIG. 4) and is output as an active upstream stall output 403. Also during time interval 501 an active local stall output 407 is generated. The downstream transfer request packet output 411 will hold and recirculate Data N as shown in FIG. 5.
The local transfer request packet input 401 with Data L2 and the upstream transfer request packet input 405 with Data U2 will hold their data until the downstream transfer request packet output 411 completes processing of the respective inputs. Note that recirculation of the local input Data L2 and upstream Data U2 takes place in registers 433 and 434 of FIG. 4, respectively, and recirculation of the output downstream Data N takes place in register 437.

With no active token present at this node, the local stall output 407 persists until all upstream requests are cleared. The upstream stall output 403, however, goes inactive in time interval 502, allowing the upstream requests to be completed. During time intervals 502 and 503 the upstream transfer request packets 405 Data U1 and Data U2 are cleared and passed on as downstream transfer request packets 411.

During time interval 503 no upstream request is present, and the local stall 407 becomes inactive at the beginning of time interval 504.

During time intervals 504 and 505, the local stall output 407 being inactive, the local requests Data L1 and Data L2 are passed downstream.

FIG. 6 illustrates the waveform diagram showing a simple request with a stall but with an active token present at the node. During time interval 600 both a local transfer request packet 401 and an upstream transfer request packet 405 are present, but a downstream stall input 410 has also been received.

With an active upstream token input 404 present at this node, the local stall output 407 persists only through time interval 601.

During time interval 602 the local transfer request packet Data L1 receives priority, is processed, and appears as an output downstream transfer request packet 411.
The upstream transfer request packet Data U1 is recirculated in register 434 until the local transfer request packet has been processed.

During time interval 603 the processing of the local request packet completes, with the downstream transfer request packet output 411 being Data L2.

During time intervals 604 and 605 the processing of the upstream transfer request packets Data U1 and Data U2 resumes, producing the downstream transfer request packet outputs Data U1 and Data U2, respectively.

This invention has been described in conjunction with the preferred embodiment, in which the requests are for data transfer. Those skilled in the art will realize that this is not the only type of request that can be serviced by this invention. This invention can be used to connect and prioritize any data processing function that can be requested by plural requesters and serviced by a central application unit.
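The arbitration behavior summarized in Table 1 can be expressed as a small decision function. This is a behavioral sketch only, not the hardware implementation; the function name and the string labels are illustrative.

```python
def tr_node_arbitrate(upstream_request, local_request, has_token):
    """Return (downstream_request, upstream_stall, local_stall) per Table 1.

    downstream_request is None, "upstream", or "local"; the token only
    matters on a collision (both requests present in the same cycle).
    """
    if not upstream_request and not local_request:
        return None, False, False
    if upstream_request and not local_request:
        return "upstream", False, False
    if local_request and not upstream_request:
        return "local", False, False
    # Collision: the token decides which request proceeds and which stalls.
    if has_token:
        return "local", True, False    # stall the upstream request
    return "upstream", False, True     # stall the local request
```

For example, on a collision with the token present, the node injects its local request and stalls the upstream one, matching Operating Scenario 3.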
A semiconductor device may include a first semiconductor die. A passivation layer supports the first semiconductor die. The passivation layer may include a first via having a barrier layer and a first redistribution layer (RDL) conductive interconnect coupled to the first via through the barrier layer. The first via may couple the first semiconductor die to the first RDL conductive interconnect.
CLAIMS

WHAT IS CLAIMED IS:

1. A semiconductor device, comprising:
a first semiconductor die; and
a passivation layer supporting the first semiconductor die, the passivation layer comprising a first via having a barrier layer and a first redistribution layer (RDL) conductive interconnect coupled to the first via through the barrier layer, the first via coupling the first semiconductor die to the first RDL conductive interconnect.

2. The semiconductor device of claim 1, further comprising:
a second semiconductor die; and
a second via having the barrier layer and coupled to the first RDL conductive interconnect through the barrier layer, the second via coupling the second semiconductor die to the first RDL conductive interconnect.

3. The semiconductor device of claim 1, further comprising:
a second via coupled to the first RDL conductive interconnect through a second barrier layer; and
a second RDL conductive interconnect directly coupled to the second via.

4. The semiconductor device of claim 3, further comprising a package interconnect layer directly coupled to the second RDL conductive interconnect.

5. The semiconductor device of claim 1, incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer.

6. A method of manufacturing a semiconductor device, comprising:
coating a first organic passivation layer on a plurality of die and molding compound;
lithographically fabricating a plurality of via openings;
depositing a first barrier layer and a first seed layer within the via openings;
filling the via openings with a first conductive material; and
planarizing the first organic passivation layer and the first conductive material.

7.
The method of claim 6, further comprising:
coating a second organic passivation layer on the planarized first organic passivation layer and the first conductive material;
lithographically fabricating a plurality of conductive pad openings and trace trenches within the second organic passivation layer;
depositing a second barrier layer and a second seed layer within the conductive pad openings and the trace trenches;
filling the conductive pad openings and the trace trenches with a second conductive material; and
planarizing the second passivation layer and the second conductive material.

8. The method of claim 7, further comprising creating additional vias, additional conductive pads and additional conductive traces with a semi-additive process.

9. The method of claim 8, further comprising fabricating a package interconnect layer coupled to the additional conductive pads.

10. The method of claim 6, further comprising incorporating the semiconductor device into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer.

11. A semiconductor device, comprising:
a first semiconductor die; and
a passivation layer supporting the first semiconductor die, the passivation layer comprising a first via having a barrier layer, a second via having the barrier layer and a means for interconnecting the first via and the second via through the barrier layer, the first via coupling the first semiconductor die to the interconnecting means.

12. The semiconductor device of claim 11, further comprising a second semiconductor die coupled to the first interconnecting means by the second via.

13. The semiconductor device of claim 11, further comprising:
a package conductive interconnect; and
means for directly interconnecting the package conductive interconnect and the second via.

14.
The semiconductor device of claim 11, further comprising additional vias and means for directly interconnecting the additional vias to each other but not through the barrier layer.

15. The semiconductor device of claim 11, incorporated into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer.

16. A method of manufacturing a semiconductor device, comprising:
a step for coating a first organic passivation layer on a plurality of die and molding compound;
a step for lithographically fabricating a plurality of via openings;
a step for depositing a first barrier layer and a first seed layer within the via openings;
a step for filling the via openings with a first conductive material; and
a step for planarizing the first organic passivation layer and the first conductive material.

17. The method of claim 16, further comprising:
a step for coating a second organic passivation layer on the planarized first organic passivation layer and the first conductive material;
a step for lithographically fabricating a plurality of conductive pad openings and trace trenches in the second organic passivation layer;
a step for depositing a second barrier layer and a second seed layer within the conductive pad openings and the trace trenches;
a step for filling the conductive pad openings and the trace trenches with a second conductive material; and
a step for planarizing the second passivation layer and the second conductive material.

18. The method of claim 17, further comprising a step for creating additional vias, additional conductive pads and additional conductive traces with a semi-additive process.

19. The method of claim 18, further comprising a step for fabricating a package interconnect layer coupled to the additional conductive pads and the additional conductive traces.

20.
The method of claim 17, further comprising a step for incorporating the semiconductor device into at least one of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer.
DAMASCENE RE-DISTRIBUTION LAYER (RDL) IN FAN OUT SPLIT DIE APPLICATION

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 62/106,364, entitled "DAMASCENE RE-DISTRIBUTION LAYER (RDL) IN FAN OUT SPLIT DIE APPLICATION," filed on January 22, 2015, the disclosure of which is expressly incorporated by reference herein in its entirety.

BACKGROUND

Field

[0002] Aspects of the present disclosure relate to semiconductor devices, and more particularly to a redistribution layer for fabrication of a fan out structure.

Background

[0003] The process flow for semiconductor fabrication of integrated circuits (ICs) may include front-end-of-line (FEOL), middle-of-line (MOL), and back-end-of-line (BEOL) processes. The front-end-of-line processes may include wafer preparation, isolation, well formation, gate patterning, spacer, extension and source/drain implantation, silicide formation, and dual stress liner formation. The middle-of-line process may include gate contact formation. Middle-of-line layers may include, but are not limited to, middle-of-line contacts, vias or other layers within close proximity to the semiconductor device transistors or other like active devices. The back-end-of-line processes may include a series of wafer processing steps for interconnecting the semiconductor devices created during the front-end-of-line and middle-of-line processes. Successful fabrication of modern semiconductor chip products involves an interplay between the materials and the processes employed.

[0004] An interposer is a die-mounting technology in which the interposer serves as a base upon which the semiconductor dies of a system on chip (SoC) are mounted. An interposer is an example of a fan out wafer level package structure.
The interposer may include wiring layers of conductive traces and conductive vias for routing electrical connections between the semiconductor dies (e.g., memory modules and processors) and a system board. The interposer may include a redistribution layer (RDL) that provides a connection pattern of bond pads on the active surface of a semiconductor device (e.g., a die or chip) to a redistributed connection pattern that is more suitable for connection to the system board.

[0005] Fabrication of wafer level package structures may include attachment of a semiconductor device (e.g., a die or chip) to the wafer level package structure according to a chip first attach process prior to forming the redistribution layer. The chip first attach process, however, may be problematic for split die applications, rendering the semiconductor device defective because of the formation of the redistribution layer and/or because of defects associated with the redistribution layer.

SUMMARY

[0006] A semiconductor device may include a first semiconductor die. A passivation layer supports the first semiconductor die. The passivation layer may include a first via having a barrier layer and a first redistribution layer (RDL) conductive interconnect coupled to the first via through the barrier layer. The first via may couple the first semiconductor die to the first RDL conductive interconnect.

[0007] The preceding has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure.
It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.

[0009] FIGURE 1 illustrates a perspective view of a semiconductor wafer in an aspect of the present disclosure.

[0010] FIGURE 2 illustrates a cross-sectional view of a die in accordance with an aspect of the present disclosure.

[0011] FIGURES 3A and 3B illustrate a top view and a side view of a conventional split die architecture.

[0012] FIGURE 3C is a block diagram illustrating a conventional redistribution layer.

[0013] FIGURE 3D is a block diagram illustrating a redistribution layer in accordance with an aspect of the present disclosure.

[0014] FIGURE 4 illustrates a semiconductor device according to one aspect of the present disclosure.

[0015] FIGURES 5A-5F illustrate a semiconductor device at various stages of fabrication according to one aspect of the present disclosure.

[0016] FIGURE 6 is a process flow diagram illustrating a method for fabricating a high density fan out package structure according to an aspect of the present disclosure.

[0017] FIGURE 7 is a block diagram showing an exemplary wireless communication system in which a configuration of the disclosure may be advantageously
employed.[0018] FIGURE 8 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component according to one configuration.

DETAILED DESCRIPTION

[0019] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. It will be apparent to those skilled in the art, however, that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. As described herein, the use of the term "and/or" is intended to represent an "inclusive OR", and the use of the term "or" is intended to represent an "exclusive OR".[0020] Some described implementations relate to wafer level package structures, such as interposer technology. An interposer generally serves as an intermediate layer that can be used for direct electrical interconnection between one component or substrate and a second component or substrate with the interposer positioned in between. For example, an interposer may have a pad configuration on one side that can be aligned with corresponding pads on a first component (e.g., a die), and a different pad configuration on a second side that corresponds to pads on a second component (e.g., a package substrate, system board, etc.). Interposers are widely used for integrating multiple chips on a single package.
In addition, interposer substrates can be composed of glass, quartz, organic, or other like materials and normally contain a few interconnect layers.[0021] Fabrication of wafer level package structures, such as interposers, may include the formation of a redistribution layer (RDL). The redistribution layer may enable expansion of a connection pattern of bond pads on the active surface of an active device (e.g., a die or chip) to a redistributed connection pattern that is more suitable for connection to a substrate (e.g., system board, package substrate, printed circuit board, etc.). Conventional fabrication techniques include attaching the active device prior to forming a redistribution layer according to a chip first attach process. The chip first attach process, however, assumes that no defects are associated with the redistribution layer.[0022] Furthermore, conventional fabrication techniques for forming the redistribution layer may result in a step height difference between dies for split die applications. In this arrangement, the molding compound (e.g., silica) between the split die may shrink during the fabrication process (e.g., wafer level molding), resulting in the step height difference. The step height difference leads to malformation of a subsequent conductive interconnect layer coupling the split die (e.g., active dies). In conventional processes, the conductive interconnect layer and the via are concurrently formed within a passivation layer. This passivation layer, however, may be partially absorbed into the step height difference, which affects the proper formation of the subsequent conductive interconnect layer. For example, the absorbed passivation layer may lead to a height difference in the photoresist used to define the conductive interconnect.
Unfortunately, such defects and malformed redistribution layers may lead to loss of the active dies.[0023] Various aspects of the disclosure provide techniques for fabrication of a semiconductor device such as a fan out wafer level package, for example, including a redistribution layer that enables a line/space of up to two (2) microns by two (2) microns. The process flow for semiconductor fabrication may include front-end-of-line (FEOL) processes, middle-of-line (MOL) processes, and back-end-of-line (BEOL) processes. It will be understood that the term "layer" includes film and is not to be construed as indicating a vertical or horizontal thickness unless otherwise stated. As described herein, the term "substrate" may refer to a substrate of a diced wafer or may refer to a substrate of a wafer that is not diced. Similarly, the terms chip and die may be used interchangeably unless such interchanging would tax credulity.[0024] In one aspect of the disclosure, the semiconductor device may include a redistribution layer which supports the die. The redistribution layer may include one or more vias (e.g., Vx) coupled to a conductive interconnect layer (e.g., Mx) through a barrier layer. The vias may be arranged to couple the die with the conductive interconnect layer. The vias, which in some aspects may be fabricated lithographically, may also be formed before depositing the conductive interconnect layer.[0025] FIGURE 1 illustrates a perspective view of a semiconductor wafer in an aspect of the present disclosure. A wafer 100 may be a semiconductor wafer, or may be a substrate material with one or more layers of semiconductor material on a surface of the wafer 100. When the wafer 100 is a semiconductor material, it may be grown from a seed crystal using the Czochralski process, where the seed crystal is dipped into a molten bath of semiconductor material and slowly rotated and removed from the bath.
The molten material then crystallizes onto the seed crystal in the orientation of the crystal.[0026] The wafer 100 may be a compound material, such as gallium arsenide (GaAs) or gallium nitride (GaN), a ternary material such as indium gallium arsenide (InGaAs), quaternary materials, or any material that can be a substrate material for other semiconductor materials. Although many of the materials may be crystalline in nature, polycrystalline or amorphous materials may also be used for the wafer 100.[0027] The wafer 100, or layers that are coupled to the wafer 100, may be supplied with materials that make the wafer 100 more conductive. For example, and not by way of limitation, a silicon wafer may have phosphorus or boron added to the wafer 100 to allow for electrical charge to flow in the wafer 100. These additives are referred to as dopants, and provide extra charge carriers (either electrons or holes) within the wafer 100 or portions of the wafer 100. By selecting the areas where the extra charge carriers are provided, which type of charge carriers are provided, and the amount (density) of additional charge carriers in the wafer 100, different types of electronic devices may be formed in or on the wafer 100.[0028] The wafer 100 has an orientation 102 that indicates the crystalline orientation of the wafer 100. The orientation 102 may be a flat edge of the wafer 100 as shown in FIGURE 1, or may be a notch or other indicia to illustrate the crystalline orientation of the wafer 100. The orientation 102 may indicate the Miller indices for the planes of the crystal lattice in the wafer 100.[0029] Once the wafer 100 has been processed as desired, the wafer 100 is divided up along dicing lines 104. The dicing lines 104 indicate where the wafer 100 is to be broken apart or separated into pieces.
The dicing lines 104 may define the outline of the various integrated circuits that have been fabricated on the wafer 100.[0030] Once the dicing lines 104 are defined, the wafer 100 may be sawn or otherwise separated into pieces to form die 106. Each of the die 106 may be an integrated circuit with many devices or may be a single electronic device. The physical size of the die 106, which may also be referred to as a chip or a semiconductor chip, depends at least in part on the ability to separate the wafer 100 into certain sizes, as well as the number of individual devices that the die 106 is designed to contain.[0031] Once the wafer 100 has been separated into one or more die 106, the die 106 may be mounted into packaging to allow access to the devices and/or integrated circuits fabricated on the die 106. Packaging may include single in-line packaging, dual in-line packaging, motherboard packaging, flip-chip packaging, indium dot/bump packaging, or other types of devices that provide access to the die 106. The die 106 may also be directly accessed through wire bonding, probes, or other connections without mounting the die 106 into a separate package.[0032] FIGURE 2 illustrates a cross-sectional view of a die 106 in accordance with an aspect of the present disclosure. In the die 106, there may be a substrate 200, which may be a semiconductor material and/or may act as a mechanical support for electronic devices. The substrate 200 may be a doped semiconductor substrate, which has either electrons (designated N-channel) or holes (designated P-channel) charge carriers present throughout the substrate 200. 
Subsequent doping of the substrate 200 with charge carrier ions/atoms may change the charge carrying capabilities of the substrate 200.[0033] Within a substrate 200 (e.g., a semiconductor substrate), there may be wells 202 and 204, which may be the source and/or drain of a field-effect transistor (FET), or wells 202 and/or 204 may be fin structures of a fin structured FET (FinFET). Wells 202 and/or 204 may also be other devices (e.g., a resistor, a capacitor, a diode, or other electronic devices) depending on the structure and other characteristics of the wells 202 and/or 204 and the surrounding structure of the substrate 200.[0034] The semiconductor substrate may also have a well 206 and a well 208. The well 208 may be completely within the well 206, and, in some cases, may form a bipolar junction transistor (BJT). The well 206 may also be used as an isolation well to isolate the well 208 from electric and/or magnetic fields within the die 106.[0035] Layers (e.g., 210 through 214) may be added to the die 106. The layer 210 may be, for example, an oxide or insulating layer that may isolate the wells (e.g., 202-208) from each other or from other devices on the die 106. In such cases, the layer 210 may be silicon dioxide, a polymer, a dielectric, or another electrically insulating layer. The layer 210 may also be an interconnection layer, in which case it may comprise a conductive material such as copper, tungsten, aluminum, an alloy, or other conductive or metallic materials.[0036] The layer 212 may also be a dielectric or conductive layer, depending on the desired device characteristics and/or the materials of the layers (e.g., 210 and 214). The layer 214 may be an encapsulating layer, which may protect the layers (e.g., 210 and 212), as well as the wells 202-208 and the substrate 200, from external forces. 
For example, and not by way of limitation, the layer 214 may be a layer that protects the die 106 from mechanical damage, or the layer 214 may be a layer of material that protects the die 106 from electromagnetic or radiation damage.[0037] Electronic devices designed on the die 106 may comprise many features or structural components. For example, the die 106 may be exposed to any number of methods to impart dopants into the substrate 200, the wells 202-208, and, if desired, the layers (e.g., 210-214). For example, and not by way of limitation, the die 106 may be exposed to ion implantation, deposition of dopant atoms that are driven into a crystalline lattice through a diffusion process, chemical vapor deposition, epitaxial growth, or other methods. Through selective growth, material selection, and removal of portions of the layers (e.g., 210-214), and through selective removal, material selection, and dopant concentration of the substrate 200 and the wells 202-208, many different structures and electronic devices may be formed within the scope of the present disclosure.[0038] Further, the substrate 200, the wells 202-208, and the layers (e.g., 210-214) may be selectively removed or added through various processes. Chemical wet etching, chemical mechanical planarization (CMP), plasma etching, photoresist masking, damascene processes, and other methods may create the structures and devices of the present disclosure.[0039] FIGURES 3A and 3B illustrate a top view and a side view of a conventional split die architecture. A first die 360A and a second die 360B are separated by a molding compound (MC) 370 and supported by a passivation layer 350 (e.g., an organic passivation layer). Unfortunately, base material (e.g., polymer) in the molding compound 370 shrinks during wafer level molding. This shrinkage results in the step height difference 372 between the molding compound 370 and the first die 360A and the second die 360B.
The step height difference 372 may cause the passivation layer 350 to absorb within the opening between the first die 360A and the second die 360B, resulting in irregularities of a subsequent redistribution layer formed within the passivation layer 350.[0040] FIGURE 3C is a block diagram illustrating a conventional redistribution layer 340. In the conventional redistribution layer 340, vias 304 (304A, 304B, 304C) and RDL conductive interconnects 306 are concurrently formed within the passivation layer 350 using, for example, a dual damascene process. In addition, a single barrier layer process is used to form a barrier layer 330 only on a surface of the vias 304 and the RDL conductive interconnects 306 that will face the active dies. Unfortunately, irregularities in the surface 352 of the passivation layer 350 due to the step height difference 372 of FIGURE 3B may prohibit formation of a sufficiently flat surface 308 of the RDL conductive interconnects 306.[0041] FIGURE 3D illustrates a redistribution layer 300 in accordance with an aspect of the present disclosure. Referring to FIGURE 3D, the redistribution layer 300 includes vias 310 (310A, 310B, and 310C) and RDL conductive interconnects 320 formed within a passivation layer 350. Of course, the number and arrangement of the vias and RDL conductive interconnects is merely exemplary, for ease of illustration, and not limiting. The passivation layer 350 may, for example, comprise an organic material, such as a polymer dielectric material.[0042] The vias 310 and the RDL conductive interconnects 320 may be separately formed by way of separate single damascene processes. As further described below, a first damascene process enables planarization of the vias 310 and the passivation layer 350 prior to formation of the RDL conductive interconnects 320 to overcome the step height difference 372 of FIGURE 3B.
In some aspects, the vias 310 and the RDL conductive interconnects 320 may be composed of copper or other suitable conductive material. The vias 310 include a first portion 330A of a barrier layer 330 on the sidewalls and a surface of the vias 310 that will couple to active die.[0043] In this aspect of the disclosure, the first damascene process is performed to line the first portion 330A of the barrier layer 330 on the sidewalls and the surface of the vias 310 that will couple to active die. Once openings of the vias 310 are lined, the openings may be filled with a conductive material. According to this first damascene process, the conductive material within the vias 310 and the passivation layer 350 are planarized or polished smooth to complete formation of the vias 310. In some aspects, the conductive material within the vias 310 and the passivation layer 350 may be planarized by techniques such as chemical-mechanical planarization (CMP), for example.[0044] Following completion of the vias 310, a second damascene process is performed to line a second portion 330B of the barrier layer 330 on the sidewalls and a surface of trench openings (not shown) for the RDL conductive interconnects 320 that face the active die. Once the trench openings of the RDL conductive interconnects 320 are lined, the trench openings may be filled with a conductive material. According to this second damascene process, the conductive material within the RDL conductive interconnects 320 and the passivation layer 350 are planarized or polished smooth to complete formation of the RDL conductive interconnects 320 with a sufficiently flat surface 322.
The conductive material within the RDL conductive interconnects 320 and the passivation layer 350 may also be planarized by CMP.[0045] In this arrangement, the second portion 330B of the barrier layer 330 also separates the vias 310 from the RDL conductive interconnects 320, in contrast to the direct coupling of the vias 304 and the RDL conductive interconnects 306 shown in FIGURE 3C. The barrier layer 330 may be deposited or otherwise formed by a process such as physical vapor deposition (PVD) or the like. In aspects of the disclosure, the conventional redistribution layer 340 of FIGURE 3C is combined with the redistribution layer 300 of FIGURE 3D, for example, as shown in FIGURE 4.[0046] FIGURE 4 illustrates a semiconductor device 400 in accordance with aspects of the present disclosure. The semiconductor device 400 includes a first die 460A and a second die 460B that are separated by a molding compound (MC) 470 and supported by a passivation layer 450 (e.g., an organic passivation layer). Although only two die are shown, this is merely for ease of illustration and additional die may be included in the semiconductor device. The die may be arranged and subjected to molding (see molding compound 470).[0047] The passivation layer 450 of the semiconductor device 400 may also include one or more organic passivation layers. A first set of vias 410 (e.g., 410A, 410B, 410C, 410D) may be fabricated in the passivation layer 450 and coupled to contact pads 462 (e.g., 462A, 462B, 462C, 462D) of the first die 460A and the second die 460B. The first set of vias 410 may be fabricated using a damascene process, a laser via and fill process, or other like process for via formation. The vias 410 are lined with a first portion 430A of a barrier layer 430 and filled with a conductive material. Once fabricated, the first set of vias 410 and the passivation layer 450 are planarized, for example, according to a first damascene process.
Once planarized, a second damascene process is performed to couple the first set of vias 410 to first RDL conductive interconnects (e.g., 420 and/or 422) through a second portion 430B of the barrier layer 430.[0048] In this arrangement, a die-to-die RDL conductive interconnect 420 couples the first die 460A and the second die 460B by joining the vias 410B and 410C through the second portion 430B of the barrier layer 430. In addition, first RDL conductive interconnects 422 (e.g., 422A and 422B) may couple to the vias 410A and 410D through the second portion 430B of the barrier layer 430. The die-to-die RDL conductive interconnect 420 and the first RDL conductive interconnects 422 may, in some aspects, comprise conductive traces and/or conductive pads. The conductive pads or traces may be composed of copper or other suitable conductive material.[0049] The semiconductor device 400 may further include a conventional redistribution layer 440 (e.g., 440A and 440B), for example, as shown in FIGURE 3C. In this arrangement, vias 304 (FIGURE 3C) and RDL conductive interconnects 306 (FIGURE 3C) are concurrently formed within the passivation layer 450 using, for example, a dual damascene process. In addition, a single barrier layer process is used to form a barrier layer 432 only on a surface of the conventional redistribution layer 440 that will face the active dies. The RDL conductive interconnects 306 (FIGURE 3C) and the vias 304 (FIGURE 3C) of the conventional redistribution layer 440 may be formed using a semi-additive process such as, for example, a dual damascene process.[0050] In some aspects, the semiconductor device 400 may further include a package conductive interconnect 480. The package conductive interconnect 480 may be coupled to second RDL conductive interconnects 442 (e.g., 442A, 442B, 442C, 442D). In addition, the package conductive interconnect 480 may couple to a system board, a package substrate, or other suitable carrier substrate (not shown).
The package conductive interconnect 480 may be configured according to a ball grid array (BGA) interconnect structure.[0051] FIGURES 5A-5F illustrate a semiconductor device structure 500 at various stages of fabrication in accordance with aspects of the present disclosure. For example, FIGURES 5A-5F illustrate a sequential fabrication approach for the semiconductor device 400 shown in FIGURE 4.[0052] Beginning with FIGURE 5A, a carrier substrate 502 (e.g., a semiconductor wafer) is provided. The carrier substrate 502 may be, for example, a silicon-based substrate, a glass-based substrate, or other materials such as those implemented with bulk substrates for semiconductor wafers. A pair of split die, including a first semiconductor die 560A and a second semiconductor die 560B, may be placed face down on and fixed to the carrier substrate 502 using, for example, an adhesive layer (e.g., tape). The first semiconductor die 560A and the second semiconductor die 560B may, for example, be arranged on the substrate using a pick and place (PnP) or cap place process.[0053] In FIGURE 5B, a molding compound 570 is applied to encapsulate the first semiconductor die 560A and the second semiconductor die 560B. Thereafter, the carrier substrate 502 is debonded and removed, leaving the first semiconductor die 560A and the second semiconductor die 560B encapsulated within the molding compound 570, as shown in FIGURE 5C.[0054] In FIGURE 5D, damascene processing is used to fabricate a redistribution layer of the semiconductor device. A first passivation layer 552 is coated on the surface of the first semiconductor die 560A and the second semiconductor die 560B. The first passivation layer 552 may be an organic passivation layer and may comprise a polymer dielectric. A first set of vias 510 (e.g., 510A, 510B, 510C, 510D) may be formed in the first passivation layer 552.
In some aspects, the first set of vias 510 may be formed by way of a lithographic fabrication process.[0055] In another aspect of the present disclosure, a first damascene process is performed to line a first portion 530A of the barrier layer 530 (e.g., a first barrier layer) on the sidewalls and a surface of the vias 510 that will couple to the first semiconductor die 560A or the second semiconductor die 560B. A conductive material (e.g., Cu) may be deposited using a physical vapor deposition process and an electroplating process to fill the first set of vias 510. Thereafter, a planarization process, such as CMP, is applied to the first passivation layer 552 and the first set of vias 510. In this aspect of the disclosure, the damascene process enables planarization of the first set of vias 510 and the first passivation layer 552 prior to formation of RDL conductive interconnects to overcome the step height difference 372 of FIGURE 3B.[0056] In FIGURE 5E, a damascene process may be used to form conductive pads and traces of the semiconductor device. In this arrangement, a second passivation layer 554 may be coated on the planarized surface of the first passivation layer 552 and the first set of vias 510. Trace trenches and pad openings may be formed in the second passivation layer 554 to provide additional RDL conductive interconnects using a lithographic process. A second portion 530B of the barrier layer 530 (e.g., a second barrier layer) may be deposited to line the first RDL conductive interconnects 522 (522A, 522B) and a die-to-die RDL conductive interconnect 520. The pad openings and trace trenches may then be filled with a conductive material such as copper or another suitable conductive material. The deposition may be conducted using an electroplating process (e.g., ECP). 
Thereafter, a planarization process, such as CMP, is applied to the second passivation layer 554 and to the conductive material filling the first RDL conductive interconnects 522 and the die-to-die RDL conductive interconnect 520.[0057] By forming the RDL using damascene processing, variation in height between the molding compound 570 and the first semiconductor die 560A and the second semiconductor die 560B may beneficially be reduced. In particular, planarization of the first passivation layer 552 and the first set of vias 510 avoids irregularities caused by the step height difference 372 of FIGURE 3B. In addition, the subsequent planarization of the first RDL conductive interconnects 522 and the die-to-die RDL conductive interconnect 520 enables precise formation of the conductive interconnect layers to fabricate a line/space below, for example, two (2) microns by two (2) microns.[0058] As shown in FIGURE 5F, additional vias, additional conductive pads, and additional conductive traces may be included in the semiconductor device structure 500 according to the conventional RDL layer, for example, as shown in FIGURE 3C. As shown in FIGURE 4, the semiconductor device 400 may further include a conventional redistribution layer 440 (e.g., 440A and 440B), for example, as shown in FIGURE 3C, to provide a second set of vias and second RDL conductive interconnects. In this arrangement, vias 304 (FIGURE 3C) and RDL conductive interconnects 306 (FIGURE 3C) are concurrently formed within the passivation layer 550 using, for example, a dual damascene process, as shown in FIGURE 5F. In addition, a single barrier layer process is used to form a barrier layer 532 only on a surface of the conventional redistribution layer 540 that will face the active dies.
The additional conductive traces and the additional vias of the conventional redistribution layer 540 may be formed using a semi-additive process such as, for example, a dual damascene process.[0059] A passivation layer 550 may be coated on a surface of the second passivation layer 554 and the conductive material within the pad openings and the trace trenches. Additional vias may be formed in the passivation layer 550. The vias may be formed using a damascene process, a semi-additive process, a laser via and fill process, or other like process for via formation. In one example, the vias are formed using a lithographic process. A barrier and seed layer may be deposited in the additional via openings. A photoresist (PR) may be deposited and lithographically processed. An electroplating process may be applied. The passivation layer 550 and the barrier layer of the vias may be planarized using etch processing or other planarization processes (e.g., grinding or polishing). For example, the photoresist (PR) may be removed by a PR strip process and the barrier/seed layer may be removed by a wet chemical etching process.[0060] In some aspects, an additional passivation layer may be coated on the etched vias. Additional conductive pads and trace trenches may be lithographically formed. The additional conductive pads may, in some aspects, comprise a package conductive interconnect layer for attaching a ball grid array (BGA).[0061] In some aspects, the semiconductor device structure 500 may further include a package conductive interconnect 580. The package conductive interconnect 580 may be coupled to second RDL conductive interconnects 542 (e.g., 542A, 542B, 542C, 542D). In addition, the package conductive interconnect 580 may couple to a system board, a package substrate, or other suitable carrier substrate (not shown). The package conductive interconnect 580 may be configured according to a ball grid array (BGA) interconnect structure.
[0062] It should be recognized that a semiconductor device according to aspects of the present disclosure is not limited to the number of layers shown in FIGURES 4 and 5A-5F.[0063] FIGURE 6 is a flow diagram illustrating a method 600 for manufacturing a semiconductor device according to one aspect of the disclosure. At block 602, a first organic passivation layer is coated on a plurality of die and molding compound. At block 604, via openings are lithographically fabricated. At block 606, a barrier layer and a seed layer are deposited within the via openings. At block 608, the via openings are filled with a first conductive material. At block 610, the passivation layer and the first conductive material are planarized. The passivation layer and the first conductive material may be planarized via a CMP process.[0064] In some aspects, a second organic passivation layer may be coated on the planarized passivation layer. Additional conductive pads and trace trenches may be lithographically fabricated in the second organic passivation layer. A barrier and seed layer may be deposited within the pad and trace trenches. The pad and trace trenches may, in turn, be filled with a second conductive material. The second conductive material may be composed of copper or other suitable conductive material. The second passivation layer and the second conductive material may be planarized. The second passivation layer and the second conductive material may, for example, be planarized using a CMP process.[0065] In some aspects, additional vias, pads, and traces may be formed on a surface of the planarized second passivation layer and second conductive material. The additional vias, pads, and traces may be formed using semi-additive processing. In some aspects, an interconnect layer may be coupled to the additional pads.
The interconnect layer may be used to attach a ball grid array.[0066] According to an aspect of the present disclosure, a semiconductor device including multiple die is described. In one configuration, the semiconductor device includes a passivation layer supporting the die. The passivation layer includes multiple vias having a barrier layer. The redistribution layer (RDL) further includes means for interconnecting the vias through the barrier layer, with the vias coupling the semiconductor die to the interconnecting means. The interconnecting means may be the RDL conductive interconnects 320/420/520 or the die-to-die RDL conductive interconnect 520. In another aspect, the aforementioned means may be any module or any apparatus or material configured to perform the functions recited by the aforementioned means.[0067] FIGURE 7 is a block diagram showing an exemplary wireless communication system 700 in which an aspect of the disclosure may be advantageously employed. For purposes of illustration, FIGURE 7 shows three remote units 720, 730, and 750 and two base stations 740. It will be recognized that wireless communication systems may have many more remote units and base stations. Remote units 720, 730, and 750 include IC devices 725A, 725C, and 725B that include the disclosed semiconductor device. It will be recognized that other devices may also include the semiconductor device, such as the base stations, switching devices, and network equipment. FIGURE 7 shows forward link signals 780 from the base station 740 to the remote units 720, 730, and 750 and reverse link signals 790 from the remote units 720, 730, and 750 to base stations 740.[0068] In FIGURE 7, remote unit 720 is shown as a mobile telephone, remote unit 730 is shown as a portable computer, and remote unit 750 is shown as a fixed location remote unit in a wireless local loop system.
For example, the remote units 720, 730, and 750 may be a mobile phone, a hand-held personal communication systems (PCS) unit, a portable data unit such as a personal digital assistant (PDA), a GPS enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed location data unit such as meter reading equipment, or other devices that store or retrieve data or computer instructions, or combinations thereof. Although FIGURE 7 illustrates remote units according to the aspects of the disclosure, the disclosure is not limited to these exemplary illustrated units. Aspects of the disclosure may be suitably employed in many devices, which include the disclosed IC devices.[0069] FIGURE 8 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, such as the IC devices disclosed above. A design workstation 800 includes a hard disk 802 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 800 also includes a display 804 to facilitate design of a circuit 806 or a semiconductor component 808 such as a semiconductor device. A storage medium 810 is provided for tangibly storing the design of the circuit 806 or the semiconductor component 808. The design of the circuit 806 or the semiconductor component 808 may be stored on the storage medium 810 in a file format such as GDSII or GERBER. The storage medium 810 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. Furthermore, the design workstation 800 includes a drive apparatus 812 for accepting input from or writing output to the storage medium 810.[0070] Data recorded on the storage medium 810 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography.
The data may further include logic verification data such as timing diagrams or net circuits associated with logic simulations. Providing data on the storage medium 810 facilitates the design of the circuit 806 or the semiconductor component 808 by decreasing the number of processes for designing semiconductor wafers.

[0071] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to types of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory or number of memories, or type of media upon which memory is stored.

[0072] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be an available medium that can be accessed by a computer.
By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0073] In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.

[0074] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relational terms, such as "above" and "below" are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular configurations of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification and in Appendix A.
As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein and in Appendix A may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

[0075] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0076] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein and in Appendix A.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0077] The steps of a method or algorithm described in connection with the disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0078] In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store specified program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD) and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0079] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
A phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "a step for."
The memory subsystem includes link encryption for a system memory data bus. The memory controller may provide encryption for data at rest and provide link protection. The memory controller may optionally provide link encryption. Thus, the system may provide link protection for data in transit. The memory module may include a link decryption engine that, if link encryption is used, may decrypt the link encryption, and that performs a link integrity check using a link integrity tag associated with the link protection. After the link protection is verified, the memory device may store the encrypted protected data and ECC data from the link decryption engine.
1. A memory module comprising:
a link decryption engine to receive write data from a memory controller, the write data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data, the write data having link protection, the link decryption engine to further receive a link integrity tag associated with the link protection, wherein the link decryption engine is to perform a link integrity check using the link integrity tag; and
a memory device to store the protected data and the ECC data from the link decryption engine.

2. The memory module of claim 1, wherein memory devices of the memory module each include a link decryption engine to perform the link integrity check locally at the memory device using the link integrity tag.

3. The memory module of claim 1, further comprising:
data buffers to buffer data for the memory device;
wherein the data buffers each include a link decryption engine to perform the link integrity check for a specific memory device using the link integrity tag.

4. The memory module of claim 1, further comprising:
a link decryption chip as the link decryption engine for the memory device.

5. The memory module of claim 4, wherein the memory module includes a link decryption chip for each memory device, wherein a specific link decryption chip is to perform the link integrity check for a specific memory device using the link integrity tag.

6. The memory module of claim 1, further comprising:
a registered clock driver (RCD) to receive command and address information for the memory device;
wherein the command and address information has the link protection; and
wherein the RCD includes a link decryption engine to perform a link integrity check using the link integrity tag.

7.
The memory module of claim 1, wherein the link protection comprises an implementation of Advanced Encryption Standard with Galois Message Authentication Code (AES-GMAC).

8. The memory module of claim 1, wherein the write data has link encryption, and wherein the link decryption engine is to decrypt the link encryption.

9. The memory module of claim 8, wherein the link encryption comprises an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

10. The memory module of claim 8, further comprising:
a registered clock driver (RCD) to receive command and address information for the memory device;
wherein the command and address information has the link encryption; and
wherein the RCD includes a link decryption engine to decrypt the link encryption.

11. A memory module comprising:
a memory device to store encrypted protected data and error checking and correction (ECC) data for the encrypted protected data;
a link encryption engine to receive the encrypted protected data and the ECC data as read data from the memory device, the link encryption engine to generate link protection for transfer of the read data to a memory controller, including generation of a link integrity tag associated with the link protection; and
I/O hardware to send the read data with the link protection and the link integrity tag to the memory controller.

12. The memory module of claim 11, wherein memory devices of the memory module each include a link encryption engine to generate the link protection and generate the link integrity tag locally at the memory device.

13. The memory module of claim 11, further comprising:
data buffers to buffer data for the memory device;
wherein the data buffers each include a link encryption engine to generate the link protection and generate the link integrity tag for a specific memory device.

14.
The memory module of claim 11, further comprising:
a link encryption chip as the link encryption engine for the memory device, wherein the memory module includes a link encryption chip for each memory device, wherein a specific link encryption chip is to generate the link protection and generate the link integrity tag for a specific memory device.

15. The memory module of claim 11, wherein the link protection comprises an implementation of Advanced Encryption Standard with Galois Message Authentication Code (AES-GMAC).

16. The memory module of claim 11, wherein the link encryption engine is to encrypt the read data with link encryption.

17. The memory module of claim 16, wherein the link encryption comprises an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

18. A memory controller comprising:
I/O (input/output) hardware to couple to a memory module having memory devices; and
a link encryption engine to generate link protection for transmission of write data to a memory device, including generation of a link integrity tag associated with the link protection, the write data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data;
wherein the I/O hardware is to send the write data with the link protection and the link integrity tag to the memory module; and
wherein the memory module is to perform a link integrity check using the link integrity tag and store the protected data and the ECC data in the memory device.

19.
The memory controller of claim 18, wherein the link encryption engine comprises a data link encryption engine, and further comprising:
a command and address link encryption engine to generate command and address link protection for command and address information to be sent to the memory device, including generation of a command and address link integrity tag associated with the command and address link protection;
wherein the memory module includes a registered clock driver (RCD) to receive the command and address information for the memory device with the command and address link protection, and to perform a link integrity check using the command and address link integrity tag.

20. The memory controller of claim 18, wherein the link encryption engine is to pass a new encryption key to the memory module with link encryption, wherein after passing the new encryption key, the link encryption engine is to perform link encryption using the new encryption key.

21. The memory controller of claim 18, wherein the memory controller includes the link encryption engine to encrypt the write data, wherein the link encryption comprises an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

22. A memory controller comprising:
I/O (input/output) hardware to couple to a memory module having memory devices; and
a link decryption engine to verify link protection on read data received from a memory device, the read data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data, the link decryption engine to verify the link protection provided by the memory device using a link integrity tag associated with the link protection.

23. The memory controller of claim 22, wherein the protected data comprises data at rest protected using multi-key total memory encryption (MKTME).

24.
The memory controller of claim 22, wherein the read data has link encryption, and wherein the link decryption engine is to decrypt the link encryption.
Memory Bus Integrity and Data Encryption (IDE)

TECHNICAL FIELD

The description relates generally to memory systems and, more specifically, to data bus integrity and encryption.

BACKGROUND

The importance of system security in securing the operation of computing devices continues to increase. Operating system data is typically stored in volatile memory, such as dynamic random access memory (DRAM) devices, where data is exchanged during operation. Attacks on data at rest in the memory can be prevented by encrypting the data stored in the storage device.

However, physical attacks on the link between memory and processing hardware are increasing, which can negatively impact the confidentiality, integrity, and replay protection of data stored in memory. Data-at-rest encryption can provide some protection for data in transit, but it does not provide significant replay protection.

BRIEF DESCRIPTION OF THE DRAWINGS

The following description includes a discussion of the accompanying drawings, with illustrations given by way of example of implementations. The drawings are to be understood as examples rather than limitations. As used herein, reference to one or more examples is to be understood to describe a particular feature, structure, or characteristic included in at least one implementation of the invention. Phrases such as "in one example" or "in an alternative example" appearing herein provide examples of implementations of the invention and do not necessarily all refer to the same implementation.
However, they are not necessarily mutually exclusive.

FIG. 1 is a block diagram of an example of a system in which system memory data is exchanged using link protection.
FIG. 2 is a block diagram of an example of a memory module with memory devices to exchange data with link protection.
FIG. 3 is a block diagram of an example of a system with a host and system memory that exchange data using link protection and CRC.
FIG. 4 is a block diagram of an example of a system with a host and system memory that exchange data with link protection without CRC.
FIG. 5A is a block diagram of an example of a memory system with a data link protection engine in a data buffer.
FIG. 5B is a block diagram of an example of a memory system with a data link protection engine in a memory device.
FIG. 5C is a block diagram of an example of a memory system with a data link protection engine between each data buffer and DRAM pair.
FIG. 5D is a block diagram of an example of a memory system with a data link protection engine between a data buffer and a DRAM.
FIG. 6A is a block diagram of an example of link protection where MACs are sent on separate I/O signal lines.
FIG. 6B is a block diagram of an example of link protection where a MAC is sent inline with encrypted data.
FIG. 7 is a flowchart of an example of a process for writing data to system memory with link protection.
FIG. 8 is a flowchart of an example of a process for reading data from system memory with link protection.
FIG. 9 is a block diagram of an example of a memory subsystem in which system memory link protection may be implemented.
FIG. 10 is a block diagram of an example of a computing system in which system memory link protection may be implemented.
FIG. 11 is a block diagram of an example of a mobile device that can implement system memory link protection.

The following is a description of certain details and implementations, including non-limiting descriptions of the accompanying drawings, which may depict some or all examples, as well as other potential implementations.

DETAILED DESCRIPTION

As described herein, the memory subsystem includes link encryption for the system memory data bus. In one example, the memory controller provides link protection as well as encryption for data at rest. When encryption of data at rest is provided, data in transit on the link-protected link is also encrypted. In one example, the system may provide encryption for data at rest and may provide encryption for data in transit. The use of link protection and link encryption can provide confidentiality, integrity, and replay protection without adding significant overhead to performance, power, or memory footprint.

Data-at-rest protection can be provided by the application of Multi-Key Total Memory Encryption (MKTME), where a Memory Encryption Engine (MEE) is used to provide a counter tree (Merkle tree) maintained in memory. Applications of MKTME traditionally use bits that would normally be used for ECC, which reduces ECC protection, since fewer bits are available for ECC protection.
Furthermore, using a counter tree large enough to cover all memory at server scale would require maintaining a tree with 7 or 8 levels. Both of these approaches add significant memory subsystem overhead, resulting in increased latency, reduced bandwidth, large amounts of memory set aside to maintain the replay protection tree, and reduced ECC protection.

Encryption of data in transit on the memory bus provides link-level cryptography that provides confidentiality, integrity, and replay protection for data traversing the memory data bus. For example, the memory data bus can be a double data rate (DDR) bus between a CPU (central processing unit) SOC (system on chip) and a DIMM (dual inline memory module), or a DDR bus between a GPU (graphics processing unit) and a DIMM. In one example, link encryption can take into account specific properties of the DDR bus and DIMM implementations to develop a high-throughput link with low performance impact.

The memory module may include a link decryption engine that performs link integrity checks using link integrity tags associated with link protection. In one example, when the link also includes link encryption, the link decryption engine decrypts the write data to extract the encrypted protected data and error checking and correction (ECC) data for the encrypted protected data. In one example, the link decryption engine can decrypt the link encryption and perform a link integrity check using a link integrity tag associated with the link encryption. The memory device can then store the encrypted protected data and ECC data from the link decryption engine without link encryption.

Similarly, a memory module may include a link encryption engine that applies link protection to read data intended to be provided to the memory controller. In one example, the link decryption engine and the link encryption engine are the same engine. The link encryption engine enables the memory module to return data with data link protection.
In one example, the link encryption engine enables the memory module to return data with data link encryption as well as link protection. The memory controller includes corresponding link encryption for write data and link protection/decryption for read data.

In one example, link protection can be applied without using ECC bits to store integrity information. Therefore, link protection can improve the reliability of the system. The use of link protection can complement the application of data-at-rest encryption (e.g., MKTME) to provide link integrity and replay protection.

FIG. 1 is a block diagram of an example of a system in which system memory data is exchanged using link protection. System 100 shows memory coupled to a host. Host 110 represents a host computing system. An example of host 110 may be a CPU SOC. Host 110 includes host hardware such as processor 112 and memory controller 120. The host hardware also includes hardware interconnect and driver/receiver hardware to provide interconnection between the host 110 and the memory 140. Memory 140 includes array 144, which represents a memory array to store data for host 110. The memory controller 120 controls access to the memory 140.

The host hardware supports the execution of host software on host 110. The host software may include a host OS (operating system) 114. Host OS 114 represents the software platform under which other software will execute. Host OS 114 provides control to interface with the hardware interconnect to couple to memory 140.

During execution, host OS 114 provides requests to access memory 140. The request can come directly from the host OS software, can be a request through an API (application programming interface), or can come through another mechanism for a program executing under the host OS 114 to request memory access. A memory access may include writing data to or reading data from array 144.
In response to host memory access requests, memory controller 120 maps host-based addressing for memory resources to physical address locations of memory 140.

Host 110 includes I/O (input/output) 130 to interconnect with memory 140. Memory 140 includes corresponding I/O 150. C/A (command/address) 132 represents interface hardware, including a signal line interface, to enable the memory controller 120 to send commands to the memory 140. C/A 152 represents interface hardware, including a signal line interface, for memory 140 to receive commands issued by memory controller 120.

D[0:X-1] represents interface hardware, including signal line interfaces, to enable host 110 and memory 140 to exchange data associated with commands. D[0:X-1] represents a data bus with X data (DQ) lines. C/A 132 and C/A 152 represent interfaces to the command bus. For write commands, I/O 130 will drive the data bus. For read commands, I/O 150 will drive the data bus.

In one example, memory controller 120 includes data encryption (ENCR) 122 to encrypt data at rest. In one example, data encryption 122 implements MKTME to provide data-at-rest confidentiality. System 100 supports data encryption implementations other than MKTME. Any encryption mechanism may be used in which the data provided to memory cannot be interpreted without proper decryption of the data after retrieval from memory 140. Thus, when memory controller 120 applies data encryption 122, the data content stored in memory array 144 is not in the clear and requires decryption to be properly understood or used.

In one example, memory controller 120 includes ECC 124, which includes circuitry to generate ECC data to send to memory 140 along with write data. ECC is commonly used in memory subsystems. Using ECC 124, memory controller 120 may generate ECC data to protect data.
Array 144 is shown with data 162, representing system data stored in array 144, and with ECC 164, representing ECC information corresponding to data 162. ECC 164 is generated by applying an ECC algorithm to data 162 (for example, by XOR (exclusive OR) hardware) to generate ECC codes that are used to determine whether there is a transient bit flip or memory array error, and possibly to correct the data error. ECC 164 may be applied to data 162 whether data 162 is stored in the clear or data 162 represents encrypted data. Accordingly, data encryption 122 may encrypt data, and ECC 124 may apply ECC to the encrypted data.

It should be understood that stored data 162 may be encrypted when memory controller 120 provides data encryption 122 to generate encrypted data. Link protection (PROT) 128 will provide link protection for the encrypted data. Link protection protects data in transit. After link protection has been verified, the receiver can perform processing on the verified transmitted data, such as ECC processing, calculation of a CRC, or even decryption of the data for use. If the data is encrypted, verification of link protection does not decrypt the data itself. Thus, data may be encrypted at host 110, transmitted to memory 140 for storage, and then returned to host 110, with data-at-rest encryption in place at all times.

In one example, link protection 128 includes link encryption. Link encryption may be added on top of data-at-rest encryption, where data-at-rest encryption is used. Prior to transmission the data has no link encryption; link encryption is added for transmission and removed by link decryption at the receiver, leaving the encrypted data without link encryption.

In one example, the memory controller 120 includes a CRC (cyclic redundancy check) 126 to provide CRC information for data sent to the memory 140. In one example, CRC 126 is not required when memory controller 120 applies link encryption. Alternatively, CRC 126 can be used in conjunction with link encryption.
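The detect-and-possibly-correct behavior described for ECC 164 can be illustrated with a classic Hamming(7,4) code built from XOR parity. This is an illustrative sketch only; the disclosure does not specify which ECC algorithm ECC 124 uses, and real memory-subsystem ECC operates on much wider words.

```python
def hamming74_encode(nibble):
    # Encode 4 data bits into a 7-bit codeword with XOR parity bits
    # at positions 1, 2, and 4 (1-indexed), as in classic Hamming(7,4).
    d = [(nibble >> i) & 1 for i in range(4)]
    # Codeword positions 1..7 hold: p1 p2 d0 p3 d1 d2 d3
    c = [0, 0, d[0], 0, d[1], d[2], d[3]]
    c[0] = c[2] ^ c[4] ^ c[6]   # p1 covers positions 1, 3, 5, 7
    c[1] = c[2] ^ c[5] ^ c[6]   # p2 covers positions 2, 3, 6, 7
    c[3] = c[4] ^ c[5] ^ c[6]   # p3 covers positions 4, 5, 6, 7
    return c

def hamming74_decode(codeword):
    # Recompute the parity checks; the syndrome value is the
    # 1-indexed position of a single flipped bit (0 means no error).
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1    # correct the single-bit error
    nibble = c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)
    return nibble, syndrome
```

Because the parity is pure XOR, encode and syndrome computation map directly onto the kind of XOR hardware the paragraph above mentions, and the code corrects any single-bit flip in the stored codeword.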
CRC 126 enables memory controller 120 to apply a CRC algorithm to data being sent to memory 140 to determine whether an error occurred in the data transfer between host 110 and memory 140.

In one example, memory controller 120 includes link protection 128 to provide link protection for exchanges between I/O 130 and I/O 150 (which may or may not include link encryption). Link protection may include at least link integrity messages to allow verification of transmitted data at the receiving end. Link protection may additionally include link encryption that can be decrypted at the receiving end. For writes, the memory controller 120 will provide link protection and optional link encryption, which the memory 140 can authenticate/decrypt. For reads, the memory 140 will provide link protection and optional link encryption, which the memory controller 120 can authenticate/decrypt.

In one example, link protection 128 provides command link protection to protect command and address information sent through C/A 132 to memory 140. In one example, link protection 128 provides link protection on D[0:X-1] for write data and verifies link protection on read data for the data link. In one example, link protection can encrypt command and address information to protect C/A data.

In one example, when the memory controller 120 performs link protection or link encryption or both, it will forgo performing CRC for transmission error checking. Memory controller 120 may forgo the CRC when link protection 128 is used, since link protection 128 will provide link protection, which may make CRC 126 redundant.

The memory 140 includes a controller 142, which represents a controller or control logic at the memory to receive and process commands from the host 110. The controller 142 generates internal commands or internal operations to execute commands received from the memory controller 120. Link protection (PROT) 172 represents protection of command and address information.
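The transfer-error check that CRC 126 performs can be sketched with a framed payload plus checksum. This sketch uses the stdlib CRC-32 as a stand-in; the actual polynomial and frame layout of a memory-bus write CRC differ, and the function names here are illustrative.

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    # Append a CRC-32 of the payload so the receiver can detect
    # errors introduced during the transfer.
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def check_frame(frame: bytes):
    # Split the payload from the trailing CRC and report whether
    # the recomputed CRC matches the received one.
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "little")
    return payload, zlib.crc32(payload) == received
```

Flipping any single bit of the frame in transit causes the recomputed CRC to disagree with the received value, which is the redundancy that link protection's integrity tag can make unnecessary.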
Link protection 172 represents the ability of memory 140 to perform link verification of command and address information and optionally decrypt command bus encryption. Link protection (PROT) 174 represents link protection and optional link encryption for data. Link protection 174 represents the ability of memory 140 to perform link protection verification and optionally decrypt link encryption for writes, and to provide link protection and optionally link encryption for reads.

In one example, memory 140 represents a memory module, such as a DIMM. A DIMM may include a registered clock driver (RCD) or other hardware as controls for the memory module. In one example, link protection 172 for the command bus is implemented in the RCD. In one example, link protection 174 for the data bus is implemented in each memory device on the memory module. In one example, a memory module includes a data buffer (DB) to buffer data for the various memory devices on the module. In one example, link protection 174 is implemented at the data buffer. In one example, link protection 174 represents encryption hardware as a separate component of the memory module. Further details of various embodiments are described below with respect to Figures 5A-5D.

In one example, link protection 128 may be referred to as an IDE (integrity and data encryption) or IDE engine. In one example, link protection 128 implements standards-based cryptography, such as AES (Advanced Encryption Standard). In one example, link protection 128 implements AES in counter mode (AES-CTR). In one example, link protection 128 implements AES-GCM (AES in Galois/Counter Mode). In one example, link protection 128 implements AES-GMAC (AES with Galois Message Authentication Code).
AES-GMAC refers to an authentication-only variant of GCM that enables link protection 128 to form incremental message authentication codes.

It should be appreciated that AES-GCM and AES-CTR encryption place only XOR operations in the critical path, which has a low impact on latency and overall memory subsystem performance. AES in counter mode (whether AES-CTR, AES-GCM, AES-GMAC, or some other counter-based cipher implementation) provides replay protection.

Link protection 128 may implement different forms of AES, which may include a 256-bit key length, a 128-bit key length, or some other key length. In one example, link protection 128 supports implementations of different integrity tag sizes for system 100. For example, system 100 may implement tag sizes of 32, 64, 96, or 128 bits.

Larger integrity tags will result in more latency overhead. In the case of smaller tag sizes, system 100 should refresh the data encryption keys more frequently. In one example, the tag represents a message authentication code (MAC). In one example, system 100 provides tag or MAC information using out-of-band signal lines, which refer to signal lines other than the standard memory signal lines defined by standards. In one example, system 100 provides tag or MAC information inline with the data, such as by adding additional bursts to the burst sequence, or by extending the burst length to include more data transfers per burst, where the added transfers are used to send the tag or MAC information.

In one example, system 100 provides full memory bandwidth for data transfers, as opposed to other methods that replace ECC bits with integrity data. In one example, MAC computation overhead may be reduced by computing the MAC over multiple transmission units or multiple bursts of a sequence of data transmissions.
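The idea of amortizing MAC overhead by accumulating one tag over several bursts can be illustrated with the sketch below. HMAC-SHA-256 stands in for the AES-GMAC computation the text describes, and the key, burst contents, and 32-bit truncation are illustrative assumptions.

```python
import hmac, hashlib

LINK_KEY = b"\x01" * 16  # hypothetical link key shared by host and DIMM

def mac_over_bursts(bursts) -> bytes:
    # One tag accumulated over several bursts instead of a tag per burst,
    # amortizing the tag-transmission overhead across the whole sequence.
    mac = hmac.new(LINK_KEY, digestmod=hashlib.sha256)
    for burst in bursts:
        mac.update(burst)
    return mac.digest()[:4]  # truncated to a 32-bit integrity tag

bursts = [b"burst-0", b"burst-1", b"burst-2", b"burst-3"]
tag = mac_over_bursts(bursts)

# The receiver accumulates the same bursts and compares tags.
assert hmac.compare_digest(tag, mac_over_bursts(bursts))
# A corrupted burst anywhere in the sequence yields a different tag.
assert not hmac.compare_digest(
    tag, mac_over_bursts([b"burst-0", b"burst-X", b"burst-2", b"burst-3"]))
```

With one 32-bit tag per four bursts instead of per burst, the tag transmission cost is a quarter of the per-burst scheme, matching the overhead reduction described above.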
If the MAC information replaces the bus-level CRC or parity information for link error detection, the MAC transmission overhead can be reduced or eliminated.

In one example of system 100, the SOC supports MKTME for data-at-rest protection, eliminating the need for data encryption on the IDE path through link protection 128 (e.g., from cache memory on processor 112 to memory controller 120). Eliminating IDE path data encryption can reduce the area and power used by the cryptographic engine. Data will still be integrity protected and replay protected using cryptographic algorithms such as AES-GMAC.

In one example, host 110 provides encryption for command and address information, which is distinct from link encryption and may be provided in conjunction with the link protection provided by link protection 128 for C/A 132. In an alternate example, command/address encryption could be eliminated, similar to eliminating IDE data encryption. Eliminating command and address encryption reduces the area and power used by the cryptographic engine. Commands and addresses are still protected for integrity and replay using cryptographic algorithms (e.g., AES-GMAC). It should be appreciated that removing command and address encryption may allow disclosure of address side-channel information if the address is not encrypted.

In one example, host 110 and memory 140 exchange integrity tag information (e.g., a MAC) via reserved signal lines different from C/A 132 and D[0:X-1]. In one example, dedicated signal lines allow 1 bit to be sent for every 8-bit data lane. Therefore, the MAC may need to be accumulated over multiple transfer bytes across the data bus. For example, for a MAC size of 32, 32 bits of MAC data will be sent in a data transfer of 32 bytes.
Larger tag sizes (e.g., 64, 96, or 128 bits) will need to be accumulated over more data transfers, which may result in increased latency as data transfers accumulate the tag information needed to verify or decrypt the data.

In one example, initialization vector (IV) values used by the encryption engines (e.g., AES-GMAC, AES-GCM, or other encryption) need not be explicitly transmitted over the link. For example, the host and memory can perform a handshake and agree on the number of bytes that will be part of a given MAC epoch, and internally update the IV for each MAC epoch. Not having to send the IV value reduces bandwidth overhead, because the IV does not have to be transmitted each time a MAC is generated.

It should be understood that PCIe (Peripheral Component Interconnect Express) is a transport standard that allows link protection. However, PCIe has packetized data, which is not the case for a memory system data bus, where data is exchanged in parallel with different devices as different parts of the overall data transfer. Therefore, the link protection of PCIe cannot be applied to the system data bus shown in system 100. Implementations of link protection 128 utilizing parallel processing can efficiently utilize instruction pipelines or hardware pipelines. Both AES-GCM and AES-GMAC can accept an initialization vector (IV) of arbitrary length and can be implemented in parallel. Accordingly, link protection 128 may provide these or other cryptographic implementations for parallel link protection of data across parallel devices.

In one example, host 110 represents a motherboard, which may include a BIOS (basic input/output system) 116. BIOS 116 is typically stored in non-volatile storage on the motherboard and is accessed and executed by processor 112 prior to loading host OS 114. BIOS 116 may perform various system configuration operations, including operations related to the interaction of host 110 and memory 140.
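The implicit-IV handshake described earlier, in which the host and memory agree on a byte count per MAC epoch and update the IV internally rather than transmitting it, might be sketched as follows. The derivation function, session key, and epoch size are illustrative assumptions, not a specified algorithm.

```python
import hmac, hashlib

SESSION_KEY = b"\x02" * 16  # hypothetical key agreed during the secure session
EPOCH_BYTES = 512           # bytes per MAC epoch, agreed in the handshake

def iv_for_epoch(epoch: int) -> bytes:
    # Both sides derive each epoch's IV locally from the epoch counter;
    # the IV itself is never sent over the link.
    msg = b"iv" + epoch.to_bytes(8, "big")
    return hmac.new(SESSION_KEY, msg, hashlib.sha256).digest()[:12]

class EpochTracker:
    def __init__(self):
        self.bytes_seen = 0
    def current_iv(self) -> bytes:
        return iv_for_epoch(self.bytes_seen // EPOCH_BYTES)
    def consume(self, n: int):
        self.bytes_seen += n

host, dimm = EpochTracker(), EpochTracker()
for _ in range(4):
    # Both ends observe the same traffic, so their IVs stay synchronized
    # across epoch rollovers without transmitting any IV bytes.
    assert host.current_iv() == dimm.current_iv()
    host.consume(256)
    dimm.consume(256)
assert host.current_iv() == iv_for_epoch(2)  # 1024 bytes seen -> epoch 2
```

Because both ends count the same bytes, the IV rolls over in lockstep and no IV bandwidth is consumed on the link, matching the bandwidth argument above.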
In one example, BIOS 116 triggers a security module for key exchange between memory controller 120 and memory 140.

BIOS is not the only mechanism that can be used to trigger key exchange. For example, operation of host OS 114 may trigger security operations to perform key exchange. In one example, the key exchange includes writing to an RCD or directly to a memory device to switch to a new key. In one example, system 100 may exchange keys after a preset or predetermined number of retries. In one example, system 100 may switch keys to distinguish between transmission errors and other errors. For example, after a repeated error, system 100 can switch encryption keys and try the transaction again to determine whether the error recurs. In one example, system 100 supports multi-key operation for link encryption. Link encryption using multiple keys will involve latency management and buffering to operate with different keys.

In one example, link protection 128 performs link encryption on the new cryptographic key for transmission to memory 140. After passing the new cryptographic key to memory 140, memory controller 120 may switch to using the new cryptographic key for link encryption, and memory 140 will begin using the new key upon receipt.

Figure 2 is a block diagram of an example of a memory module with memory devices that exchange data with link protection. System 200 represents a system according to an example of system 100. System 200 includes socket 210 coupled to DIMM 230. Socket 210 represents a CPU socket, which may include CPU 212 and memory controller 220. DIMM 230 includes multiple DRAM devices.

System 200 is shown as an example of a system of memory devices with a shared control or command bus (C/A (command/address) bus 244[0] for one channel and C/A bus 244[1] for the other channel) and data buses (data bus 226[0] for one channel and data bus 226[1] for the other channel).
The memory devices are denoted DRAM (dynamic random access memory) devices. Each channel has N DRAM devices: DRAMs 252[0:(N-1)] (collectively, DRAM devices 252) for one channel and DRAMs 256[0:(N-1)] (collectively, DRAM devices 256) for the other channel, where N can be any integer. In one example, N includes one or more ECC DRAM devices in addition to the data devices. In one example, the two different channels share the C/A bus 224 connection between memory controller 220 and RCD 240. In one example, different channels will have different C/A buses. The DRAM devices can be accessed individually with device-specific commands and in parallel with parallel commands.

The RCD (registered clock driver) 240 represents a controller for the DIMM (dual inline memory module) 230. In one example, RCD 240 receives information from memory controller 220 and buffers signals to the various DRAM devices. By buffering the input signal from memory controller 220, the controller only sees the load of RCD 240, which can then control the timing and signaling to the DRAM devices.

In one example, RCD 240 controls signals to DRAM devices 252 via C/A bus 244[0] and signals to DRAM devices 256 via C/A bus 244[1]. In one example, RCD 240 has separate command ports for the different channels. In one example, DIMM 230 includes data buffers to buffer the data bus signals between the DRAM devices of DIMM 230 and memory controller 220.

Data bus 226[0] provides a data bus for DRAM devices 252, which is buffered by data buffers DB 262[0:(N-1)]. Data bus 226[1] provides a data bus for DRAM devices 256, which use data buffers DB 266[0:(N-1)] for buffering. System 200 shows a one-to-one relationship between data buffers and DRAM devices.
In one example, there are fewer data buffers than DRAM devices.

C/A bus 244[0] and C/A bus 244[1] (collectively, C/A bus 244) are generally unidirectional buses used to transfer command and address information from memory controller 220 to the DRAM devices. Accordingly, C/A bus 244 may be a multidrop bus. Data bus 226[0] and data bus 226[1] (collectively, data bus 226) are conventionally bidirectional point-to-point buses.

System 200 represents an example of a system that provides link protection and, optionally, link encryption between socket 210 and DIMM 230. In one example, memory controller 220 includes link IDE (integrity and data encryption) 222, which may be referred to as an IDE engine. Link IDE 222 represents the circuitry and logic of memory controller 220 that mirrors the link protection and optional link encryption of DIMM 230, enabling the exchange of protected link data.

In one example, RCD 240 includes link IDE 242, which represents an IDE engine on the RCD for performing link verification and, optionally, decryption of link encryption for the command and address information on C/A bus 224. In one example, DIMM 230 includes link IDE 264 and link IDE 268. Link IDE 264 represents the IDE engine in DB 262, and link IDE 268 represents the IDE engine in DB 266. In one example, DIMM 230 includes link IDE 254 and link IDE 258. Link IDE 254 represents the IDE engine in DRAM devices 252, and link IDE 258 represents the IDE engine in DRAM devices 256.

Link IDE 264 and link IDE 268 may be used in place of link IDE 254 and link IDE 258, respectively. Therefore, system 200 will include an IDE engine either in a data buffer or in a memory device. Applying the IDE engine in the memory devices enables unbuffered DIMM or unbuffered memory implementations. Although system 200 could include the IDE engine in the memory device in an implementation with a data buffer, such an implementation would have significant inefficiencies in the event that link errors are found.
Typically, the memory device will only have an IDE engine if there is no DB in the system. If there are DBs in the system, the DBs, rather than the memory devices, will usually include the IDE engine for managing link protection.

In one example, LACTM (link and configuration trust module) 214 may manage the configuration of link IDE 222 of the memory controller and the link IDEs of DIMM 230. In one example, LACTM 214 represents a software trusted module. In one example, LACTM 214 represents a firmware trusted module. In one example, LACTM 214 represents a dedicated hardware engine or microcontroller as a trusted module. It will be appreciated that a single memory controller can manage multiple memory channels (e.g., DDR channels to different DIMMs). Each memory channel can have one or more DIMMs plugged into it. Socket 210 may include multiple memory controllers 220, each memory controller managing one or more memory channels.

In one example, BIOS 202 provides LACTM 214 with a list of DIMMs, including DIMM 230. LACTM 214 represents a trusted module that manages the link between memory controller 220 and the memory and manages configuration information related to the memory. A trusted module refers to secure hardware, such as an out-of-band or other secure microprocessor. The secure hardware can manage cryptographic keys or other proofs of security or tamper resistance. In one example, LACTM 214 manages keys associated with link encryption through link IDE 222 and the link IDEs on DIMM 230.

In one example, LACTM 214 manages authenticated key exchange with DIMM 230 and other DIMMs that may be connected to socket 210. LACTM 214 may establish a secure session or security mode for key exchange. In one example, LACTM 214 generates a random key and programs it into DIMM 230. In the case of multiple DIMMs, LACTM 214 can generate a random key for each DIMM and program the key into the corresponding DIMM.
LACTM 214 may configure memory controller 220 with the same key (in the case of multiple DIMMs, for each DIMM). In one example, LACTM 214 calculates an initial IV based on the unique ID (identifier) of the DB or RCD and configures the initial IV into the memory controller channel. Alternatively, the IV can be based on a value other than the ID of the DB or RCD.

In one example, once the key is updated for both memory controller 220 and DIMM 230, LACTM 214 sets a bit in memory controller 220 to trigger an in-band mechanism to switch to the new cryptographic key. The link IDEs of system 200 can then implement cryptographic operations based on the key. In one example, LACTM 214 may hand over the secure session to a runtime trusted module (not specifically shown) to enable the runtime module to periodically renew the IDE keys.

In one example, each DIMM includes a unique (per-part) certificate chain signed by the DIMM manufacturer, or the system integrator, or both. Such certificates attest to the authenticity of the device and ensure that there are no malicious interposers inside the DIMM itself. In one example, the certificates are used as part of the authenticated key exchange. In one example, DIMM 230 includes SPD (serial presence detect) 232, which represents an SPD hub. SPD 232, acting as an SPD hub, provides a center or hub for control plane communications between components of DIMM 230 and RCD 240, and provides one or more sensor functions. In one example, SPD 232 provides a root of trust for DIMM 230, which provides a certificate or other form of proof of trust. In one example, SPD 232 participates in key exchange for components that include an IDE engine.

In one example, SPD 232 performs an authenticated key exchange to establish a secure session. In one example, SPD 232 provides asymmetric cryptographic operations to establish secure sessions.
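A loose sketch of the key-distribution flow described above, with a random key generated per DIMM and mirrored into the memory controller, might look like the following. The data structures and IV derivation are hypothetical stand-ins for the secure session and hardware key registers.

```python
import secrets, hashlib, hmac

# Hypothetical sketch: a trust module generates a random key per DIMM,
# programs it into the DIMM, and configures the same key into the
# memory controller. Dicts stand in for secure sessions and registers.
dimms = ["DIMM0", "DIMM1"]
dimm_keys = {}        # keys programmed into each DIMM
controller_keys = {}  # the same keys configured per channel in the controller

for name in dimms:
    key = secrets.token_bytes(16)  # fresh random key per DIMM
    dimm_keys[name] = key
    controller_keys[name] = key

def initial_iv(unique_id: bytes) -> bytes:
    # Initial IV derived from the unique ID of the DB or RCD
    # (illustrative derivation, not a specified algorithm).
    return hashlib.sha256(b"initial-iv" + unique_id).digest()[:12]

# Once both sides hold the key, an in-band "switch" bit activates it.
switch_to_new_key = True
if switch_to_new_key:
    for name in dimms:
        assert hmac.compare_digest(controller_keys[name], dimm_keys[name])

# Distinct part IDs yield distinct initial IVs per channel.
assert initial_iv(b"RCD-serial-0001") != initial_iv(b"RCD-serial-0002")
```

In the real flow, the key crosses the link only inside the authenticated secure session the SPD hub establishes, rather than by direct assignment as above.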
A secure session may enable memory controller 220 to securely send the IDE key to SPD 232, which may then securely propagate the key across DIMM 230.

Therefore, SPD 232 may internally forward the cryptographic key to RCD 240. In one example, RCD 240 distributes one or more cryptographic keys to data buffers 262 and data buffers 266, or to DRAM devices 252 and DRAM devices 256. In one example, RCD 240, DB 262, DB 266, DRAM 252, and DRAM 256 may determine an initial IV value without external configuration. In one example, initial IVs are explicitly configured into the components of DIMM 230. In one example, once keys are updated in all cryptographic engines of DIMM 230, DIMM 230 may apply an in-band mechanism to switch to the new keys.

In one example, an IDE engine (e.g., link IDE 264, link IDE 268, link IDE 254, link IDE 258) may implement operations on individual portions of data words. Depending on the cryptographic engine, the operations performed by link IDE 222 can be divided into different parallel parts. In one example, system 200 provides an integrity solution for each data word portion (e.g., per DB or per DRAM device). In one example, system 200 can provide a cache-line solution, where a link decoder/encoder on DIMM 230 performs verification of the link protection and can generate link protection. Where link encryption is used, the cache-line solution may decrypt the entire write data word and encrypt the entire read data word for exchange with socket 210.

In one example, system 200 has a lane reserved per device to provide a MAC or integrity tag. In one example, the reserved lanes may provide additional DQS (data strobe) pairs with the DQ groups to provide parity lanes for the MAC. Using a parity or CRC signal line can reduce the bandwidth impact of the MAC transmission.
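The contrast between the per-portion integrity solution and the cache-line solution described above can be sketched as follows. HMAC stands in for the actual tag computation, and the striping of a 64-byte cache line across eight devices is an assumed layout.

```python
import hmac, hashlib

LINK_KEY = b"\x03" * 16  # hypothetical link key
N_DEVICES = 8            # assume the data word is striped across 8 devices/DBs

def tag(data: bytes) -> bytes:
    # Stand-in integrity tag (HMAC here; the text describes AES-GMAC).
    return hmac.new(LINK_KEY, data, hashlib.sha256).digest()[:4]

cache_line = bytes(range(64))  # a 64-byte cache line

# Per-portion solution: each device or DB tags only its own slice of the word.
slices = [cache_line[i::N_DEVICES] for i in range(N_DEVICES)]
per_device_tags = [tag(s) for s in slices]

# Cache-line solution: one engine on the DIMM tags the whole assembled line.
whole_line_tag = tag(cache_line)

assert len(per_device_tags) == N_DEVICES
# Each receiving engine can independently verify its own slice in parallel.
assert all(hmac.compare_digest(t, tag(s)) for t, s in zip(per_device_tags, slices))
```

The per-portion scheme lets each device or DB verify in parallel without assembling the line, at the cost of carrying one tag per device; the cache-line scheme carries one tag but requires a single engine to see the whole word.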
In an alternative example, the data bits may be sent first using all lanes (which may include additional lanes), followed by the MAC bits at the end, which may reduce transmission latency. In the example where the MAC bits are sent at the end of the data burst, both data framing and command framing will be adapted to allow MAC transmission at the end of each frame.

Figure 3 is a block diagram of an example of a system with a host and system memory that exchange data using link protection and CRC. System 300 represents an example of a system according to system 200. System 300 represents hardware components of a system with link management for link protection.

CPU socket 310 represents a socket or SOC that includes processor 312 and memory controller 320. Processor 312 represents the processing logic of system 300, such as a CPU. A similar system configuration can be applied to GPUs, for example. Memory controller 320 represents circuitry for managing access to system memory. DIMM 330 represents a memory module that includes the system memory.

For a data write processed from CPU socket 310 to DIMM 330, processor 312 performs an operation that generates a write request for data to be stored in the memory of DIMM 330. In one example, memory controller 320 executes MKTME to perform MKTME memory encryption. MKTME encryption represents an example of data-at-rest encryption. Other forms of data-at-rest encryption may be applied. MKTME may be available in system 300 and enabled for some transactions and not for others. Accordingly, memory controller 320 may selectively apply data encryption. Applying encryption to data at rest results in protected data for write transactions.

Memory controller 320 generates a memory write command and address information and prepares the write data. Preparation of the write data may include generating ECC with ECC 324.
Memory controller 320 may schedule write transactions and prepare the write data and corresponding ECC for sending to DIMM 330. It should be understood that if data encryption is applied, ECC 324 may generate the ECC calculation for the encrypted data. If data encryption is not applied, ECC 324 may generate the ECC calculation for the unencrypted data.

In one example, data IDE 326 generates link protection/encryption for memory controller 320. Data IDE 326 represents an example of an IDE engine for memory controller 320. In one example, data IDE 326 generates link IDE to cryptographically protect data in transit. In one example, data IDE 326 operates after scheduling decisions have been made; thus, transactions can be fully sequenced, including potential reordering of transactions to time data transfers on the bus. In one example, data IDE 326 represents link protection/encryption of the data bus, including the source data and its corresponding ECC (whether encrypted (i.e., protected) or unencrypted). In one example, memory controller 320 includes an IDE engine (which may be the same IDE engine as data IDE 326) to protect command and address information to be sent to DIMM 330.

In system 300, memory controller 320 executes CRC 328 to generate CRC, parity, or other bus-level reliability information for data transfers. The calculation of CRC 328 may be performed on the protected link content (the protected data and its ECC, encrypted with link encryption). The CRC or parity can be used as protection against transmission errors.

PHY 314 represents the physical layer or physical interface of CPU socket 310 to DIMM 330. In one example, PHY 314 includes hardware interface components to drive write commands and write data to DIMM 330. In one example, PHY 314 may be considered part of memory controller 320.
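The controller-side write preparation described above, with data-at-rest encryption applied first, ECC generated over the encrypted data, a link integrity tag computed over the protected content, and a bus-level CRC computed last, can be sketched as a pipeline. Every primitive here is a deliberately toy stand-in: XOR for MKTME, a single parity byte for ECC 324, and HMAC for the data IDE 326 tag.

```python
import hmac, hashlib, zlib

LINK_KEY = b"\x04" * 16   # hypothetical key for the link IDE
REST_KEY = b"\x5a" * 16   # toy stand-in for the MKTME data-at-rest key

def rest_encrypt(data: bytes) -> bytes:
    # Toy XOR "encryption" standing in for MKTME; a real system uses AES.
    keystream = (REST_KEY * (len(data) // len(REST_KEY) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

def ecc_byte(data: bytes) -> bytes:
    # Toy ECC: one XOR-parity byte (real controllers use SECDED-class codes).
    p = 0
    for b in data:
        p ^= b
    return bytes([p])

def prepare_write(plaintext: bytes):
    protected = rest_encrypt(plaintext)        # 1. data-at-rest encryption
    payload = protected + ecc_byte(protected)  # 2. ECC over the encrypted data
    mac = hmac.new(LINK_KEY, payload, hashlib.sha256)
    link_tag = mac.digest()[:4]                # 3. link integrity tag over data + ECC
    crc = zlib.crc32(payload + link_tag)       # 4. bus CRC over the protected link content
    return payload, link_tag, crc

payload, link_tag, crc = prepare_write(b"0123456789abcdef")

# DIMM side: check the CRC first, then verify the link tag, then store data + ECC.
assert zlib.crc32(payload + link_tag) == crc
expected = hmac.new(LINK_KEY, payload, hashlib.sha256).digest()[:4]
assert hmac.compare_digest(link_tag, expected)
```

The ordering matters: the ECC travels with the encrypted data, and both the tag and the CRC are computed over the already-protected content, which is the layering the text attributes to CRC 328 and data IDE 326.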
Memory controller 320 may manage PHY 314 to send commands and exchange data with the memory of system 300.

DIMM 330 includes PHY 332, which represents a hardware interface to PHY 314 of CPU socket 310. In one example, PHY 332 may be considered part of DB 350 or part of DRAM 360. In one example (where command information is link encrypted), RCD 340 includes CRC 342 and data IDE 344. CRC 342 enables RCD 340 to check the CRC information for transmission errors. Thus, RCD 340 can check the command bus parity or CRC for errors and request any retries needed based on detected errors.

DB 350 represents multiple data buffers on DIMM 330. DRAM 360 represents multiple DRAM devices on DIMM 330. In one example, DB 350 includes CRC 352 to check the parity or CRC information for the data bus. In one example, in an unbuffered DIMM, DRAM 360 may include CRC checking logic as represented by CRC 362. Similar to CRC 342 in RCD 340, CRC 352 or CRC 362 can check for transmission errors on the data bus and request any required retries.

In one example, data that has been checked and shown to be free of transmission errors is sent to the equivalent IDE on the DIMM side to decrypt the contents, to check the integrity of the received data, or to decrypt the contents and check the integrity. Data IDE 344 represents the IDE engine of RCD 340 for verifying link protection/decrypting link encryption of command and address information. Data IDE 354 represents the IDE engine of DB 350 to perform link verification/decryption. After verifying link protection or decrypting link encryption in DB 350, DB 350 may send the data and ECC information to DRAM 360, which may include protected data (data-at-rest encryption) and accompanying ECC.

In one example (where DIMM 330 does not include DB 350), DRAM 360 may include data IDE 364 as an IDE engine to perform link protection verification for write data and generate link protection for read data.
Data IDE 364 can decrypt the link encryption of the data bus for write data and generate link encryption for read data. If RCD 340 detects a cryptographic error in the command and address information, RCD 340 may make a request to memory controller 320 to retry the command transaction. If DB 350 or DRAM 360 detects a cryptographic error in transmission, they can request a retry of the data. Once the cryptographic checks for link encryption pass, whether through DB 350 or DRAM 360, DRAM 360 can process the commands and data for storage in the memory device. Storage of the data may include storing ECC 366 with the source write data from memory controller 320.

In one example, data IDE 326 generates encrypted data and MACs, link integrity tags, or other cryptographic information for decryption of the link encryption. In one example, PHY 314 and PHY 332 include one or more signal lines or conductors to transmit the MAC information. In one example, memory controller 320 includes the MAC information as part of the data payload for transmission to DIMM 330. For example, MAC information may be sent from memory controller 320 to DIMM 330 as part of a data burst. The IDE engine can apply MAC or link integrity tag information in link encryption.

MKTME 322 can provide a guarantee that data stored in DRAM 360 cannot be used if a physical attack reads the data from the memory. Data IDE 326 can ensure that the transmitted data is transmitted correctly and securely. In addition, data IDE 326 can enable replay protection, because when data is written to a particular location in memory, reading from that memory location results in the same data.

Figure 4 is a block diagram of an example of a system with a host and system memory exchanging data with link protection without CRC. System 400 represents an example of a system according to system 200.
System 400 represents hardware components of a system with link management for link protection.

CPU socket 410 represents a socket or SOC that includes processor 412 and memory controller 420. Processor 412 represents the processing logic of system 400, such as a CPU. A similar system configuration can be applied to GPUs, for example. Memory controller 420 represents circuitry for managing access to system memory. DIMM 430 represents a memory module that includes the system memory.

For a data write processed from CPU socket 410 to DIMM 430, processor 412 performs an operation that generates a write request for data to be stored in the memory of DIMM 430. In one example, memory controller 420 executes MKTME to perform MKTME memory encryption. MKTME encryption represents an example of data-at-rest encryption. Other forms of data-at-rest encryption may be applied. MKTME may be available in system 400 and enabled for some transactions and not for others. Accordingly, memory controller 420 may selectively apply data encryption. Applying encryption to data at rest results in protected data for write transactions.

Memory controller 420 generates a memory write command and address information and prepares the write data. Preparation of the write data may include generating ECC with ECC 424. Memory controller 420 may schedule write transactions and prepare the write data and corresponding ECC to send to DIMM 430. It should be understood that if data encryption is applied, ECC 424 may generate the ECC calculation for the encrypted data. If data encryption is not applied, ECC 424 may generate the ECC calculation for the unencrypted data.

In one example, data IDE 426 generates link protection/encryption for memory controller 420. Data IDE 426 represents an example of an IDE engine for memory controller 420. In one example, data IDE 426 generates link IDE to cryptographically protect data in transit.
In one example, data IDE 426 operates after scheduling decisions have been made; thus, transactions can be fully sequenced, including potential reordering of transactions to time data transfers on the bus. In one example, data IDE 426 represents link protection/encryption for the data bus, including the source data and its corresponding ECC (whether encrypted data (i.e., protected data) or unencrypted data). In one example, memory controller 420 includes an IDE engine (which may be the same IDE engine as data IDE 426) to protect command and address information sent to DIMM 430. In system 400, data IDE 426 can provide link integrity, thereby eliminating the need for CRC or parity information.

PHY 414 represents the physical layer or interface of CPU socket 410 to DIMM 430. In one example, PHY 414 includes hardware interface components to drive write commands and write data to DIMM 430. In one example, PHY 414 may be considered part of memory controller 420. Memory controller 420 may manage PHY 414 to send commands and exchange data with the memory of system 400.

DIMM 430 includes PHY 432, which represents the hardware interface to PHY 414 of CPU socket 410. In one example, PHY 432 may be considered part of DB 450 or part of DRAM 460. In one example (where command information is link encrypted), RCD 440 includes data IDE 442. DB 450 represents multiple data buffers on DIMM 430. DRAM 460 represents multiple DRAM devices on DIMM 430. The IDE engine on DIMM 430 can decrypt the content, check the integrity of received data, or decrypt the content and check the integrity. Checking integrity provides assurance of correct link transmission, which allows system 400 to eliminate the use of CRC.

Data IDE 442 represents the IDE engine of RCD 440 for verifying link protection/decrypting link encryption of command and address information. Data IDE 452 represents the IDE engine of DB 450 for performing link verification/decryption.
After verifying link protection or decrypting link encryption in DB 450, DB 450 may send the data and ECC information to DRAM 460, which may include protected data (data-at-rest encryption) and accompanying ECC.

In one example (where DIMM 430 does not include DB 450), DRAM 460 may include data IDE 462 as an IDE engine to perform link protection verification for write data and generate link protection for read data. Data IDE 462 can decrypt the link encryption of the data bus for write data and generate link encryption for read data. If RCD 440 detects a cryptographic error in the command and address information, RCD 440 may make a request to memory controller 420 to retry the command transaction. If DB 450 or DRAM 460 detects a cryptographic error in transmission, they can request a retry of the data. Once the cryptographic checks for link encryption pass (whether through DB 450 or DRAM 460), DRAM 460 can process the commands and data for storage in the memory device. Storage of the data may include storing ECC 464 with the source write data from memory controller 420.

In one example, data IDE 426 generates encrypted data and MACs, link integrity tags, or other cryptographic information for decryption of the link encryption. In one example, PHY 414 and PHY 432 include one or more signal lines or conductors to transmit the MAC information. In one example, data IDE 426 can replace the use of CRC. In this case, MAC information or link integrity tag information may be sent through the CRC signal lines. Therefore, the CRC bandwidth can be repurposed for the integrity tag information. In one example, memory controller 420 includes the MAC information as part of the data payload for transmission to DIMM 430. For example, MAC information may be sent from memory controller 420 to DIMM 430 as part of a data burst.
The IDE engine can apply MAC or link integrity tag information in link encryption.

MKTME 422 may provide a guarantee that data stored in DRAM 460 cannot be used if a physical attack reads the data from the memory. Data IDE 426 can ensure that the transmitted data is transmitted correctly and securely. In addition, data IDE 426 can enable replay protection, because when data is written to a particular location in memory, reading from that memory location results in the same data.

Figure 5A is a block diagram of an example of a memory system with a data link protection engine in a data buffer. System 502 represents an example of a memory module according to system 100, system 200, system 300, or system 400. System 502 shows an IDE engine in each DB and in the RCD.

System 502 illustrates DRAM 522, which represents the memory devices or DRAM devices used to store system data. DB 512 represents a data buffer that buffers the data bus between the host and the memory module. Data 572 represents one or more data buses of system 502. CMD (command) 582 represents the command bus from the host to the memory module.

System 502 represents an example of a buffered DIMM where link protection provides integrity and replay protection for data in transit between the host and the memory devices. The link protection may be referred to as DDR IDE, referring to integrity and data encryption for double data rate data channels. In one example, the DDR IDE applies AES-GMAC or another AES-CTR algorithm.

In one example, the host's memory controller encrypts data with MKTME to provide data-at-rest protection, and the DDR IDE does not need to perform additional encryption. System 502 can generate integrity tags or MAC messages to verify the integrity of data transmissions.

In one example, RCD 532 includes an IDE engine 562 or cryptographic engine to perform link decryption of CMD 582 for command and address information.
In one example, RCD 532 also handles control signals, which allows for a simpler functional mapping. In one example, DB 512 includes IDE engine 552 or a cryptographic engine to perform link protection/decryption of data 572 for write data. In one example, IDE engine 552 may provide link protection/encryption for read data to be sent back to the memory controller.
In one example, SPD hub 542 provides a center for key exchange. Thus, SPD hub 542 can manage the key exchange, and IDE engine 552 can implement the cryptography based on the provided keys. In one example, SPD hub 542 provides key information to RCD 532, which can propagate the key to IDE engine 552.
FIG. 5B is a block diagram of an example of a memory system with a data link protection engine in a memory device. System 504 represents an example of a memory module according to system 100, system 200, system 300, or system 400. System 504 shows an IDE engine per memory device and in the RCD.
System 504 illustrates DRAM 524, which represents a memory device or DRAM device used to store system data. System 504 shows an example of a memory module without a data buffer. Data 574 represents one or more data buses of system 504. CMD (command) 584 represents the command bus from the host to the memory module.
System 504 represents an example of an unbuffered DIMM in which link protection provides integrity and replay protection for data in transit between the host and the memory devices. Applying link protection in the memory device itself can provide additional protection for data on the bus inside the memory module. Link protection may be referred to as DDR IDE, referring to integrity and data encryption for a double data rate data channel. In one example, the DDR IDE applies AES-GMAC or another AES-CTR algorithm. In one example, the host's memory controller encrypts data with MKTME to provide data-at-rest protection, and the DDR IDE does not need to perform additional encryption.
System 504 can generate integrity tags or MAC messages to verify the integrity of data transmissions. In one example, RCD 534 includes IDE engine 564 or a cryptographic engine to perform link decryption of CMD 584 for command and address information. In one example, RCD 534 also handles control signals, which allows for a simpler functional mapping. In one example, DRAM 524 includes IDE engine 554 or a cryptographic engine to perform link protection/decryption of data 574 for write data. In one example, IDE engine 554 may provide link protection/encryption for read data to be sent back to the memory controller. IDE engine 554 allows the memory device to perform link integrity checks or decryption of link encryption locally at the memory device.
In one example, SPD hub 544 provides a center for key exchange. Thus, SPD hub 544 can manage the key exchange, and IDE engine 554 can implement the cryptography based on the provided keys. In one example, SPD hub 544 provides key information to RCD 534, which can propagate the key to IDE engine 554.
FIG. 5C is a block diagram of an example of a memory system with a data link encryption engine between each data buffer and DRAM pair. System 506 represents an example of a memory module according to system 100, system 200, system 300, or system 400. System 506 shows the IDE engine as a separate component of the memory module.
System 506 illustrates DRAM 526, which represents a memory device or DRAM device used to store system data. DB 516 represents a data buffer that buffers the data bus between the host and the memory module. Data 576 represents one or more data buses of system 506. CMD (command) 586 represents the command bus from the host to the memory module.
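The key-exchange role of the SPD hub described above, in which the hub manages the keys and the RCD propagates them to the IDE engines, can be sketched as follows. The class and method names are hypothetical, not an actual SPD-hub interface.

```python
class SPDHub:
    """Stands in for the SPD hub that manages key exchange (illustrative)."""
    def __init__(self, key: bytes):
        self._key = key

    def key_info(self) -> bytes:
        # Provide the key material to a requesting RCD.
        return self._key


class IDEEngine:
    """Holds the key used for link protection cryptography."""
    def __init__(self):
        self.key = None


class RCD:
    """Receives key information from the hub and propagates it to the IDE
    engines it serves (e.g., one per DRAM device or data buffer)."""
    def __init__(self, hub: SPDHub, engines: list):
        self._hub = hub
        self._engines = engines

    def distribute_key(self) -> None:
        key = self._hub.key_info()
        for engine in self._engines:
            engine.key = key


engines = [IDEEngine() for _ in range(4)]      # e.g., one per DRAM device
rcd = RCD(SPDHub(b"\xaa" * 16), engines)
rcd.distribute_key()
assert all(e.key == b"\xaa" * 16 for e in engines)
```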
In one example, system 506 includes DB 516 to buffer data 576 between DRAM 526 and the host.
System 506 represents an example of a buffered DIMM where link protection provides integrity and replay protection for data in transit between the host and the memory devices. Link protection may be referred to as DDR IDE, referring to integrity and data encryption for a double data rate data channel. In one example, the DDR IDE applies AES-GMAC or another AES-CTR algorithm. In one example, the host's memory controller encrypts data with MKTME to provide data-at-rest protection, and the DDR IDE does not need to perform additional encryption. System 506 can generate integrity tags or MAC messages to verify the integrity of data transmissions.
In one example, RCD 536 includes IDE engine 566 or a cryptographic engine to perform link decryption of CMD 586 for command and address information. In one example, RCD 536 also handles control signals, which allows for a simpler functional mapping. In one example, the memory module includes IDE engine 556 or a cryptographic engine implemented as a distinct component on the memory module. In one example, IDE engine 556 is a separate link decryption chip on the module for write data. In one example, IDE engine 556 represents a link encryption chip on the module for read data. The link decryption or encryption chips may be specific to each DRAM 526. IDE engine 556 performs link protection/decryption of data 576 for write data. In one example, IDE engine 556 may provide link protection/encryption for read data to be sent back to the memory controller. In one example, IDE engine 556 is implemented as an intermediate component between DRAM 526 and DB 516. It should be appreciated that the use of intermediate components for the IDE engine may introduce additional memory latency as well as increased component count and module cost.
In one example, SPD hub 546 provides a center for key exchange.
Thus, SPD hub 546 can manage the key exchange, and IDE engine 556 can implement the cryptography based on the provided keys. In one example, SPD hub 546 provides key information to RCD 536, which can propagate the key to IDE engine 556.
FIG. 5D is a block diagram of an example of a memory system with a data link protection engine between a data buffer and a DRAM. System 508 represents an example of a memory module according to system 100, system 200, system 300, or system 400. System 508 shows the IDE engine as a separate component of the memory module.
System 508 illustrates DRAM 528, which represents a memory device or DRAM device used to store system data. DB 518 represents a data buffer that buffers the data bus between the host and the memory module. Data 578 represents one or more data buses of system 508. CMD (command) 588 represents the command bus from the host to the memory module. In one example, system 508 includes DB 518 to buffer data 578 between DRAM 528 and the host.
System 508 represents an example of a buffered DIMM where link protection provides integrity and replay protection for data in transit between the host and the memory devices. Link protection may be referred to as DDR IDE, referring to integrity and data encryption for a double data rate data channel. In one example, the DDR IDE applies AES-GMAC or another AES-CTR algorithm. In one example, the host's memory controller encrypts data with MKTME to provide data-at-rest protection, and the DDR IDE does not need to perform additional encryption. System 508 may generate integrity tags or MAC messages to verify the integrity of data transmissions.
In one example, RCD 538 includes IDE engine 568 or a cryptographic engine to perform link decryption of CMD 588 for command and address information. In one example, RCD 538 also handles control signals, which allows for a simpler functional mapping.
In one example, the memory module includes IDE engine 558 or a cryptographic engine implemented as a separate component on the memory module. In one example, IDE engine 558 is a separate link decryption chip on the module for write data. In one example, IDE engine 558 represents a link encryption chip on the module for read data. The link decryption or encryption chip can be shared by DRAM 528.
In one example, IDE engine 558 is implemented as a component that spans the entire bus or the entire cache line. Thus, IDE engine 558 can provide cryptographic functions for multiple parallel devices of the memory module. IDE engine 558 performs link protection/decryption of data 578 for write data. In one example, IDE engine 558 may provide link protection/encryption for read data to be sent back to the memory controller. In one example, IDE engine 558 is implemented as an intermediate component between DRAM 528 and DB 518. It should be appreciated that using an intermediate component for the IDE engine may introduce additional memory latency as well as increased component count and module cost.
In one example, SPD hub 548 provides a center for key exchange. Thus, SPD hub 548 can manage the key exchange, and IDE engine 558 can implement the cryptography based on the provided keys. In one example, SPD hub 548 provides key information to RCD 538, which can propagate the key to IDE engine 558.
FIG. 6A is a block diagram of an example of link protection where the MAC is sent on separate I/O signal lines. Link protection 602 represents link protection where the IDE engine generates MAC information that is sent over a different signal line than the data bus. Link protection 602 may be used with systems according to system 100, system 200, system 300, or system 400.
Data at rest 610 represents source data, which may be encrypted or unencrypted data.
IDE engine 622 represents an example of a cryptographic engine that may generate MAC information for link encryption. In one example, IDE engine 622 encrypts data at rest 610. In one example, IDE engine 622 does not encrypt the data itself, but instead generates cryptographic check information (e.g., a MAC or integrity tag) to protect the data link. For data that is already encrypted, the link's cryptographic information may be sufficient to secure the data in transit.
In one example, IDE engine 622 generates a MAC and encrypted data. In one example, the data is already encrypted and the IDE engine generates a MAC without encrypting the data again. Thus the data is denoted protected/encrypted data, referring to protection of the link, encryption of the data, and optional encryption of the link. In one example, due to the size of the MAC used in link protection 602, the MAC is accumulated over multiple data bursts, identified as BRST[0:B-1]. The B bursts may represent the number of data bursts to which the MAC applies, over which the MAC is computed. In one example, the MAC has a section for the data of each burst. A burst represents a sequence of periods or unit intervals in which link data 632 is transmitted for a single memory access transaction. In one example, the bits of the MAC are sent over multiple data transmission cycles (represented by the data bursts).
With link protection 602, the MAC is sent over a signal line distinct from the data bursts BRST[0:B-1]. I/O 652 represents an interface to signal lines other than the data bus. DQ 642 represents the data bus that transmits the encrypted data, while the MAC is transmitted through I/O 652.
FIG. 6B is a block diagram of an example of link encryption where the MAC is sent inline with the encrypted data. Link protection 604 represents link encryption where the IDE engine generates MAC information that is sent inline with the data.
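The two tag-transport options of FIG. 6A and FIG. 6B can be sketched side by side: one MAC accumulated over B bursts and returned on a separate path, versus the same tag sliced into extensions of each burst. HMAC-SHA256 again stands in for the AES-GMAC named in the text, and the framing is an illustrative assumption.

```python
import hashlib
import hmac

def protect_bursts(key: bytes, bursts: list, tag_len: int = 8):
    """FIG. 6A style: accumulate one MAC over B bursts (BRST[0:B-1]);
    data and tag travel on separate paths (DQ 642 vs. I/O 652)."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for burst in bursts:
        mac.update(burst)   # each burst contributes its section of the MAC input
    return bursts, mac.digest()[:tag_len]

def protect_bursts_inline(key: bytes, bursts: list, tag_len: int = 8):
    """FIG. 6B style: each burst is extended to carry a slice of the tag
    inline with its data (illustrative framing)."""
    _, tag = protect_bursts(key, bursts, tag_len)
    step = max(1, tag_len // len(bursts))
    slices = [tag[i * step:(i + 1) * step] for i in range(len(bursts))]
    return [b + s for b, s in zip(bursts, slices)]

bursts = [bytes([i]) * 8 for i in range(8)]          # B = 8 bursts
data, tag = protect_bursts(b"k" * 16, bursts)
assert data == bursts and len(tag) == 8
extended = protect_bursts_inline(b"k" * 16, bursts)
assert all(len(e) == 9 for e in extended)            # each burst carries 1 tag byte
```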
Link protection 604 may be used with systems according to system 100, system 200, system 300, or system 400. Data at rest 610 represents source data, which may be encrypted or unencrypted data. IDE engine 624 represents an example of a cryptographic engine that may generate MAC information for link encryption. In one example, IDE engine 624 encrypts data at rest 610. In one example, IDE engine 624 does not encrypt the data itself, but instead generates cryptographic check information (e.g., a MAC or integrity tag) to protect the data link. For data that is already encrypted, the link's cryptographic information may be sufficient to secure the data in transit.
In one example, IDE engine 624 generates a MAC and encrypted data. In one example, the data is already encrypted and the IDE engine generates a MAC without encrypting the data again. Thus the data is denoted protected/encrypted data, referring to protection of the link, encryption of the data, and optional encryption of the link. In one example, due to the size of the MAC used in link protection 604, the MAC is accumulated over multiple data bursts, identified as BRST[0:B-1]. The B bursts may represent the number of data bursts to which the MAC applies, over which the MAC is computed. In one example, the MAC has a section for the data of each burst. A burst represents a sequence of periods or unit intervals in which link data 634 is transferred for a single memory access transaction. In one example, the bits of the MAC are sent over multiple data transmission cycles (represented by the data bursts).
With link protection 604, the MAC is sent with the data. In one example, sending the MAC information with the data includes extending the length of each data burst to allow transmission of the MAC information. DQ 644 represents a data bus on which the encrypted data is transmitted along with the MAC information.
FIG. 7 is a flowchart of an example of a process for writing data to system memory with link protection.
Process 700 represents a process for performing a write with link protection. At 702, the host generates a write command with an associated address. In one example, at 704, the host applies data-at-rest encryption to the data; application of data-at-rest encryption is optional. The host generates ECC for the data at rest. Thus, in one example, at 706, the host generates ECC for the encrypted data at rest.
At 708, the host optionally generates a CRC for the data for transmission error checking. At 710, the host utilizes the IDE engine to apply link protection to the data at rest and its associated ECC data to be transferred to memory. At 712, the memory receives the protected link data and verifies the link protection with the IDE engine. The IDE engine may be an engine in a data buffer, in a memory device, or in an intermediate component on a memory module.
At 714, the memory can determine whether the protected transfer was successful. If an error is detected in the link protection, the protected transfer was not successful (NO branch at 716), and the memory may generate a retry to the memory controller to resend the protected transfer (at 718). In one example, after multiple failed attempts, a retry request may trigger a change of the encryption key, with the new key applied to the next attempt.
If no errors were detected in the link protection, the protected transfer was successful (YES branch at 716), and in one example, the memory optionally checks the CRC and generates a CRC response if necessary (at 720). Upon successfully checking the link protection, and optionally decrypting the link encryption, the memory may recover the protected data and ECC bits, and then store the encrypted data at rest and its ECC (at 722).
FIG. 8 is a flowchart of an example of a process for reading data from system memory with link protection. Process 800 represents a process for performing a read with link protection. At 802, the host generates a read command with an associated address.
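The verify-and-retry shape of process 700 can be sketched as follows; the read path of process 800 mirrors it on the host side. The retry limit, the fault injection, and the HMAC stand-in for the DDR IDE algorithm are all illustrative assumptions.

```python
import hashlib
import hmac

MAX_RETRIES = 3   # illustrative; the text does not fix a retry limit

def _tag(key: bytes, payload: bytes) -> bytes:
    # HMAC-SHA256 stands in for the DDR IDE link-protection algorithm.
    return hmac.new(key, payload, hashlib.sha256).digest()[:8]

def protected_write(key: bytes, payload: bytes, stored: list,
                    corrupt_first: int = 0) -> int:
    """Host/memory write handshake of process 700 (steps 710-722).

    `corrupt_first` injects link errors on the first N attempts so the NO
    branch at 716 and the retry at 718 are exercised. Returns the number of
    attempts used.
    """
    for attempt in range(1, MAX_RETRIES + 1):
        wire_tag = _tag(key, payload)          # step 710: apply link protection
        if attempt <= corrupt_first:
            wire_tag = bytes(8)                # simulated transmission error
        # Steps 712/714: memory verifies the link protection.
        if hmac.compare_digest(wire_tag, _tag(key, payload)):
            stored.append(payload)             # step 722: store data-at-rest + ECC
            return attempt
        # Step 718: memory requests a retry; the host resends.
    raise RuntimeError("protected transfer failed after retries")

stored = []
# First attempt is corrupted, the retry succeeds: two attempts total.
assert protected_write(b"k" * 16, b"data+ecc", stored, corrupt_first=1) == 2
assert stored == [b"data+ecc"]
```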
In one example, at 804, the host optionally generates a CRC for the data for transmission error checking. In one example, at 806, the host utilizes the IDE engine to apply link protection to the command and data messages. The memory receives the command and address. In one example, at 808, the memory verifies link protection for the command using the IDE engine. The IDE engine may be an engine in the RCD.
At 810, the memory can determine whether the protected transfer was successful. If an error is detected in the link protection, the protected transfer was not successful (NO branch at 812), and the memory may generate a retry to the memory controller to resend the protected transfer (at 814). In one example, after multiple failed attempts, a retry request may trigger a change of the encryption key, with the new key applied to the next attempt.
If no errors were detected in the link protection, the protected transfer was successful (YES branch at 812), and in one example, the memory optionally checks the CRC and generates a CRC response if necessary (at 816). At 818, the memory may then read the encrypted data at rest and its ECC from the target address.
In one example, the memory applies link protection to the data to send back to the memory controller. Accordingly, at 820, the memory can apply link protection and send a protected read response. At 822, the host receives the read data and the IDE engine verifies the link protection.
At 824, the host can determine whether the protected transfer was successful. If an error is detected in the link protection, the protected transfer was not successful (NO branch at 826), and the memory controller may generate a retry to the memory (at 828).
If no errors were detected in the link protection, the protected transfer was successful (YES branch at 826), and in one example, the memory controller can recover the protected data and ECC and process the read data and its ECC (at 830).
FIG. 9 is a block diagram of an example of a memory subsystem in which system memory link protection may be implemented. System 900 includes elements of a processor and memory subsystem in a computing device. System 900 represents a system according to examples of system 100, system 200, system 300, or system 400.
In one example, memory controller 920 includes IDE engine 992 to perform protected data exchanges with memory device 940. The protected data exchange may be link protection according to any of the examples described. In one example, memory module 970 includes IDE engine 990 for link protection with memory controller 920 according to any of the examples described. In one example, IDE engine 990 is part of memory device 940. In one example, IDE engine 990 is a separate component of memory module 970. In one example, IDE engine 990 is part of a data buffer (not shown) of memory module 970.
Processor 910 represents a processing unit of a computing platform that can execute an operating system (OS) and applications, which can collectively be referred to as a user or host of the memory. The OS and applications perform operations that result in memory accesses. Processor 910 may include one or more separate processors. Each individual processor may comprise a single processing unit, a multi-core processing unit, or a combination. The processing unit may be a main processor such as a CPU (central processing unit), a peripheral processor such as a GPU (graphics processing unit), or a combination. Memory accesses can also be initiated by devices such as a network controller or hard disk controller.
Such devices may be integrated with the processor in some systems, or attached to the processor via a bus (e.g., PCI express), or a combination. System 900 can be implemented as an SOC (system on chip), or with separate components.
Reference to memory devices may apply to different memory types. Memory devices generally refer to volatile memory technologies. Volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power to the device is interrupted. Non-volatile memory refers to memory whose state is determinate even if power to the device is interrupted. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (dynamic random access memory) or some variant such as synchronous DRAM (SDRAM). The memory subsystem described herein may be compatible with a number of memory technologies, such as: DDR4 (double data rate version 4, JESD79-4, originally released by JEDEC (Joint Electron Device Engineering Council, now the JEDEC Solid State Technology Association) in September 2012), LPDDR4 (low power DDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (high bandwidth memory DRAM, JESD235A, originally published by JEDEC in November 2015), DDR5 (DDR version 5, originally released by JEDEC in July 2020), LPDDR5 (LPDDR version 5, JESD209-5, originally published by JEDEC in February 2019), HBM2 (HBM version 2, currently in discussion by JEDEC), or other memory technologies or combinations of memory technologies, and technologies based on derivatives or extensions of such specifications.
Memory controller 920 represents one or more memory controller circuits or devices of system 900.
Memory controller 920 represents control logic that generates memory access commands in response to operations performed by processor 910. Memory controller 920 accesses one or more memory devices 940. Memory devices 940 may be DRAM devices in accordance with any referred to above. In one example, memory devices 940 are organized and managed as different channels, where each channel couples to buses and signal lines that couple to multiple memory devices in parallel. Each channel is independently operable. Thus, each channel is independently accessed and controlled, and the timing, data transfer, command and address exchanges, and other operations are separate for each channel. Coupling may refer to electrical coupling, communicative coupling, physical coupling, or a combination of these. Physical coupling may include direct contact. Electrical coupling includes an interface or interconnection that allows electrical flow between components, or allows signaling between components, or both. Communicative coupling includes connections, whether wired or wireless, that enable components to exchange data.
In one example, settings for each channel are controlled by separate mode registers or other register settings. In one example, each memory controller 920 manages a separate memory channel, although system 900 may be configured to have multiple channels managed by a single controller, or to have multiple controllers on a single channel. In one example, memory controller 920 is part of host processor 910, such as logic implemented on the same die or in the same package space as the processor.
Memory controller 920 includes I/O interface logic 922 to couple to a memory bus, such as a memory channel as referred to above. I/O interface logic 922 (as well as I/O interface logic 942 of memory device 940) may include pins, pads, connectors, signal lines, traces, or wires, or other hardware to connect the devices, or a combination of these.
I/O interface logic 922 may include a hardware interface. As illustrated, I/O interface logic 922 includes at least drivers/transceivers for the signal lines. Commonly, wires within an integrated circuit interface couple with a pad, pin, or connector to interface signal lines or traces or other wires between devices. I/O interface logic 922 may include drivers, receivers, transceivers, or termination, or other circuitry or combinations of circuitry to exchange signals on the signal lines between the devices. The exchange of signals includes at least one of transmit or receive. While shown as coupling I/O 922 from memory controller 920 to I/O 942 of memory devices 940, it will be understood that in an implementation of system 900 where groups of memory devices 940 are accessed in parallel, the multiple memory devices can include I/O interfaces to the same interface of memory controller 920. In an implementation of system 900 including one or more memory modules 970, I/O 942 may include interface hardware of the memory module in addition to interface hardware on the memory devices themselves. Other memory controllers 920 will include separate interfaces to other memory devices 940.
The bus between memory controller 920 and memory devices 940 can be implemented as multiple signal lines coupling memory controller 920 to memory devices 940. The bus may typically include at least clock (CLK) 932, command/address (CMD) 934, and write data (DQ) and read data (DQ) 936, and zero or more other signal lines 938. In one example, a bus or connection between memory controller 920 and memory can be referred to as a memory bus. In one example, the memory bus is a multidrop bus.
The signal lines for CMD can be referred to as a "C/A bus" (or ADD/CMD bus, or some other designation indicating the transfer of command (C or CMD) and address (A or ADD) information), and the signal lines for write and read DQ can be referred to as a "data bus." In one example, independent channels have different clock signals, C/A buses, data buses, and other signal lines. Thus, system 900 can be considered to have multiple "buses," in the sense that an independent interface path can be considered a separate bus. It will be understood that in addition to the lines explicitly shown, a bus can include at least one of strobe signaling lines, alert lines, auxiliary lines, or other signal lines, or a combination. It will also be understood that serial bus technologies can be used for the connection between memory controller 920 and memory devices 940. An example of a serial bus technology is 8B10B encoding and transmission of high-speed data with an embedded clock over a single differential signal pair in each direction. In one example, CMD 934 represents signal lines shared in parallel with multiple memory devices. In one example, multiple memory devices share encoded command signal lines of CMD 934, and each memory device has a separate chip select (CS_n) signal line to select individual memory devices.
It will be understood that in the example of system 900, the bus between memory controller 920 and memory devices 940 includes a subsidiary command bus CMD 934 and a subsidiary bus to carry the write and read data, DQ 936. In one example, the data bus can include bidirectional lines for read data and for write/command data. In another example, the subsidiary bus DQ 936 can include unidirectional write signal lines for write data from the host to memory, and can include unidirectional lines for read data from the memory to the host.
Depending on the memory technology and system design chosen, other signals 938 may accompany a bus or sub-bus, such as strobe lines DQS. Based on the design of system 900, or the implementation if a design supports multiple implementations, the data bus can have more or less bandwidth per memory device 940. For example, the data bus can support memory devices that have either a x4 interface, a x8 interface, a x16 interface, or another interface. The convention "xW," where W is an integer, refers to the interface size or width of the interface of memory device 940, which represents the number of signal lines that exchange data with memory controller 920. The interface size of the memory devices is a controlling factor on how many memory devices can be used concurrently per channel in system 900 or coupled in parallel to the same signal lines. In one example, high bandwidth memory devices, wide interface devices, or stacked memory configurations, or combinations, can enable wider interfaces, such as a x128 interface, a x256 interface, a x512 interface, a x1024 interface, or another data bus interface width.
In one example, memory devices 940 and memory controller 920 exchange data over the data bus in a burst or sequence of consecutive data transfers. The burst corresponds to a number of transfer cycles, which is related to a bus frequency. In one example, the transfer cycle can be a whole clock cycle for transfers occurring on the same clock or strobe signal edge (e.g., on the rising edge). In one example, every clock cycle, referring to a cycle of the system clock, is separated into multiple unit intervals (UIs), where each UI is a transfer cycle. For example, double data rate transfers trigger on both edges of the clock signal (e.g., rising and falling). A burst can last for a configured number of UIs, which can be a configuration stored in a register, or triggered on the fly.
For example, a sequence of eight consecutive transfer cycles can be considered a burst length eight (BL8), and each memory device 940 can transfer data on each UI. Thus, a x8 memory device operating on BL8 can transfer 64 bits of data (8 data signal lines times 8 bits of data transferred per line over the burst). It will be understood that this simple example is merely an illustration and is not limiting.
Memory devices 940 represent memory resources for system 900. In one example, each memory device 940 is a separate memory die. In one example, each memory device 940 can interface with multiple (e.g., 2) channels per device or die. Each memory device 940 includes I/O interface logic 942, which has a bandwidth determined by the implementation of the device (e.g., x16 or x8 or some other interface bandwidth). I/O interface logic 942 enables the memory devices to interface with memory controller 920. I/O interface logic 942 can include a hardware interface, and can be in accordance with I/O 922 of the memory controller, but at the memory device end. In one example, multiple memory devices 940 are connected in parallel to the same command and data buses. In another example, multiple memory devices 940 are connected in parallel to the same command bus and to different data buses. For example, system 900 can be configured with multiple memory devices 940 coupled in parallel, with each memory device responding to a command and accessing memory resources 960 internal to each device. For a write operation, an individual memory device 940 can write a portion of the overall data word, and for a read operation, an individual memory device 940 can fetch a portion of the overall data word. The remaining bits of the word will be provided or received in parallel by other memory devices.
In one example, memory devices 940 are disposed directly on a motherboard or host system platform of a computing device (e.g., a PCB (printed circuit board) on which processor 910 is disposed).
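The burst arithmetic above can be checked with a short calculation; the function name is illustrative.

```python
def bits_per_burst(interface_width: int, burst_length: int) -> int:
    """Bits moved by one device in one burst: W signal lines x BL transfer cycles."""
    return interface_width * burst_length

# A x8 device on BL8: 8 signal lines x 8 UIs = 64 bits (8 bytes) per burst.
assert bits_per_burst(8, 8) == 64
# Narrower devices move fewer bits per burst, so more of them run in
# parallel per channel to fill the same data word.
assert bits_per_burst(4, 8) == 32
```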
In one example, memory devices 940 can be organized into memory modules 970. In one example, memory module 970 represents a dual inline memory module (DIMM). In one example, memory module 970 represents another organization of multiple memory devices sharing at least a portion of access or control circuitry, which can be a separate circuit, a separate device, or a separate board from the host system platform. Memory module 970 can include multiple memory devices 940, and the memory module can include support for multiple separate channels to the included memory devices disposed on it. In another example, memory devices 940 may be incorporated into the same package as memory controller 920, such as by techniques such as multi-chip module (MCM), package-on-package, through-silicon via (TSV), or other techniques or combinations. Similarly, in one example, multiple memory devices 940 may be incorporated into memory module 970, which itself may be incorporated into the same package as memory controller 920. It will be appreciated that for these and other implementations, memory controller 920 may be part of host processor 910.
Each memory device 940 includes one or more memory arrays 960. Memory array 960 represents addressable memory locations or storage locations for data. Typically, memory array 960 is managed as rows of data, accessed via wordline (rows) and bitline (individual bits within a row) control. Memory array 960 can be organized as separate channels, ranks, and banks of memory. A channel may refer to an independent control path to storage locations within memory device 940. A rank may refer to a common location across multiple memory devices in parallel (e.g., the same row address within different devices). A bank may refer to a sub-array of memory locations within memory device 940.
In one example, a bank of memory is divided into sub-banks with at least a portion of shared circuitry (e.g., drivers, signal lines, control logic) for the sub-banks, allowing separate addressing and access. It should be understood that channels, ranks, banks, sub-banks, bank groups, or other organizations of memory locations, and combinations of such organizations, can overlap in their application to physical resources. For example, the same physical memory locations may be accessed over a specific channel as a specific bank, which may also belong to a rank. Thus, the organization of memory resources will be understood in an inclusive, rather than exclusive, manner.

In one example, memory device 940 includes one or more registers 944. Registers 944 represent one or more storage devices or storage locations that provide configuration or settings for the operation of the memory device. In one example, registers 944 can provide storage locations for memory device 940 to store data for access by memory controller 920 as part of a control or management operation. In one example, registers 944 include one or more mode registers. In one example, registers 944 include one or more multipurpose registers. The configuration of locations within registers 944 can configure memory device 940 to operate in different "modes," wherein command information can trigger different operations within memory device 940 based on the mode. Additionally or in the alternative, different modes can also trigger different operations from address information or other signal lines depending on the mode. Settings of registers 944 can indicate configuration for I/O settings (e.g., timing, termination or ODT (on-die termination) 946, driver configuration, or other I/O settings).

In one example, memory device 940 includes ODT 946 as part of the interface hardware associated with I/O 942.
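The mode-register style of configuration described for registers 944 can be sketched as bitfield packing into a register word. The field names, offsets, and widths below are hypothetical, chosen only to illustrate how settings map onto register locations:

```python
# Hypothetical mode-register field layout: (bit offset, field width).
BURST_LEN = (0, 2)
CAS_LATENCY = (2, 4)
ODT_ENABLE = (6, 1)

def mr_set(reg: int, field, value: int) -> int:
    # Write a field into the register word without disturbing other fields.
    off, width = field
    mask = ((1 << width) - 1) << off
    return (reg & ~mask) | ((value << off) & mask)

def mr_get(reg: int, field) -> int:
    # Read a field back out of the register word.
    off, width = field
    return (reg >> off) & ((1 << width) - 1)

reg = mr_set(0, CAS_LATENCY, 9)
reg = mr_set(reg, ODT_ENABLE, 1)
assert mr_get(reg, CAS_LATENCY) == 9
assert mr_get(reg, ODT_ENABLE) == 1
assert mr_get(reg, BURST_LEN) == 0
```

In this spirit, writing a field such as ODT_ENABLE corresponds to the register settings that the passage says can indicate I/O configuration (timing, termination, driver configuration, and so on).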
ODT 946 can be configured as mentioned above, and provides settings for the impedance to be applied to the interface for given signal lines. In one example, ODT 946 is applied to DQ signal lines. In one example, ODT 946 is applied to command signal lines. In one example, ODT 946 is applied to address signal lines. In one example, ODT 946 can be applied to any combination of the preceding. The ODT settings can be changed based on whether a memory device is a selected target of an access operation or a non-target device. ODT 946 settings can affect the timing and reflections of signaling on the terminated lines. Careful control over ODT 946 can enable higher-speed operation with improved matching of applied impedance and loading. ODT 946 can be applied to specific signal lines of I/O interfaces 942, 922 (e.g., ODT for DQ lines or ODT for CA lines), and is not necessarily applied to all signal lines.

Memory device 940 includes controller 950, which represents control logic within the memory device to control internal operations within the memory device. For example, controller 950 decodes commands sent by memory controller 920 and generates internal operations to execute or satisfy the commands. Controller 950 can be referred to as an internal controller, and is separate from memory controller 920 of the host. Controller 950 can determine what mode is selected based on registers 944, and configure the internal execution of operations for access to memory resources 960, or other operations, based on the selected mode. Controller 950 generates control signals to control the routing of bits within memory device 940 to provide a proper interface for the selected mode and direct a command to the proper memory locations or addresses. Controller 950 includes command logic 952, which can decode command encoding received on command and address signal lines. Thus, command logic 952 can be or include a command decoder.
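The role of command logic 952 as a command decoder can be sketched as a lookup from command encodings on the command/address lines to internal operations. The opcode values below are hypothetical and are not taken from any specification:

```python
# Hypothetical command encodings mapped to internal operations.
COMMANDS = {
    0b000: "ACTIVATE",
    0b001: "READ",
    0b010: "WRITE",
    0b011: "REFRESH",
    0b100: "MODE_REGISTER_SET",
}

def decode_command(code: int) -> str:
    # Resolve a received command code to the internal operation it requests.
    op = COMMANDS.get(code)
    if op is None:
        raise ValueError(f"unrecognized command code {code:#05b}")
    return op

assert decode_command(0b010) == "WRITE"
```

A real decoder would also consume the accompanying address information to direct the operation to the proper memory locations, as the passage describes.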
With command logic 952, the memory device can identify commands and generate internal operations to execute the requested commands.

Referring again to memory controller 920, memory controller 920 includes command (CMD) logic 924, which represents logic or circuitry to generate commands to send to memory devices 940. The generation of commands can refer to the command prior to scheduling, or the preparation of queued commands ready to be sent. Generally, the signaling in the memory subsystem includes address information within or accompanying the command to indicate or select one or more memory locations where the memory devices should execute the command. In response to scheduling of transactions for memory device 940, memory controller 920 can issue commands via I/O 922 to cause memory device 940 to execute the commands. In one example, controller 950 of memory device 940 receives and decodes command and address information received via I/O 942 from memory controller 920. Based on the received command and address information, controller 950 can control the timing of operations of the logic and circuitry within memory device 940 to execute the commands. Controller 950 is responsible for compliance with standards or specifications within memory device 940, such as timing and signaling requirements. Memory controller 920 can implement compliance with standards or specifications by access scheduling and control.

Memory controller 920 includes scheduler 930, which represents logic or circuitry to generate and order transactions to send to memory devices 940. From one perspective, the primary function of memory controller 920 could be said to be scheduling memory access and other transactions to memory devices 940. Such scheduling can include generating the transactions themselves to fulfill the requests for data by processor 910 and to maintain the integrity of the data (e.g., with commands related to refresh).
A transaction can include one or more commands, and result in the transfer of commands or data or both over one or more timing cycles, such as clock cycles or unit intervals. Transactions can be for access such as read or write or related commands or a combination, and other transactions can include memory management commands for configuration, settings, data integrity, or other commands or a combination.

Memory controller 920 typically includes logic such as scheduler 930 to allow selection and ordering of transactions to improve the performance of system 900. Thus, memory controller 920 can select which of the outstanding transactions should be sent to memory devices 940, and in which order, typically with logic much more complex than a simple first-in first-out algorithm. Memory controller 920 manages the transmission of the transactions to memory devices 940, and manages the timing associated with the transactions. In one example, transactions have deterministic timing, which can be managed by memory controller 920 and used in determining how to schedule the transactions with scheduler 930.

In one example, memory controller 920 includes refresh (REF) logic 926. Refresh logic 926 can be used for memory resources that are volatile and need to be refreshed to retain a deterministic state. In one example, refresh logic 926 indicates a location for refresh, and a type of refresh to perform. Refresh logic 926 can trigger self-refresh within memory device 940 by sending a refresh command, or execute external refreshes, which can be referred to as auto refresh commands, or a combination. In one example, controller 950 within memory device 940 includes refresh logic 954 to apply refresh within memory device 940. In one example, refresh logic 954 generates internal operations to perform refresh in accordance with an external refresh received from memory controller 920.
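The refresh pacing implied above can be illustrated with a simple calculation: to cover the whole array within a retention window, refresh commands must be issued at an average interval equal to the window divided by the number of commands. The specific values below are common illustrative numbers, not parameters taken from the description:

```python
# Average refresh interval needed to cover the array within the retention
# window (illustrative values; real devices specify their own parameters).
def average_refresh_interval_us(retention_ms: float, refresh_commands: int) -> float:
    return retention_ms * 1000.0 / refresh_commands

# e.g., a 64 ms retention window covered by 8192 refresh commands:
assert average_refresh_interval_us(64, 8192) == 7.8125
```

A scheduler such as scheduler 930 can interleave these refresh-related commands with read and write transactions while keeping the average interval within bounds.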
Refresh logic 954 can determine whether a refresh is directed to memory device 940, and which memory resources 960 to refresh in response to the command.

FIG. 10 is a block diagram of an example of a computing system in which system memory link protection can be implemented. System 1000 represents a computing device in accordance with any example herein, and can be a laptop computer, a desktop computer, a tablet computer, a server, a gaming or entertainment control system, an embedded computing device, or another electronic device.

System 1000 represents a system in accordance with an example of system 100, system 200, system 300, or system 400. In one example, memory controller 1022 includes link IDE 1092 to perform protected data exchange with memory 1030. The protected data exchange can be link protected in accordance with any example described. In one example, memory 1030 includes link IDE 1090 for link protection with memory controller 1022 in accordance with any example described. In one example, link IDE 1090 is part of a memory device of memory 1030. In one example, link IDE 1090 is a separate component of a memory module of memory 1030. In one example, link IDE 1090 is part of a data buffer (not illustrated) of a memory module of memory 1030.

System 1000 includes processor 1010, which can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware, or a combination, to provide processing or execution of instructions for system 1000. Processor 1010 can be a host processor device.
Processor 1010 controls the overall operation of system 1000, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or a combination of such devices.

System 1000 includes boot/config 1016, which represents storage for boot code (e.g., basic input/output system (BIOS)), configuration settings, security hardware (e.g., trusted platform module (TPM)), or other system-level hardware that operates outside of a host OS. Boot/config 1016 can include a nonvolatile storage device, such as read-only memory (ROM), flash memory, or other memory devices.

In one example, system 1000 includes interface 1012 coupled to processor 1010, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem 1020 or graphics interface components 1040. Interface 1012 represents an interface circuit, which can be a standalone component or integrated onto a processor die. Interface 1012 can be integrated as a circuit onto the processor die or integrated as a component on a system on a chip. Where present, graphics interface 1040 interfaces to graphics components for providing a visual display to a user of system 1000. Graphics interface 1040 can be a standalone component or integrated onto the processor die or system on a chip. In one example, graphics interface 1040 can drive a high definition (HD) display or an ultra high definition (UHD) display that provides an output to a user. In one example, the display can include a touchscreen display.
In one example, graphics interface 1040 generates a display based on data stored in memory 1030, or based on operations executed by processor 1010, or both.

Memory subsystem 1020 represents the main memory of system 1000, and provides storage for code to be executed by processor 1010, or data values to be used in executing a routine. Memory subsystem 1020 can include one or more varieties of random-access memory (RAM), such as DRAM, 3DXP (three-dimensional crosspoint), or other memory devices, or a combination of such devices. Memory 1030 stores and hosts, among other things, operating system (OS) 1032 to provide a software platform for execution of instructions in system 1000. Additionally, applications 1034 can execute on the software platform of OS 1032 from memory 1030. Applications 1034 represent programs that have their own operational logic to perform execution of one or more functions. Processes 1036 represent agents or routines that provide auxiliary functions to OS 1032 or one or more applications 1034, or a combination. OS 1032, applications 1034, and processes 1036 provide software logic to provide functions for system 1000. In one example, memory subsystem 1020 includes memory controller 1022, which is a memory controller to generate and issue commands to memory 1030. It will be understood that memory controller 1022 could be a physical part of processor 1010 or a physical part of interface 1012. For example, memory controller 1022 can be an integrated memory controller, integrated onto a circuit with processor 1010, such as integrated onto the processor die or a system on a chip.

While not specifically illustrated, it will be understood that system 1000 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components.
Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry, or a combination. Buses can include, for example, one or more of a system bus, a peripheral component interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or another bus, or a combination.

In one example, system 1000 includes interface 1014, which can be coupled to interface 1012. Interface 1014 can be a lower speed interface than interface 1012. In one example, interface 1014 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 1014. Network interface 1050 provides system 1000 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface 1050 can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 1050 can exchange data with a remote device, which can include sending data stored in memory or receiving data to be stored in memory.

In one example, system 1000 includes one or more input/output (I/O) interfaces 1060. I/O interface 1060 can include one or more interface components through which a user interacts with system 1000 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 1070 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 1000.
A dependent connection is one where system 1000 provides the software platform or hardware platform, or both, on which an operation executes, and with which a user interacts.

In one example, system 1000 includes storage subsystem 1080 to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage 1080 can overlap with components of memory subsystem 1020. Storage subsystem 1080 includes storage device(s) 1084, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, NAND, 3DXP, or optical based disks, or a combination. Storage 1084 holds code or instructions and data 1086 in a persistent state (i.e., the value is retained despite interruption of power to system 1000). Storage 1084 can be generically considered to be a "memory," although memory 1030 is typically the executing or operating memory to provide instructions to processor 1010. Whereas storage 1084 is nonvolatile, memory 1030 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 1000). In one example, storage subsystem 1080 includes controller 1082 to interface with storage 1084. In one example, controller 1082 is a physical part of processor 1010 or of interface 1014, or can include circuits or logic in both processor 1010 and interface 1014.

Power source 1002 provides power to the components of system 1000. More specifically, power source 1002 typically interfaces to one or multiple power supplies 1004 in system 1000 to provide power to the components of system 1000. In one example, power supply 1004 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar) power source 1002.
In one example, power source 1002 includes a DC power source, such as an external AC to DC converter. In one example, power source 1002 or power supply 1004 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 1002 can include an internal battery or fuel cell source.

FIG. 11 is a block diagram of an example of a mobile device in which system memory link protection can be implemented. System 1100 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wearable computing device or other mobile device, or an embedded computing device. It will be understood that certain components are shown generally, and not all components of such a device are shown in system 1100.

System 1100 represents a system in accordance with an example of system 100, system 200, system 300, or system 400. In one example, memory controller 1164 includes link IDE 1194 to perform protected data exchange with memory 1162. The protected data exchange can be link protected in accordance with any example described. In one example, memory 1162 includes link IDE 1192 for link protection with memory controller 1164 in accordance with any example described. In one example, link IDE 1192 is part of a memory device of memory 1162. In one example, link IDE 1192 is a separate component of a memory module of memory 1162. In one example, link IDE 1192 is part of a data buffer (not illustrated) of a memory module of memory 1162.

System 1100 includes processor 1110, which performs the primary processing operations of system 1100. Processor 1110 can be a host processor device. Processor 1110 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing units.
The processing operations performed by processor 1110 include the execution of an operating platform or operating system on which applications and device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, operations related to connecting system 1100 to another device, or a combination. The processing operations can also include operations related to audio I/O, display I/O, or other interfacing, or a combination. Processor 1110 can execute data stored in memory. Processor 1110 can write or edit data stored in memory.

In one example, system 1100 includes one or more sensors 1112. Sensors 1112 represent embedded sensors or interfaces to external sensors, or a combination. Sensors 1112 enable system 1100 to monitor or detect one or more conditions of an environment or a device in which system 1100 is implemented. Sensors 1112 can include environmental sensors (such as temperature sensors, motion detectors, light detectors, cameras, chemical sensors (e.g., carbon monoxide, carbon dioxide, or other chemical sensors)), pressure sensors, accelerometers, gyroscopes, medical or physiology sensors (e.g., biosensors, heart rate monitors, or other sensors to detect physiological attributes), or other sensors, or a combination. Sensors 1112 can also include sensors for biometric systems, such as fingerprint recognition systems, face detection or recognition systems, or other systems that detect or recognize user features. Sensors 1112 should be understood broadly, and are not limiting on the many different types of sensors that could be implemented with system 1100. In one example, one or more sensors 1112 couple to processor 1110 via a frontend circuit integrated with processor 1110.
In one example, one or more sensors 1112 couple to processor 1110 via another component of system 1100.

In one example, system 1100 includes audio subsystem 1120, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker or headphone output, as well as microphone input. Devices for such functions can be integrated into system 1100, or connected to system 1100. In one example, a user interacts with system 1100 by providing audio commands that are received and processed by processor 1110.

Display subsystem 1130 represents hardware (e.g., display devices) and software components (e.g., drivers) that provide a visual display for presentation to a user. In one example, the display includes tactile components or touchscreen elements for a user to interact with the computing device. Display subsystem 1130 includes display interface 1132, which includes the particular screen or hardware device used to provide a display to a user. In one example, display interface 1132 includes logic separate from processor 1110 (such as a graphics processor) to perform at least some processing related to the display. In one example, display subsystem 1130 includes a touchscreen device that provides both output and input to a user. In one example, display subsystem 1130 includes a high definition (HD) or ultra high definition (UHD) display that provides an output to a user. In one example, the display subsystem includes or drives a touchscreen display. In one example, display subsystem 1130 generates display information based on data stored in memory or based on operations executed by processor 1110 or both.

I/O controller 1140 represents hardware devices and software components related to interaction with a user.
I/O controller 1140 can operate to manage hardware that is part of audio subsystem 1120, or display subsystem 1130, or both. Additionally, I/O controller 1140 illustrates a connection point for additional devices that connect to system 1100, through which a user might interact with the system. For example, devices that can be attached to system 1100 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, buttons/switches, or other I/O devices for use with specific applications, such as card readers or other devices.

As mentioned above, I/O controller 1140 can interact with audio subsystem 1120 or display subsystem 1130 or both. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of system 1100. Additionally, audio output can be provided instead of, or in addition to, display output. In another example, if the display subsystem includes a touchscreen, the display device also acts as an input device, which can be at least partially managed by I/O controller 1140. There can also be additional buttons or switches on system 1100 to provide I/O functions managed by I/O controller 1140.

In one example, I/O controller 1140 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware or sensors 1112 that can be included in system 1100. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (e.g., filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features).

In one example, system 1100 includes power management 1150 that manages battery power usage, charging of the battery, and features related to power saving operation.
Power management 1150 manages power from power source 1152, which provides power to the components of system 1100. In one example, power source 1152 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy source (e.g., solar power, motion-based power). In one example, power source 1152 includes only DC power, which can be provided by a DC power source, such as an external AC to DC converter. In one example, power source 1152 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 1152 can include an internal battery or fuel cell source.

Memory subsystem 1160 includes memory device(s) 1162 for storing information in system 1100. Memory subsystem 1160 can include nonvolatile (state does not change if power to the memory device is interrupted) or volatile (state is indeterminate if power to the memory device is interrupted) memory devices, or a combination. The memory can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 1100. In one example, memory subsystem 1160 includes memory controller 1164 (which could also be considered part of the control of system 1100, and could potentially be considered part of processor 1110). Memory controller 1164 includes a scheduler to generate and issue commands to control access to memory device 1162.

Connectivity 1170 includes hardware devices (e.g., wireless or wired connectors and communication hardware, or a combination of wired and wireless hardware) and software components (e.g., drivers, protocol stacks) to enable system 1100 to communicate with external devices. The external devices could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices.
In one example, system 1100 exchanges data with an external device for storage in memory or for display on a display device. The exchanged data can include data to be stored in memory, or data already stored in memory, to read, write, or edit data.

Connectivity 1170 can include multiple different types of connectivity. To generalize, system 1100 is illustrated with cellular connectivity 1172 and wireless connectivity 1174. Cellular connectivity 1172 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (long term evolution, also referred to as "4G"), 5G, or other cellular service standards. Wireless connectivity 1174 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth), local area networks (such as WiFi), or wide area networks (such as WiMax), or other wireless communication, or a combination. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.

Peripheral connections 1180 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that system 1100 could both be a peripheral device ("to" 1182) to other computing devices, as well as have peripheral devices ("from" 1184) connected to it. System 1100 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading, uploading, changing, synchronizing) content on system 1100.
Additionally, a docking connector can allow system 1100 to connect to certain peripherals that allow system 1100 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, system 1100 can make peripheral connections 1180 via common or standards-based connectors. Common types can include a universal serial bus (USB) connector (which can include any of a number of different hardware interfaces), a DisplayPort including MiniDisplayPort (MDP), a high definition multimedia interface (HDMI), or other types.

In general with respect to the descriptions herein, in one example, a memory module includes: a link decryption engine to receive write data from a memory controller, the write data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data, the write data having link protection for the write data, the link decryption engine to further receive a link integrity tag associated with the link protection, wherein the link decryption engine is to perform a link integrity check with the link integrity tag; and a memory device to store the protected data and the ECC data from the link decryption engine.

In one example of the memory module, the memory devices each include a link decryption engine to perform the link integrity check with the link integrity tag locally at the memory device. In accordance with any preceding example of the memory module, in one example, the memory module includes: data buffers to buffer data for the memory devices; wherein the data buffers each include a link decryption engine to perform the link integrity check with the link integrity tag for a specific memory device.
In accordance with any preceding example of the memory module, in one example, the memory module includes: a link decryption chip as the link decryption engine for the memory devices. In accordance with any preceding example of the memory module, in one example, the memory module includes a link decryption chip for each memory device, wherein a specific link decryption chip is to perform the link integrity check with the link integrity tag for a specific memory device. In accordance with any preceding example of the memory module, in one example, the memory module includes: a registered clock driver (RCD) to receive command and address information for the memory device; wherein the command and address information has link protection; wherein the RCD includes a link decryption engine to perform a link integrity check with a link integrity tag. In accordance with any preceding example of the memory module, in one example, the link protection includes an implementation of advanced encryption standard with Galois message authentication code (AES-GMAC). In accordance with any preceding example of the memory module, in one example, the write data has link encryption, wherein the link decryption engine is to decrypt the link encryption. In accordance with any preceding example of the memory module, in one example, the link encryption includes an implementation of advanced encryption standard (AES) in counter mode. In accordance with any preceding example of the memory module, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (advanced encryption standard in Galois/counter mode).
In accordance with any preceding example of the memory module, in one example, the memory module includes: a registered clock driver (RCD) to receive command and address information for the memory device; wherein the command and address information has link encryption; wherein the RCD includes a link decryption engine to decrypt the link encryption.

In general with respect to the descriptions herein, in one example, a memory module includes: memory devices to store encrypted protected data and error checking and correction (ECC) data for the encrypted protected data; a link encryption engine to receive from the memory devices, as read data, the encrypted protected data and the error checking and correction (ECC) data for the encrypted protected data, the link encryption engine to generate link protection for transmission of the read data to a memory controller, including generation of a link integrity tag associated with the link protection; and I/O hardware to send the read data with the link protection and the link integrity tag to the memory controller.

In one example of the memory module, the memory devices each include a link encryption engine to generate the link protection and generate the link integrity tag locally at the memory device. In accordance with any preceding example of the memory module, in one example, the memory module includes: data buffers to buffer data for the memory devices; wherein the data buffers each include a link encryption engine to generate the link protection and generate the link integrity tag for a specific memory device. In accordance with any preceding example of the memory module, in one example, the memory module includes: a link encryption chip as the link encryption engine for the memory devices.
According to any preceding example of the memory module, in one example, the memory module includes a link encryption chip for each memory device, wherein a specific link encryption chip is to generate the link protection and generate the link integrity tag for a specific memory device. According to any preceding example of the memory module, in one example, the link protection includes an implementation of Advanced Encryption Standard with Galois Message Authentication Code (AES-GMAC). According to any preceding example of the memory module, in one example, the memory module includes a link encryption engine to encrypt the read data. According to any preceding example of the memory module, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode. According to any preceding example of the memory module, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

In general, with respect to the description herein, in one example, a memory controller includes: I/O (input/output) hardware to couple to a memory module having a memory device; and a link encryption engine to generate link protection for transfer of write data to the memory device, including generation of a link integrity tag associated with the link protection, the write data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data; wherein the I/O hardware is to send the write data with the link protection and the link integrity tag to the memory module; and wherein the memory module is to perform a link integrity check using the link integrity tag and store the protected data and the ECC data in the memory device.

In one example of the memory controller, the link encryption engine includes a data link encryption engine and
further includes a command and address link encryption engine to generate command and address link protection for command and address information to be sent to the memory device, including generation of a command and address link integrity tag associated with the command and address link protection; wherein the memory module includes a registered clock driver (RCD) to receive the command and address information for the memory device with the command and address link protection, and to perform a link integrity check using the command and address link integrity tag. According to any preceding example of the memory controller, in one example, the link integrity tag includes a message authentication code (MAC), wherein the I/O hardware is to send bits of the MAC over multiple data transfer cycles. According to any preceding example of the memory controller, in one example, the link integrity tag includes a message authentication code (MAC), wherein the I/O hardware is to send bits of the MAC over signal lines separate from the data bus or the command and address bus. According to any preceding example of the memory controller, in one example, the protected data includes data at rest protected with multi-key total memory encryption (MKTME). According to any preceding example of the memory controller, in one example, the link encryption engine is to perform link encryption to pass a new cryptographic key to the memory module, wherein, after passing the new cryptographic key, the link encryption engine is to use the new cryptographic key for link encryption. According to any preceding example of the memory controller, in one example, the memory controller includes a link encryption engine to encrypt the write data. According to any preceding example of the memory controller, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode.
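The idea of sending the MAC bits over multiple data transfer cycles can be illustrated with a small sketch. The 16-byte tag size and the 2-bytes-per-cycle width here are assumptions for illustration, not values fixed by these examples.

```python
def split_mac_over_cycles(mac: bytes, bytes_per_cycle: int):
    """Slice a MAC into per-cycle chunks for transfer alongside data bursts."""
    return [mac[i:i + bytes_per_cycle] for i in range(0, len(mac), bytes_per_cycle)]

def reassemble_mac(chunks) -> bytes:
    """Receiver concatenates the per-cycle chunks back into the full tag."""
    return b"".join(chunks)

mac = bytes(range(16))                    # a 128-bit link integrity tag
chunks = split_mac_over_cycles(mac, 2)    # e.g., 2 MAC bytes per transfer cycle
assert len(chunks) == 8                   # tag spread over 8 data transfer cycles
assert reassemble_mac(chunks) == mac
```

The same slicing applies whether the chunks ride on spare cycles of the data bus or on separate signal lines; only the per-cycle width changes.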
According to any preceding example of the memory controller, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

In general, with respect to the description herein, in one example, a memory controller includes: I/O (input/output) hardware to couple to a memory module having a memory device; and a link decryption engine to verify link protection for read data received from the memory device, the read data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data, the link decryption engine to verify the link protection provided by the memory device using a link integrity tag associated with the link protection.

In one example of the memory controller, the protected data includes data at rest protected with multi-key total memory encryption (MKTME). According to any preceding example of the memory controller, in one example, the read data has link encryption, wherein the link decryption engine is to decrypt the link encryption. According to any preceding example of the memory controller, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode.
According to any preceding example of the memory controller, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

In general, with respect to the description herein, in one example, a computer system includes: a memory controller to perform encryption of data at rest to generate encrypted protected data, perform error checking and correction (ECC) on the encrypted protected data to generate ECC bits, and generate link protection for the encrypted protected data and the ECC bits, including generation of a link integrity tag associated with the link protection; and a memory module coupled to the memory controller, the memory module to perform a link integrity check using the link integrity tag to recover the encrypted protected data and the ECC bits, wherein a memory device of the memory module is to store the encrypted protected data and the ECC bits.

In one example of the computer system, the memory module is to perform the link integrity check with a link decryption engine on the memory device, or perform the link integrity check with a data buffer on the memory module that buffers data for the memory device, or perform the link integrity check with a link decryption chip of the memory module separate from the memory device. According to any preceding example of the computer system, in one example, the memory controller is to perform link encryption of the encrypted protected data and the ECC bits, wherein the memory module is to decrypt the link encryption. According to any preceding example of the computer system, in one example, the memory controller is to selectively perform link encryption, wherein when the memory controller performs link encryption, the memory controller is to refrain from performing CRC (cyclic redundancy check) transmission error checking.
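The layering in the computer-system example — at-rest encryption first, ECC computed over the ciphertext, then link protection wrapped around both for the trip to the module — can be sketched end to end. Everything here is a toy stand-in: a keystream XOR stands in for MKTME at-rest encryption, a single parity byte stands in for the ECC code, and HMAC stands in for AES-GMAC link protection; the point is only the ordering of the layers and which side holds which key.

```python
import hmac
import hashlib
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for MKTME at-rest encryption (keystream XOR)."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def ecc_bits(ciphertext: bytes) -> bytes:
    """Toy stand-in for the ECC code: one XOR-parity byte over the ciphertext."""
    p = 0
    for b in ciphertext:
        p ^= b
    return bytes([p])

# Controller write path: encrypt at rest, compute ECC on the ciphertext,
# then wrap ciphertext + ECC in link protection for the trip to the module.
at_rest_key, link_key = os.urandom(16), os.urandom(16)
plaintext = b"cache line of data padded to a fixed-size burst."
ciphertext = xor_stream(at_rest_key, plaintext)
ecc = ecc_bits(ciphertext)
link_tag = hmac.new(link_key, ciphertext + ecc, hashlib.sha256).digest()[:16]

# Module side: perform the link integrity check, then store ciphertext and
# ECC exactly as received; the module never needs the at-rest key.
recomputed = hmac.new(link_key, ciphertext + ecc, hashlib.sha256).digest()[:16]
assert hmac.compare_digest(link_tag, recomputed)
assert xor_stream(at_rest_key, ciphertext) == plaintext  # at-rest decrypt round-trips
```

Because ECC is computed over the ciphertext, the module can check and correct stored bits without ever decrypting the at-rest protection.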
According to any preceding example of the computer system, in one example, the memory module is to generate link protection for the encrypted protected data and the ECC bits to send to the memory controller, including generation of a link integrity tag associated with the link protection, and the memory controller is to perform a link integrity check using the link integrity tag to recover the encrypted protected data and the ECC bits. According to any preceding example of the computer system, in one example, the memory module is to perform link encryption of the encrypted protected data and the ECC bits, wherein the memory controller is to decrypt the link encryption. According to any preceding example of the computer system, in one example, the computer system includes a basic input/output system (BIOS), wherein the BIOS is to trigger a secure mode during which the memory controller and the memory module perform a key exchange for link encryption and link decryption.
According to any preceding example of the computer system, in one example, the computer system includes one or more of: a multi-core host processor coupled to the memory controller; a display communicatively coupled to the host processor; a network interface communicatively coupled to the host processor; or a battery to power the computer system.

In general, with respect to the description herein, in one example, a method includes: receiving write data from a memory controller, the write data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data, the write data having link protection; receiving a link integrity tag associated with the link protection; performing a link integrity check using the link integrity tag; and storing the protected data and the ECC data in a memory device.

In one example of the method, performing the link integrity check with the link integrity tag occurs locally at the memory device. According to any preceding example of the method, in one example, performing the link integrity check with the link integrity tag occurs at a data buffer that buffers data for the memory device. According to any preceding example of the method, in one example, the link integrity check is performed at a link decryption chip. According to any preceding example of the method, in one example, the memory module includes a link decryption chip for each memory device. According to any preceding example of the method, in one example, the method includes receiving command and address information for the memory device, wherein the command and address information has link protection, and performing a link integrity check on the command and address information using a link integrity tag. According to any preceding example of the method, in one example, the command and address information has link encryption.
According to any preceding example of the method, in one example, the link protection includes an implementation of Advanced Encryption Standard with Galois Message Authentication Code (AES-GMAC). According to any preceding example of the method, in one example, the write data has link encryption, wherein a link decryption engine decrypts the link encryption. According to any preceding example of the method, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode. According to any preceding example of the method, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

In general, with respect to the description herein, in one example, a method includes: accessing encrypted protected data and error checking and correction (ECC) data for the encrypted protected data from a memory device; generating link protection for transfer of the read data to a memory controller, including generating a link integrity tag associated with the link protection; and sending the read data with the link protection and the link integrity tag to the memory controller.

In one example of the method, generating the link protection and generating the link integrity tag occur locally at the memory device. According to any preceding example of the method, in one example, generating the link protection occurs at a data buffer that buffers data for the memory device. According to any preceding example of the method, in one example, generating the link protection occurs at a link encryption chip. According to any preceding example of the method, in one example, generating the link protection occurs at a link encryption chip corresponding to a specific memory device. According to any preceding example of the method, in one example, the link protection includes an implementation of Advanced Encryption Standard with Galois Message Authentication Code (AES-GMAC).
According to any preceding example of the method, in one example, the method includes link encrypting the read data. According to any preceding example of the method, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode. According to any preceding example of the method, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

In general, with respect to the description herein, in one example, a method includes: generating link protection for transmission of write data to a memory device, the write data having encrypted protected data and error checking and correction (ECC) data for the encrypted protected data; generating a link integrity tag associated with the link protection; and sending the write data with the link protection and the link integrity tag to a memory module to trigger the memory module to perform a link integrity check using the link integrity tag and store the protected data and the ECC data in the memory device.

In one example of the method, the method includes generating command and address link protection for command and address information to be sent to the memory device, including generating a command and address link integrity tag associated with the command and address link protection. According to any preceding example of the method, in one example, the link integrity tag includes a message authentication code (MAC), wherein I/O hardware is to send bits of the MAC over multiple data transfer cycles. According to any preceding example of the method, in one example, the link integrity tag includes a message authentication code (MAC), wherein I/O hardware is to send bits of the MAC over signal lines separate from the data bus or the command and address bus.
According to any preceding example of the method, in one example, the protected data includes data at rest protected with multi-key total memory encryption (MKTME). According to any preceding example of the method, in one example, the method includes performing link encryption to pass a new cryptographic key, and, after passing the new cryptographic key, applying the new cryptographic key for link encryption. According to any preceding example of the method, in one example, the method includes link encrypting the write data. According to any preceding example of the method, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode. According to any preceding example of the method, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

In general, with respect to the description herein, in one example, a method includes: receiving read data from a memory device, the read data having link protection for encrypted protected data and error checking and correction (ECC) data for the encrypted protected data; receiving a link integrity tag associated with the link protection; and verifying the link protection using the link integrity tag.

In one example of the method, the protected data includes data at rest protected with multi-key total memory encryption (MKTME). According to any preceding example of the method, in one example, the read data has link encryption, and the method further includes decrypting the link encryption. According to any preceding example of the method, in one example, the link encryption includes an implementation of Advanced Encryption Standard (AES) in counter mode.
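The key-rotation sequencing described above — pass the new cryptographic key under the old key's link protection, then have both ends switch — can be sketched as follows. This is a hedged model only: HMAC keyed with the current link key stands in for AES-GCM link encryption, and the `LinkEndpoint` class and its method names are hypothetical.

```python
import hmac
import hashlib
import os

class LinkEndpoint:
    """Toy model of one end of the protected link (controller or module).

    Only the key-switch sequencing is being illustrated; a real engine
    would encrypt and tag transfers with AES-GCM rather than HMAC.
    """
    def __init__(self, key: bytes):
        self.key = key

    def protect(self, payload: bytes) -> bytes:
        return hmac.new(self.key, payload, hashlib.sha256).digest()[:16]

    def rekey(self, new_key: bytes):
        # The new cryptographic key is applied only after it has been
        # passed over the link under the *old* key's protection.
        self.key = new_key

old_key, new_key = os.urandom(16), os.urandom(16)
controller, module = LinkEndpoint(old_key), LinkEndpoint(old_key)

# Pass the new key under the old key's link protection...
tag_for_key_msg = controller.protect(new_key)
assert hmac.compare_digest(tag_for_key_msg, module.protect(new_key))

# ...then both ends switch; subsequent transfers use the new key.
controller.rekey(new_key)
module.rekey(new_key)
burst = os.urandom(64)
assert hmac.compare_digest(controller.protect(burst), module.protect(burst))
```

The ordering matters: if either end switched before the key transfer completed, the two sides would compute mismatched tags and every subsequent link integrity check would fail.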
According to any preceding example of the method, in one example, the implementation of AES in counter mode includes an implementation of AES-GCM (Advanced Encryption Standard in Galois/Counter Mode).

The flowcharts as presented herein provide examples of sequences of various processing actions. A flowchart may indicate operations to be performed by software or firmware routines, as well as physical operations. A flowchart may illustrate an example of an implementation of the states of a finite state machine (FSM), which may be implemented in hardware and/or software. Although shown in a particular order or sequence, the order of the actions may be modified unless otherwise indicated. Thus, the diagrams shown should be understood only as examples; the processes may be performed in a different order, and some actions may be performed in parallel. Additionally, one or more actions may be omitted; thus, not all implementations will perform all actions.

To the extent various operations or functions are described herein, they may be described or defined as software code, instructions, configuration, and/or data. The content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content described herein may be provided via an article of manufacture with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, etc.
medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.

Various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, etc.

Besides what is described herein, various modifications can be made to the disclosed embodiments and implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.
An approach is proposed for minimizing physical distortion and changes in the electrical properties of ferroelectric films incorporated into semiconductor devices. By introducing crystallographic texture into these ferroelectric films, the piezoelectric coefficient of the material can be minimized, reducing the interaction between a voltage across the film and mechanical stress on it. In addition to having low piezoelectric coefficients, rhombohedral lead zirconate titanate films oriented along (111) exhibit low coercive fields and high remnant polarization, increasing their usefulness in layered semiconductor devices.
What is claimed is:

1. A multi-layer electrical device, comprising: a dielectric layer; and an electrically conductive layer in electrical communication with the dielectric layer, wherein: the dielectric layer comprises a piezoelectric material, the composition and orientation of the dielectric layer are chosen to minimize the effect of the mechanical stresses imposed by the other layers of the device on the electrical properties of the piezoelectric material in the dielectric layer, and a number of domains in the dielectric layer that are oriented along a projection of a polarization dipole of the piezoelectric material is maximized.

2. The device of claim 1, wherein the electrically conductive layer is adjacent to the dielectric layer.

3. The device of claim 1, wherein the dielectric layer comprises a ferroelectric material.

4. The device of claim 3, wherein the ferroelectric material has the composition PbZr1-xTixO3.

5. The device of claim 4, wherein the PbZr1-xTixO3 has a (111) orientation.

6. The device of claim 4, wherein the PbZr1-xTixO3 has a rhombohedral unit cell.

7. The device of claim 4, wherein 0.15<x<0.4.

8. The device of claim 4, further comprising an underlying layer on which the PbZr1-xTixO3 is deposited, the underlying layer having an interatomic spacing compatible with the interatomic spacing within a plane with respect to which the deposited PbZr1-xTixO3 is to be oriented.

9. The device of claim 8, wherein the compatible spacing is between 0.37 and 0.45 nm.

10. The device of claim 8, wherein the underlying layer comprises platinum or iridium.

11. The electrical device of claim 1, wherein the device is a transistor or a capacitor.

12.
An oriented thin film comprising a dielectric material, wherein characteristics of the film have been optimized to minimize the interaction between voltage across the dielectric material and mechanical stress on the dielectric material, and the optimized characteristics are selected from the group consisting of the composition of the dielectric material and the orientation of the film.

13. The oriented thin film of claim 12, wherein the dielectric material comprises a ferroelectric material.

14. The oriented thin film of claim 13, wherein the ferroelectric material has the composition PbZr1-xTixO3.

15. The thin film of claim 14, wherein the ferroelectric material has a (111) orientation.

16. The thin film of claim 14, wherein the ferroelectric material has a rhombohedral unit cell.

17. The thin film of claim 14, wherein 0.15<x<0.4.

18. The thin film of claim 14, wherein the ferroelectric material is deposited on an underlying layer having an interatomic spacing compatible with the interatomic spacing within a plane with respect to which the deposited PbZr1-xTixO3 is to be oriented.
FIELD OF THE INVENTION

This invention relates to the production of ferroelectric memory, and, more particularly, to the production of ferroelectric memory incorporating oriented PbZr1-xTixO3 thin films.

BACKGROUND OF THE INVENTION

Embedded memory applications bring together two different silicon technologies, logic and memory, presenting new challenges for device integration. To date, there have been many publications and patents on discrete ferroelectric (FE) capacitors for use in memory devices. However, commonly used FE materials, such as lead zirconate titanate (PZT), are piezoelectric; that is, their electrical properties vary in response to mechanical stress or physical distortion. In addition, they exhibit a physical distortion when an electric field is applied. This distortion may alter the charge storage properties of the material. Thus, as ferroelectric materials are embedded into devices containing four to five layers comprising various materials, the practitioner must be concerned with the electrical effects of stresses imposed on the ferroelectric layers. Memory applications require robust dielectric materials that are insensitive to fluctuations in stress resulting from deposition of subsequent layers. Dielectrics, including PZT and other ferroelectric materials, that are used in semiconductor memory must exhibit electrical properties independent of imposed external stresses. That is, the interaction between mechanical stress (or volume) and voltage must be reduced.

The majority of researchers studying ferroelectric thin films for memory applications employ tetragonal PZT materials as the storage medium because the remnant polarization is larger in the tetragonal phase than in other phases of PZT. In addition, these films are easier to produce than films incorporating other phases of PZT. However, tetragonal films require high drive voltages because of their relatively high coercive fields.
In contrast, the current trend is to reduce the operating voltage of devices. The drive voltage can be reduced by decreasing film thickness, but these thin films are frequently not able to store charge reliably. Dielectric materials are required that can be utilized at lower voltages (i.e., lower coercivity materials) and that will exhibit electrical properties independent of imposed external stresses.

SUMMARY OF THE INVENTION

In one aspect, this invention is a multi-layer electrical device, for example, a capacitor or a transistor, including a dielectric layer and an electrically conductive layer in electrical communication with one another. The dielectric layer comprises a piezoelectric material for which the composition and orientation are chosen to minimize the effect of mechanical stresses imposed by the other layers of the device on the electrical properties of the piezoelectric material in the dielectric layer. In addition, the composition of the layer is optimized to maximize the number of available domains that are oriented along a projection of a polarization dipole of the piezoelectric material, increasing the number of domains in the layer that are available for charge storage. In a preferred embodiment, the piezoelectric material is a ferroelectric material having the composition PbZr1-xTixO3 (PZT), where x is between 0.15 and 0.4. As a result, the ferroelectric material has a rhombohedral unit cell. The PZT may be deposited with a (111) orientation. The device may also include an underlying layer on which the PbZr1-xTixO3 is deposited. This layer has an interatomic spacing, i.e., between 0.37 and 0.45 nm, compatible with the interatomic spacing within a plane with respect to which the deposited PbZr1-xTixO3 is to be oriented.
The layer may comprise platinum or iridium.

This invention is also directed to an oriented thin film comprising a dielectric material, wherein characteristics of the film such as composition and orientation are optimized to minimize the interaction between voltage across the dielectric material and mechanical stress on it. In a preferred embodiment, the dielectric material comprises a ferroelectric material. Again, the material may have the composition PbZr1-xTixO3, where x is between 0.15 and 0.4, resulting in a rhombohedral unit cell. The PZT may be deposited with a (111) orientation.

In another aspect, the invention is directed to an electrical device incorporating a dielectric layer that includes a piezoelectric material. The piezoelectric material comprises a ferroelectric material of the composition PbZr1-xTixO3. Such a PZT material, where x is between 0.15 and 0.4, will have a rhombohedral unit cell. If the material is deposited with a (111) orientation, the electromechanical coefficient, d33, will be minimized.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a graph showing the piezoelectric coefficient, d33, as a function of crystallographic orientation and composition for rhombohedral and tetragonal PZT films.

FIG. 2 is a cross-sectional view of a ferroelectric memory device incorporating a multi-layer device and an oriented thin film according to the invention.

DETAILED DESCRIPTION

The instant invention concerns the production of an improved dielectric layer for semiconductor devices in which the piezoelectric properties of the dielectric material, i.e., PZT, are minimized. Rhombohedral PZT materials are attractive for ferroelectric memory applications for a variety of reasons.
For example, because rhombohedral PZT has a coercive field about half that of the tetragonal phase, to achieve an operating voltage of 1.5 volts or less, a film comprising rhombohedral PZT can be twice the thickness of one incorporating tetragonal PZT.

Rhombohedral PZT also has a lower piezoelectric constant than tetragonal PZT (FIG. 1; Du, et al., Appl. Phys. Lett. 72:2421-2423, 1998). The minimum electromechanical coefficient, d33, is significantly smaller along the pseudo-cubic [111] direction of the rhombohedral phase than along any direction in the tetragonal phase. For the rhombohedral phase material, the small dependence of the d33 coefficient on composition provides the flexibility to choose the best composition for the ferroelectric material from an electrical performance standpoint. PbZr1-xTixO3 thin films, where x is between 0.15 and 0.4, have a rhombohedral unit cell and are also far from the phase transition regions of rhombohedral to tetragonal (x≈0.5) and rhombohedral to orthorhombic (x≈0.1).

Another problem that plagues ferroelectric materials such as tetragonal PZT is 90 degree domain formation. The 90 degree domains form in thin films to compensate for the thermal and lattice mismatch strain between the ferroelectric and the substrate, thereby reducing the system energy. Tetragonal PZT exhibits a strong polarization dipole along the [001] direction and virtually no polarization along the [100] direction, which forms a 90 degree angle with [001] in tetragonal systems. These [100] domains cannot be electrically switched, yielding no switched charge. However, the switched charge per capacitor will vary according to the domain pattern in the films. As the transistor size decreases, the ferroelectric capacitor must be reduced to fit within a smaller area. As a result, the switched charge per capacitor encompasses fewer and fewer averaged domains.
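The thickness headroom implied by the halved coercive field, noted at the start of this discussion, can be checked with simple arithmetic using the usual estimate V = Ec x t for the switching voltage. The field values below are assumptions based on the coercive fields quoted elsewhere in this description (25-30 kV/cm for rhombohedral PZT, so roughly double that for the tetragonal phase).

```python
# Estimate the maximum film thickness switchable at a given operating
# voltage, using V = Ec * t. Field values are assumptions based on the
# coercive fields quoted in this description.
def max_thickness_nm(operating_voltage_v: float, coercive_field_kv_per_cm: float) -> float:
    ec_v_per_nm = coercive_field_kv_per_cm * 1e3 / 1e7  # kV/cm -> V/nm (1 cm = 1e7 nm)
    return operating_voltage_v / ec_v_per_nm

t_rhombo = max_thickness_nm(1.5, 30.0)   # rhombohedral PZT -> ~500 nm
t_tetra = max_thickness_nm(1.5, 60.0)    # tetragonal, roughly twice the field -> ~250 nm

# The rhombohedral film can be about twice as thick at the same 1.5 V.
assert abs(t_rhombo / t_tetra - 2.0) < 1e-9
```

The factor of two in allowable thickness is what lets the rhombohedral film retain reliable charge storage at a reduced operating voltage, where an equivalently switchable tetragonal film would have to be thinned.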
In tetragonal PZT, the average switched charge will decrease as 90 degree domains, which contribute no switched charge, are formed. The rhombohedral phase material behaves similarly to the tetragonal material and forms domains at angles slightly lower than 90 degrees. However, unlike the tetragonal material, the maximum dipole moment of the rhombohedral material lies along the [111] direction. Thus, in a (111) oriented rhombohedral PZT film that forms 90 degree domains, each domain will include a component, or projection, of [111], thereby reducing the variation in polarization charge between domains and yielding more consistent switched charge from one capacitor cell to another.

Rhombohedral compositions of (100) PZT display remnant polarizations of approximately 40 μC/cm² and coercive fields of 25-30 kV/cm. Because the maximum dipole lies along [111] and measurements of (100) oriented materials only indicate projections of the maximum (vector) value, (111) oriented rhombohedral materials may yield even higher remnant polarizations (Foster, et al., J. Appl. Phys. 81:2349-2357, 1997). While the proportion of lead titanate should be kept between 15 and 40 percent, well away from the rhombohedral/tetragonal and rhombohedral/orthorhombic phase transitions, routine manipulation of the composition within the rhombohedral range by one skilled in the art will enable optimization of electrical, magnetic, mechanical, and other properties.

In a preferred embodiment, rhombohedral PZT is incorporated as a dielectric layer into a ferroelectric memory device. FIG. 2 shows an exemplary one-transistor/one-capacitor ferroelectric memory device 10 comprising a silicon (or other semiconductor) substrate 12, transistor 14, plugs 16, a diffusion barrier 18, a bottom electrode 20, a dielectric layer 22, a top electrode 24, a bit line 26, and a metal line 28. Techniques for the manufacture of such memory devices are described in U.S. Pat. No.
5,767,541 to Hanagasaki, incorporated herein by reference. Rhombohedral PZT can also be incorporated into one-transistor type memories such as those described by U.S. Pat. No. 3,832,700 to Wu, the entire contents of which are incorporated herein by reference.

Rhombohedral PZT films can be deposited via metal-organic chemical vapor deposition (MOCVD), sputtering, or sol-gel processing. The phase is controlled in part by controlling the composition of the precursor materials. The grain size and orientation of a film deposited through any of these techniques may be engineered by any of several mechanisms. According to one mechanism, film texture is controlled by selecting a template having an interatomic spacing similar to the spacing of the desired lattice plane parallel to the substrate. Thus, if the lattice constants of the template and the growing film are similar, a particular growth direction can be promoted by obtaining a particular orientation in the substrate.

This principle may be applied to the deposition of PZT films by using either platinum or iridium as the template. Platinum and iridium are commonly used as electrodes for ferroelectric capacitors, and it is fortuitous that their lattice constants are particularly suited to this application. However, the substrate is not necessarily limited to Pt or Ir. Other applications for these oriented ferroelectric films may require different substrates. The only requirement is that the substrate have some plane (hkl) whose interatomic spacing is compatible with the interatomic spacing of the desired plane (h'k'l') along which the material is being deposited. That is, the interatomic spacing of the substrate should facilitate film growth in the desired orientation. A general rule of thumb is that the two interatomic spacings should differ by less than about 10%. The lattice constants of iridium and platinum are approximately 0.394 nm and 0.392 nm, while rhombohedral PZT has a lattice constant of 0.411 nm.
The lattice constants in the (111) direction for Ir and Pt are 0.653 nm and 0.680 nm; PZT (111) has a lattice constant of 0.71187 nm. For both orientations, the lattice mismatch between either Ir or Pt and the PZT is less than about 8%. Thus, PZT grown on (100) or (111) Pt or Ir would be expected to exhibit (100) or (111) texturing, respectively.

To achieve a single orientation in Pt and Ir, one may carefully choose the deposition parameters to encourage a particular orientation. For platinum, (100) is the fast-growing plane. Intermediate growth temperatures (<400°C) and relatively high deposition rates will encourage growth of (100) Pt. The (111) plane is the low-energy surface; therefore, higher temperatures (>400°C) that yield low growth rates will encourage this orientation. Another mechanism for encouraging (111) oriented Pt is to add a thin Ti seed layer beneath the Pt. It is well documented experimentally that Ti seed layers encourage a (111) texture.

While PZT layers for capacitors are frequently deposited on single crystal substrates, other surfaces on which the PZT layers are deposited, e.g., for transistors, are frequently polycrystalline. Thus, closely matched lattice constants are not sufficient to ensure development of the desired texture. Careful control of the deposition conditions and exploitation of other physical properties of the materials system may encourage a particular crystallographic orientation.

A third mechanism for controlling the texture is to adjust the deposition temperature of the PZT thin film in a similar manner to that described above for Pt and Ir. For example, the (111) orientation of the PZT can be promoted through lower growth rates at higher temperatures. In the rhombohedral phase, the ferroelectric dipole lies along the [111] direction.
This may further encourage the (111) growth orientation. It is expected that Ir will behave similarly to Pt and that the above texturing practices will be effective with Ir as well as Pt.

Other embodiments of the invention will be apparent to those skilled in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope of the invention being indicated by the following claims.
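The lattice-matching rule of thumb discussed above can be checked against the constants quoted in the text. The following is a minimal sketch; the mismatch here is computed relative to the film spacing, which is one common convention:

```python
# Lattice mismatch check for the template/film pairs quoted in the text.
# All spacings in nm; mismatch is computed relative to the film (PZT) spacing.

def mismatch(template_a: float, film_a: float) -> float:
    """Fractional lattice mismatch between a template and the growing film."""
    return abs(film_a - template_a) / film_a

PZT_100, PZT_111 = 0.411, 0.71187
for label, template, film in [
    ("Pt(100)/PZT(100)", 0.392, PZT_100),
    ("Ir(100)/PZT(100)", 0.394, PZT_100),
    ("Pt(111)/PZT(111)", 0.680, PZT_111),
    ("Ir(111)/PZT(111)", 0.653, PZT_111),
]:
    # All four pairs fall within the ~10% rule of thumb for templated growth.
    print(f"{label}: {mismatch(template, film):.1%}")
```

Running the sketch confirms that every template/film pair satisfies the stated ~10% criterion, consistent with the expected (100) and (111) texturing.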
An example computing device comprises a processor to be coupled to a display device, and a boot controller coupled to the processor and to be coupled to the display device. The boot controller is configured to detect a power signal, receive sensor data detected by one or more sensors prior to an operating system being loaded by a boot process of the processor, determine a posture associated with the display device based on the sensor data detected by the one or more sensors, and communicate, to the display device, posture information indicating the posture associated with the display device. Pre-boot content is to be displayed on a display panel of the display device in a first arrangement based on the posture information. In more specific embodiments, determining the posture includes determining at least an orientation of the display device and whether a peripheral is present on the display device.
1. A method comprising:
detecting, by a boot controller coupled to a processor, a power signal;
receiving sensor data detected by one or more sensors prior to an operating system being loaded by a boot process of the processor;
determining a posture associated with a display device coupled to the boot controller and the processor based on the sensor data; and
communicating, to the display device, posture information indicating the posture associated with the display device, wherein pre-boot content is displayed on a display panel of the display device in a first arrangement based on the posture information.

2. The method of Claim 1, wherein the determining the posture includes determining a first parameter that indicates an orientation of the display device, wherein the first arrangement includes the pre-boot content being aligned with the orientation of the display device.

3. The method of Claim 2, further comprising:
selecting the posture information based, at least in part, on the first parameter.

4. The method of any one of Claims 2-3, further comprising:
selecting a bitmap of the pre-boot content based, at least in part, on the first parameter; and
sending the bitmap of the pre-boot content to the display device.

5. The method of any one of Claims 2-4, further comprising:
in response to receiving an indication that the pre-boot content was displayed on the display panel of the display device, initializing a boot process on the processor and communicating the first parameter to the boot process.

6. The method of Claim 5, further comprising:
storing pre-boot posture information in a pre-boot posture table to be accessed by the boot process, wherein the pre-boot posture information includes a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.

7. The method of Claim 6, wherein the boot process uses the first parameter and the second parameter to render a second bitmap with second pre-boot content, the method further comprising:
providing the second bitmap for display in the first arrangement prior to the operating system being loaded.

8. The method of any one of Claims 2-7, wherein the determining the posture further includes determining a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.

9. The method of Claim 8, wherein, in response to determining that the hardware peripheral is present and covers the first portion of the display panel, the first arrangement further includes the pre-boot content being located in a second portion of the display panel that is not covered by the hardware peripheral.

10. The method of any one of Claims 8-9, wherein the determining the posture further includes determining a third parameter that indicates a hinge angle of a hinge on one or more housing members containing the boot controller and the processor.

11. The method of Claim 10, further comprising:
selecting the posture information based on at least one of the first parameter, the second parameter, or the third parameter.

12. The method of any one of Claims 1-11, further comprising:
subsequent to receiving an indication that the pre-boot content was displayed on the display panel of the display device, activating a switch to connect the one or more sensors to an integrated sensor hub instead of the boot controller.

13. The method of any one of Claims 1-3 or 5-12, further comprising:
receiving, by a timing controller in the display device, the posture information from the boot controller; and
selecting, based on the posture information, a bitmap of the pre-boot content, wherein the bitmap is one of a plurality of bitmaps, wherein each bitmap is rendered with the pre-boot content to be displayed in a different arrangement.

14. An apparatus, the apparatus comprising means for performing the method of any preceding claim.

15. At least one machine readable storage medium comprising instructions, wherein the instructions when executed implement the method or realize the apparatus as claimed in any preceding claim.
TECHNICAL FIELD

This disclosure relates in general to computing devices, and more particularly, to device posture-based pre-boot display orientation and other usage support.

BACKGROUND

As technology has become an integral part of daily life, form factors for computing devices are continuously evolving. New form factors such as foldable systems and dual display systems are designed to be used in multiple different orientations or postures. For example, a foldable system can be used in a laptop posture, a tablet posture, a tabletop (e.g., all-in-one) posture, or a journal (e.g., book) posture. Dual display systems may incorporate an external accessory or hardware peripheral such as a keyboard that covers a portion of one display panel. Content that is displayed in a display panel of a computing device prior to loading an operating system during a boot process may be displayed in an orientation that is not aligned with the posture of the computing device or in an area of the display panel where the displayed content is not consumable by a user.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-1E are pictorial diagrams of some possible postures of a foldable computing device.
FIGS. 2A-2B are pictorial diagrams of some possible postures of a dual display computing device.
FIGS. 3A-3B are block diagrams illustrating an example default signs of life display orientation in an example computing device in various postures.
FIG. 3C is a block diagram illustrating another signs of life display orientation in the example computing device of FIGS. 3A-3B, according to an embodiment.
FIGS. 4A-4B are block diagrams illustrating an example default signs of life display orientation in another example computing device in various postures.
FIG. 4C is a block diagram illustrating a signs of life display orientation in the example computing device of FIGS. 4A-4B, according to an embodiment.
FIGS.
5A-5B are block diagrams illustrating an example default signs of life display orientation in yet another example computing device in various postures.
FIG. 5C is a block diagram illustrating a signs of life display orientation in the example computing device of FIGS. 5A-5B, according to an embodiment.
FIG. 6 is a block diagram of one possible embodiment of a system including device posture-based pre-boot display orientation according to an embodiment.
FIG. 7 is a block diagram of another possible embodiment of a system including device posture-based pre-boot display orientation according to an embodiment.
FIGS. 8A-8B are simplified interaction diagrams illustrating example interactions and operations for realizing device posture-based pre-boot display orientation according to an embodiment.
FIGS. 9A-9B are high-level flowcharts of an example process for realizing device posture-based pre-boot display orientation according to an embodiment.
FIG. 10 is a block diagram of an example processor in accordance with one embodiment.
FIG. 11 is a block diagram of an example computing system according to an embodiment.
FIG. 12 is a block diagram of an example system-on-a-chip (SoC) computer architecture according to an embodiment.

Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

The present disclosure provides various possible embodiments, or examples, of systems, methods, machine readable media, and apparatuses to enable device posture-based pre-boot display orientation and other usage support in computing devices. Many of today's computing device form factors are operable in various postures. For example, at least some mobile devices, foldable computing devices, tablets, and/or all-in-one devices allow users to operate the devices in various postures.
Different postures of a particular device may correspond to different orientations of a display device, the presence/absence of a hardware peripheral on a display panel, and/or various combinations thereof. Typically, an operating system of such a device can manage the orientation of content displayed on a display panel in order to ensure that the content is correctly oriented to be readable by a user of the device. Pre-boot content, however, is not managed by the operating system and may not be displayed in an arrangement in a display panel that is consumable (e.g., visible and correctly oriented to be readable) by the user. In this disclosure, an arrangement of content in a display panel is considered to be not consumable by the user if the content is not aligned with device posture (e.g., when the content is displayed sideways or upside down to the user) and/or if the content is not visible to the user (e.g., when content is displayed in an area of a display panel that is hidden by a peripheral such as a keyboard).

Various embodiments disclosed herein enable an improved, seamless user experience by displaying all pre-boot content in accordance with the user's intended device use posture. Thus, a user does not have to change device posture during the pre-boot stage and reorient the device again post-boot to its intended use posture in order to consume or interact with content displayed on the device. It should also be noted that, as used herein, the term 'content' can include text, images, videos, user input boxes, graphical display elements, graphical interaction elements (e.g.
scroll bars, buttons, etc.), data, or information that is displayable on a display device, or any combination thereof.

To better understand the techniques of the various embodiments in this disclosure, the following contextual information related to various computing device postures, the pre-boot environment, and displaying content on display panels of display devices is now provided. Form factors of computing devices are continuously evolving to provide new operational features and display options for users. In particular, many computing device form factors are designed to be operable in different postures. The term 'posture', as used in reference to a computing device herein, is intended to mean the rotation or orientation of a display device of the computing device relative to a user of the computing device and/or the presence or absence of a hardware peripheral (e.g., hardware keyboard) or other external accessory.

FIGS. 1A-1E show pictorial diagrams of some possible postures of a foldable computing device 100. Fig. 1A shows foldable computing device 100 in a laptop posture, including a display panel 105 having a first portion 104A and a second portion 104B, both of which are arranged in a landscape orientation to display content to a user. The first and second portions 104A and 104B may include housing members to contain electronic components of the computing device. Additionally, the first portion 104A may be rotatably connected to the second portion 104B by, for example, a hinge 106 connected to the housing members. The hinge may be configured to permit movement of the portions relative to one another about an axis. Fig. 1B shows foldable computing device 100 including a peripheral keyboard 106, where the foldable computing device 100 is in a tabletop posture in which the display panel 105 is generally flat and arranged in a landscape orientation for displaying content to a user. Fig.
1C shows foldable computing device 100 in a tablet posture where the display panel 105 is generally flat and arranged in a landscape orientation for displaying information to a user. Fig. 1D shows foldable computing device 100 in a canvas posture where the display panel 105 is generally flat and arranged in a portrait orientation for displaying content to a user. Fig. 1E shows foldable computing device 100 in a journal (e.g., a book, bent landscape) posture where the display panel 105 is bent between the first and second portions 104A and 104B, which are each arranged in a portrait orientation to display content to a user.

Figs. 2A-2B show pictorial diagrams of some possible postures of a dual display computing device 200. Dual display computing device 200 includes a first display panel 204A and a second display panel 204B. The first panel 204A may be rotatably connected to the second panel 204B by a hinge 206, which may be configured to permit movement of the panels relative to one another about an axis. Additionally, the first and second panels 204A and 204B may be contained in housing members to which hinge 206 is connected. Fig. 2A shows dual display computing device 200 in a tablet landscape posture where the first and second display panels 204A and 204B are generally flat and are each arranged in a landscape orientation for displaying content to a user. Both display panels 204A and 204B are unobstructed and can be visible to a user. Fig. 2B shows dual display computing device 200 in a tablet posture with an attached peripheral keyboard. Although the first and second display panels 204A and 204B are generally flat and arranged in landscape orientations for displaying information to a user, the peripheral keyboard 206 covers a portion (e.g., bottom half) of the second display panel 204B.

Today's mobile devices (e.g., foldable computing device 100, dual display computing device 200, tablets, smart phones, all-in-one devices, etc.)
typically run operating systems that enable rotation of content displayed on their display panel(s) by an appropriate degree (e.g., 0, 90, 180, or 270 degrees) in order to align the displayed content with the current posture of the computing device. Rotation of the content is performed so that the content, when displayed in the display panel, is correctly oriented on the display panel to enable the user to read, comprehend, interact with, or otherwise consume the content.

Computing devices that are operable in multiple postures, however, do not align displayed content to the device posture during pre-boot display and user interactions. A pre-boot stage is generally defined as the time between powering on a computing device and successfully loading an operating system on the computing device. After a computing device has booted up and the operating system has been loaded, however, the device may enter certain states where the device is not completely powered off, but another boot process has to be performed and the operating system has to be loaded again for the device to operate. For example, the Advanced Configuration and Power Interface (ACPI) open standard used by operating systems defines four global (Gx) states and six states (Sx) for ACPI systems:

  Gx                   State(s)  Description
  G0 (Working)         S0        The computer is running and the central processing unit (CPU) is executing instructions.
  G1 (Sleeping)        S0ix      Referred to as 'Modern Standby' or 'Low Power S0 Idle', where the processor is partially sleeping.
                       S1-S3     Volatile memory is kept refreshed to maintain the system state. Power is maintained to at least some components to allow the computer to wake in response to appropriate input and return to S0.
                       S4        Hibernation - Power consumption is reduced to the lowest level and contents of main memory are saved to a hibernation file to preserve user state. A subsequent boot is needed to wake the CPU but may be performed using the hibernation file.
  G2 (Soft Off)        S5        Soft off - Power is still supplied to the power button and to other components to allow a return to S0. No previous content is retained and a full reboot is required.
  G3 (Mechanical Off)  -         Computer power has been removed via a mechanical switch.

In ACPI-compliant devices, the pre-boot stage can also include the time between waking from certain sleep states (e.g., S4 or S5) that require the operating system to be loaded again and successfully completing the loading of the operating system again.

When a system is powered on, awakened from a certain sleep state (e.g., requiring the operating system to be loaded), or begins a boot process, it typically displays a Signs of Life (SOL) image (e.g., an original equipment manufacturer (OEM) or a product logo) before the operating system is loaded and the processor starts executing. The SOL image may be followed by Basic Input/Output System (BIOS) information and an on-screen keyboard, in some implementations. The OEM or product logo, the BIOS information, and/or the on-screen keyboard may be displayed until the operating system is loaded and the BIOS handoff to the operating system is performed. During a pre-boot stage (e.g., SOL and BIOS), a user may also need to use an on-screen keyboard to enter credentials such as a username and/or a password (e.g., for a password-protected hard drive or an encrypted hard drive password) or to change the BIOS settings. Current devices do not have a way of determining device posture to display such pre-boot content and an on-screen keyboard in the correct orientation and placement within the display panel for the user.

Current systems display pre-boot content in only one predefined orientation in a display panel of the system.
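The ACPI state logic above determines when a pre-boot stage (and hence pre-boot display handling) applies. A minimal sketch, with hypothetical constant and function names:

```python
# Sketch: decide whether a wake event goes through the full boot flow -- and
# therefore through the pre-boot (SOL/BIOS) stage -- based on the state the
# system is leaving. Names are hypothetical, per the ACPI table above.

# Leaving these states requires loading the operating system again.
FULL_BOOT_STATES = {"S4", "S5", "G3"}
# These states resume directly into the already-loaded operating system.
RESUME_STATES = {"S0ix", "S1", "S2", "S3"}

def preboot_stage_applies(previous_state: str) -> bool:
    """True if waking from previous_state requires a full boot (pre-boot stage)."""
    if previous_state in FULL_BOOT_STATES:
        return True
    if previous_state in RESUME_STATES:
        return False
    raise ValueError(f"unrecognized power state: {previous_state}")
```

In this model, only wakes from S4, S5, or G3 reach the SOL and BIOS screens where posture-aware display matters; resumes from S1-S3 or Modern Standby return directly to the running operating system.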
Thus, in order for pre-boot content to be oriented such that it is consumable by a user of the system, a display panel needs to be physically oriented relative to the user based on the predefined orientation for displaying pre-boot content. A device being booted, however, may be oriented in a completely different orientation or posture that can cause confusion and/or render device interaction annoying and difficult if the pre-boot content is displayed sideways or upside down, or is hidden by a keyboard, for example. The annoyance and difficulty can be particularly acute when using newer form factors, such as foldable or dual display devices, because the remedy is to change the device posture during the pre-boot stage, and then change back to the desired device posture after the operating system is loaded.

Current systems rely on the user of a device to physically manipulate (e.g., rotate, turn, etc.) the device into a posture that enables the user to consume (e.g., read, comprehend, or interact with) the displayed pre-boot content. Relying on the user to physically manipulate a device to a particular orientation to consume pre-boot content can result in an undesirable user experience. Users generally expect consistency when operating a computing device. Because content is aligned with a display panel orientation after the operating system is loaded, users generally expect all displayed content (including the pre-boot content) to rotate to follow the display panel orientation to enable user readability, comprehension, and interaction with the content. Consequently, pre-boot content that is not aligned with the device posture may not meet the expectations of the user, which can be frustrating.
The inconsistency of content alignment with device posture between pre-boot content and post-boot content can also cause the user to doubt the stability of the system and/or to believe that the system is malfunctioning.

For systems with large display panels (e.g., foldable computing devices with display panels > 17 inches), having to change the display device orientation from a preferred posture to another posture in order to consume pre-boot content can be an annoying and frustrating user experience. Rotating the display device may not be as quick and easy as it would be for an eight- or ten-inch tablet or for a smart phone. Moreover, larger dual display and foldable computing devices may require changing the hinge angle as well as rotating the device, to match the predefined pre-boot orientation. Such aspects can further diminish the user experience.

A system for enabling device posture-based pre-boot display orientation can resolve these issues (and more). One or more embodiments of the present disclosure connect system sensors (e.g., accelerometer, gyroscope, compass, temperature sensors, Hall sensors, etc.) in a computing device to a boot controller during the period from the boot controller detecting a power signal when the device is "powered on" (e.g., from the G3, S4, or S5 system state) until a handoff to BIOS (e.g., a boot process) for a system boot. The boot controller receives sensor data from the system sensors and processes the sensor data to determine device posture. The boot controller determines posture information based on the posture and provides the posture information to a timing controller (TCON) of a display device to display a Signs of Life (SOL) image, and further provides posture information to BIOS to enable appropriate BIOS screen orientation and on-screen keyboard display. The boot controller then hands off control to the processor (e.g., system-on-a-chip (SOC)), and the BIOS continues the boot flow.
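The boot flow just described can be sketched end to end. This is an illustrative model only: the class, method, and sensor names are hypothetical, and a real boot controller would implement these steps in firmware rather than Python:

```python
# Sketch of the pre-boot flow: power signal -> read sensors -> determine
# posture -> inform TCON (SOL image) -> hand off to BIOS with posture info.
# All names are hypothetical; this is not the patent's actual implementation.

class BootController:
    def __init__(self, sensors, tcon, bios):
        self.sensors = sensors  # dict of sensor name -> zero-arg read function
        self.tcon = tcon        # timing controller of the display device
        self.bios = bios        # boot process to hand off to

    def on_power_signal(self):
        # 1. Read raw sensor data before BIOS or the OS runs.
        data = {name: read() for name, read in self.sensors.items()}
        # 2. Process the sensor data into posture information.
        posture = self.determine_posture(data)
        # 3. Tell the TCON so the SOL image is displayed in the correct arrangement.
        self.tcon.set_posture(posture)
        # 4. Hand off to BIOS, passing posture info for BIOS screens and
        #    the on-screen keyboard.
        self.bios.boot(posture)
        return posture

    @staticmethod
    def determine_posture(data):
        # Simplified: orientation from the dominant accelerometer axis,
        # peripheral presence from a Hall sensor reading.
        ax, ay = data["accel_x"], data["accel_y"]
        orientation = "landscape" if abs(ax) >= abs(ay) else "portrait"
        return {"orientation": orientation,
                "peripheral_present": bool(data.get("hall", 0))}
```

The key design point this sketch illustrates is ordering: the TCON receives posture information before the BIOS handoff, so even the earliest displayed content is posture-aware.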
Additionally, such sensor data can also be processed to enable other usages that require pre-boot user authentication, nearby device and environment intelligence, etc.

Figs. 3A-3B are pictorial diagrams illustrating an example predefined (or default) display orientation for a boot screen displayed during the pre-boot stage of a foldable computing device 302 in a journal posture (Fig. 3A) and a laptop posture (Fig. 3B). A "boot screen" is the signs-of-life (SOL) image that is displayed while a computing device is booting up. Foldable computing device 302 may be the same or similar to foldable computing device 100 of Figs. 1A-1E. In Figs. 3A-3B, the predefined display orientation is landscape with pre-boot content being displayed in a readable orientation above a particular edge 308 of the display panel 305.

In Fig. 3A, an OEM logo 310 is the SOL image that is displayed in display panel 305 of foldable computing device 302 when the device is powered on or awakened from a certain sleep state, but before a boot process begins. In this example, the OEM logo 310 can be consumed (e.g., read, comprehended) by a user when foldable computing device 302 is in a journal posture with the display panel oriented toward the user such that edge 308 is below the content. However, it may not be an ideal position for the OEM logo 310 or other pre-boot content as it is displayed across a bend in the display panel.

Fig. 3B illustrates what can happen when the display orientation is predefined for content displayed in a pre-boot stage. As shown in Fig. 3B, foldable computing device 302 is rotated clockwise ninety degrees (or counterclockwise two hundred seventy degrees) to a laptop posture. In this scenario, the OEM logo 310 is also rotated, and becomes difficult to consume (e.g., read, comprehend) as it is displayed in a sideways or ninety degree orientation.
Also, it is still positioned across the bend in the display panel, which may further hinder a user's ability to read or otherwise consume the information.

Fig. 3C illustrates foldable computing device 302 enabled with the device posture-based pre-boot orientation system disclosed herein. In Fig. 3C, the foldable computing device 302 is rotated clockwise ninety degrees (or counterclockwise two hundred seventy degrees) to a laptop posture. When the foldable computing device 302 is enabled with the device posture-based pre-boot orientation system, however, the OEM logo 310 is aligned with the device posture and, therefore, is consumable by a user.

Figs. 4A-4B are pictorial diagrams illustrating an example predefined (or default) display orientation for BIOS input information displayed during the pre-boot stage of a boot process of a rotatable computing device 402 (e.g., a foldable device, tablet, smart phone, all-in-one device, etc.) having a single flat display panel 405. In Figs. 4A-4B, the predefined display orientation is landscape with BIOS content being displayed in a readable orientation above a particular edge 408 of the display panel 405. In Fig. 4A, BIOS input information includes a user credentials prompt 410 that is displayed in display panel 405 after a boot process begins, but before an operating system is loaded. In this example, the user credentials prompt 410 is consumable (e.g., readable, comprehensible) by a user when the display panel 405 of computing device 402 is in a tabletop posture, a tablet posture, or a canvas posture with the display panel oriented toward the user such that edge 408 is below the content.

Fig. 4B illustrates what can happen when the display orientation is predefined for content displayed in a pre-boot stage. As shown in Fig. 4B, the computing device 402 is rotated clockwise two hundred seventy degrees (or counterclockwise ninety degrees) to a portrait orientation of the tabletop, tablet, or canvas posture.
In this scenario, the user credentials prompt 410 is also rotated, and becomes difficult to consume (e.g., read, comprehend) as it is displayed in a sideways or two hundred seventy degree orientation.

Fig. 4C illustrates computing device 402 enabled with the device posture-based pre-boot orientation system disclosed herein. In Fig. 4C, computing device 402 is rotated clockwise two hundred seventy degrees (or counterclockwise ninety degrees) to a portrait orientation. When computing device 402 is enabled with the device posture-based pre-boot orientation system, however, the user credentials prompt 410 is aligned with the device posture and, therefore, is consumable by a user.

Figs. 5A-5B are pictorial diagrams illustrating an example predefined (or default) display orientation for a boot screen displayed in the pre-boot stage of a dual display computing device 502 in a tablet posture. Dual display computing device 502 may be the same or similar to dual display computing device 200 of Figs. 2A-2B. Dual display computing device 502 includes a first display panel 504A, a second display panel 504B, and a removable peripheral (e.g., keyboard) 520. The first panel 504A may be rotatably connected to the second panel 504B by a hinge 506, which may be configured to permit movement of the panels relative to one another about an axis. Additionally, the first and second panels 504A and 504B may be contained in housing members to which hinge 506 is connected. In Figs. 5A-5B, the predefined display orientation for each display panel 504A and 504B is landscape with pre-boot content being displayed in a readable orientation above a particular edge 508 of the second display panel 504B.

In Fig. 5A, an OEM logo 510 is the SOL image that is displayed in the second display panel 504B of dual display computing device 502 when the device is powered on or awakened from a certain sleep state, but before a boot process begins.
In this example, the OEM logo 510 is consumable (e.g., readable, comprehensible) by a user when dual display computing device 502 is in a laptop posture, tablet posture, or other posture in which the second display panel 504B is oriented such that edge 508 is below the content, and when peripherals that cover at least part of the display panel are not present.

Fig. 5B illustrates what can happen when the display orientation is predefined for content displayed in the pre-boot stage and the posture of the device is modified to include a peripheral, such as a keyboard 520, attached to a lower portion of the second display panel 504B. In this scenario, the OEM logo 510 is at least partially hidden behind the keyboard 520 and becomes difficult to consume (e.g., read, comprehend).

Fig. 5C illustrates the dual display computing device 502 enabled with the device posture-based pre-boot orientation system disclosed herein. In Fig. 5C, the posture is modified to include a peripheral, such as keyboard 520, attached to a lower portion of the second display panel 504B. When the dual display computing device 502 is enabled with the device posture-based pre-boot orientation system, however, the OEM logo 510 is moved so that it is displayed entirely within a visible (or unobstructed) portion of the second display panel 504B and, therefore, is consumable by a user. It should be noted that the OEM logo 510 could be moved to the first display panel 504A in other embodiments.

Embodiments of a system for enabling device posture-based pre-boot display orientation, as disclosed herein and graphically depicted in Figs. 3C, 4C, and 5C, advantageously eliminate user confusion and/or annoyance and frustration that may be caused by the wrong orientation and/or placement of displayed pre-boot content and an on-screen keyboard with respect to device postures.
In particular, users of foldable and dual display systems may benefit from one or more embodiments, as these systems may be cumbersome and more difficult to manipulate into the correct posture. Furthermore, ensuring a seamless user experience can build a user's confidence in the stability of the system.

Turning to Fig. 6, Fig. 6 is a simplified block diagram of a computing device 600 configured to enable device posture-based pre-boot display orientation. Device 600 may include any combination of components, some of which are shown by way of example in Fig. 6. These components may be implemented as integrated circuits, discrete computing devices, or other modules, logic, hardware, software, firmware, or any suitable combination thereof adapted in a computing device, or as components otherwise incorporated within a chassis of the computing device. Fig. 6 is intended to show a high-level view of many components of the computing device. However, it is to be understood that some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may be used in other implementations.

By way of example, computing device 600 may be a laptop computer, a mobile computing device, a tablet, a phablet, an all-in-one device, a dual display device, a foldable computing device, a wearable device, or any other computing device that includes at least one display panel and that can be operated in multiple postures (e.g., different orientations of the display panel relative to a user, peripherals being present or absent, etc.).

In at least one embodiment, computing device 600 includes a system-on-a-chip (SOC) 610, a display device 620 with a timing controller (TCON) 622, a sensor hub 630 connected to one or more sensors 640, and a boot controller 650.
Computing device 600 may also include a posture lookup table 656 that is accessible by boot controller 650, a pre-boot content table 626 that is accessible by TCON 622, and a pre-boot posture table 658 that is accessible by the boot controller 650 and by SOC 610. Computing device 600 may comprise other components including, but not necessarily limited to, memory, storage, a user interface (e.g., touch pad, keyboard, trackball, etc.), a battery, microphones, cameras, a touch controller, and external ports.

The SOC 610 of computing device 600 may comprise one or more processors integrated with one or more additional components, such as a memory controller, a graphics processing unit (GPU), an image processing unit (IPU), caches, and other components. SOC 610 also can include a basic input/output system (BIOS) 612 that runs after power 605 is supplied to the SOC 610 to power on the device. BIOS 612 can use a boot loader to load and boot system peripherals and a memory controller to load an operating system for the SOC 610. In one or more embodiments, BIOS 612 is initialized after boot controller 650 determines the current posture of computing device 600 and an SOL image is displayed. It should be noted that, although an SOC is one possible implementation, other computing system configurations may alternatively be used to enable device posture-based pre-boot display orientation as disclosed herein.

In at least one implementation, sensor hub 630 may be configured as a processing unit communicatively coupled to SOC 610 and boot controller 650, but separately implemented. For example, sensor hub 630 could be a microcontroller, a digital signal processor (DSP), or any other processing unit capable of receiving sensor data from one or more sensors 640, processing the sensor data, and communicating the processed sensor data to boot controller 650 during the pre-boot stage or to SOC 610 once the operating system is loaded.
Sensors 640 can include various sensors that provide sensor data that can be used to determine a posture of computing device 600. Examples of sensors 640 used to determine the posture of a device can include any one or more of an accelerometer (e.g., single-axis and/or multi-axis), a gyroscope, a compass, a geomagnetic field sensor, and/or a Hall sensor (e.g., for detecting the presence of a peripheral such as a keyboard over a display panel). Sensors 640 may also include other sensors that provide information to the device for various usages. Sensor hub 630 may be configured to process the received sensor data (e.g., streams of data) into a form that can be easily understood and used by boot controller 650 during the pre-boot stage and by SOC 610 in the post-boot stage (i.e., once the operating system has been successfully loaded). In one example, the sensor data received in the pre-boot stage could be processed into a single compressed data stream and provided to boot controller 650.

Sensor hub 630 may be implemented in numerous different configurations. In another implementation, sensor hub 630 may be integrated with boot controller 650 and communicatively coupled to SOC 610. In yet another implementation, sensor hub 630 may be integrated with SOC 610. In this implementation, a power plane in the SOC 610 can be configured to allow the integrated sensor hub to power up within the SOC 610 before other components (e.g., processors, memory controller, etc.) power up and before the boot flow (e.g., BIOS 612) is initiated. Additionally, a data plane may be configured to facilitate communication from the integrated sensor hub to boot controller 650 so that the boot controller can receive the processed sensor data during the pre-boot stage before the boot controller's handoff to BIOS 612 during system boot.

In at least one embodiment, boot controller 650 may be configured as an embedded controller, which may be implemented separately from SOC 610 and sensor hub 630.
In other implementations, boot controller 650 may be integrated with sensor hub 630 and separate from SOC 610. In yet other implementations, boot controller 650 may be integrated with SOC 610. Boot controller 650 is configured to detect a power signal when the system is powered on (e.g., from G3, S4, or S5 system states). A power signal generally refers to a signal that corresponds to a user action or another event that is intended to activate, reboot, or restart a computer system. Boot controller 650 is also configured to power up before SOC 610 (e.g., before an SOC reset is released) in a pre-boot stage of the computing device 600, and to receive sensor data from sensor hub 630 during the pre-boot stage before the handoff to BIOS 612.

In at least one embodiment, boot controller 650 can include processor circuitry 652 and memory circuitry 654 along with any other computing components (e.g., signal registers, I/O components, etc.) needed to perform pre-boot operations to effect device posture-based pre-boot display orientation. Memory circuitry 654 may include any non-volatile and/or volatile memory devices such as random access memory (RAM), read-only memory (ROM), flash ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc. Firmware may be stored in memory circuitry (e.g., nonvolatile memory such as ROM or flash ROM) and, in at least one implementation, may perform control, monitoring, and data manipulation functions of boot controller 650. Processor circuitry 652 may include any suitable central processing unit (CPU) capable of facilitating the activities of boot controller 650 as described herein.

Using the sensor data received from sensor hub 630, the boot controller 650 can determine a posture of computing device 600.
In at least one embodiment, the posture of the computing device 600 can be defined by one or more parameters including, but not necessarily limited to, the orientation (e.g., degrees the device is rotated) of the device or display panel of the device, the presence or absence of a peripheral (e.g., external keyboard) affecting the exposure of a display panel, a hinge angle of a hinge that connects housing members of a computing device, or any combination thereof. These posture parameters can be derived by the boot controller 650 based on the sensor data. Boot controller 650 can communicate posture information to TCON 622 of display device 620 during the pre-boot stage, where the posture information represents the determined posture of the computing device 600. The posture information can be communicated via general-purpose input/outputs (GPIOs), flags or interrupts, etc.

In one embodiment, boot controller 650 can use posture lookup table 656 to select posture information that represents the posture of computing device 600. Posture lookup table 656 can map posture information (e.g., representative values) to particular posture parameters derived from sensor data. The postures that are possible depend on the type of the computing device and the parameters that can be derived from the sensor data. A nonlimiting example of a posture lookup table could include posture information (e.g., representative values) mapped to different combinations of the orientation of the display device (e.g., in degrees) and the presence/absence of a keyboard (e.g., yes/no indication):

Posture Information    Orientation    Keyboard
0                      0              No
1                      90             No
2                      180            No
3                      270            No
4                      0              Yes
5                      90             Yes
6                      180            Yes
7                      270            Yes

In this embodiment, boot controller 650 can derive posture parameters from the sensor data and use the posture parameters to search the lookup table and select the corresponding posture information. Boot controller 650 can then communicate the posture information to TCON 622 of display device 620.
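The lookup described above can be sketched in a few lines of code. This is an illustrative sketch only: the table values mirror the nonlimiting example table, and the names `POSTURE_LOOKUP` and `select_posture_info` are assumptions for illustration, not actual boot controller firmware.

```python
# Hypothetical sketch of posture lookup table 656: each
# (orientation, keyboard-present) pair derived from sensor data maps to a
# representative value the boot controller communicates to the TCON.
POSTURE_LOOKUP = {
    (0, False): 0, (90, False): 1, (180, False): 2, (270, False): 3,
    (0, True): 4, (90, True): 5, (180, True): 6, (270, True): 7,
}

def select_posture_info(orientation_deg, keyboard_present):
    """Return the representative posture value for the derived parameters."""
    return POSTURE_LOOKUP[(orientation_deg, keyboard_present)]
```

For example, a display rotated 180 degrees with no keyboard attached would select posture information value 2, which the boot controller would then communicate to the TCON.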
It should be noted that any suitable communication mechanism may be used including, but not necessarily limited to, general-purpose input/outputs (GPIOs), I2C, flags or interrupts, etc.

It should be noted that the posture lookup table above is offered for illustration purposes only, and that such a table may be configured in any suitable manner based on particular devices, implementations and/or needs. For example, any number of parameters and/or combination of parameters may be used to determine device posture. Hinge angle is another parameter that may be considered in combination with orientation, in combination with keyboard presence/absence, in combination with both orientation and keyboard presence/absence, or in combination with any other parameters derived from sensor data related to a computing device. In one example, a parameter derived from sensor data to indicate a hinge angle may be used to determine whether to move the content so that it is not centered on a display panel. This can prevent the content from being displayed across a bend in the display panel (see, e.g., Figs. 3A-3B) of a foldable device when the foldable device is in a journal posture or laptop posture, for example.

In another embodiment, boot controller 650 can select a correctly aligned bitmap with desired pre-boot content to be displayed based on the sensor data received from sensor hub 630. For example, posture parameters can be derived from the sensor data. The boot controller can use the parameters and the desired pre-boot content to select (e.g., from a table) or render a bitmap to be displayed. Boot controller 650 can then send the selected or rendered bitmap to the display device 620 to be displayed.
In this scenario, the posture information is included in the bitmap and therefore is communicated to the display device via the bitmap.

In at least one embodiment, a pre-boot posture table 658, which is accessible to both the boot controller 650 and the SOC 610, may be implemented in computing device 600. More specifically, pre-boot posture table 658 may be accessible to a boot process once the boot controller initializes the BIOS 612 and starts the boot process, and to an operating system once the operating system is loaded by the boot process. Pre-boot posture table 658 may be populated by the boot controller with information that indicates the posture of the computing device 600 as determined by the boot controller 650 during the pre-boot stage. By way of example, the information may include the posture parameters derived from the sensor data received from the sensor hub. In some embodiments, the information used to populate pre-boot posture table 658 may include the sensor data received by the boot controller 650, posture information that is selected by the boot controller 650 from posture lookup table 656, and/or any other suitable information that can indicate the posture of computing device 600.

In some scenarios, the boot process may use the information in pre-boot posture table 658 to determine the presence or absence of a hardware peripheral as determined by the boot controller 650. The boot process can use this information, along with the orientation of display device 620 (which may be provided to the BIOS 612 during handoff from the boot controller 650), to render pre-boot BIOS content for display on display device 620. In some scenarios, the operating system may use the information in pre-boot posture table 658 before the operating system is able to access updated sensor data, in order to render content to be displayed before the updated sensor data is accessible or available to the operating system.
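The writer/reader roles of the shared pre-boot posture table can be sketched as a small data structure. The class and field names here are hypothetical, chosen only to illustrate how the boot controller populates the table and how the boot process later queries it; an actual implementation would live in firmware-accessible storage.

```python
# Illustrative sketch of pre-boot posture table 658: written by the boot
# controller during the pre-boot stage, read by the boot process and by
# the operating system before updated sensor data is available.
class PreBootPostureTable:
    def __init__(self):
        self._entries = {}

    def store(self, orientation_deg, keyboard_present, hinge_angle_deg):
        # Populated by the boot controller with derived posture parameters.
        self._entries = {
            "orientation": orientation_deg,
            "keyboard_present": keyboard_present,
            "hinge_angle": hinge_angle_deg,
        }

    def peripheral_present(self):
        # Queried by the boot process (or OS) to decide where to render
        # pre-boot content; defaults to False if nothing was stored.
        return self._entries.get("keyboard_present", False)
```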
Pre-boot posture table 658 may be integrated with boot controller 650 or any other component that is accessible to boot controller 650 and the boot process during the pre-boot stage, and that is accessible to the operating system after the operating system has been loaded (i.e., post-boot stage).

Display device 620 of the computing device 600 may be operably connected to SOC 610 and, in some cases, movably connected to SOC 610. For example, in a laptop computer, a hinge can movably connect display device 620 to a base housing that contains SOC 610. In a mobile computing device, display device 620 may be operably connected to SOC 610 within the same housing. These nonlimiting examples are provided for illustrative purposes, and it should be appreciated that embodiments herein could be implemented using any computing device configuration that is operable in multiple postures.

Display device 620 includes a display panel 624, timing controller 622, and possibly other components such as microphones, cameras, and/or a touch controller. TCON 622 converts video data received from the SOC 610 into signals that drive the display panel 624. The display panel 624 can be any type of embedded display in which the display elements responsible for generating light or allowing the transmission of light are located in each pixel. Such displays may include TFT LCD (thin-film-transistor liquid crystal display), micro-LED (micro-light-emitting diode (LED)), OLED (organic LED), and QLED (quantum dot LED) displays. When touchscreen technology is implemented in display device 620, a touch controller drives the touchscreen technology utilized in the display panel 624 and collects touch sensor data provided by the employed touchscreen technology.

In at least one embodiment, display device 620 includes a pre-boot content table 626. Pre-boot content table 626 can map posture information (e.g., representative values) representing particular postures to bitmaps of pre-boot content.
Each bitmap can store the pre-boot content in a different arrangement (e.g., orientation and placement) based on the posture information. TCON 622 can use the posture information received from boot controller 650 to search the pre-boot content table 626 to identify the corresponding bitmap to be displayed in display panel 624. The bitmap can be displayed on display panel 624. A nonlimiting example of a pre-boot content table 626 could include posture information (e.g., representative values) mapped to bitmaps of pre-boot content such as signs-of-life images:

Posture Information    Bitmap
0                      signs_of_life_0_no_kb.bmp
1                      signs_of_life_90_no_kb.bmp
2                      signs_of_life_180_no_kb.bmp
3                      signs_of_life_270_no_kb.bmp
4                      signs_of_life_0_kb.bmp
5                      signs_of_life_90_kb.bmp
6                      signs_of_life_180_kb.bmp
7                      signs_of_life_270_kb.bmp

Display device 620 can display the selected bitmap (e.g., SOL image) on display panel 624. The pre-boot content is displayed in an arrangement that aligns the pre-boot content with the device posture, such that the pre-boot content has the correct orientation and placement within the display panel to enable the content to be consumed by a user. Additionally, boot controller 650 can set the display panel orientation and/or the hardware peripheral presence for the BIOS flow to follow.

It should be apparent that the particular aspects of communicating device posture to the display device (e.g., TCON) and the SOC (e.g., BIOS) are for illustrative purposes only and that many other implementations are possible and are considered to be within the broad scope of this disclosure. In one illustrative example of the numerous possible implementations, the posture information that is communicated to the TCON may include the orientation parameter, the peripheral presence parameter (e.g., keyboard parameter), and a hinge angle parameter rather than a single representative value.
Accordingly, in this case, the pre-boot content table 626 could be modified to map bitmaps to the various combinations of the orientation, peripheral presence, and hinge angle parameters.

Fig. 7 is a simplified block diagram of another hardware implementation for a computing device 700 configured to enable device posture-based pre-boot display orientation. Device 700 may include any combination of components, some of which are shown by way of example in Fig. 7. These components may be implemented as integrated circuits, discrete computing devices, or other modules, logic, hardware, software, firmware, or any suitable combination thereof adapted in a computing device, or as components otherwise incorporated within a chassis of the computing device. Fig. 7 is intended to show a high-level view of many components of the computing device. However, it is to be understood that some of the components shown may be omitted, additional components may be present, and different arrangements of the components shown may occur in other implementations.

Similar to computing device 600, computing device 700 may be a laptop computer, a mobile computing device, a tablet, a phablet, an all-in-one device, a dual display device, a foldable computing device, a wearable device, or any other computing device that includes at least one display panel and that can be operated in multiple postures (e.g., different orientations of the display panel relative to a user, peripherals being present or absent, etc.).

Computing device 700 includes a system-on-a-chip (SOC) 710 with an integrated sensor hub 714, a display device 720 with a timing controller (TCON) 722 and a display panel 724, a switch 730 connected to one or more sensors 740, and a boot controller 750.
Computing device 700 may also include a posture lookup table 756 that is accessible by boot controller 750, a pre-boot content table 726 that is accessible by TCON 722, and a pre-boot posture table 758 that is accessible by both the boot controller 750 and the SOC 710. Posture lookup table 756, pre-boot content table 726, and pre-boot posture table 758 may be similar to corresponding components (e.g., posture lookup table 656, pre-boot content table 626, pre-boot posture table 658) shown and described with reference to computing device 600 in Fig. 6. Computing device 700 may comprise other components including, but not necessarily limited to, memory, storage, a user interface (e.g., touch pad, keyboard, trackball, etc.), a battery, microphones, cameras, a touch controller, and external ports.

The SOC 710 of computing device 700 may comprise one or more processors integrated with one or more additional components, such as a memory controller, a graphics processing unit (GPU), an image processing unit (IPU), caches, and other components. SOC 710 also can include a basic input/output system (BIOS) 712 that runs after power 705 is supplied to the SOC 710 to power on the device. BIOS 712 can use a boot loader to load and boot system peripherals and a memory controller to load an operating system for the SOC 710. In one or more embodiments, BIOS 712 is initialized after boot controller 750 determines the current posture of computing device 700 and an SOL image is displayed. SOC 710 may be coupled to display device 720 via any suitable display interface including, but not necessarily limited to, an embedded display port (eDP) 708. In another nonlimiting example, SOC 710 could be coupled to display device 720 via an MIPI display serial interface (DSI).

SOC 710 also includes integrated sensor hub (ISH) 714, which may be configured as a co-processor.
For example, ISH 714 could be a microcontroller, a microprocessor, a digital signal processor (DSP), or any other processing unit capable of receiving sensor data from one or more sensors 740, processing the sensor data, and communicating the processed sensor data to the operating system (or other components) of SOC 710. A sensor hub driver may be provided in SOC 710 to enable communication between the operating system and the ISH 714.

Sensors 740 in computing device 700 can include various sensors as described with reference to sensors 640 of computing device 600 in Fig. 6, for example. In one implementation, switch 730 is provisioned in computing device 700 to enable sensors 740 to be connected to either the boot controller 750 or to ISH 714 at the appropriate time. In one example, sensor interfaces are multiplexed between ISH 714 on the SOC 710 and boot controller 750. Switch 730 connects sensor interfaces to boot controller 750 during a period of the pre-boot stage extending from when the system is powered on from G3, S4, or S5 system states to the handoff by the boot controller 750 to BIOS 712 during the boot flow. After the handoff to BIOS 712, boot controller 750 causes switch 730 to change the communication flow from sensors 740 by connecting the sensor interfaces to the ISH 714 on the SOC 710.

In one implementation, connections 702A, 702B, and 702C between sensors 740, boot controller 750, and ISH 714 can be achieved using the I2C serial communication protocol or some other suitable communication protocol. Switch 730 may be activated by boot controller 750 via a GPIO 704 digital signal pin. Additionally, the switched sensor interfaces can be implemented using any interface required for sensor connectivity and are not limited to I2C only.
Depending on the capability of the sensor interfaces, a switch may not be required (e.g., if the interfaces support a multi-master mode).

In at least one embodiment, boot controller 750 may be configured as an embedded controller, which may be implemented separately from SOC 710 and ISH 714. In another implementation, boot controller 750 may be integrated with SOC 710. In at least one embodiment, boot controller 750 can include processor circuitry 752 and memory circuitry 754 along with any other computing components (e.g., signal registers, I/O components, etc.) needed to perform pre-boot operations to effect device posture-based pre-boot display orientation. In one or more embodiments, boot controller 750 may be configured in the same or similar manner as boot controller 650.

Boot controller 750 is configured to detect a power signal when the system is powered on (e.g., from G3, S4, or S5 system states) by a user action or another event intended to activate, reboot, or restart the system. Boot controller 750 is also configured to power up before SOC 710 (e.g., processor and memory controller) in a pre-boot stage of the computing device 700, to activate switch 730 to connect sensors 740 to boot controller 750, and to receive sensor data from sensors 740 in the pre-boot stage before the boot controller hands off to the BIOS.

When system power up is initiated (e.g., by power 705), before an SOC reset is released, boot controller 750 has access to sensors 740. Boot controller 750 can determine the posture of the computing device 700 based on the sensor data received from the sensors. For example, parameters such as orientation, the presence/absence of a peripheral keyboard, and a hinge angle may be derived from the sensor data and define the posture of the device. In one embodiment, boot controller 750 can use the parameters to search posture lookup table 756 to select posture information that represents the posture of the device.
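The GPIO-driven multiplexing of sensor interfaces described above (switch 730) can be sketched as follows. The route names and the `handoff_to_bios` method are illustrative assumptions standing in for the hardware switch and the GPIO signal that drives it.

```python
# Minimal sketch of switch 730: sensors are routed to the boot controller
# at power-on (pre-boot stage), then rerouted to the integrated sensor
# hub when the boot controller hands off to the BIOS.
class SensorSwitch:
    BOOT_CONTROLLER = "boot_controller"
    ISH = "integrated_sensor_hub"

    def __init__(self):
        # Default route at power-on from G3, S4, or S5 states.
        self.route = self.BOOT_CONTROLLER

    def handoff_to_bios(self):
        # Driven via a GPIO by the boot controller just before handoff,
        # so the operating system receives sensor data through the ISH.
        self.route = self.ISH
        return self.route
```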
Boot controller 750 can communicate the posture information to TCON 722 of display device 720 via general-purpose input/outputs (GPIOs), I2C, flags or interrupts, etc. The boot controller 750 then prompts the switch 730 to switch the sensors 740 to ISH 714 and initializes the BIOS 712. A BIOS flow (or boot process) continues to boot the system normally with screen images displayed, and on-screen keyboard enabled (if available), as per device orientation.

In another embodiment, boot controller 750 can select a correctly aligned bitmap with desired pre-boot content to be displayed based on the sensor data received from sensors 740. For example, posture parameters can be derived from the sensor data. The boot controller can use the parameters and the desired pre-boot content to select (e.g., from a table) or render a bitmap to be displayed. Boot controller 750 can then send the selected or rendered bitmap to the display device 720 to be displayed. In this scenario, the posture information is included in the bitmap and therefore is communicated to the display device via the bitmap.

In at least one embodiment, a pre-boot posture table 758 that is accessible to both the boot controller 750 and the SOC 710 may be implemented in computing device 700. More specifically, pre-boot posture table 758 may be accessible to a boot process once the boot controller initializes the BIOS 712 and starts the boot process, and to an operating system once the operating system is loaded by the boot process. Pre-boot posture table 758 may be populated by the boot controller with information that indicates the posture of the computing device 700 as determined by the boot controller 750 during the pre-boot stage.
By way of example, the information may be the same as or similar to the information described with reference to pre-boot posture table 658 of computing device 600.

In some scenarios, the boot process may use the information in pre-boot posture table 758 to determine the presence or absence of a hardware peripheral as determined by the boot controller 750. The boot process can use this information, along with the orientation of display device 720 (which may be provided to the BIOS 712 during handoff from the boot controller 750), to render pre-boot BIOS content for display on display device 720. In some scenarios, the operating system may use the information in pre-boot posture table 758 before the operating system is able to access updated sensor data via ISH 714, in order to render content to be displayed before the updated sensor data is accessible or available to the operating system. Pre-boot posture table 758 may be integrated with boot controller 750 or any other component that is accessible to boot controller 750 and the boot process during the pre-boot stage, and that is accessible to the operating system after the operating system has been loaded (i.e., post-boot stage).

Display device 720 of the computing device 700 may be operably connected to SOC 710 and, in some cases, movably connected to SOC 710 in one of the same or similar configurations shown and described with reference to display device 620 and SOC 610 of Fig. 6. Display device 720 may also be configured in one of the same or similar configurations shown and described with reference to display device 620 of Fig. 6. Display device 720 may receive posture information from boot controller 750. In one embodiment, the posture information may be used to search pre-boot content table 726 for a bitmap to be displayed with pre-boot content such as an SOL image.
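The table search described above can be sketched as a simple mapping on the display side. This is an illustrative sketch: the filenames mirror the nonlimiting example given for pre-boot content table 626 in Fig. 6, and the function name is an assumption, not actual TCON firmware.

```python
# Hypothetical sketch of a pre-boot content table (e.g., 626/726): the
# representative posture value received from the boot controller selects
# a pre-rendered signs-of-life bitmap for the display panel.
PRE_BOOT_CONTENT = {
    0: "signs_of_life_0_no_kb.bmp",
    1: "signs_of_life_90_no_kb.bmp",
    2: "signs_of_life_180_no_kb.bmp",
    3: "signs_of_life_270_no_kb.bmp",
    4: "signs_of_life_0_kb.bmp",
    5: "signs_of_life_90_kb.bmp",
    6: "signs_of_life_180_kb.bmp",
    7: "signs_of_life_270_kb.bmp",
}

def select_bitmap(posture_info):
    """Return the bitmap the TCON would drive to the display panel."""
    return PRE_BOOT_CONTENT[posture_info]
```

In this sketch, posture information value 6 (180-degree orientation with a keyboard present) selects the bitmap whose content is arranged for that posture.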
In another embodiment, display device 720 may receive, from boot controller 750, a bitmap to be displayed on display panel 724, where the received bitmap incorporates the posture information to cause the pre-boot content to be displayed in the correct orientation, based on the posture of the device, for consumption by a user. Additionally, boot controller 750 can set the display panel orientation and/or the hardware peripheral presence for the BIOS flow to follow.

It should also be noted that embodiments (e.g., computing devices 600, 700) shown and described herein to enable device posture-based pre-boot display orientation can include other features to enable new usages requiring system/device response ahead of operating system readiness. These other features can be implemented with input from different sensors. This can improve user experience by creating an impression of quick system responses.

Turning to Figs. 8A-8B, simplified interaction diagrams 850A-850B illustrate an example process for realizing device posture-based pre-boot display orientation according to an embodiment. In this example, a computing device 800 comprises components having the same or similar configuration as components in embodiments of computing devices or systems described herein (e.g., computing device 600 or 700). For example, the process shown in interaction diagrams 850A-850B may be performed via interactions among a boot controller 802 (e.g., similar to boot controllers 650 or 750), a sensor hub 804 (e.g., similar to sensor hub 630 or integrated sensor hub 714), a timing controller (TCON) 806 (e.g., similar to TCONs 622 or 722), a pre-boot posture table 808 (e.g., similar to pre-boot posture tables 658 or 758), and a CPU/SOC 810 (e.g., similar to SOCs 610 or 710) to display pre-boot content in a correct orientation based on device posture.

In Fig. 8A, the process begins when the computing device 800 is powered on from a G3, S4, or S5 system state.
Power is detected at 812a by boot controller 802 and at 812b by sensor hub 804. At 814, the boot controller 802 loads firmware and initializes sensor hub 804. At 816, boot controller 802 receives a signal from sensor hub 804 indicating that the sensor hub has been successfully initialized.

At 818, if the sensor hub 804 has been successfully initialized, boot controller 802 queries the sensor hub 804 for sensor data. In some implementations, if a switch is used to connect sensors to boot controller 802 directly, then the boot controller 802 may query the sensors for sensor data. At 820, boot controller 802 receives sensor data from sensor hub 804 (or from the sensors directly when boot controller 802 is connected to the sensors directly).

At 822, boot controller 802 loads firmware and initializes TCON 806 of a display device in computing device 800. At 824, boot controller 802 receives a signal from TCON 806 indicating that the TCON has been successfully initialized.

At 826, boot controller 802 determines a posture of the display device of computing device 800. The posture can be determined by deriving posture parameters from the sensor data received at 820. Parameters can include, for example, orientation of the display device (or the display panel of the display device), an indication of whether a peripheral component (e.g., keyboard) is present on the display device, and/or a hinge angle of a hinge that connects housing members (e.g., containing display panels) of the computing device. By way of example, the orientation of the display device may be determined as a degree of rotation measured in relation to a default or standard posture. In at least one implementation, the display device is designated as having an orientation of zero degrees (0°) when the computing device is in the default or standard posture.
Accordingly, other degrees of rotation that can be detected for the orientation of the display device may include ninety degrees (90°), one hundred eighty degrees (180°), and two hundred seventy degrees (270°).

Boot controller 802 can also select posture information that corresponds to the parameters that define the posture. In at least one embodiment, the posture information that corresponds to the posture parameters is selected from a posture lookup table (e.g., posture lookup table 656 or 756). At 828, the posture information is communicated to TCON 806. In one embodiment, TCON 806 can use the posture information to select a bitmap with an SOL image to be displayed in the correct orientation on the computing device 800.

At 830, TCON 806 loads and displays the selected (or rendered) bitmap with pre-boot content, such as an SOL image. With reference to interaction diagram 850B in Fig. 8B, at 832, boot controller 802 receives a signal from TCON 806 indicating that the TCON successfully displayed the bitmap (e.g., SOL image).

At 834, boot controller 802 stores pre-boot posture information in pre-boot posture table 808. Pre-boot posture information may indicate, fully or partially, the posture of the device determined by boot controller 802. For example, the pre-boot posture information may include all of the parameters derived by boot controller 802 (e.g., orientation parameter, peripheral presence parameter, hinge angle parameter, etc.). In other embodiments, some of the parameters may be passed to SOC 810 (e.g., when a boot process is initiated) by boot controller 802 and therefore, the stored pre-boot posture information may include only the parameters that are not passed to SOC 810. For example, an orientation parameter may be passed to SOC 810 by boot controller 802 and, therefore, a peripheral presence parameter and/or a hinge angle parameter may be stored in pre-boot posture table 808.
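The orientation parameter discussed above takes one of four values (0°, 90°, 180°, 270°) measured relative to the default posture. One way such a value could be derived is by quantizing a raw rotation reading from the sensor data; the function below is an illustrative assumption, not the boot controller's actual derivation logic.

```python
# Hypothetical sketch: snap a raw rotation angle (degrees) to the nearest
# of the four orientations a boot controller would report to the TCON.
def quantize_orientation(rotation_deg):
    """Quantize an arbitrary rotation to 0, 90, 180, or 270 degrees."""
    return int(round((rotation_deg % 360) / 90.0)) % 4 * 90
```

For example, a raw reading of 85 degrees would be reported as a 90-degree orientation, while 350 degrees would wrap around to the default 0-degree orientation.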
In other embodiments, the stored pre-boot posture information may include the posture information selected by boot controller 802 from the posture lookup table (e.g., 656, 756), the sensor data received by boot controller 802, or any other information indicative of the posture of computing device 800.

In an implementation in which the sensor hub is an integrated sensor hub (ISH) in SOC 810, sensors may be multiplexed between boot controller 802 and the ISH. Accordingly, in this implementation, at 836, prior to the handoff to the BIOS in the SOC to start the boot process, boot controller 802 can activate a switch to cause the communication from the sensors to flow to the ISH instead of to the boot controller.

At 838, boot controller 802 passes control to the BIOS on the SOC 810 to initialize a boot process (e.g., from G3, S4, or S5 system states). Boot controller 802 can also pass an orientation parameter that indicates the orientation of the display device as detected by the boot controller. At 840, the boot process is initialized. At 842, boot controller 802 receives a signal from SOC 810 indicating that the boot flow was successfully initialized and the BIOS handoff is completed. At 844, the boot flow continues.

At 846, SOC 810 obtains other relevant parameters for SOC 810 (e.g., peripheral presence parameter, hinge angle parameter, etc.) that were previously determined by the boot controller. As indicated at 848, additional pre-boot content may be displayed during the boot flow before the operating system is loaded. For example, a BIOS screen, security password prompts (e.g., for password-protected hard drives, etc.), on-screen keyboards, and, in some scenarios, operating system log-in screens may be displayed. In all of these examples, the pre-boot content is displayed in an arrangement based on the pre-boot device posture detected by boot controller 802.
For example, the orientation parameter that was communicated to the boot process by boot controller 802 can be used to ensure that the additional pre-boot content is aligned with the orientation of the display device. In other embodiments, the orientation parameter can be obtained by the boot process from pre-boot posture table 808.

In at least one embodiment, pre-boot posture table 808 may be accessed by the boot process to determine whether a hardware peripheral was detected by the boot controller. This information can be used by the boot process to ensure that the additional pre-boot content is displayed in an exposed area of a display panel of the display device (or in another display panel of the display device) if the hardware peripheral was detected and is covering a portion of the display panel. Pre-boot posture table 808 may also be accessed by the boot process to determine a hinge angle of the device (e.g., for a foldable device). In one example scenario, this information can be used by the boot process to ensure that the additional pre-boot content is displayed in the display panel without crossing a bend in the display panel if, for example, a foldable device is in a journal or laptop posture.

It should be noted that the size of the keyboard and the physical relationship between the sensors and the display device may also be included in the pre-boot posture table 808, may be stored in another table or storage structure accessible to the boot process and the operating system, or may be hard coded in the BIOS (e.g., 612, 712). Once the operating system is loaded, the operating system handles the orientation of content to be displayed on the display device.

At any particular time, the system state S0 may change to G3, S3, S4, or S5 system states. If the state subsequently changes from G3 or S3 to S0, then no action is required because the system does not need to execute the boot flow again.
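The region-selection logic described above, where pre-boot content avoids both a covering peripheral and a bend in a foldable panel, can be sketched as a small geometric helper. The function, its parameters, and the thresholds are illustrative assumptions, not the actual boot-process implementation.

```python
def choose_display_region(panel_w, panel_h, peripheral_covers_bottom,
                          hinge_angle_deg, fold_line_y=None):
    """Pick a rectangle (x, y, width, height) for pre-boot content that
    avoids a covering peripheral and, on foldables, a bend in the panel.

    Hypothetical helper: geometry and the 180-degree "flat" threshold
    are illustrative assumptions.
    """
    top, height = 0, panel_h
    if peripheral_covers_bottom:
        # Keep content in the exposed upper half of the panel.
        height = panel_h // 2
    if fold_line_y is not None and hinge_angle_deg < 180:
        # Device is folded: keep content entirely above the bend.
        height = min(height, fold_line_y)
    return (0, top, panel_w, height)
```

For a 1920x1080 panel with a keyboard covering the lower half, this sketch would confine an SOL image to the top 540 rows; for a foldable in a laptop posture, it would stop the content at the fold line.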
However, if the state subsequently changes from the G3, S4, or S5 system state to the S0 system state, then the process shown in interaction diagrams 850A-850B is performed.

Turning to Figs. 9A-9B, simplified flowcharts 900A-900B show example operations associated with achieving device posture-based pre-boot display orientation in accordance with embodiments herein. In at least one embodiment, one or more operations correspond to activities of Figs. 9A-9B. A boot controller (e.g., 650, 750) of a computing device (e.g., 600, 700, 800), or a portion thereof, may perform or utilize at least some of the one or more operations.

Beginning in flowchart 900A, at 902, the boot controller detects a power signal corresponding to a user action or external event. For example, a power switch on the computing device may be turned on by a user, or an external event (e.g., restart request, fatal error during processing, etc.) may occur that requires the system to reboot. At 904, the firmware for a sensor hub (e.g., 630, 804) is loaded and a signal is sent to the sensor hub to initialize it. At 906, the boot controller may receive a signal from the sensor hub indicating whether the initialization of the sensor hub was successful. If the sensor hub initialization was not successful, then at 924 in flowchart 900B, the boot process may be aborted and an alert may be sent to the user (e.g., via a display device) that a boot error occurred.

If the sensor hub initialization was successful, then at 908, the boot controller can query the sensor hub for sensor data. Alternatively, the sensors may be directly connected to the boot controller and may be queried by the boot controller for sensor data.
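The same initialize-and-verify pattern is applied to both the sensor hub and the TCON in flowchart 900A: load firmware, signal the component to initialize, and abort with an alert if no success signal comes back. A generic sketch of that pattern, with all callables supplied by the caller as hypothetical stand-ins:

```python
def init_component(load_firmware, initialize, abort_boot):
    """Initialize-and-verify pattern from flowchart 900A (illustrative).

    load_firmware: loads the component's firmware.
    initialize:    signals the component; returns True on a success signal.
    abort_boot:    aborts the boot and alerts the user with a message.
    Returns True if the component initialized successfully.
    """
    load_firmware()
    if not initialize():
        abort_boot("boot error: component failed to initialize")
        return False
    return True
```

The boot controller would call this once for the sensor hub (step 904/906) and once for the TCON (step 912/914), continuing only while both calls return True.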
At 910, the boot controller receives sensor data from the sensor hub (or directly from the sensors in some implementations).

At 912, the firmware for a timing controller or TCON (e.g., 622, 722, 806) in a display device (e.g., 620, 720) is loaded and a signal is sent to the TCON to initialize it. At 914, the boot controller may receive a signal from the TCON indicating whether the initialization of the TCON was successful. If the TCON initialization was not successful, then at 924 in flowchart 900B, the boot process may be aborted and an alert may be sent to the user (e.g., via the display device) that a boot error occurred.

If the TCON initialization was successful, then at 916, the boot controller determines the posture of the computing device based on the received sensor data. The posture of the computing device may be defined by one or more parameters depending on the computing device's particular type. Thus, determining the posture can include deriving the appropriate parameters from the sensor data. For example, the orientation of a display panel, the presence/absence of a keyboard attached to a display panel, and the hinge angle of a hinge that connects different portions of a computing device are three possible parameters that may be derived from sensor data and that could be used alone or in any combination to determine the posture of the computing device.

At 918, posture information may be selected based on the parameters derived from the sensor data. For example, posture information may comprise representative values that are mapped to various combinations of the parameters in a posture lookup table (e.g., 656, 756). The boot controller may search the posture lookup table based on a parameter (or combination of parameters) that was derived from the sensor data and select the corresponding posture information.

In another embodiment, a bitmap with pre-boot content (e.g., an SOL image) may be selected from a bitmap lookup table.
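Deriving the three example parameters from raw sensor data can be sketched as follows. The accelerometer-to-orientation mapping, the field names in `sensor_data`, and the function names are assumptions for illustration; real firmware would filter and debounce the samples rather than classify a single reading.

```python
def derive_orientation(accel_x, accel_y):
    """Map one accelerometer sample to a coarse 0/90/180/270 orientation.

    Minimal sketch: the dominant gravity axis picks the orientation.
    """
    if abs(accel_x) > abs(accel_y):
        return 90 if accel_x > 0 else 270
    return 0 if accel_y > 0 else 180

def derive_posture(sensor_data):
    """Combine the three example parameters into a posture tuple:
    (orientation, keyboard attached, hinge angle)."""
    return (
        derive_orientation(sensor_data["ax"], sensor_data["ay"]),
        sensor_data.get("keyboard_attached", False),
        sensor_data.get("hinge_angle_deg"),
    )
```

Any subset of the resulting tuple could then index the posture lookup table described at step 918.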
The bitmap lookup table may include a plurality of bitmaps rendered with pre-boot content to be displayed in different arrangements in a display screen of a display device. The different arrangements can include different orientations and/or placements of the bitmaps in the display screen. The bitmaps can each be mapped to the appropriate parameter or combination of parameters that defines a posture of the computing device for which the mapped bitmap would be correctly aligned for consumption by a user when displayed.

At 920, the posture information is sent to the TCON by the boot controller. In other embodiments, the selected or rendered bitmap, which includes implicit posture information, is sent to the TCON to be displayed on the display device. The bitmap includes implicit posture information because the bitmap is rendered to display the pre-boot content in an arrangement in which the pre-boot content is in alignment with the posture of the device (e.g., in an orientation and placement in the display panel that is consumable by a user).

In one embodiment, the TCON receives posture information in the form of a representative value or other similar data, and the TCON may use the posture information to select a bitmap of pre-boot content to be displayed. For example, a table that maps the posture information to the appropriate bitmap may be searched based on the posture information. In another embodiment, the TCON can receive the actual bitmap of the pre-boot content to be displayed.

Continuing in flowchart 900B, at 922, the boot controller may receive a signal from the TCON indicating whether the pre-boot content (e.g., SOL image) was successfully displayed.
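The TCON-side table lookup described above, from a representative posture value to a pre-rendered SOL bitmap, can be sketched as follows. The table contents, file names, and default are illustrative assumptions, not the actual TCON firmware.

```python
# Hypothetical TCON-side bitmap lookup table: posture information
# (a representative value) -> pre-rendered SOL bitmap for that posture.
BITMAP_TABLE = {
    0x01: "sol_landscape.bmp",
    0x02: "sol_portrait_90.bmp",
    0x03: "sol_landscape_180.bmp",
    0x04: "sol_portrait_270.bmp",
}

def select_bitmap(posture_info):
    """Return the bitmap whose rendered arrangement matches the posture,
    falling back to the default landscape bitmap for unknown values."""
    return BITMAP_TABLE.get(posture_info, "sol_landscape.bmp")
```

In the alternative embodiment, the TCON would skip this lookup entirely and simply display the bitmap it received from the boot controller.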
If the TCON did not successfully display the pre-boot content, then at 924, the boot process may be aborted and an alert may be sent to the user (e.g., via the display device) that a boot error occurred.

If the TCON successfully displayed the pre-boot content, however, then at 926, the posture information, one or more of the parameters derived from the sensor data, and/or the received sensor data can be stored in an appropriate storage or memory that will be accessible by a boot process once the boot process is initiated, and by the operating system once the operating system is successfully loaded by the boot process.

In at least one embodiment in which an integrated sensor hub (ISH) (e.g., 714) is implemented, sensors may be multiplexed between the boot controller and the ISH. Accordingly, in this implementation, at 928, prior to the handoff to the BIOS to start the boot process, the boot controller can activate a switch (e.g., 730) to cause the communication from the sensors to flow to the ISH instead of to the boot controller. In other embodiments, a power plane may be modified to allow the ISH to power on before the processor, and the ISH may be queried by the boot controller for sensor data. Accordingly, a switch may not be used in this embodiment.

At 930, the boot controller can send a signal to a processor (e.g., 610, 710, 810) of the computing device to initialize the boot flow and hand off control to the BIOS (e.g., 612, 712). In addition, the boot controller may also pass some pre-boot posture information to the boot process, such as an orientation parameter. This parameter, along with any additional parameters (or other information) stored by the boot controller in the pre-boot posture table, may be used to display additional pre-boot content in an arrangement that is aligned with the device posture as determined by the boot controller.
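The store-then-handoff sequence of steps 926-930 can be sketched as a single helper: persist the parameters the boot process will need, reroute the sensors to the ISH, then signal the processor. The callables, table layout, and parameter names are illustrative assumptions.

```python
def handoff_to_bios(pre_boot_table, params, passed_to_boot,
                    switch_to_ish, start_boot):
    """Sketch of steps 926-930 (all callables are hypothetical stand-ins).

    pre_boot_table: dict acting as the pre-boot posture table.
    params:         all parameters derived by the boot controller.
    passed_to_boot: names of parameters handed directly to the boot process.
    switch_to_ish:  activates the switch routing sensors to the ISH.
    start_boot:     hands control to the BIOS with the passed parameters.
    """
    # Store only the parameters NOT passed directly to the boot process,
    # so the boot process can fetch them from the table later (step 926).
    for name, value in params.items():
        if name not in passed_to_boot:
            pre_boot_table[name] = value
    switch_to_ish()   # sensors now flow to the integrated sensor hub (928)
    start_boot(**{k: params[k] for k in passed_to_boot})   # handoff (930)
```

For example, passing only the orientation parameter to the boot process would leave the hinge angle in the table, matching the split described at step 834.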
It should be noted that, alternatively, the boot controller could pass some or all of the parameters to the boot process, or that all of the parameters (including the orientation parameter) could be stored in the pre-boot posture table and accessed by the boot process to determine the device posture. The signal sent at 930 may be sent in response to receiving the indication that the pre-boot content was successfully displayed.

FIGS. 10-12 are block diagrams of example computer architectures that may be connected to, embedded with, or otherwise interoperate with the system for realizing device posture-based pre-boot orientation and other usage support in accordance with embodiments disclosed herein. Other computer architecture designs known in the art for processors and computing systems may also be used. Generally, suitable computer architectures for embodiments disclosed herein can include, but are not limited to, the configurations illustrated in FIGS. 10-12.

FIG. 10 is an example illustration of a processor according to an embodiment. Processor 1000 is an example of a type of hardware device that can be used in connection with the implementations shown and described above (e.g., SOC 610, 710, 810 or processor circuitry 652, 752). Processor 1000 may be any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a multi-core processor, a single core processor, or other device to execute code. Although only one processor 1000 is illustrated in FIG. 10, a processing element may alternatively include more than one of processor 1000 illustrated in FIG. 10. Processor 1000 may be a single-threaded core or, for at least one embodiment, the processor 1000 may be multi-threaded in that it may include more than one hardware thread context (or "logical processor") per core.

FIG. 10 also illustrates a memory 1002 coupled to processor 1000 in accordance with an embodiment.
Memory 1002 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. Such memory elements can include, but are not limited to, random access memory (RAM), read only memory (ROM), logic blocks of a field programmable gate array (FPGA), erasable programmable read only memory (EPROM), and electrically erasable programmable ROM (EEPROM).

Processor 1000 can execute any type of instructions associated with algorithms, processes, or operations detailed herein. Generally, processor 1000 can transform an element or an article (e.g., data) from one state or thing to another state or thing.

Code 1004, which may be one or more instructions to be executed by processor 1000, may be stored in memory 1002, or may be stored in software, hardware, firmware, or any suitable combination thereof, or in any other internal or external component, device, element, or object where appropriate and based on particular needs. In one example, processor 1000 can follow a program sequence of instructions indicated by code 1004. Each instruction enters a front-end logic 1006 and is processed by one or more decoders 1008. The decoder may generate, as its output, a micro operation in a predefined format, such as a fixed-width micro operation, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. Front-end logic 1006 also includes register renaming logic 1010 and scheduling logic 1012, which generally allocate resources and queue the operation corresponding to the instruction for execution.

Processor 1000 can also include execution logic 1014 having a set of execution units 1016a, 1016b, 1016n, etc. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function.
Execution logic 1014 performs the operations specified by code instructions.

After completion of execution of the operations specified by the code instructions, back-end logic 1018 can retire the instructions of code 1004. In one embodiment, processor 1000 allows out of order execution but requires in order retirement of instructions. Retirement logic 1020 may take a variety of known forms (e.g., re-order buffers or the like). In this manner, processor 1000 is transformed during execution of code 1004, at least in terms of the output generated by the decoder, hardware registers and tables utilized by register renaming logic 1010, and any registers (not shown) modified by execution logic 1014.

Although not shown in FIG. 10, a processing element may include other elements on a chip with processor 1000. For example, a processing element may include memory control logic along with processor 1000. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. In some embodiments, non-volatile memory (such as flash memory or fuses) may also be included on the chip with processor 1000. Such memory may be used to store a posture lookup table (e.g., 656, 756) and/or a pre-boot posture table (e.g., 658, 758), for example.

In an example implementation, processor 1000 could be used in connection with SOCs 610, 710, or 810 of computing devices 600, 700, or 800 disclosed in one or more embodiments herein. Furthermore, processor 1000 could be used in connection with processor circuitry 652 or 752 of boot controllers 650, 750, or 802 disclosed in one or more embodiments herein.

FIG. 11 illustrates a computing system 1100 that is arranged in a point-to-point (PtP) configuration according to an embodiment. In particular, FIG. 11 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces.
Generally, one or more of the computing devices (e.g., 600, 700, 800) described herein may be configured in the same or similar manner as computing system 1100.

Processors 1170 and 1180 may be implemented as single core processors 1174a and 1184a or multi-core processors 1174a-1174b and 1184a-1184b. Processors 1170 and 1180 may each include a cache 1171 and 1181 used by their respective core or cores. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. It should be noted that one or more embodiments described herein could be implemented in a computing system, such as computing system 1100. Moreover, processors 1170 and 1180 are examples of the types of hardware that can be used in connection with the implementations shown and described herein (e.g., computing devices 600, 700, 800, laptops, dual display computing devices, foldable computing devices, tabletops, all-in-ones, tablets, phablets, etc.).

Processors 1170 and 1180 may also each include integrated memory controller logic (IMC) 1172 and 1182 to communicate with memory elements 1132 and 1134. In alternative embodiments, memory controller logic 1172 and 1182 may be discrete logic separate from processors 1170 and 1180. Memory elements 1132 and/or 1134 may store various data to be used by processors 1170 and 1180 in achieving operations and functionality outlined herein.

Processors 1170 and 1180 may be any type of processor, such as those discussed in connection with other figures. Processors 1170 and 1180 may exchange data via a point-to-point (PtP) interface 1150 using point-to-point interface circuits 1178 and 1188, respectively.
Processors 1170 and 1180 may each exchange data with an I/O subsystem 1190 via individual point-to-point interfaces 1152 and 1154 using point-to-point interface circuits 1176, 1186, 1194, and 1198. I/O subsystem 1190 may include a display unit or appropriate interface for coupling to one or more display devices 1133. I/O subsystem 1190 may also exchange data with a co-processor 1138, such as a high-performance graphics circuit, machine learning accelerator, or other co-processor 1138, via an interface 1139, which could be a PtP interface circuit. In alternative embodiments, any or all of the PtP links illustrated in FIG. 11 could be implemented as a multi-drop bus rather than a PtP link.

I/O subsystem 1190 may be in communication with a bus 1110 via an interface circuit 1196. Bus 1110 may have one or more devices that communicate over it, such as a bus bridge 1118, I/O devices 1116, and potentially other processors 1115. Via bus 1110, bus bridge 1118 may be in communication with other devices such as a user interface 1112 (such as a keyboard, mouse, touchscreen, or other input devices), one or more sensors 1125 (e.g., sensors 640, 740), I/O devices 1126 (such as modems, network interface devices, or other types of communication devices that may communicate through a computer network 1160), audio I/O devices 1114, and/or a storage unit 1128. Storage unit 1128 may store code and data 1130, which may be executed by processors 1170 and/or 1180. In alternative embodiments, any portions of the bus architectures could be implemented with one or more PtP links.

The computer system depicted in FIG. 11 is a schematic illustration of an embodiment of a computing system that may be utilized to implement various embodiments discussed herein.
For example, processors 1170 and/or 1180 could be used in connection with a processor of the computing devices shown and described herein (e.g., computing devices 600, 700, 800, foldable computing device, dual display device, tablet, phablet, mobile device, wearable device, etc.) and be operatively connected to appropriate sensors (e.g., 640, 740) or sensor hubs (e.g., 630, 714, 804). Furthermore, in at least one example, processors 1170 and/or 1180 could be implemented using processor 1000. It will be appreciated that various components of the system depicted in FIG. 11 may be combined in a system-on-a-chip (SoC) architecture (e.g., such as SOC 610, 710, 810) or in any other suitable configuration capable of achieving the functionality and features of examples and implementations provided herein.

Turning to FIGURE 12, FIGURE 12 is a simplified block diagram associated with an example ARM ecosystem SOC 1200 of the present disclosure. At least some embodiments of the computing systems shown and described herein (e.g., 600, 700, 800) could be configured in the same or similar manner as ARM ecosystem SOC 1200.
Further, the architecture can be part of any type of tablet, smartphone (inclusive of Android™ phones, iPhones™), iPad™, Google Nexus™, Microsoft Surface™, personal computer, server, video processing components, laptop computer (inclusive of any type of notebook), dual display device, foldable computing device, Ultrabook™ system, any type of touch-enabled input device, etc.

In this example of FIGURE 12, ARM ecosystem SOC 1200 may include multiple cores 1206-1207, an L2 cache control 1208, a bus interface unit 1209, an L2 cache 1210, a graphics processing unit (GPU) 1215, an interconnect 1202, a video codec 1220, and an organic light emitting diode (OLED) I/F 1225, which may be associated with mobile industry processor interface (MIPI)/high-definition multimedia interface (HDMI) links that couple to an OLED display.

ARM ecosystem SOC 1200 may also include a subscriber identity module (SIM) I/F 1230, a boot read-only memory (ROM) 1235, a synchronous dynamic random access memory (SDRAM) controller 1240, a flash controller 1245, a serial peripheral interface (SPI) master 1250, a suitable power control 1255, a dynamic RAM (DRAM) 1260, flash 1265, and one or more sensors 1290. In addition, one or more example embodiments include one or more communication capabilities, interfaces, and features such as instances of Bluetooth™ 1270, a 3G modem 1275, a global positioning system (GPS) 1280, and an 802.11 Wi-Fi 1285.

In operation, the example of FIGURE 12 can offer processing capabilities, along with relatively low power consumption, to enable computing of various types (e.g., mobile computing, high-end digital home, servers, wireless infrastructure, etc.). In addition, such an architecture can enable any number of software applications (e.g., Android™, Adobe® Flash® Player, Java Platform Standard Edition (Java SE), JavaFX, Linux, Microsoft Windows Embedded, Symbian, and Ubuntu, etc.).
In at least one example embodiment, the core processor may implement an out-of-order superscalar pipeline with a coupled low-latency level-2 cache.

While some of the systems and solutions described and illustrated herein have been described as containing or being associated with a plurality of elements, not all elements explicitly illustrated or described may be utilized in each alternative implementation of the present disclosure. Additionally, one or more of the elements described herein may be located external to a system, while in other instances, certain elements may be included within or as a portion of one or more of the other described elements, as well as other elements not described in the illustrated implementation. Further, certain elements may be combined with other components, as well as used for alternative or additional purposes in addition to those purposes described herein.

With regard to this specification generally, unless expressly stated to the contrary, use of the phrase 'at least one of' refers to any combination of the named elements, conditions, or activities. For example, 'at least one of X, Y, or Z' is intended to mean any of the following: 1) at least one X, but not Y and not Z; 2) at least one Y, but not X and not Z; 3) at least one Z, but not X and not Y; 4) at least one X and at least one Y, but not Z; 5) at least one X and at least one Z, but not Y; 6) at least one Y and at least one Z, but not X; or 7) at least one X, at least one Y, and at least one Z. Additionally, unless expressly stated to the contrary, the terms 'first', 'second', 'third', etc., are intended to distinguish the particular items (e.g., element, condition, module, activity, operation, claim element, etc.) they modify, but are not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun.
For example, 'first X' and 'second X' are intended to designate two separate X elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements.

Further, it should be appreciated that the examples presented above are nonlimiting examples provided merely for purposes of illustrating certain principles and features and not necessarily limiting or constraining the potential embodiments of the concepts described herein. For instance, a variety of different embodiments can be realized utilizing various combinations of the features and components described herein, including combinations realized through the various implementations of components described herein. Other implementations, features, and details should be appreciated from the contents of this specification.

Although this disclosure has been described in terms of certain implementations and generally associated methods, alterations and permutations of these implementations and methods will be apparent to those skilled in the art. For example, the actions described herein can be performed in a different order than as described and still achieve the desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing may be advantageous. Additionally, other user interface layouts and functionality can be supported. Other variations are within the scope of the following claims.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features specific to particular embodiments.
Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the claims herein. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results.

OTHER NOTES AND EXAMPLES

The following examples pertain to embodiments in accordance with this specification. The system, apparatus, method, and machine readable storage medium embodiments can include one or a combination of the following examples:

Example A1 provides a computing device comprising a processor to be coupled to a display device, and a boot controller coupled to the processor and to be coupled to the display device. The boot controller is configured to detect a power signal; receive sensor data detected by one or more sensors prior to an operating system being loaded by a boot process of the processor; determine a posture associated with the display device based on the sensor data detected by the one or more sensors; and communicate, to the display device, posture information indicating the posture associated with the display device.
The pre-boot content is to be displayed on a display panel of the display device in a first arrangement based on the posture information.

Example A2 comprises the subject matter of Example A1, and to determine the posture is to include determining a first parameter that indicates an orientation of the display device.

Example A3 comprises the subject matter of Example A2, and the first arrangement includes the pre-boot content being aligned with the orientation of the display device.

Example A4 comprises the subject matter of any one of Examples A2-A3, and the boot controller is further to select the posture information based, at least in part, on the first parameter.

Example A5 comprises the subject matter of any one of Examples A2-A4, and the boot controller is further to select a bitmap of the pre-boot content based, at least in part, on the first parameter and to send the bitmap of the pre-boot content to the display device.

Example A6 comprises the subject matter of any one of Examples A2-A5, and the boot controller is further to, in response to receiving an indication that the pre-boot content was displayed on the display panel of the display device, initialize a boot process on the processor and communicate the first parameter to the boot process.

Example A7 comprises the subject matter of Example A6, and the boot controller is further to store pre-boot posture information in a pre-boot posture table to be accessed by the boot process, and the pre-boot posture information includes a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.

Example A8 comprises the subject matter of Example A7, and the first parameter and the second parameter are to be used by the boot process to render a second bitmap with second pre-boot content to be displayed in the first arrangement prior to the operating system being loaded.

Example A9 comprises the subject matter of any one of Examples A2-A8,
and to determine the posture is to further include determining a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.

Example A10 comprises the subject matter of Example A9, and in response to a determination that the hardware peripheral is present and covers the first portion of the display panel, the first arrangement is to further include the pre-boot content being located in a second portion of the display panel that is not covered by the hardware peripheral.

Example A11 comprises the subject matter of any one of Examples A9-A10, and to determine the posture is to further include determining a third parameter that indicates a hinge angle of a hinge on a housing member of the computing device.

Example A12 comprises the subject matter of Example A11, and the boot controller is further to select the posture information based on at least one of the first parameter, the second parameter, or the third parameter.

Example A13 comprises the subject matter of any one of Examples A1-A12, and the boot controller is further to, subsequent to receiving an indication that the pre-boot content was displayed on the display panel of the display device, activate a switch to connect the one or more sensors to an integrated sensor hub instead of the boot controller.

Example A14 comprises the subject matter of any one of Examples A1-A4 or A6-A13, and the display device includes a timing controller, and the timing controller is configured to receive the posture information from the boot controller, and select, based on the posture information, a bitmap of the pre-boot content.
The bitmap is one of a plurality of bitmaps, and each bitmap is rendered with the pre-boot content to be displayed in a different arrangement.

Example C1 provides one or more machine readable storage media comprising instructions that when executed by processor circuitry of a boot controller, cause the processor circuitry to: detect a power signal; receive sensor data detected by one or more sensors prior to an operating system being loaded by a boot process of a system-on-a-chip (SOC) coupled to the boot controller; determine a posture associated with a display device coupled to the boot controller and the SOC based on the sensor data; and communicate, to the display device, posture information indicating the posture associated with the display device, and pre-boot content is to be displayed on a display panel of the display device in a first arrangement based on the posture information.

Example C2 comprises the subject matter of Example C1, and to determine the posture is to include determining a first parameter that indicates an orientation of the display device.

Example C3 comprises the subject matter of Example C2, and the first arrangement includes the pre-boot content being aligned with the orientation of the display device.

Example C4 comprises the subject matter of any one of Examples C2-C3, and the instructions, when executed by the processor circuitry of the boot controller, further cause the processor circuitry to select the posture information based, at least in part, on the first parameter.

Example C5 comprises the subject matter of any one of Examples C2-C4, and the instructions, when executed by the processor circuitry of the boot controller, further cause the processor circuitry to select a bitmap of the pre-boot content based, at least in part, on the first parameter and to send the bitmap of the pre-boot content to the display device.

Example C6 comprises the subject matter of any one of Examples C2-C5, and the instructions, when executed by the
processor circuitry of the boot controller, further cause the processor circuitry to in response to receiving an indication that the pre-boot content was displayed on the display panel of the display device, initialize a boot process on the SOC and communicate the first parameter to the boot process.Example C7 comprises the subject matter of Example C6, and the instructions, when executed by the processor circuitry of the boot controller, further cause the processor circuitry to store pre-boot posture information in a pre-boot posture table to be accessed by the boot process, and the pre-boot posture information includes a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.Example C8 comprises the subject matter of Example C7, and the first parameter and the second parameter are to be used by the boot process to render a second bitmap with second pre-boot content to be displayed in the first arrangement prior to the operating system being loaded.Example C9 comprises the subject matter of any one of Examples C2-C8, and to determine the posture is to further include determining a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.Example C10 comprises the subject matter of Example C9, and in response to a determination that the hardware peripheral is present and covers the first portion of the display panel, the first arrangement is to further include the pre-boot content being located in a second portion of the display panel that is not covered by the hardware peripheral.Example C11 comprises the subject matter of any one of Examples C9-C10, and to determine the posture is to further include determining a third parameter that indicates a hinge angle of a hinge on one or more housing members containing the processor circuitry and the SOC.Example C12 comprises the subject matter of 
Example C11, and the instructions, when executed by the processor circuitry of the boot controller, further cause the processor circuitry to select the posture information based on at least one of the first parameter, the second parameter, or the third parameter.Example C13 comprises the subject matter of any one of Examples C1-C12, and the instructions, when executed by the processor circuitry of the boot controller, further cause the processor circuitry to subsequent to receiving an indication that the pre-boot content was displayed on the display panel of the display device, activate a switch to connect the one or more sensors to an integrated sensor hub instead of the boot controller.Example M1 provides a method comprising: detecting, by a boot controller coupled to a processor, a power signal; receiving sensor data detected by one or more sensors prior to an operating system being loaded by a boot process of the processor; determining a posture associated with a display device coupled to the boot controller and the processor based on the sensor data; and communicating to the display device, posture information indicating the posture associated with the display device, and pre-boot content is displayed on a display panel of the display device in a first arrangement based on the posture informationExample M2 comprises the subject matter of Example M1, and the determining the posture includes determining a first parameter that indicates an orientation of the display device.Example M3 comprises the subject matter of Example M2, and the first arrangement includes the pre-boot content being aligned with the orientation of the display device.Example M4 comprises the subject matter of any one of Examples M2-M3, and further comprises selecting the posture information based, at least in part, on the first parameter.Example M5 comprises the subject matter of any one of Examples M2-M3, and further comprises selecting a bitmap of the pre-boot content based, at least in 
part, on the first parameter; and sending the bitmap of the pre-boot content to the display device.Example M6 comprises the subject matter of any one of Examples M2-M5, and further comprises, in response to receiving an indication that the pre-boot content was displayed on the display panel of the display device, initializing a boot process on the processor and communicating the first parameter to the boot process.Example M7 comprises the subject matter of Example M6, and further comprises storing pre-boot posture information in a pre-boot posture table to be accessed by the boot process, and the pre-boot posture information includes a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.Example M8 comprises the subject matter of Example M7, and the boot process uses the first parameter and the second parameter to render a second bitmap with second pre-boot content, and the method further comprises providing the second bitmap for display in the first arrangement prior to the operating system being loaded.Example M9 comprises the subject matter of any one of Examples M2-M8, and the determining the posture further includes determining a second parameter that indicates whether a hardware peripheral is present and covers a first portion of the display panel of the display device.Example M10 comprises the subject matter of Example M9, and in response to determining that the hardware peripheral is present and covers the first portion of the display panel, the first arrangement further includes the pre-boot content being located in a second portion of the display panel that is not covered by the hardware peripheral.Example M11 comprises the subject matter of any one of Examples M9-M10, and the determining the posture further includes determining a third parameter that indicates a hinge angle of a hinge on one or more housing members containing the boot controller and the 
processor.Example M12 comprises the subject matter of Example M11, and further comprises selecting the posture information based on at least one of the first parameter, the second parameter, or the third parameter.Example M13 comprises the subject matter of any one of Examples M1-M12, and further comprises, subsequent to receiving an indication that the pre-boot content was displayed on the display panel of the display device, activating a switch to connect the one or more sensors to an integrated sensor hub instead of the boot controller.Example M14 comprises the subject matter of any one of Examples M1-M4 or M6-M13, and further comprises receiving, by a timing controller in the display device, the posture information from the boot controller; and selecting, based on the posture information, a bitmap of the pre-boot content, and the bitmap is one of a plurality of bitmaps, and each bitmap is rendered with the pre-boot content to be displayed in a different arrangement.Example P1 provides an apparatus comprising memory circuitry and processor circuitry coupled to the memory circuitry and configured to be coupled to a first processor of a computing device and to a display device of the computing device. 
The processor circuitry is to detect a power signal; in response to detecting the power signal, receive sensor data detected by one or more sensors prior to a boot process for the first processor being initialized; determine a posture associated with the display device based on the sensor data; and communicate, to the display device, posture information indicating the posture associated with the display device, and pre-boot content is to be displayed on a display panel of the display device in a first arrangement based on the posture information.Example P2 comprises the subject matter of Example PI, and to determine the posture of the display device is determine at least one of a first parameter that indicates an orientation of the display device, or a second parameter that indicates whether a peripheral is present on the display device.Example P3 comprises the subject matter of Example P2, and the first arrangement includes the pre-boot content being aligned with the orientation of the display device.Example P4 comprises the subject matter of any one of Examples P2-P3, and the processor circuitry is further to select the posture information based on either the first parameter or a combination of the first parameter and the second parameter.Example P5 comprises the subject matter of any one of Examples P2-P4, and the processor circuitry is further to: select a bitmap of the pre-boot content based, at least in part, on the first parameter; and to send the bitmap of the pre-boot content to the display device.Example P6 comprises the subject matter of any one of Examples P2-P5, and the processor circuitry is further to, in response to receiving an indication that the pre-boot content was displayed on the display panel of the display device, initialize a boot process on the first processor and communicate the first parameter to the boot process.Example P7 comprises the subject matter of Example P6, and the processor circuitry is further to store pre-boot posture 
information in a pre-boot posture table to be accessed by the boot process, and the pre-boot posture information includes the second parameter.Example P8 comprises the subject matter of any one of Examples P6-P7, and the first parameter and the second parameter are to be used by the boot process to render a second bitmap with second pre-boot content to be displayed in the first arrangement prior to an operating system being loaded by the boot process.Example P9 comprises the subject matter of any one of Examples P2-P8, and, in response to a determination that the hardware peripheral is present and covers the first portion of the display panel, the first arrangement is to further include the pre-boot content being located in a second portion of the display panel that is not covered by the hardware peripheral.Example P10 comprises the subject matter of any one of Examples P2-P9, and to determine the posture is to further include determining a third parameter that indicates a hinge angle of a hinge on a housing member of the apparatus.Example P11 comprises the subject matter of Example P10, and the processor circuitry is further to select the posture information based on at least one of the first parameter, the second parameter, or the third parameter.Example P12 comprises the subject matter of any one of Examples P1-P11, and the processor circuitry is further to, subsequent to receiving an indication that the pre-boot content was displayed on the display panel of the display device, activate a switch to connect the one or more sensors to an integrated sensor hub instead of the apparatus.An Example Y1 provides an apparatus, the apparatus comprising means for performing the method of any one of the Examples M1-M14.Example Y2 comprises the subject matter of Example Y1, and the means for performing the method comprises at least one processor and at least one memory element.Example Y3 comprises the subject matter of Example Y2, and the at least one memory element 
comprises machine readable instructions that when executed, cause the apparatus to perform the method of any one of Examples M1-M14.Example Y4 comprises the subject matter of any one of Examples Y1-Y3, and the apparatus is one of a computing system or a system-on-a-chip.An Example X1 provides at least one machine readable storage medium comprising instructions, where the instructions when executed realize a computing device as in any one of Examples A1-A14, implement a method as in any one of Examples M1-M14, or realize an apparatus as in any one of Examples P1-P12.An Example Z1 provides a system that comprises the apparatus of any one of Examples P1-P12.
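The posture flow recited across these Examples (detect a power signal, read sensors before the operating system loads, derive the orientation, peripheral-coverage, and hinge-angle parameters, then pick an arrangement for the pre-boot content) can be sketched as below. This is an illustrative simplification, not the claimed implementation; all names, thresholds, and sensor encodings are hypothetical.

```python
# Hypothetical sketch of the boot-controller posture flow from the Examples
# above. Sensor formats and classification thresholds are made up for
# illustration only.
from dataclasses import dataclass

@dataclass
class Posture:
    orientation: str         # first parameter: orientation of the display device
    peripheral_covers: bool  # second parameter: peripheral covering part of the panel
    hinge_angle: float       # third parameter: hinge angle in degrees

def determine_posture(accel_xyz, peripheral_sensor, hinge_sensor):
    """Derive the three posture parameters from raw (hypothetical) sensor data."""
    ax, ay, _ = accel_xyz
    # Dominant horizontal acceleration axis stands in for device orientation.
    orientation = "landscape" if abs(ax) > abs(ay) else "portrait"
    return Posture(orientation=orientation,
                   peripheral_covers=bool(peripheral_sensor),
                   hinge_angle=float(hinge_sensor))

def select_arrangement(posture: Posture) -> dict:
    """Choose where pre-boot content is drawn, avoiding any covered region."""
    region = "uncovered_portion" if posture.peripheral_covers else "full_panel"
    return {"align": posture.orientation, "region": region}

# Gravity mostly along x, a peripheral present, hinge open at 110 degrees.
posture = determine_posture((9.8, 0.1, 0.2), peripheral_sensor=1, hinge_sensor=110.0)
arrangement = select_arrangement(posture)
```

The posture information sent to the display device here is just the `arrangement` dictionary; in the claims it selects one of several pre-rendered bitmaps.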
The techniques described in this disclosure are directed to validating an application that is to be executed on a graphics processing unit (GPU). For example, a validation server device may receive code of the application. The validation server device may provide some level of assurance that the application satisfies one or more performance criteria. In this manner, the probability of a problematic application executing on the device that includes the GPU may be reduced.
CLAIMS:

1. A method comprising: receiving, with a server device, an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the server device; performing, with the server device, at least one of: an analysis of the application prior to and during compilation of the application on the server device; and an analysis of the application during execution of the application on the server device; determining whether the application satisfies one or more performance criteria based on at least one of the analyses; and transmitting to the device a validation of the application if the application satisfies the one or more performance criteria.

2. The method of claim 1, wherein the performance criteria comprises at least one of a determination that the application is absent of malicious code and a determination that the application is not error-prone.

3. The method of claim 1, wherein the performance criteria includes one or more of a determination that a code of the application does not include a code of known viruses, a determination that no errors are found as determined during compilation of the code of the application, a determination that there are no out-of-bounds memory accesses as determined during execution of the application, a determination that a system bus of the device is not overloaded as determined during execution of the application, a determination that a task of the application completes execution within a threshold execution time, and a determination that the task of the application executes at least at a threshold execution rate.

4. The method of claim 1, wherein performing the analysis of the application prior to and during compilation comprises comparing a code of the application with a code of known viruses, and determining whether any errors are found during compilation of the code of the application.

5. The method of claim 1, wherein performing the analysis of the application during execution of the application comprises: executing a virtual GPU model; executing the application on the virtual GPU model; and analyzing functionality of the virtual GPU model during the execution of the application on the GPU model.

6. The method of claim 5, further comprising: executing a virtual device model; and analyzing functionality of the virtual device model during the execution of the application on the GPU model.

7. The method of claim 5, wherein executing the application on the virtual GPU model comprises inputting GPU inputs to the application executing on the virtual GPU model.

8. The method of claim 5, further comprising: monitoring functions performed by the executed application.

9. The method of claim 8, wherein monitoring functions comprises one or more of monitoring memory accesses by the executed application, monitoring rate of execution, and monitoring execution time.

10. The method of claim 1, further comprising: modifying code of the application; and transmitting the modified code of the application to the device.

11. The method of claim 10, further comprising: determining that the application would execute inefficiently on the GPU, wherein modifying the code of the application comprises modifying the code of the application based on the determination.

12. The method of claim 1, wherein performing the analysis of the application during execution of the application comprises: executing the application on a hardware emulation board; and analyzing functionality of the hardware emulation board during the execution.

13. The method of claim 1, wherein receiving the application comprises receiving at least one of source code and intermediate code of the application, the method further comprising: compiling at least one of the source code and the intermediate code of the application to generate object code of the application; and transmitting the object code of the application to the device.

14. An apparatus comprising: an emulator unit operable to: receive an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the apparatus; perform at least one of: an analysis of the application prior to and during compilation of the application on the apparatus; and an analysis of the application during execution of the application on the apparatus; determine whether the application satisfies one or more performance criteria based on at least one of the analyses; and transmit to the device a validation of the application if the application satisfies the one or more performance criteria.

15. The apparatus of claim 14, wherein the performance criteria comprises at least one of a determination that the application is absent of malicious code and a determination that the application is not error-prone.

16. The apparatus of claim 14, wherein the performance criteria includes one or more of a determination that a code of the application does not include a code of known viruses, a determination that no errors are found as determined during compilation of the code of the application, a determination that there are no out-of-bounds memory accesses as determined during execution of the application, a determination that a system bus of the device is not overloaded as determined during execution of the application, a determination that a task of the application completes execution within a threshold execution time, and a determination that the task of the application executes at least at a threshold execution rate.

17. The apparatus of claim 14, wherein the emulator unit compares a code of the application with a code of known viruses, and determines whether any errors are found during compilation of the code of the application to perform the analysis of the application prior to and during compilation.

18. The apparatus of claim 14, further comprising a memory, wherein to perform the analysis of the application during execution of the application, the emulator unit is operable to: execute a virtual GPU model stored in the memory; execute the application on the virtual GPU model; and analyze functionality of the virtual GPU model during the execution of the application on the GPU model.

19. The apparatus of claim 18, wherein the emulator unit is further operable to: execute a virtual device model stored in the memory; and analyze functionality of the virtual device model during the execution of the application on the GPU model.

20. The apparatus of claim 18, further comprising a memory, wherein the emulator unit inputs GPU inputs stored in the memory to the application executing on the virtual GPU model during the execution of the application on the virtual GPU model.

21. The apparatus of claim 18, wherein the emulator unit is further operable to monitor functions performed by the executed application.

22. The apparatus of claim 21, wherein the emulator unit is operable to monitor one or more of memory accesses by the executed application, rate of execution, and execution time.

23. The apparatus of claim 14, wherein the emulator unit is further operable to: modify code of the application; and transmit the modified code of the application to the device.

24. The apparatus of claim 23, wherein the emulator unit is further operable to determine that the application would execute inefficiently on the GPU, and modify the code of the application based on the determination.

25. The apparatus of claim 14, wherein the emulator unit comprises a hardware emulation board, and wherein the hardware emulation board executes the application to perform the analysis of the application during execution of the application.

26. The apparatus of claim 14, wherein the emulator unit receives at least one of source code and intermediate code of the application, and wherein the emulator unit is further operable to: compile at least one of the source code and the intermediate code of the application to generate object code of the application; and transmit the object code of the application to the device.

27. A server device comprising: means for receiving an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the server device; means for performing at least one of: an analysis of the application prior to and during compilation of the application on the server device; and an analysis of the application during execution of the application on the server device; means for determining whether the application satisfies one or more performance criteria based on at least one of the analyses; and means for transmitting to the device a validation of the application if the application satisfies the one or more performance criteria.

28. A non-transitory computer-readable storage medium comprising instructions that cause one or more processors to: receive, with a server device, an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the server device; perform, with the server device, at least one of: an analysis of the application prior to and during compilation of the application on the server device; and an analysis of the application during execution of the application on the server device; determine whether the application satisfies one or more performance criteria based on at least one of the analyses; and transmit to the device a validation of the application if the application satisfies the one or more performance criteria.

29. A method comprising: receiving an application that is to be executed by a graphics processing unit (GPU) of a device; transmitting the application to a server device external to the device for validation of the application; and receiving a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU.

30. The method of claim 29, further comprising: executing the application on the GPU based on the received validation.

31. The method of claim 29, wherein receiving the application comprises receiving at least one of source code for the application, intermediate code of the application, and compiled code for the application, and wherein transmitting the application comprises transmitting at least one of the source code for the application, intermediate code of the application, and the compiled code of the application.

32. The method of claim 29, further comprising: receiving a modified version of the application from the server device; and executing the modified version of the application on the GPU.

33. The method of claim 29, wherein transmitting the application comprises transmitting at least one of a source code of the application and an intermediate code of the application, the method further comprising: receiving compiled object code of the application from the server device; and executing the compiled object code of the application on the GPU.

34. The method of claim 29, wherein transmitting the application to the server device comprises transmitting the application only once to the server device, and wherein receiving the validation from the server device comprises receiving, only once, the validation from the server device.

35. An apparatus comprising: a graphics processing unit (GPU); a device memory operable to store an application that is to be executed by the GPU; and a processor operable to: transmit the application to a server device external to the apparatus; and receive a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU.

36. The apparatus of claim 35, wherein the processor is further operable to instruct the GPU to execute the application based on the received validation, and wherein the GPU is operable to execute the application in response to the instruction from the processor.

37. The apparatus of claim 35, wherein the processor receives at least one of source code for the application, intermediate code of the application, and compiled code for the application, and wherein the processor transmits at least one of the source code for the application, intermediate code of the application, and the compiled code of the application.

38. The apparatus of claim 35, wherein the processor is further operable to receive a modified version of the application from the server device, and wherein the GPU is further operable to execute the modified version of the application.

39. The apparatus of claim 35, wherein the processor transmits at least one of a source code of the application and an intermediate code of the application, wherein the processor is further operable to receive compiled object code of the application, and wherein the GPU is further operable to execute the compiled object code of the application.

40. The apparatus of claim 35, wherein the processor transmits the application only once to the server device, and wherein the processor receives the validation from the server device only once.

41. A device comprising: a graphics processing unit (GPU); means for receiving an application that is to be executed by the GPU; means for transmitting the application to a server device external to the device for validation of the application; and means for receiving a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU.

42. A non-transitory computer-readable storage medium comprising instructions that cause one or more processors to: receive an application that is to be executed by a graphics processing unit (GPU) of a device; transmit the application to a server device external to the device for validation of the application; and receive a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU.
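Claims 29-34 recite the device-side flow: transmit the application to the external server once, receive the validation once, and execute on the GPU only if validated. A minimal sketch of that flow follows; the class names and the stub server are hypothetical stand-ins, not part of the claimed system.

```python
# Hypothetical sketch of the device-side validation flow in claims 29-34.
# The server is stubbed out; a real deployment would contact an external
# validation server over a network connection.
class StubServer:
    """Stand-in for the external validation server."""
    def __init__(self):
        self.calls = 0
    def validate(self, code: str) -> bool:
        self.calls += 1
        return "malicious" not in code   # trivially simplified criterion

class Device:
    def __init__(self, server):
        self.server = server
        self.validations = {}            # app id -> cached validation result

    def validate(self, app_id: str, code: str) -> bool:
        # Claim 34: the application is transmitted, and the validation
        # received, only once; later calls use the cached result.
        if app_id not in self.validations:
            self.validations[app_id] = self.server.validate(code)
        return self.validations[app_id]

    def run_on_gpu(self, app_id: str, code: str) -> str:
        # Claim 30: execute on the GPU based on the received validation.
        if self.validate(app_id, code):
            return f"executing {app_id} on GPU"
        return f"{app_id} rejected: failed validation"

server = StubServer()
device = Device(server)
first = device.run_on_gpu("shader1", "compute kernel")
second = device.run_on_gpu("shader1", "compute kernel")  # cached; server not re-contacted
```

The cache models the "only once" limitation of claim 34; dropping it would make every execution round-trip to the server.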
VALIDATION OF APPLICATIONS FOR GRAPHICS PROCESSING UNIT TECHNICAL FIELD [0001] This disclosure is directed to applications that execute on a graphics processing unit (GPU), and more particularly, to validation of such applications. BACKGROUND [0002] Graphics processing units (GPUs) traditionally have been limited to performing only graphics related processing in fixed-function pipelines that provide very limited functional flexibility. Newer GPUs include programmable cores that execute programs, and thereby provide greater functional flexibility as compared to the traditional GPUs. The programmable cores may execute both graphics related applications and non- graphics related applications. SUMMARY [0003] In general, this disclosure is related to techniques for identifying potentially problematic applications that are to be executed on a graphics processing unit (GPU), prior to execution. Examples of problematic applications include, but are not limited to, malicious applications, as well as inefficient or error-prone applications. For example, a server device external to the device that houses the GPU may validate the application. Validation of the application may mean that the application satisfies one or more criteria. As one example, validation may mean determining with some level of assurance that the application is not a malicious application, an error-prone application, or an inefficient application. The server device may transmit an indication, to the device, that indicates whether it is either safe or unadvisable for the GPU to execute the program. The device may then elect to execute the program on the GPU based on the received indication. [0004] In one example, the disclosure describes a method that includes receiving, with a server device, an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the server device. 
The method also include performing, with the server device, at least one of an analysis of the application prior to and during compilation of the application on the server device, and an analysis of the application during execution of the application on the server device. The method further includes determining whether the application satisfies one or more performance criteria based on at least one of the analyses, and transmitting to the device a validation of the application if the application satisfies the one or more performance criteria. [0005] In another example, the disclosure describes an apparatus that includes an emulator unit operable to receive an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the apparatus. The emulator unit is also operable to perform at least one of an analysis of the application prior to and during compilation of the application on the apparatus, and an analysis of the application during execution of the application on the apparatus. The emulator unit is also operable to determine whether the application satisfies one or more performance criteria based on at least one of the analyses, and transmit to the device a validation of the application if the application satisfies the one or more performance criteria. [0006] In another example, the disclosure describes a server device that includes means for receiving an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the server device. The server device also includes means for performing at least one of an analysis of the application prior to and during compilation of the application on the server device, and an analysis of the application during execution of the application on the server device. 
The server device further includes means for determining whether the application satisfies one or more performance criteria based on at least one of the analyses, and means for transmitting to the device a validation of the application if the application satisfies the one or more performance criteria. [0007] In another example, the disclosure describes a non-transitory computer-readable storage medium comprising instructions that cause one or more processors to receive, with a server device, an application that is to be executed by a graphics processing unit (GPU) that resides on a device external to the server device. The instructions further cause one or more processors to perform, with the server device, at least one of an analysis of the application prior to and during compilation of the application on the server device, and an analysis of the application during execution of the application on the server device. The instructions also cause the one or more processors to determine whether the application satisfies one or more performance criteria based on at least one of the analyses, and transmit to the device a validation of the application if the application satisfies the one or more performance criteria. [0008] In another example, the disclosure describes a method that includes receiving an application that is to be executed by a graphics processing unit (GPU) of a device, and transmitting the application to a server device external to the device for validation of the application. The method further includes receiving a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU. [0009] In another example, the disclosure describes an apparatus that includes a graphics processing unit (GPU), and a device memory operable to store an application that is to be executed by the GPU. 
The apparatus also includes a processor operable to transmit the application to a server device external to the apparatus, and receive a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU. [0010] In another example, the disclosure describes a device that includes a graphics processing unit (GPU). The device also includes means for receiving an application that is to be executed by the GPU, and means for transmitting the application to a server device external to the device for validation of the application. The device further includes means for receiving a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU. [0011] In another example, the disclosure describes a non-transitory computer-readable storage medium comprising instructions that cause one or more processors to receive an application that is to be executed by a graphics processing unit (GPU) of a device, and transmit the application to a server device external to the device for validation of the application. The instructions further cause the processor to receive a validation from the server device that indicates that the application satisfies one or more criteria for execution on the GPU. [0012] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF DRAWINGS [0013] FIG. 1 is a block diagram illustrating an example of a system that may be operable to implement one or more aspects of this disclosure. [0014] FIG. 2 is a flowchart illustrating an example operation of a device that may be operable to implement one or more aspects of this disclosure. [0015] FIG. 
3 is a flowchart illustrating an example operation of a server that may be operable to implement one or more aspects of this disclosure. [0016] FIG. 4 is a flowchart illustrating another example operation of a server that may be operable to implement one or more aspects of this disclosure. [0017] FIG. 5 is a block diagram illustrating an example device, illustrated in FIG. 1, in further detail. DETAILED DESCRIPTION [0018] In general, this disclosure is related to techniques to ensure proper functionality of applications that are to be executed on a graphics processing unit (GPU). Some previous GPUs included only fixed-function hardware pipelines which did not provide programming capabilities. However, to increase functional flexibility, newer GPUs allow for programmable shader cores. For example, these GPUs execute applications such as vertex shaders and fragment shaders that perform functions that were previously delegated to components of the fixed-function hardware pipelines. [0019] While programmable shader cores allow for functional flexibility, they also invite misuse or suboptimal use of the GPU. For example, a malicious developer may develop an application that generates a denial of service attack or a virus. In some instances, a developer, who may not have malicious intent, may nevertheless inadvertently develop an inefficient or error-prone application. A problematic application (e.g., a malicious, inefficient or error-prone application) can substantially undermine the operation of the GPU or a device in which the GPU is provided. [0020] The techniques of this disclosure may assist in identifying possibly malicious, inefficient and/or error-prone GPU-executed applications, prior to execution by the GPU.
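One concrete way to flag a problematic application before the GPU runs it is to execute the candidate code under a time budget and watch for hanging or abnormal termination, in the spirit of the dynamic analysis this disclosure describes. The sketch below is illustrative only; the use of a Python subprocess, the one-second budget, and the sample programs are all assumptions, not the disclosure's actual validation process.

```python
import subprocess
import sys

def check_responsiveness(source: str, budget_seconds: float = 1.0) -> str:
    """Run candidate code in a child process under a time budget."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", source],
            capture_output=True,
            timeout=budget_seconds,  # the budget is a hypothetical threshold
        )
    except subprocess.TimeoutExpired:
        return "unresponsive"           # possible denial-of-service behavior
    if result.returncode != 0:
        return "terminated abnormally"  # crashed or exited with an error
    return "ok"

print(check_responsiveness("print(sum(range(10)))"))  # well-behaved program
print(check_responsiveness("while True: pass"))       # hangs past the budget
```

A real validation server would of course apply a richer policy than a single wall-clock cutoff, but the structure (execute, observe, classify) is the same.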
For example, the techniques of this disclosure may be directed to a cloud-based solution in which a server device, external to the device that houses the GPU, and coupled to the device housing the GPU via one or more network connections, functions as an emulator for execution of an application. The server may emulate the results of the application, as if the application is executing on the GPU. Based on the results, the server may validate the application (e.g., determine whether or not the program is malicious, inefficient, or error-prone), and indicate as such to the device that houses the GPU. The GPU may then execute the application based on the received indication. [0021] There may be various ways in which the server may execute a validation process to validate the application. The validation process may be a software process. The software process may be executed in conjunction with a general purpose processor and/or special purpose hardware. For example, the server may execute virtual model software. The virtual model causes the server to emulate the GPU or the actual device that includes the GPU upon which the application will execute. In alternate examples, instead of or in addition to virtual models, the server may include a hardware emulation board to validate the application. The server may also include an application that is specifically designed to test security violations of the application that is to be executed by the GPU. [0022] To validate the application that is to be executed by the GPU, the server may perform static analysis, dynamic analysis, or a combination thereof. Static analysis refers to analysis of the application that can be performed without execution of the application. For instance, static analysis can be performed during compilation. During the compilation, the server may identify errors in the application such as infinite loops in the program or out-of-bounds access to array locations within the application as two non-limiting examples.
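The two static checks just named, infinite loops and out-of-bounds array accesses, can be caught for simple cases without ever running the code. The sketch below inspects a Python abstract syntax tree; the `find_static_issues` helper and its two rules are assumptions for illustration, not the compiler analyses a real validation server would perform.

```python
import ast

def find_static_issues(source: str) -> list:
    """Flag two simple problems without executing the code:
    `while True:` loops containing no `break`, and constant
    indices that fall outside a constant-length list literal."""
    issues = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # A `while True` loop with no `break` anywhere inside it never terminates.
        if isinstance(node, ast.While):
            test_is_true = isinstance(node.test, ast.Constant) and node.test.value is True
            has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
            if test_is_true and not has_break:
                issues.append(f"possible infinite loop at line {node.lineno}")
        # A constant subscript into a literal list can be bounds-checked statically.
        if isinstance(node, ast.Subscript):
            target, index = node.value, node.slice
            if (isinstance(target, ast.List)
                    and isinstance(index, ast.Constant)
                    and isinstance(index.value, int)
                    and not -len(target.elts) <= index.value < len(target.elts)):
                issues.append(f"out-of-bounds index {index.value} at line {node.lineno}")
    return issues

kernel = "x = [1, 2, 3][5]\nwhile True:\n    pass\n"
for issue in sorted(find_static_issues(kernel)):
    print(issue)
```

Real static analysis must also reason about non-constant bounds and loop conditions, which is why the disclosure pairs it with dynamic analysis.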
[0023] Dynamic analysis refers to analysis of the application during execution, which may additionally result in identifying problematic applications (e.g., malicious, inefficient, and error-prone applications). For example, the server may execute compiled code, and the server may provide the executed code with hypothetical input values. The hypothetical input values may be, for example, different input images, input images with different sizes, and the like. [0024] The server, executing a validation process, may monitor the results and the functions performed by the executed code. For example, the server may monitor memory accesses by the virtual model of the GPU, and determine whether the memory accesses are out-of-bounds memory accesses. The server may also monitor the memory addresses where the virtual model of the GPU is writing information. Based on the memory accesses of the virtual model of the GPU and memory addresses where the virtual model of the GPU is writing information, the server may be able to determine whether the application is error-prone. Such memory tracking may be particularly useful when the application reads or writes to variables using pointers. [0025] The server may also detect applications that generate or enable denial of service attacks. For example, the server may monitor the rate at which the virtual model of the GPU is able to execute the application. If the server detects slow responsiveness, unintended termination, or hanging, the server may determine that the application is an application designed for a denial of service attack, or a very poorly designed application. In either case, execution of such an application may negatively impact the experience of a user. [0026] In addition to validating the application, in some examples, the server may be able to tune and optimize the application as well. 
For example, the server may insert or replace the source code, or portions of the source code, or collect statistics to determine how well the compiled code works. In some examples, the server may validate the application and optimize or tune the application once. After such validation, the device may execute the application as often as the user would like without requiring further validations or optimization. Also, in some examples, after validating a certain application, the server may store an indication that indicates that this application has already been validated. If the server receives the same source code or pre-compiled object code again, the server may first ensure that the code is identical, and if so, immediately validate that application. [0027] FIG. 1 is a block diagram illustrating an example of a system that may be operable to implement one or more aspects of this disclosure. For example, FIG. 1 illustrates system 10 that includes device 12, network 22, validation server device 24, and application server device 38. Although only one device 12, validation server device 24, and application server device 38 is illustrated in FIG. 1, in other examples, system 10 may include a plurality of devices 12, validation servers 24, and application servers 38. System 10 may be referred to as a cloud-based system to indicate that validation of application 20 occurs in validation server device 24, which is external to device 12, as described in more detail. For example, the techniques of this disclosure may be directed to validating application 20 in the cloud (e.g., in validation server device 24, which is external to device 12). [0028] Examples of device 12 include, but are not limited to, video devices such as media players, set-top boxes, wireless handsets such as mobile telephones, personal digital assistants (PDAs), desktop computers, laptop computers, gaming consoles, video conferencing units, tablet computing devices, and the like. 
Examples of validation server device 24 and application server device 38 include, but are not limited to, laptops, desktops, web servers, and the like. In general, validation server device 24 and application server device 38 may be any type of device capable of performing the functions attributed to validation server device 24 and application server device 38 in this disclosure. [0029] Network 22 may allow device 12 to securely communicate with validation server device 24 and application server device 38. For security purposes, any communication between device 12 and validation server device 24 and application server device 38 may be encrypted or otherwise secured. Also, for further protection, any communication between device 12 and validation server device 24 and application server device 38 may require user authorization. [0030] In some examples, network 22 may ensure that information transmitted by any one of device 12, validation server device 24, and application server device 38 is received only by the intended device or devices, and no other device. Network 22 may be a local area network (LAN), a wide area network (WAN), the Internet, and the like. Device 12, validation server device 24, and application server device 38 may be coupled to network 22 wirelessly or through a wired link. In some examples, it may be possible for device 12 to be coupled directly to validation server device 24 and/or application server device 38. For example, device 12 may directly communicate with validation server device 24 and/or application server device 38 through a wireless or wired connection. In these examples, network 22 may not be needed in system 10. [0031] As illustrated in FIG. 1, device 12 may include GPU 14, processor 16, and device memory 18. Device 12 may include components in addition to those illustrated in FIG. 1. For example, FIG. 5 illustrates an example of device 12 that includes more components than those illustrated in FIG. 1. 
[0032] Examples of GPU 14 and processor 16 include, but are not limited to, a digital signal processor (DSP), a general purpose microprocessor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other equivalent integrated or discrete logic circuitry. Furthermore, although GPU 14 and processor 16 are illustrated as separate components, aspects of this disclosure are not so limited. In alternate examples, GPU 14 and processor 16 may be part of a common integrated circuit. For purposes of illustration and ease of description, GPU 14 and processor 16 are illustrated as separate components. [0033] Examples of device memory 18 include, but are not limited to, a random access memory (RAM), a read only memory (ROM), or an electrically erasable programmable read-only memory (EEPROM). Examples of device memory 18 may also include storage devices such as CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or flash memory. In general, device memory 18 may include mediums that can be used to store desired program code in the form of instructions or data structures and that can be accessed by GPU 14 and processor 16. In some examples, device memory 18 may comprise one or more computer-readable storage media, such as a computer-readable storage device. For instance, in some example implementations, device memory 18 may include instructions that cause GPU 14 and processor 16 to perform the functions ascribed to GPU 14 and processor 16 in this disclosure. [0034] Device memory 18 may, in some examples, be considered as a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that device memory 18 is non-movable. As one example, device memory 18 may be removed from device 12, and moved to another device.
As another example, a storage device, substantially similar to device memory 18, may be inserted into device 12. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM). [0035] GPU 14 may be operable to execute one or more software applications. For example, GPU 14 may include a processor core on which one or more software applications may execute. The applications that execute on GPU 14 may be graphics applications such as vertex shaders and fragment shaders for generating graphics data. However, it may be possible for the applications that execute on GPU 14 to be unrelated to graphics processing. For example, a developer may consider it beneficial to develop a software application unrelated to graphics processing that exploits the massive parallelism of GPU 14. In these cases, GPU 14 may be referred to as a general purpose GPU (GP-GPU). [0036] As one example, FIG. 1 illustrates GPU 14 executing application 20. Application 20 may be a graphics application or a non-graphics application that executes on GPU 14. Application 20 is illustrated in a dashed box within GPU 14 to indicate that application 20 is executing on GPU 14. GPU 14 does not actually include application 20. For instance, application 20 may be stored in device memory 18, as illustrated in FIG. 1. [0037] Application 20 may be developed using a wide variety of different application programming interfaces (APIs). For example, a developer may have developed application 20 using any programming API such as OpenGL, OpenCL, WebGL, and WebCL. In general, applications that are developed using the OpenGL or WebGL APIs are designed for graphics processing. Applications that are developed using the OpenCL or WebCL APIs are designed for processing unrelated to graphics processing. The OpenGL, OpenCL, WebGL, and WebCL APIs are provided for illustration purposes and should not be considered limiting.
The techniques of this disclosure may be extendable to APIs in addition to the examples provided above. In general, the techniques of this disclosure may be extendable to any technique utilized by a developer to develop application 20. [0038] As illustrated, device memory 18 may store application 20. For example, a user of device 12 may cause device 12 to download application 20 from application server device 38 via network 22. In turn, device 12 may store application 20 in device memory 18. There may be other ways in which device 12 stores application 20 in device memory 18. For instance, a user of device 12 may insert a FLASH drive into device 12 that stores application 20, and device 12 may retrieve application 20 from the FLASH drive and store application 20 in device memory 18. In this example, application server device 38 may not be needed. The above examples that describe the manner in which device 12 stores application 20 in device memory 18 are provided for purposes of illustration and should not be considered limiting. The techniques of this disclosure may be applicable to any technique in which application 20 is loaded into device memory 18. [0039] Device memory 18 may store the source code of application 20, intermediate representation of application 20, or object code of application 20. The source code of application 20 may be the text in the programming language in which application 20 was developed. The object code of application 20 may be the binary bits resulting from the compilation of application 20. For example, application server device 38 may compile the source code of application 20, and device 12 may download this precompiled object code of application 20. The intermediate representation of application 20 may be intermediate to the source code and the object code. 
For example, in the intermediate representation of application 20, the variables of the source code of application 20 may be replaced with register or memory identifiers for where the variables will be stored in device memory 18. [0040] The capability of the programmable core or cores of GPU 14 to execute applications, such as application 20, increases the functionality of GPU 14. However, the capability of GPU 14 to execute applications may invite misuse or suboptimal use of GPU 14 and make device 12 more susceptible to malicious applications or error-prone applications. For example, applications that execute solely on a central processing unit (CPU), such as processor 16, execute in a virtual machine setting which allocates the amount of memory of device memory 18 and storage locations within device memory 18 that are accessible to the applications. Because the applications are confined to the virtual machine of processor 16, the applications are unable to access out-of-bounds memory addresses and are limited to accessing memory addresses specifically provided to them by the virtual machine of processor 16. In this way, it may be difficult for applications executing on processor 16 to drastically impact processor 16, and device 12, in turn, in a negative manner. [0041] In some instances, it may not be practical to implement virtual machines on GPU 14. For example, the massive parallel processing capabilities of GPU 14 may not be well suited for executing virtual machines. For instance, if virtual machines were to execute on GPU 14, the virtual machines would dominate the resources of GPU 14, possibly restricting other applications from being executed on GPU 14. Accordingly, in some instances, virtual machines may not be able to limit the negative impacts of malicious or error-prone applications that execute on GPU 14.
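The memory confinement that a virtual machine provides, and that the server's emulation-based dynamic analysis must approximate by monitoring every load and store, can be sketched as a mediated memory interface. The `MonitoredMemory` class below is a hypothetical illustration with arbitrary region sizes, not an actual emulator component.

```python
class MonitoredMemory:
    """Mediates every load and store for the application under test
    and records any access outside the region it was granted."""

    def __init__(self, size: int):
        self._cells = [0] * size
        self.violations = []  # (kind, address) pairs for out-of-bounds accesses

    def _check(self, address: int, kind: str) -> bool:
        if 0 <= address < len(self._cells):
            return True
        self.violations.append((kind, address))
        return False

    def load(self, address: int) -> int:
        # An out-of-bounds read is recorded and returns a harmless default.
        return self._cells[address] if self._check(address, "read") else 0

    def store(self, address: int, value: int) -> None:
        # An out-of-bounds write is recorded and suppressed.
        if self._check(address, "write"):
            self._cells[address] = value

# The application under test is only handed the monitored interface.
memory = MonitoredMemory(size=4)
memory.store(2, 99)       # in bounds
memory.store(10, 7)       # out of bounds: recorded, not applied
print(memory.load(2))     # -> 99
print(memory.violations)  # -> [('write', 10)]
```

Tracking of this kind is what lets the server decide, after execution, whether the application's pointer arithmetic ever strayed outside its allocation.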
[0042] Applications that execute on GPU 14, such as application 20, may be considered as applications that execute "natively" (i.e., are not confined to a virtual machine). Native execution of application 20 may allow for application 20 to access larger portions of device memory 18. Such access may allow problematic applications such as malicious applications or poorly designed (e.g., error-prone) applications to negatively impact the performance capabilities of GPU 14 and device 12. [0043] As one example, the developer of application 20 may develop application 20 such that application 20, when executed, provokes a denial of service attack on device 12, or propagates a virus that impacts the performance of device 12. For example, when GPU 14 executes application 20, application 20 may control GPU 14 such that GPU 14 may not be able to perform any other tasks such as rendering graphics content for a user interface. This may cause device 12 to "hang," which may drastically impact the functionality of device 12. In some cases, the developer of application 20 may develop application 20 to access portions of device memory 18 that it should be limited from accessing. Application 20 may store instructions for a virus in these portions of device memory 18. Then, when processor 16 or GPU 14 accesses these portions of device memory 18, processor 16 or GPU 14 may accidentally execute the stored virus. There may be additional examples of malicious applications, and aspects of this disclosure should not be considered limited to denial of service attacks or viruses. [0044] As another example, the developer of application 20 may inadvertently develop application 20 such that application 20 is inefficient or error-prone. For instance, an error-prone application may include infinite loops, out-of-bounds access to an array, or out-of-bounds access to memory locations of device memory 18. An inefficient application may not properly utilize the functionality of GPU 14.
For example, an inefficient application may not properly use the programmable functionality of GPU 14. [0045] In some cases, application server device 38 may potentially provide a modicum of protection from malicious and error-prone applications. For example, the owner of application server device 38 may guarantee that none of the applications stored on application server device 38 are malicious or error-prone applications. However, this may not be the case in every instance (e.g., the owner of application server device 38 may not provide a guarantee of safe and proper operation), or the purported "guarantee" from the owner of application server device 38 may not be trustworthy. [0046] The techniques of this disclosure may assist in identifying whether applications that are to be executed on GPU 14 (e.g., application 20) are problematic applications such as malicious applications, as well as inefficient and error-prone applications, prior to execution. For example, the techniques of this disclosure may validate application 20 prior to GPU 14 executing application 20. Validation of application 20 may mean that the application 20 satisfies one or more performance criteria. For example, validation may mean determining with some level of assurance that application 20 is not a malicious application, an inefficient application, or an error-prone application. The example techniques described in this disclosure may transmit an indication to device 12 that indicates whether it is safe or inadvisable for GPU 14 to execute application 20. Processor 16 may then elect to instruct GPU 14 to execute application 20 based on the received indication. [0047] For example, processor 16 may instruct GPU 14 to execute application 20 if the indication is favorable, i.e., indicates that the program is not malicious, not inefficient, and/or not error-prone. In some examples, processor 16 may instruct GPU 14 to execute application 20 even if the indication is unfavorable. 
For example, if application 20 is not malicious or error-prone, but inefficient, processor 16 may instruct GPU 14 to execute application 20, as such execution may not harm GPU 14 or device 12, although application 20 may not execute as efficiently as possible. [0048] In some examples, the techniques of this disclosure may also tune, or otherwise optimize, an inefficient application that is to be executed on GPU 14. For example, the developer of application 20 may not have any malicious intent, and may have developed application 20 such that application 20 is not prone to errors. Nevertheless, it may be possible that application 20 may not efficiently utilize the resources of GPU 14. [0049] As one example, one of the functions of application 20 may be to divide a task into workgroups and perform parallel processing on the workgroups to exploit the parallelism of GPU 14. For example, application 20 may divide an image into blocks and perform parallel processing on the blocks. The size of each of the blocks may be based on the amount of local memory available on GPU 14. [0050] Because the developer of application 20 may want to design application 20 to execute on a variety of different GPUs, the developer may not know ahead of time how much local memory is available on a particular GPU, such as GPU 14, as different GPUs may include different amounts of local memory. To address this, the developer may develop application 20 to utilize variable sized blocks. In some instances, utilizing variable sized blocks may be less efficient than utilizing fixed sized blocks. The techniques of this disclosure may tune or optimize application 20 such that application 20 utilizes fixed sized blocks based on the amount of available memory in GPU 14. [0051] As another example, application 20 may perform matrix operations. The developer of application 20 may have developed application 20 to perform row-based matrix operations or column-based matrix operations.
In some instances, GPU 14 may be better suited to perform row-based matrix operations, as compared to column-based matrix operations, or vice-versa. In this example, the techniques of this disclosure may modify application 20 to perform row-based matrix operations, if application 20 uses column-based matrix operations, to more efficiently utilize GPU 14. [0052] As yet another example, the developer may have developed application 20 for older versions of GPUs, and application 20 may not be optimized for GPU 14. The techniques of this disclosure may modify application 20 so that application 20 is more optimized for newer GPUs, such as GPU 14. GPU 14 may then execute application 20, which is optimized to execute on newer GPUs. [0053] In accordance with techniques of this disclosure, validation server device 24 may validate application 20, and in some examples, optimize or tune application 20. To validate application 20, validation server device 24 may implement a validation process that determines whether application 20 satisfies one or more performance criteria. For example, validation server device 24 may determine, with some reasonable level of assurance, whether application 20 is a malicious application, an error-prone application, or an inefficient application. In examples where application 20 is an error-prone application or an inefficient application, validation server device 24 may attempt to correct the errors in application 20, or optimize application 20 to be more efficient. [0054] It may be generally difficult to absolutely guarantee that application 20 is not a problematic application because it may be difficult to test all of the various ways in which application 20 may affect GPU 14 and device 12. 
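The block-size tuning described in the preceding paragraphs, replacing variable sized blocks with a fixed size chosen from the target GPU's local memory, can be sketched as a one-line sizing rule. The byte figures and the power-of-two policy below are assumptions for illustration, not the disclosure's actual tuning procedure.

```python
def fixed_block_size(local_memory_bytes: int, bytes_per_element: int) -> int:
    """Pick the largest power-of-two block (in elements) that fits
    in the local memory reported for the target GPU."""
    elements = local_memory_bytes // bytes_per_element
    size = 1
    while size * 2 <= elements:
        size *= 2
    return size

# A hypothetical GPU with 16 KiB of local memory and 4-byte elements.
print(fixed_block_size(local_memory_bytes=16384, bytes_per_element=4))  # -> 4096
```

A validation server that knows the GPU type of GPU 14 could substitute such a fixed size into application 20 in place of a variable-size computation.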
Although an absolute guarantee that application 20 is not a problematic application may be difficult, validation server device 24 may employ different types of analysis to ensure with some reasonable amount of certainty that application 20 is not a problematic application. [0055] As illustrated in FIG. 1, validation server device 24 is external to device 12. Accordingly, the validation of application 20 and optimization of application 20 may be offloaded from device 12, which may be referred to as validating application 20 in the "cloud" because validation server device 24 is a server that is external to device 12. By offloading the validation of application 20 to validation server device 24, the probability of application 20 negatively impacting GPU 14 and device 12 may be reduced, in cases where application 20 is a malicious application or an error-prone application. Also, by offloading the optimization of application 20 to validation server device 24, power savings and processing efficiency may be realized because processor 16 does not need to consume power and clock cycles validating or optimizing application 20. [0056] There may be various examples of performance criteria that application 20 may need to satisfy for validation server device 24 to validate application 20. In general, the performance criteria can be part of static analysis, dynamic analysis, or a combination thereof. Static analysis refers to analysis of application 20 that can be performed without execution of application 20 to ensure that application 20 satisfies one or more performance criteria associated with static analysis. Dynamic analysis refers to analysis of application 20 during execution to ensure that application 20 satisfies one or more performance criteria associated with dynamic analysis. [0057] Validation server device 24 may be operable to perform static analysis, dynamic analysis, or both static analysis and dynamic analysis. 
For purposes of illustration, validation server device 24 is described as being operable to perform both static analysis and dynamic analysis, and therefore, operable to ensure that application 20 satisfies the performance criteria associated with both static analysis and dynamic analysis. In alternate examples, validation server device 24 may be operable to perform one of static analysis or dynamic analysis, and in these alternate examples, validation server device 24 may be operable to ensure that application 20 satisfies the performance criteria associated with the type of analysis that validation server device 24 is operable to perform (e.g., performance criteria associated with static analysis or dynamic analysis). [0058] As illustrated in FIG. 1, validation server device 24 includes emulator unit 26 and server memory 28. Server memory 28 may include data and/or instructions defining one or more GPU models 30, one or more GPU inputs 32, and one or more device models 34. Emulator unit 26 may be a processing unit that is operable to execute one or more of GPU models 30 and device models 34. As another example, emulator unit 26 may be a hardware emulation board, which may be a GPU. In some examples, emulator unit 26 may include two portions, which may be part of the same circuitry or separate, distinct circuits, where the first portion is a processing unit that is operable to execute one or more of GPU models 30 and device models 34, and the second portion is the hardware emulation board (e.g., a GPU). Examples of emulator unit 26 include, but are not limited to, a DSP, a general purpose microprocessor, an ASIC, an FPGA, or other equivalent integrated or discrete logic circuitry. [0059] Server memory 28 may be similar to device memory 18.
For instance, server memory 28 may be any medium that can be used to store desired program code in the form of instructions, data, and/or data structures and that can be accessed by emulator unit 26 and that causes emulator unit 26 to perform one or more of the functions ascribed to emulator unit 26. Similar to device memory 18, server memory 28 may, in some examples, be considered as a non-transitory storage medium, as described above with respect to device memory 18. [0060] As illustrated, server memory 28 may store data and/or instructions defining one or more GPU models 30, GPU inputs 32, and device models 34. It may not be necessary for server memory 28 to store one or more GPU models 30, GPU inputs 32, and device models 34 in every example. For example, server memory 28 may store GPU models 30 and GPU inputs 32, but may not store device models 34. If validation server device 24 is operable to perform only static analysis, GPU models 30, GPU inputs 32, and device models 34 may not be needed. In some examples, it is with the GPU models 30, GPU inputs 32, and device models 34 that emulator unit 26 performs dynamic analysis. [0061] Each of the one or more GPU models 30 may correspond to a particular GPU type, and each of the one or more device models 34 may correspond to a particular device type. For instance, each one of the GPU models 30 may model the configuration of its corresponding GPU type in terms of parallel processing capabilities, local memory availability, and any other pertinent characteristic that defines the functionality of GPUs of that GPU type. Each one of the device models 34 may model the configuration of its corresponding device type in terms of memory configuration, processor speed, system bus speed, device memory, and any other pertinent characteristics that define the functionality of devices of that device type.
For example, different vendors provide different types of devices with different functional characteristics, and device models 34 may be models for each of these different device types. [0062] The one or more GPU models 30 and device models 34 may each be considered as virtual model software that emulator unit 26 can execute. For example, when emulator unit 26 executes one of the GPU models 30, emulator unit 26 emulates the GPU to which the executed GPU model 30 corresponds. When emulator unit 26 executes one of the GPU models 30 and one of the device models 34, emulator unit 26 emulates the device to which the executed device model 34 corresponds, as if such a device included the GPU to which the executed GPU model 30 corresponds. In some examples, the GPU vendors and the device vendors may supply GPU models 30 and device models 34, respectively. There may be other ways in which server memory 28 stores GPU models 30 and device models 34, and aspects of this disclosure are not limited to the specific examples where vendors provide GPU models 30 and device models 34. [0063] For example, when emulator unit 26 executes one of GPU models 30, emulator unit 26 may function as if the parallel processing capabilities and local memory availability of emulator unit 26 (as two examples) are functionally equivalent to the GPU type associated with the executed one of GPU models 30. Similarly, when emulator unit 26 executes one of device models 34, emulator unit 26 may function as if the memory configuration, processor speed, system bus speed, and device memory of emulator unit 26 (as four examples) are functionally equivalent to the device type associated with the executed one of device models 34. In other words, the execution of one of GPU models 30 causes emulator unit 26 to function as the GPU associated with the executed one of GPU models 30.
The execution of one of GPU models 30 and one of device models 34 causes emulator unit 26 to function as a device associated with the executed one of device models 34 that includes the GPU associated with the executed one of GPU models 30. [0064] One of the plurality of GPU models 30 may be a generic GPU model 30, and one of the plurality of device models 34 may be a generic device model 34. In some examples, server memory 28 may store a generic GPU model and a generic device model instead of a plurality of GPU models and device models. The generic GPU model and device model may not correspond to a particular GPU or device type, but may be suitable for static and dynamic analysis. In some examples, if server memory 28 does not store a GPU model that corresponds to GPU 14, then the generic GPU model may be suitable for validation purposes. The generic GPU model and the generic device model may conform to a base profile of operation common to most GPUs or devices. [0065] There may be various types of GPUs and devices that may be modeled by the generic GPU and generic device models. As one example, the generic GPU model may model a GPU with average parallel processing capabilities and local memory availability as compared to other GPUs. The generic device model may model a device with average memory configuration, processor speed, system bus speed, and device memory as compared to other devices. [0066] As an illustrative example for validating and/or optimizing application 20 for execution on GPU 14, device 12 may download application 20 from application server device 38. Application 20 may be source code, an intermediate representation, or precompiled object code, as described above. Processor 16 may then install application 20 on device 12. If application 20 is in source code or in the intermediate representation, e.g., not pre-compiled object code, part of the installation may be processor 16 executing a compiler to compile the code of application 20. 
[0067] In some examples, where the downloaded code of application 20 is source code or the intermediate representation, prior to compiling, processor 16 may cause device 12 to transmit the downloaded code of application 20 to validation server device 24 for validation. In some examples, where the downloaded code of application 20 is precompiled object code, processor 16 may cause device 12 to transmit the pre-compiled object code to validation server device 24 for validation before allowing GPU 14 to execute application 20. [0068] For security purposes, processor 16 may encrypt or otherwise make secure the downloaded code of application 20 that device 12 transmits to validation server device 24. In some examples, processor 16 may require authorization from a user prior to transmitting the downloaded code of application 20 to validation server device 24. Furthermore, in some examples of dynamic analysis, processor 16 may cause device 12 to transmit the GPU type of GPU 14 or both the GPU type of GPU 14 and the device type of device 12 to validation server device 24. In some of these instances, processor 16 may require authorization from the user prior to transmitting the GPU type of GPU 14 or the GPU type of GPU 14 and device type of device 12 to validation server device 24. [0069] Emulator unit 26 may be operable to perform static analysis on application 20 to determine whether application 20 satisfies the performance criteria associated with static analysis. For example, emulator unit 26 may analyze application 20 without executing application 20. As one example, emulator unit 26 may parse through the downloaded code of application 20 to identify code known to be code for a virus. For instance, server memory 28 may store code of known viruses, and emulator unit 26 may compare the downloaded code of application 20 to the code of the known viruses. 
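The signature comparison described above can be sketched as follows; the signature list, byte patterns, and function name are hypothetical illustrations, not part of the disclosure:

```python
# Hypothetical sketch of the static-analysis signature check: the emulator
# unit compares the downloaded code of application 20 against stored code of
# known viruses. The patterns below are placeholders, not real signatures.

KNOWN_VIRUS_SIGNATURES = [
    b"\xde\xad\xbe\xef\x90\x90",   # placeholder byte pattern
    b"eval(unescape(",             # placeholder code fragment
]

def contains_known_virus(downloaded_code: bytes) -> bool:
    """Return True if any stored signature appears in the downloaded code."""
    return any(sig in downloaded_code for sig in KNOWN_VIRUS_SIGNATURES)

# A clean application passes; one containing a signature fails validation.
print(contains_known_virus(b"int main() { return 0; }"))       # False
print(contains_known_virus(b"xx eval(unescape(payload)) yy"))  # True
```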
Determining that the downloaded code of application 20 does not include code of known viruses may be one example of performance criteria that needs to be satisfied to validate application 20. [0070] As part of the static analysis, emulator unit 26 may compile the downloaded code of application 20, in examples where the downloaded code of application 20 is the source code or intermediate representation of application 20, to identify errors in application 20 during compilation. For example, emulator unit 26 may execute compiler 36, as indicated by dashed lines within emulator unit 26. The compilation of application 20, with compiler 36, may identify any infinite loops in application 20 or out-of-bounds access to memory array locations within application 20. In this example, determining that there are no errors in application 20 that can be found during compilation may be another example of performance criteria that needs to be satisfied to validate application 20. [0071] Static analysis may be limited in the types of errors, inefficiencies, and malicious code that can be found. For example, if the downloaded code of application 20 is pre-compiled object code, it may not be possible for emulator unit 26 to identify errors in application 20 during compilation because the code for application 20 is already pre-compiled object code. As another example, if application 20 relies on pointers for storage, it may not be possible to determine if there are any out-of-bounds memory access errors in application 20 based simply on compiling application 20. [0072] To further determine whether application 20 is problematic (e.g., inefficient, error-prone, or malicious), emulator unit 26 may perform dynamic analysis. As indicated above, dynamic analysis refers to analysis of application 20 during execution. In some examples, to perform dynamic analysis, emulator unit 26 may cause itself to appear as if it is GPU 14. 
For example, in some instances, in addition to transmitting the downloaded code of application 20, processor 16 may cause device 12 to transmit the GPU type of GPU 14 to emulator unit 26 of validation server device 24, or both the GPU type of GPU 14 and the device type of device 12 to emulator unit 26 of validation server device 24 via network 22. Emulator unit 26, in turn, may identify which one of GPU models 30 corresponds to the GPU type of GPU 14, and may execute that one of GPU models 30 to emulate GPU 14 on validation server device 24. In examples where emulator unit 26 also receives the device type, emulator unit 26 may identify which one of device models 34 corresponds to the device type of device 12, and may execute that one of device models 34 to emulate device 12 on validation server device 24. [0073] In examples where device 12 does not transmit the GPU type of GPU 14 and/or the device type of device 12, emulator unit 26 may execute the generic GPU model and/or the generic device model. Alternatively, if device 12 does transmit the GPU type of GPU 14 and/or the device type of device 12, but none of GPU models 30 and device models 34 correspond to the GPU and device type, emulator unit 26 may execute the generic GPU model and/or generic device model. In examples where emulator unit 26 is or includes a hardware emulation board, such a hardware emulation board may be designed to function, at least in part, as a generic GPU on a generic device. [0074] Once emulator unit 26 emulates itself to be GPU 14, or to be GPU 14 as part of device 12, emulator unit 26 may execute application 20. For example, if emulator unit 26 received the source code or intermediate code of application 20, emulator unit 26 may compile the source code via compiler 36, and execute the resulting object code. If emulator unit 26 received pre-compiled object code of application 20, emulator unit 26 may execute the pre-compiled object code of application 20. 
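The model selection just described, including the fallback to the generic GPU model when no type is reported or none matches, might be sketched as follows; the registry structure and model names are assumptions for illustration:

```python
# Illustrative sketch of how emulator unit 26 might select which stored GPU
# model of GPU models 30 to execute. All keys and values are hypothetical.

GPU_MODELS = {
    "vendor_a_gpu": "model for vendor A GPU type",
    "vendor_b_gpu": "model for vendor B GPU type",
    "generic": "generic GPU model (base profile)",
}

def select_gpu_model(reported_gpu_type=None):
    """Return the matching GPU model, or the generic model as a fallback."""
    if reported_gpu_type and reported_gpu_type in GPU_MODELS:
        return GPU_MODELS[reported_gpu_type]
    return GPU_MODELS["generic"]

print(select_gpu_model("vendor_a_gpu"))  # matching model
print(select_gpu_model(None))            # no type reported: generic fallback
print(select_gpu_model("unknown_gpu"))   # no match: generic fallback
```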
[0075] The techniques of this disclosure may be considered, in some examples, as being performed at least in part by emulator unit 26 executing a virtual model based on the type of GPU 14 (e.g., one of GPU models 30). Then, when emulator unit 26 executes application 20, application 20 can be considered as executing in the virtual model (e.g., the one of GPU models 30 that is executing on emulator unit 26). For example, both the GPU model, of GPU models 30, that corresponds to GPU 14 and application 20 are executing on emulator unit 26. In the techniques of this disclosure, because emulator unit 26 functions as if it is GPU 14, due to the execution of the GPU model that corresponds to GPU 14, when emulator unit 26 executes application 20, application 20 may execute on the GPU model that corresponds to GPU 14. [0076] As part of the dynamic analysis, emulator unit 26 may receive hypothetical input values for application 20 that is executing on emulator unit 26. As illustrated, server memory 28 may store one or more GPU inputs 32. These one or more GPU inputs 32 may be values for different graphical images or objects. In some examples, each of these different images may be of different sizes. In examples where application 20 is not related to graphics processing, GPU inputs 32 may be non-graphics inputs. It may be difficult to ensure that emulator unit 26 tests every permutation and combination of possible input values. Accordingly, server memory 28 may store a sufficient number and/or range of GPU inputs 32, e.g., as samples or test inputs, to provide some reasonable level of assurance that application 20 is not a malicious or highly error-prone application (e.g., a problematic application). The GPU inputs 32 may include different types of images or objects to be processed and rendered by GPU 14. [0077] During execution of application 20, emulator unit 26 may input the values of GPU inputs 32 and may analyze functionality of the executed GPU model of GPU models 30. 
In examples where emulator unit 26 is a hardware emulation board, emulator unit 26 may analyze the functionality of the hardware emulation board. For example, emulator unit 26 may monitor memory accesses by the executed GPU model of GPU models 30. In this example, emulator unit 26 may determine whether any of the memory accesses by the executed GPU model of GPU models 30 are out-of-bounds memory accesses of server memory 28. As another example, emulator unit 26 may monitor the memory addresses where the executed GPU model of GPU models 30 is writing information in server memory 28. Based on the memory accesses of the GPU model and the memory addresses where the GPU model is writing information, emulator unit 26 may be able to determine whether application 20 is error-prone. Such memory tracking may be particularly useful when application 20 reads or writes to variables using pointers. [0078] For example, if the executed GPU model writes information to or reads information from out-of-bounds memory locations, emulator unit 26 may determine that application 20 is error-prone, and possibly malicious. For example, if the executed GPU model writes information to or reads information from a non-existent memory location, emulator unit 26 may determine that application 20 is error-prone. If the executed GPU model writes information to a memory location that is not reserved for the GPU model, emulator unit 26 may determine that application 20 is error-prone or possibly malicious. For example, emulator unit 26 may determine that application 20 is attempting to load a virus into memory locations that application 20 should not be able to access. [0079] The limitations of where application 20 can write information to or read information from (e.g., access) during execution may be an example of performance criteria associated with dynamic analysis. For example, the performance criteria may be a limitation of the memory locations that application 20 is allowed to access. 
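A minimal sketch of such a memory-access criterion, assuming a hypothetical reserved address range and an out-of-bounds threshold, follows:

```python
# Hedged sketch of the dynamic-analysis memory check: count accesses by the
# executed GPU model that fall outside the region reserved for application 20.
# The address range and threshold are illustrative assumptions.

ALLOWED_RANGE = range(0x1000, 0x2000)   # memory reserved for the application
THRESHOLD = 0                           # allowable out-of-bounds accesses

def violates_memory_criteria(accessed_addresses) -> bool:
    """True if more than THRESHOLD accesses fall outside the allowed range."""
    out_of_bounds = sum(1 for addr in accessed_addresses
                        if addr not in ALLOWED_RANGE)
    return out_of_bounds > THRESHOLD

print(violates_memory_criteria([0x1004, 0x1ffc]))  # False: all in range
print(violates_memory_criteria([0x1004, 0x3000]))  # True: one access outside
```

A threshold of zero, as discussed next, gives the strictest level of assurance.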
If the GPU model of GPU models 30 accesses memory locations outside of the limited memory locations, due to the execution of application 20, application 20 may be in violation of the performance criteria. For example, there may be a threshold number of accesses outside the limited memory locations that is allowable, in accordance with the performance criteria. The threshold number may be zero to provide the highest level of assurance that application 20 is not attempting to access memory locations outside of the limited memory locations. [0080] In examples where emulator unit 26 also executes one of device models 34, emulator unit 26 may similarly analyze functionality of the executed device model of device models 34. For example, emulator unit 26 may monitor the functions performed by the executed one of device models 34 while emulator unit 26 executes one of GPU models 30. For example, the execution of one of device models 34 may result in emulator unit 26 emulating device 12, which includes a system bus. Emulator unit 26 may determine whether the execution of application 20 causes the system bus to overload, resulting in device 12 slowing down. [0081] The monitoring of the system bus to determine whether the system bus is being overloaded may be an example of performance criteria associated with dynamic analysis. For example, if the execution of application 20 causes the system bus to overload, application 20 may be in violation of the performance criteria. In this example, the performance criteria may allow for some level of overloading the system bus, as it may not be practical to disallow any overloading of the system bus. For example, the performance criteria may establish a percentage amount threshold of system bus overload. If the system bus overload is below the allowable percentage, the performance criteria is satisfied. Otherwise, the performance criteria is not satisfied. [0082] Emulator unit 26 may similarly detect malicious applications such as denial of service attacks. 
For example, emulator unit 26 may monitor the rate at which the GPU model of GPU models 30 is able to execute application 20. If emulator unit 26 detects slow responsiveness, unintended termination, or hanging, emulator unit 26 may determine that application 20 is an application designed for a denial of service attack, or a very poorly designed application. In this example, the performance criteria may be a threshold execution time or execution rate for a particular task of application 20. If application 20 takes longer than the threshold execution time to complete a particular task or executes the task at a rate less than the threshold execution rate, application 20 may be in violation of the performance criteria. [0083] As another example of emulator unit 26 detecting malicious applications or error-prone applications, emulator unit 26 may monitor instructions issued by application 20. For instance, in some examples, instructions issued by application 20 may be 96-bit words. However, not all combinations of 96 bits represent a valid instruction. In some examples, GPU 14 may be designed to ignore invalid instructions; however, this may not be the case for every example of GPU 14. To prevent GPU 14 from inadvertently executing an invalid instruction, emulator unit 26 may determine whether the instructions issued by application 20 during execution are valid or invalid instructions. If emulator unit 26 determines that application 20 is issuing invalid instructions, emulator unit 26 may determine that application 20 is a malicious application, an error-prone application, or an inefficient application. [0084] As another example, during execution, application 20 may write data to and read data from registers. A malicious application, error-prone application, or inefficient application may read data from unwritten registers. 
If application 20 attempts to read data from a register that was not previously written to, the data read by application 20 may be meaningless data (i.e., uninitialized data). Such reading of uninitialized data may result in unpredictable behavior. In some examples, emulator unit 26 may monitor which registers application 20 writes to during execution, and may determine whether application 20 is reading from a register that has not previously been written to. If emulator unit 26 determines that application 20 is reading from unwritten registers, emulator unit 26 may determine that application 20 is a malicious application, an error-prone application, or an inefficient application. [0085] If emulator unit 26 determines that the performance criteria associated with static analysis and dynamic analysis are met, validation server device 24 may transmit an indication to device 12 indicating that application 20, with some level of assurance, satisfies one or more performance criteria associated with static analysis, dynamic analysis, or both static and dynamic analysis (e.g., validates application 20). In this case, validation server device 24 may provide an indication that application 20 is validated for use by GPU 14. Otherwise, in some examples, validation server device 24 may transmit an indication to device 12 indicating that application 20 is invalidated for use by GPU 14, such that it is inadvisable for GPU 14 to execute application 20. In response, processor 16 may determine whether to instruct GPU 14 to execute application 20 based on the received indication. [0086] In examples where validation server device 24 received source code or intermediate code of application 20, emulator unit 26 may also transmit the compiled object code of application 20, as compiled by compiler 36. In this way, the compilation of application 20 may also be offloaded from device 12 to an external device, such as validation server device 24. 
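The unwritten-register check described above might be sketched as follows, assuming a hypothetical trace of write/read events recorded during emulation:

```python
# Illustrative sketch of the uninitialized-register check: track which
# registers have been written, and flag any read of a register that was never
# written. The register names and trace format are hypothetical.

def find_uninitialized_reads(trace):
    """trace is a sequence of ("write"|"read", register) events."""
    written = set()
    flagged = []
    for op, reg in trace:
        if op == "write":
            written.add(reg)
        elif op == "read" and reg not in written:
            flagged.append(reg)
    return flagged

# r1 is written before it is read; r2 is read without ever being written.
trace = [("write", "r1"), ("read", "r1"), ("read", "r2")]
print(find_uninitialized_reads(trace))  # ['r2']
```

A non-empty result would be one signal that the application is malicious, error-prone, or inefficient.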
[0087] Validation server device 24 may also be tasked with optimizing or tuning application 20. For example, emulator unit 26 may receive the source code or intermediate code of application 20. As part of the static and/or dynamic analysis, emulator unit 26 may determine that application 20 is somewhat error-prone or would inefficiently utilize the capabilities of GPU 14. In these examples, rather than transmitting an indication to device 12 indicating that it is inadvisable for GPU 14 to execute application 20, emulator unit 26 may attempt to correct the errors of application 20 or attempt to tune application 20 for GPU 14 when it is determined that application 20 may execute inefficiently or with errors on GPU 14. [0088] If emulator unit 26 is able to correct the errors or make application 20 more efficient, emulator unit 26 may compile the modified code of application 20 to generate object code that GPU 14 should execute. Emulator unit 26 may then transmit the resulting object code to device 12 with an indication that GPU 14 should execute the resulting object code. In this case, GPU 14 may execute the object code generated from the modified code, rather than the object code generated from the original code of application 20. Alternatively, emulator unit 26 may transmit the modified code of application 20 without compilation. [0089] In either of these examples, the validation of application 20 may be considered as being part of the transmission of the modified code of application 20 (e.g., the transmission of the modified code or the resulting object code). For example, when device 12 receives modified code of application 20 from validation server device 24, device 12 may automatically determine that the modified code of application 20 is suitable for execution because device 12 received the modified code of application 20 from validation server device 24. 
In this sense, the validation that device 12 receives from validation server device 24 may be an explicit validation or an implicit validation. In either case, i.e., explicit or implicit validation, emulator unit 26 may determine with some level of assurance that application 20 or the modified version of application 20 satisfies one or more performance criteria. [0090] If emulator unit 26 is unable to correct the errors of application 20, emulator unit 26 may transmit the indication indicating that it is inadvisable to execute application 20 on GPU 14. If emulator unit 26 is unable to make application 20 more efficient, emulator unit 26 may still transmit an indication to device 12 indicating that it may be suitable for GPU 14 to execute application 20 because, while application 20 may not be completely efficient, application 20 may not be error-prone or malicious. [0091] To tune or optimize application 20, emulator unit 26 may insert code (e.g., source code or intermediate code), replace code, or modify code of application 20 in some other manner. In some examples, emulator unit 26 may collect statistics to determine how well the compiled code of application 20 works. For example, application 20 may utilize array indices for storing variable values in an array. Emulator unit 26 may add code into the source code of application 20 that checks that the array indices utilized by application 20 are within range. Emulator unit 26 may add code into the source code of application 20 that causes application 20 to abort when an array index is not within range. Emulator unit 26 may then compile the modified source code to produce object code for execution of application 20 by GPU 14. [0092] Optimization or tuning may be based on the assumption that applications, such as application 20, are generally developed to exploit the high level of parallelism of GPU 14. 
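The array-index instrumentation described above might behave like the following sketch, in which the helper name and abort mechanism are illustrative assumptions:

```python
# Hedged sketch of the bounds-checking code that emulator unit 26 might insert
# into application 20: each array access is wrapped in a check, and execution
# aborts (here, raises) when an index is out of range.

def checked_index(array, index):
    """Abort when the index is outside the array's valid range."""
    if not 0 <= index < len(array):
        raise IndexError(f"index {index} out of range for array of "
                         f"length {len(array)}")
    return array[index]

values = [10, 20, 30]
print(checked_index(values, 1))   # 20
try:
    checked_index(values, 5)      # out of range: application would abort
except IndexError as e:
    print("aborted:", e)
```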
If the developer did not intend to exploit the parallelism of GPU 14, the developer would have developed application 20 to not execute on GPU 14, and rather execute on processor 16. [0093] For example, the developer of application 20 may have developed application 20 to perform image processing on blocks of images in parallel. As described above, the size of the blocks of the images may be based on the amount of available local memory on GPU 14. Because the developer may not know how much memory is available on GPU 14, the developer may develop application 20 to use variable-sized blocks, instead of the more efficient fixed-size blocks. For example, fixed-size blocks may be more efficient because the size of the blocks does not change during execution. [0094] In some examples, emulator unit 26 may determine the optimal size for the blocks because the GPU model of GPU models 30 that corresponds to GPU 14 may include information that indicates the size of the local memory of GPU 14. In this example, emulator unit 26 may select the optimal size for the blocks based on the amount of available local memory on GPU 14, the amount of data that will need to be written to or read from the local memory of GPU 14, and other such information which may not be available to the developer of application 20. In aspects of this disclosure, emulator unit 26 would know how much local memory is available and how much data needs to be written to or read from local memory because emulator unit 26 may execute application 20 on the GPU model of GPU models 30 that corresponds to GPU 14. [0095] In these examples, emulator unit 26 may update or otherwise modify the source code or intermediate code of application 20 to fix the block size to the optimally determined size. In other words, emulator unit 26 may determine the optimal size of the blocks to best utilize the parallelism of GPU 14. 
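One possible way of fixing the block size from the modeled local memory is sketched below; the sizing formula and all parameter names are assumptions for illustration, not the method of the disclosure:

```python
# Illustrative sketch: pick the largest fixed square block (in pixels per
# side) that fits in the modeled local memory of GPU 14, after subtracting
# memory reserved for other data the application keeps resident.

def choose_block_size(local_memory_bytes, bytes_per_pixel, reserved_bytes):
    """Largest square block side length fitting in the usable local memory."""
    usable = local_memory_bytes - reserved_bytes
    side = int((usable // bytes_per_pixel) ** 0.5)
    return max(side, 1)

# e.g., 64 KiB of modeled local memory, 4 bytes per pixel, 8 KiB reserved
print(choose_block_size(64 * 1024, 4, 8 * 1024))  # fixed block side length
```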
Emulator unit 26 may then compile this modified code of application 20, and transmit the resulting object code to device 12 for execution on GPU 14. In this way, when GPU 14 executes the modified application 20, the modified application 20 may execute more efficiently on GPU 14, as compared to the original application 20. [0096] In another example for optimization, as described above, application 20 may perform matrix operations. In this example, emulator unit 26 may determine whether column-based matrix operations or row-based matrix operations are handled more easily by GPU 14. For instance, emulator unit 26 may cause the GPU model of GPU models 30 that corresponds to GPU 14 to execute application 20 using row-based matrix operations and using column-based matrix operations. Emulator unit 26 may compare the efficiency of the column-based and row-based matrix operations (e.g., number of accesses to memory, amount of processing time, and other such efficiency measures). Based on the measured efficiency, emulator unit 26 may modify the code of application 20. For example, if column-based operations are more efficiently executed than row-based operations, emulator unit 26 may modify the code of application 20 so that the matrix operations are performed as column-based operations. Similarly, if row-based operations are more efficiently executed than column-based operations, emulator unit 26 may modify the code of application 20 so that the matrix operations are performed as row-based operations. [0097] In another example for optimization, as described above, the developer of application 20 may have developed application 20 to be executed on older versions of GPUs. In this case, application 20 may properly execute on a GPU such as GPU 14; however, application 20 may not fully exploit the functionality of GPU 14. 
For example, application 20 may unnecessarily limit the amount of graphics or non-graphics data that GPU 14 should process in parallel because older versions of GPUs may be limited in processing capabilities. In this example, emulator unit 26 may modify the code of application 20 such that, when application 20 is executed, application 20 causes GPU 14 to process more data in parallel. There may be other examples of ways in which emulator unit 26 may modify application 20 such that application 20 is better suited for execution on newer GPUs, and aspects of this disclosure should not be considered limited to the above examples. [0098] After optimizing application 20, emulator unit 26 may transmit the modified or updated code of application 20 to device 12. In this example, processor 16 may compile the code of application 20, as received from emulator unit 26, and instruct GPU 14 to execute the resulting object code. In some other examples, emulator unit 26 may compile the modified application 20, via compiler 36, and transmit the resulting object code to device 12. In this example, processor 16 may instruct GPU 14 to execute the received object code for application 20. [0099] In some examples, emulator unit 26 may validate application 20 and optimize or tune application 20 once. After such validation, GPU 14 may execute application 20 as needed without requiring further validation or optimization. Also, in some examples, after emulator unit 26 validates application 20, emulator unit 26 may store an indication in server memory 28 that indicates that this application 20 has already been validated. In these examples, when emulator unit 26 receives code for validation, emulator unit 26 may first determine whether emulator unit 26 previously validated the code based on the indication stored in server memory 28. If emulator unit 26 previously validated the code, emulator unit 26 may immediately validate that received code. 
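The previously-validated-code shortcut described above might be sketched with a hash-based record kept in server memory; the digest scheme and all names are illustrative assumptions:

```python
# Hedged sketch of remembering previously validated code: store a digest of
# each validated application, and immediately validate later submissions whose
# code hashes to a stored digest, skipping the static/dynamic analysis.
import hashlib

validated_digests = set()

def validate(code: bytes, run_full_analysis) -> bool:
    """Return True if code is validated, consulting the digest cache first."""
    digest = hashlib.sha256(code).hexdigest()
    if digest in validated_digests:
        return True                      # previously validated: skip analysis
    if run_full_analysis(code):          # stands in for static/dynamic analysis
        validated_digests.add(digest)
        return True
    return False

app_code = b"code of application 20"
print(validate(app_code, lambda c: True))   # full analysis runs: True
print(validate(app_code, lambda c: False))  # cache hit: True without re-analysis
```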
For example, emulator unit 26 may validate application 20, as received from device 12. Subsequently, emulator unit 26 may receive code for application 20 from a device other than device 12. In this case, emulator unit 26 may first determine that the received code is the same as the code that emulator unit 26 previously validated, and if so, may immediately validate the received code. In this manner, emulator unit 26 may not need to perform the static and/or dynamic analysis again for previously validated code. [0100] FIG. 2 is a flowchart illustrating an example operation of device 12. For purposes of illustration only, reference is made to FIG. 1. Device 12 may receive application 20 that is to be executed by GPU 14 (40). For example, device 12 may download application 20 from application server device 38. As another example, application 20 may be preloaded on device memory 18. As described above, device 12 may receive the source code, intermediate code (e.g., intermediate representation of application 20), or object code of application 20. [0101] Device 12 may transmit the code of application 20 to validation server device 24 (42). For example, device 12 may transmit the source code, intermediate code, or object code of application 20 to validation server device 24 for validation of application 20. In some examples, device 12 may transmit the code of application 20 to validation server device 24 once for validation. GPU 14, of device 12, may then execute application 20 as needed without requiring subsequent validation. [0102] In response to transmitting the code of application 20 to validation server device 24 for validation, device 12 may receive the validation from validation server device 24 (44). Alternatively, device 12 may receive an invalidation; that is, device 12 may receive either a validation or an invalidation. The validation from validation server device 24 may indicate that application 20 satisfies one or more performance criteria. 
If application 20 does not satisfy the one or more performance criteria, validation server device 24 may indicate that application 20 did not satisfy the performance criteria. For example, the validation may indicate that application 20 satisfies performance criteria associated with static analysis, dynamic analysis, or both static and dynamic analysis. In some examples, validation server device 24 may optimize or tune application 20 to make application 20 more efficient or less error-prone. In this case, the validation may indicate that the modified version of application 20 satisfies one or more performance criteria. [0103] In some examples, processor 16 of device 12 may instruct GPU 14 of device 12 to execute application 20 based on the validation (48). For example, if validation server device 24 indicates that application 20 satisfies the performance criteria, processor 16 may instruct GPU 14 to execute application 20. Otherwise, processor 16 may not allow GPU 14 to execute application 20. [0104] In some alternate examples, prior to execution, device 12 may receive a modified version of application 20 (46). In FIG. 2, the dashed lines from block 44 to block 46 and from block 46 to block 48 are used to indicate that the functions of block 46 may not be necessary in every example. For instance, validation server device 24 may be able to optimize or tune application 20, and may transmit the modified version of application 20. As another example, device 12 may transmit the source code or intermediate code of application 20, and receive a compiled version of application 20 from validation server device 24. As yet another example, device 12 may receive a compiled version of the code as modified by validation server device 24 (e.g., modified for optimization or tuning). In these examples, processor 16 may instruct GPU 14 to execute the modified version of application 20 (48). [0105] FIG. 3 is a flowchart illustrating an example operation of validation server device 24. 
For purposes of illustration only, reference is made to FIG. 1. Validation server device 24 may receive application 20, which is to be executed by GPU 14, from device 12 (50). For example, validation server device 24 may receive source code, intermediate code, or object code of application 20 from device 12 via network 22. [0106] Validation server device 24 may perform at least one of static analysis and dynamic analysis on application 20 (52). For example, as part of static analysis, emulator unit 26 of validation server device 24 may compile the code of application 20, and monitor for any errors during the compilation of application 20. As part of the dynamic analysis, emulator unit 26 of validation server device 24 may execute a virtual model of GPU 14, or the virtual model of GPU 14 and a virtual model of device 12. As described above, GPU models 30 and device models 34 may include a virtual model of GPU 14 and device 12, respectively. In some examples, GPU models 30 and device models 34 may include a generic GPU model and a generic device model. [0107] For example, emulator unit 26 may receive an identification of GPU 14 and/or device 12 from device 12. Emulator unit 26 may identify which one of GPU models 30 corresponds to GPU 14 and which one of device models 34 corresponds to device 12, and execute the corresponding GPU and device models. If there are no corresponding GPU and/or device models for GPU 14 and device 12, or if emulator unit 26 did not receive an identification of GPU 14 and/or device 12, emulator unit 26 may execute the generic GPU and device models. [0108] As part of the dynamic analysis, emulator unit 26 may execute application 20 and provide application 20 with GPU inputs 32 for analyzing application 20. In these examples, application 20 may be considered as executing on the corresponding virtual model of GPU 14, which is executing on emulator unit 26. 
In this way, emulator unit 26 may execute application 20, as if application 20 is executing on GPU 14. Emulator unit 26 may monitor the functions performed by the corresponding virtual model of GPU 14 such as memory accesses, rate of execution, termination instance, and other functions pertinent to the functionality of GPU 14. [0109] Emulator unit 26 may determine whether application 20 satisfies one or more performance criteria (54). The one or more performance criteria may be performance criteria associated with static analysis and performance criteria associated with dynamic analysis. For example, the one or more performance criteria may be criteria that there are no errors in the compilation of application 20, as evaluated by compiling application 20 during the static analysis. As another example, the one or more performance criteria may be criteria that application 20 not access out-of-bounds memory locations and not use up resources of GPU 14 such that GPU 14 is not able to perform other tasks in parallel, as evaluated by executing application 20 and providing application 20 with GPU inputs 32 during the dynamic analysis. There may be other examples of performance criteria that emulator unit 26 may determine that application 20 satisfies. [0110] Validation server device 24 may transmit a validation of application 20 to device 12 based on the determination (56). For example, validation server device 24 may transmit a validation of application 20 to device 12 if application 20 satisfies the one or more performance criteria. Otherwise, validation server device 24 may transmit an invalidation if application 20 does not satisfy the one or more performance criteria. For example, if emulator unit 26 determines that application 20 satisfies the one or more performance criteria, validation server device 24 may transmit an indication to device 12 indicating as such. 
Alternatively, if emulator unit 26 determines that application 20 does not satisfy the one or more performance criteria, validation server device 24 may transmit an indication to device 12 indicating as such. [0111] FIG. 4 is a flowchart illustrating another example operation of validation server device 24. For purposes of illustration only, reference is made to FIGS. 1 and 3. Similar to FIG. 3, validation server device 24 may receive application 20, which is to be executed by GPU 14, from device 12 (58). In this example, emulator unit 26 may modify application 20 (e.g., the source code or intermediate code of application 20) to optimize or tune application 20. For example, emulator unit 26 may modify the code of application 20 so that application 20 executes more efficiently on GPU 14. Validation server device 24 may then transmit modified application 20 to device 12 (62). In some examples, validation server device 24 may transmit the source code or intermediate code of the modified application 20. As another example, validation server device 24 may compile the modified code of application 20, and transmit the resulting object code to device 12. [0112] FIG. 5 is a block diagram illustrating the example device of FIG. 1 in further detail. For instance, FIG. 5 illustrates device 12 of FIG. 1 in further detail. For example, as indicated above, examples of device 12 include, but are not limited to, mobile wireless telephones, PDAs, video gaming consoles that include video displays, mobile video conferencing units, laptop computers, desktop computers, television set-top boxes, and the like. [0113] As illustrated in FIG. 5, device 12 may include GPU 14, processor 16, device memory 18, transceiver module 64, user interface 66, display 68, and display processor 70. GPU 14, processor 16, and device memory 18 may be substantially similar or identical to those illustrated in FIG. 1. For purposes of brevity, only the components that are shown in FIG. 5, but not shown in FIG.
1 are described in detail. [0114] Device 12 may include additional modules or units not shown in FIG. 5 for purposes of clarity. For example, device 12 may include a speaker and a microphone, neither of which are shown in FIG. 5, to effectuate telephonic communications in examples where device 12 is a mobile wireless telephone, or a speaker where device 12 is a media player. Furthermore, the various modules and units shown in device 12 may not be necessary in every example of device 12. For example, user interface 66 and display 68 may be external to device 12 in examples where device 12 is a desktop computer or other device that is equipped to interface with an external user interface or display. [0115] Examples of user interface 66 include, but are not limited to, a trackball, a mouse, a keyboard, and other types of input devices. User interface 66 may also be a touch screen and may be incorporated as a part of display 68. Transceiver module 64 may include circuitry to allow wireless or wired communication between device 12 and another device or a network. Transceiver module 64 may include one or more modulators, demodulators, amplifiers, antennas and other such circuitry for wired or wireless communication. Display 68 may comprise a liquid crystal display (LCD), an organic light emitting diode display (OLED), a cathode ray tube (CRT) display, a plasma display, a polarized display, or another type of display device. [0116] In some examples, after GPU 14 generates the graphics data for display on display 68, GPU 14 may output the resulting graphics data to device memory 18 for temporary storage. Display processor 70 may retrieve the graphics data from device memory 18, perform any post-processing on the graphics data, and output the resulting graphics data to display 68. For example, display processor 70 may perform any further enhancements or scale the graphics data generated by GPU 14.
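The device-side gating described in paragraphs [0103]-[0104] — transmit the application for validation, execute it on the GPU only if the server validates it, and prefer a server-modified build when one is returned — can be sketched as follows. This is an illustrative sketch only; the names (`ValidationResult`, `dispatch_to_gpu`) and the callable-based interfaces are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ValidationResult:
    """Hypothetical stand-in for the server's response (blocks 44/46)."""
    valid: bool                           # did the app satisfy the performance criteria?
    modified_app: Optional[bytes] = None  # optional optimized/compiled build from the server


def dispatch_to_gpu(app: bytes,
                    validate: Callable[[bytes], ValidationResult],
                    execute: Callable[[bytes], str]) -> str:
    """Submit the application for remote validation; execute on the GPU
    only if validated (blocks 42-48 of FIG. 2)."""
    result = validate(app)                # blocks 42-44: transmit, receive validation
    if not result.valid:
        # processor 16 does not allow GPU 14 to execute the application
        raise PermissionError("application failed validation; not dispatched to GPU")
    # block 46 (optional): use the server-modified version when one was returned
    return execute(result.modified_app or app)   # block 48
```

The same structure covers both the unmodified path (block 44 straight to block 48) and the optional optimized-build path (block 46), since `modified_app` is simply `None` when the server returns no modified version.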
[0117] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium. Computer-readable media may include computer data storage media. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. By way of example, and not limitation, such computer-readable media can comprise random access memory (RAM), read-only memory (ROM), EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0118] The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0119] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (i.e., a chip set). Various components, modules or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. [0120] Various examples have been described. These and other examples are within the scope of the following claims.
Security techniques for a Peripheral Component Interconnect (PCI) express (PCIE) system include a transport layer protocol (TLP) packet that has a prepended TLP prefix indicating the security features of the TLP packet and an integrity check value (ICV) appended to the TLP packet. The ICV is based on the TLP packet and any TLP prefixes including a security prefix. At a receiver, if the ICV does not match, then the receiver has evidence that the TLP packet may have been subjected to tampering. Further, the TLP packet may be encrypted to prevent snooping, and this feature would be indicated in the TLP prefix. Still further, the TLP prefix may include a counter that may be used to prevent replay attacks. PCIE contemplates flexible TLP prefixes, and thus, the standard readily accommodates the addition of a TLP prefix which indicates the security features of the TLP packet.
What is claimed is:1. A method of providing secure communications between devices on either end of a Peripheral Component Interconnect (PCI) express (PCIE) link, comprising:prepending a transport layer protocol (TLP) prefix onto a TLP packet, wherein the TLP prefix comprises an indication that the TLP packet is a secure packet;appending a cryptographically-generated identifier calculated at least in part on a portion of the TLP packet to the TLP packet; andsending the TLP packet from a first one of the devices over the PCIE link to the other one of the devices.2. The method of claim 1, further comprising forming the TLP packet.3. The method of claim 2, wherein forming the TLP packet comprises making a write command in the TLP packet.4. The method of claim 3, further comprising encrypting a payload in the TLP packet.5. The method of claim 2, wherein forming the TLP packet comprises making a read command in the TLP packet.6. The method of claim 5, further comprising, responsive to sending the TLP packet, receiving a secure completion packet.7. The method of claim 6, further comprising decrypting a payload in the secure completion packet.8. The method of claim 1, wherein prepending the TLP prefix onto the TLP packet comprises prepending with a TLP prefix comprising a payload encrypted bit.9. The method of claim 1, wherein appending the cryptographically-generated identifier to the TLP packet comprises appending an integrity check value (ICV) to the TLP packet.10. The method of claim 1, wherein prepending the TLP prefix onto the TLP packet comprises prepending with a TLP prefix comprising a packet number.11. The method of claim 1, wherein prepending the TLP prefix onto the TLP packet comprises prepending with a TLP prefix comprising a key number bit.12. 
A method of providing secure communications between devices on either end of a Peripheral Component Interconnect (PCI) express (PCIE) link, comprising:prepending a transport layer protocol (TLP) prefix onto a TLP packet, wherein the TLP prefix comprises an indication that the TLP packet is a secure packet;encrypting a payload of the TLP packet; andsending the TLP packet from a first one of the devices over the PCIE link to the other one of the devices.13. The method of claim 12, further comprising forming the TLP packet.14. The method of claim 13, wherein forming the TLP packet comprises making a write command in the TLP packet.15. The method of claim 13, wherein prepending the TLP prefix onto the TLP packet comprises prepending with a TLP prefix comprising a payload encrypted bit.16. The method of claim 12, wherein prepending the TLP prefix onto the TLP packet comprises prepending with a TLP prefix comprising a packet number.17. A method of providing secure communications between devices on either end of a Peripheral Component Interconnect (PCI) express (PCIE) link, comprising:prepending a transport layer protocol (TLP) prefix onto a TLP packet, wherein the TLP prefix comprises an indication that the TLP packet is a secure packet and includes a counter value representing a monotonically-increasing counter to detect replay attacks; andsending the TLP packet from a first one of the devices over the PCIE link to the other one of the devices.18. The method of claim 17, further comprising forming the TLP packet.19. The method of claim 18, wherein forming the TLP packet comprises making a write command in the TLP packet.20. The method of claim 19, further comprising encrypting a payload in the TLP packet.21. The method of claim 18, wherein forming the TLP packet comprises making a read command in the TLP packet.22. The method of claim 17, wherein prepending the TLP prefix onto the TLP packet comprises prepending with a TLP prefix comprising a payload encrypted bit.23. 
The method of claim 17, further comprising appending a cryptographically-generated identifier to the TLP packet.24. The method of claim 17, further comprising running different counters for different types of packets.25. The method of claim 17, further comprising separate counters for read commands, write commands, and completion packets.26. A Peripheral Component Interconnect (PCI) express (PCIE) system comprising: a host device comprising:a root complex;a host encryption/decryption engine; anda host interface;a PCIE link coupled to the host interface; andan endpoint device comprising:an endpoint interface coupled to the PCIE link; andan endpoint encryption/decryption engine;wherein the root complex is configured to:prepend a transport layer protocol (TLP) prefix onto a TLP packet, wherein the TLP prefix comprises:an indication that the TLP packet is a secure packet; and a counter value representing a monotonically-increasing counter to detect replay attacks;encrypt a payload of the TLP packet;append a cryptographically-generated identifier calculated at least in part on a portion of the TLP packet to the TLP packet; andsend the TLP packet from a first one of the host device and the endpoint device over the PCIE link to the other one of the host device and the endpoint device.27. The PCIE system of claim 26, further comprising a switch positioned within the PCIE link.28. The PCIE system of claim 27, wherein the host device is configured to provide end-to-end security and the switch is configured to pass the TLP packet through without modification.29. The PCIE system of claim 27, wherein the switch is configured to decrypt the TLP packet and re-encrypt prior to sending the TLP packet to the endpoint device.30. 
The PCIE system of claim 26, wherein the PCIE system is integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.
SECURITY TECHNIQUES FOR A PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIE) SYSTEM
PRIORITY APPLICATIONS
[0001] The present application is related to and claims the benefit of U.S. Provisional Patent Application Serial Number 62/731,286, filed September 14, 2018 and entitled "SECURITY TECHNIQUES FOR A PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIE) SYSTEM."[0002] The present application is also related to and claims the benefit of U.S. Provisional Patent Application Serial Number 62/745,542, filed October 15, 2018 and entitled "SECURITY TECHNIQUES FOR A PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIE) SYSTEM."[0003] The present application is also related to and claims the benefit of U.S. Provisional Patent Application Serial Number 62/788,264, filed January 4, 2019 and entitled "SECURITY TECHNIQUES FOR A PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIE) SYSTEM."[0004] The present application is also related to and claims the benefit of U.S. Provisional Patent Application Serial Number 62/840,643, filed April 30, 2019 and entitled "SECURITY TECHNIQUES FOR A PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIE) SYSTEM."[0005] The present application is also related to and claims the benefit of U.S. Patent Application Serial Number 16/569,816, filed September 13, 2019 and entitled "SECURITY TECHNIQUES FOR A PERIPHERAL COMPONENT INTERCONNECT (PCI) EXPRESS (PCIE) SYSTEM."[0006] The above-listed applications are incorporated by reference in their entireties.
BACKGROUND
I. Field of the Disclosure
[0007] The technology of the disclosure relates generally to Peripheral Component Interconnect (PCI) express (PCIE) systems and, more particularly, to providing security to such PCIE systems.
II. Background
[0008] Computing devices have become common in modern society. 
The increase in use of computing devices is attributable, in part, to increased functionality of the devices. In many instances, the increase in functionality is a result of different integrated circuits (ICs) within the computing device, each having different capabilities. A byproduct of having plural ICs in a computing device is a requirement to have some mechanism through which the ICs may communicate.[0009] One popular mechanism is a bus compliant with the Peripheral Component Interconnect (PCI) standard. PCI has evolved through several versions and has a variety of variations. Perhaps the most popular variation as of this writing is PCI Express (PCIE). At the time of the parent provisional applications, the most recent version of PCIE was revision 5.0, version 0.7, which was published on March 31, 2018. More recently, revision 5.0, version 1.0 was published on May 28, 2019. While the PCIE standard is flexible and widely used, it has, to date, not incorporated any security measures to prevent unauthorized tampering, snooping, replays, or the like.[0010] Historically, the lack of security measures was mitigated by the fact that the conductors that carried PCIE-compliant signals were relatively inaccessible within a computing device. However, recent developments have seen PCIE adopted outside traditional computing devices and expanded into roles previously not contemplated. For example, PCIE may be used within a wiring harness within a vehicle. The longer conductors between components lead to greater vulnerability and increase the need for a security system for a PCIE link.
SUMMARY OF THE DISCLOSURE
[0011] Aspects disclosed in the detailed description include security techniques for a Peripheral Component Interconnect (PCI) express (PCIE) system. In an exemplary aspect, a transport layer protocol (TLP) packet has a TLP prefix prepended indicating the security features of the TLP packet. 
Such security features may include a counter or counter equivalent to prevent replay attacks, encryption of a payload of the TLP packet to prevent snooping, and/or an authentication value calculated from one or more portions of the TLP packet to detect tampering. The TLP prefix may indicate which, if any, of the security features are present in the associated TLP packet. In an exemplary aspect, the counter may be a monotonically-increasing number included in each packet. In an exemplary aspect, the authentication value may be an integrity check value (ICV) appended to the TLP packet. The ICV is based on the TLP packet and any TLP prefixes including a security prefix. At a receiver, if the ICV does not match, then the receiver has evidence that the TLP packet may have been subjected to tampering.[0012] In this regard, in one aspect, a method of providing secure communications between devices on either end of a PCIE link is disclosed. The method includes prepending a TLP prefix onto a TLP packet. The TLP prefix includes an indication that the TLP packet is a secure packet. The method also includes appending a cryptographically-generated identifier calculated at least in part on a portion of the TLP packet to the TLP packet. The method also includes sending the TLP packet from a first one of the devices over the PCIE link to the other one of the devices.[0013] In another aspect, a method of providing secure communications between devices on either end of a PCIE link is disclosed. The method includes prepending a TLP prefix onto a TLP packet. The TLP prefix includes an indication that the TLP packet is a secure packet. The method also includes encrypting a payload of the TLP packet. The method also includes sending the TLP packet from a first one of the devices over the PCIE link to the other one of the devices.[0014] In another aspect, a method of providing secure communications between devices on either end of a PCIE link is disclosed. 
The method includes prepending a TLP prefix onto a TLP packet. The TLP prefix includes an indication that the TLP packet is a secure packet and includes a counter value representing a monotonically-increasing counter to detect replay attacks. The method also includes sending the TLP packet from a first one of the devices over the PCIE link to the other one of the devices.[0015] In another aspect, a PCIE system is disclosed. The PCIE system includes a host device. The host device includes a root complex, a host encryption/decryption engine, and a host interface. The PCIE system also includes a PCIE link coupled to the host interface. The PCIE system also includes an endpoint device. The endpoint device includes an endpoint interface coupled to the PCIE link and an endpoint encryption/decryption engine. The root complex is configured to prepend a TLP prefix onto a TLP packet. The TLP prefix includes an indication that the TLP packet is a secure packet and a counter value representing a monotonically-increasing counter to detect replay attacks. The root complex is also configured to encrypt a payload of the TLP packet. The root complex is also configured to append a cryptographically-generated identifier calculated at least in part on a portion of the TLP packet to the TLP packet. 
The root complex is also configured to send the TLP packet from a first one of the host device and the endpoint device over the PCIE link to the other one of the host device and the endpoint device.
BRIEF DESCRIPTION OF THE FIGURES
[0016] Figure 1 is a block diagram of an exemplary computing system with devices coupled by Peripheral Component Interconnect (PCI) express (PCIE) links;[0017] Figure 2 illustrates a block diagram of an exemplary PCIE endpoint device and, particularly, configuration registers within the endpoint;[0018] Figure 3 illustrates a block diagram of a host having a processor and PCIE hardware with registers according to an exemplary aspect of the present disclosure;[0019] Figure 4 is a simplified schematic diagram of an exemplary computing system within a vehicle;[0020] Figure 5 is a simplified PCIE write packet with a prefix and suffix according to exemplary aspects of the present disclosure;[0021] Figure 6 is a simplified PCIE read packet with a prefix and suffix according to exemplary aspects of the present disclosure;[0022] Figure 7 is a simplified PCIE completion packet with a prefix and suffix according to exemplary aspects of the present disclosure;[0023] Figure 8 provides a simplified diagram of packet flow through a PCIE system according to exemplary aspects of the present disclosure;[0024] Figure 9 is a block diagram of an exemplary mobile terminal that can include a PCIE system that uses security features according to the present disclosure; and[0025] Figures 10A and 10B are flowcharts of exemplary security-driven processes according to the present disclosure from a root complex and endpoint perspective, respectively.
DETAILED DESCRIPTION
[0026] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. 
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.[0027] Aspects disclosed in the detailed description include security techniques for a Peripheral Component Interconnect (PCI) express (PCIE) system. In an exemplary aspect, a transport layer protocol (TLP) packet has a TLP prefix prepended indicating the security features of the TLP packet. Such security features may include a counter or counter equivalent to prevent replay attacks, encryption of a payload of the TLP packet to prevent snooping, and/or an authentication value calculated from one or more portions of the TLP packet to detect tampering. The TLP prefix may indicate which, if any, of the security features are present in the associated TLP packet. In an exemplary aspect, the counter may be a monotonically-increasing number included in each packet. In an exemplary aspect, the authentication value may be an integrity check value (ICV) appended to the TLP packet. The ICV is based on the TLP packet and any TLP prefixes including a security prefix. At a receiver, if the ICV does not match, then the receiver has evidence that the TLP packet may have been subjected to tampering.[0028] Before addressing particular aspects of the present disclosure, an overview of a PCIE system and exemplary use cases is provided with reference to Figures 1-4. Exemplary packets according to the present disclosure are provided beginning with reference to Figure 5.[0029] In this regard, Figure 1 illustrates a computing environment 100 with a host 102 coupled to a plurality of devices 104(1)-104(N) directly and to a second plurality of devices 106(1)-106(M) through a switch 108. The host 102 may include a PCIE root complex (RC) 110 that includes a link interface (not illustrated directly) that is configured to couple to plural PCIE links 112(1)-112(N+1). 
Note that PCIE links may sometimes be referred to as a bus or buses. However, as the PCIE link is a point-to-point link, the term link is used in the PCIE specification. The switch 108 communicates to the devices 106(1)-106(M) through PCIE links 114(1)-114(M). The devices 104(1)-104(N) and 106(1)-106(M) may be or may include PCIE endpoints. In a first exemplary aspect, the computing environment 100 may be a single computing device such as a computer with the host 102 being a central processing unit (CPU) and the devices 104(1)-104(N) and 106(1)-106(M) being internal components such as hard drives, disk drives, or the like. In a second exemplary aspect, the computing environment 100 may be a computing device where the host 102 is an integrated circuit (IC) on a board and the devices 104(1)-104(N) and 106(1)-106(M) are other ICs within the computing device. In a third exemplary aspect, the computing environment 100 may be a computing device having an internal host 102 coupled to external devices 104(1)-104(N) and 106(1)-106(M) such as a server coupled to one or more external memory drives. Note that these aspects are not necessarily mutually exclusive in that different ones of the devices may be ICs, internal, or external relative to a single host 102.[0030] Figure 2 provides a block diagram of a device 200 that may be one of the devices 104(1)-104(N) or the devices 106(1)-106(M) of Figure 1. In particular, the device 200 acts as an endpoint in a PCIE system, and may be, for example, a memory device that includes a memory element 202 and a control system 204. Further, the device 200 includes a PCIE hardware element 206 that includes a link interface configured to couple to a PCIE link. The PCIE hardware element 206 may include a physical layer (PHY) 208 that is, or works with, the link interface to communicate over the PCIE link. The control system 204 communicates with the PCIE hardware element 206 through a system bus 210. 
The PCIE hardware element 206 may further include a plurality of registers 212. The registers 212 may be conceptually separated into configuration registers 214 and capability registers 216. The configuration registers 214 and the capability registers 216 are defined by the original PCI standard, and more recent devices that include the registers 214 and 216 are backward compatible with legacy devices. The ability to use the encryption systems or other security mechanisms of the present disclosure may be stored in the capability registers 216 or extended configuration register space 218 and accessed on start up or initialization. The PCIE hardware element 206 may further have an encryption/decryption engine 220 that encrypts and decrypts packets sent according to the present disclosure.[0031] Similarly, Figure 3 illustrates a host 300 which may be the host 102 of Figure 1. The host 300 may include an application processor 302 or other processor core which communicates with a memory element 304 having an operating system 306 operating therewith. A system bus 308 interconnects the application processor 302 with the memory element 304 and a PCIE RC 310. The PCIE RC 310 may include a PHY 312 that works with or is a link interface configured to couple to a PCIE link. The PCIE RC 310 further includes a plurality of registers including a configuration address register 318 (CONFIG_ADDR) and a data register 320 (CONFIG_DATA). The capabilities and configurations of the various endpoints may be stored in the data register 320 so that the root complex may use the security features of the present disclosure with those endpoints which are so enabled. 
Further, the PCIE RC 310 may include an encryption/decryption engine 322 that encrypts and decrypts packets sent according to the present disclosure.[0032] Note that having the encryption/decryption engines 220 and 322 at both the endpoint and the root complex allows bi-directional encrypted communication.[0033] Figure 4 is a simplified block diagram of a vehicle 400 which may include one or more PCIE links therein. The vehicle 400 is illustrated as an automobile, but could be another form of vehicle such as a motorcycle, a boat, a plane, or the like. The vehicle 400 may include a variety of sensors 402(1)-402(N), where, as illustrated, N=7. It should be appreciated that more or fewer than seven sensors 402 may be present. The sensors 402(1)-402(N) may be proximity sensors that use sonar, lasers, or some form of radar to detect proximate objects. Additionally, the vehicle 400 may include one or more internal sensors 404(1)-404(2). The internal sensors 404(1)-404(2) may detect whether a door 406 is open or other internal condition of the vehicle 400. The vehicle 400 may further include one or more cameras 408(1)-408(M), where, as illustrated, M=4. It should be appreciated that more or fewer than four cameras 408 may be present. The vehicle 400 may have a network 410 that couples some or all of the sensors 402 and 404 to a hub 412. Network bridges 414 may be present to assist in providing the network 410. Displays 416 and speakers 418 may also be associated with the network 410. The hub 412 may include a control system that accesses software stored in memory 420.[0034] It should be appreciated that the illustration of the vehicle 400 in Figure 4 is greatly simplified. The network 410 may be a single homogeneous network such as a common bus having a multi-drop or ring topology or may be formed from distinct communication links such as separate point-to-point cables. There may likewise be multiple hubs for multiple purposes. Some inputs may go to one or more hubs. 
The hubs may be interconnected and/or duplicated for redundancy purposes. In the event that the communication links are point-to-point cables, PCIE may be used, and thus, automotive or other vehicular environments may benefit from exemplary aspects of the present disclosure. In particular, automobile providers may not want their algorithms exposed to competitors or to be hacked in such a way that autonomous driving or engine performance is compromised.

[0035] Exemplary aspects of the present disclosure provide three supporting security measures to assist in providing secure messaging between hosts and endpoints over a PCIE link. A first security measure is encrypting a payload of a packet such that the packet cannot be snooped. A second security measure is using a counter or counter equivalent to prevent replay attacks. A third security measure is to provide a cryptographically-generated identifier or authentication mechanism calculated at least in part on the contents of a packet to detect tampering. To assist in using these security measures, exemplary aspects of the present disclosure provide a TLP prefix that includes an indication as to whether the specific security measures are used.

[0036] In this regard, processes 1000A and 1000B for implementing the security techniques of the present disclosure are provided with reference to Figures 10A and 10B, respectively. The process 1000A is from the root complex point-of-view, and the process 1000B is from the endpoint point-of-view. While the processes 1000A and 1000B assume certain points of view for the sake of illustration, it should be appreciated that communication may be bi-directional, and the endpoint 200 can read from and write to the root complex 310 within a host 300. In this regard, the process 1000A begins with the initialization of the system (block 1002).
This initialization may occur at start up, reboot, or the like, but involves the root complex 310 recognizing that the endpoint 200 has been connected to the PCIE link. The root complex 310 reads the capabilities of the endpoint 200 (block 1004) such as by reading the registers in the capability registers 216. Once the root complex 310 knows that the endpoint 200 is capable of secure PCIE, the root complex 310 may enable the secure function in the endpoint 200 and establish and exchange secure keys (block 1006). Keys may be negotiated based on a public key infrastructure (PKI) system through a Diffie-Hellman key exchange or any other key exchange protocol. Keys may be provisioned and used as-is or with some derivation if needed or desired.

[0037] The process 1000A continues with the root complex 310 determining if the payload should not be snooped (block 1008). This determination may be based on a default rule, a type of endpoint to which the root complex 310 is communicating, or other rule as needed or desired. At some point, the root complex 310 is informed that the host 300 has data that is to be written to the endpoint. Thus, a write command is needed (block 1010) to provide the data to the endpoint 200. The root complex 310 creates a header (block 1012) corresponding to the write command with the appropriate address. The header may include a packet number that acts as a counter to prevent replay attacks. A TLP prefix is created containing an indication of which, if any, security features are being used (block 1014). The payload is encrypted and an integrity check value (ICV) is calculated (block 1016) depending on whether encryption was indicated at block 1008 and/or whether tamper detection is desired. This step may be skipped if there is no requirement to protect the payload from snooping or tamper detection. The root complex 310 appends the ICV (block 1018) based on the prefix, the header, and the payload.
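The assembly in blocks 1012-1018 can be sketched as below. HMAC-SHA-256 is used here purely as a stand-in integrity algorithm (the text contemplates an integrity cypher such as AES-GCM-128), and the field contents are placeholders:

```python
import hashlib
import hmac

ICV_LEN = 32  # the "long ICV" length discussed later in the text


def build_secure_write(key, prefix, header, payload):
    """Assemble prefix || header || payload || ICV for a secure write.

    The ICV is calculated over the prefix, the header, and the payload,
    matching block 1018; HMAC-SHA-256 stands in for AES-GCM-128 here.
    """
    body = prefix + header + payload
    icv = hmac.new(key, body, hashlib.sha256).digest()
    return body + icv
```

A payload-encryption step (block 1016) would run before the ICV calculation when the payload-encrypted indication is set.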
The write command is sent to the endpoint 200 (block 1020) over the PCIE link.

[0038] In the event the root complex 310 is instructed to acquire data from the endpoint 200, a read command is needed (block 1022), and the process 1000A continues. The root complex 310 creates a header (block 1024) with an address from which data is to be read. The root complex 310 creates a TLP prefix (block 1026) indicating the secure nature of the packet. The root complex 310 calculates and appends an ICV (block 1028) based on the combined header and prefix. The root complex 310 then sends the read command (block 1030). As a read command, there is typically no payload, and thus, encryption may be omitted. The root complex 310 should eventually receive a secure completion packet from the endpoint 200 (block 1032). The root complex 310 decrypts the payload of the secure completion packet if encryption was indicated (block 1034) and checks the ICV of the secure completion packet (block 1036) to see if the data in the payload of the secure completion packet has been compromised. Note that the decryption of the payload of the packet and the check of the ICV may occur at the same time or in reversed order without departing from the scope of the present disclosure. If the ICV check fails, the receiver may invoke a failure comparable to a failure for an end-to-end cyclic redundancy check (ECRC) error. That is, for a memory read command, a completion is returned with an Unsupported Request (UR) Completion Status. As the receiver cannot differentiate between a naturally occurring error and an attack, software may be used to make such determination. Alternatively, the receiver may terminate the connection as corrupted. After termination, a new encryption link may be established with new keys. Likewise, an alert may be provided to an intrusion detection system or the like.
Other shifts in encryption (algorithm, length of key, or the like) may also be performed to facilitate re-establishment of a secure or safe state. The packet number of the TLP prefix may also be verified to stop replay attacks. The root complex 310 may then use the data received (block 1038).

[0039] Note that the read and write commands may be reversed, duplicated, or otherwise occur in a different order than that presented in process 1000A. Note further that the ICV may be replaced with some other authentication value or cryptographically-generated identifier that is calculated based at least in part on one or more portions of the packet. Likewise, while a packet number that acts as a monotonically-increasing counter is contemplated as being used to detect replay attacks, other forms of counter equivalents may be used without departing from the scope of the present disclosure.

[0040] The process 1000B from the endpoint 200 side is similar in that the process 1000B starts at system initialization (block 1050). During initialization, the endpoint 200 may provide an indication that the endpoint 200 has secure capability (block 1052). If the root complex 310 enables secure communication, the root complex 310 and the endpoint 200 exchange keys (block 1054). At some point, the root complex 310 sends, and the endpoint 200 receives, a write command (block 1056). The endpoint 200 decrypts the payload (block 1058) if the payload is encrypted and checks the ICV of the command (block 1060). The ICV may be checked before the payload is decrypted if desired. The endpoint 200 also checks the packet number to see if this packet is the next in sequence to prevent replay attacks (or at least detect an attempted replay attack). Out of order packets may be discarded. If the ICV is correct, then the endpoint 200 writes the data from the payload to the address in the header (block 1062).
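The receive-side checks in blocks 1058-1062 can be sketched as below. Again HMAC-SHA-256 stands in for the integrity cypher, and the 32-byte ICV length is an assumption for the sketch:

```python
import hashlib
import hmac

ICV_LEN = 32


def accept_secure_write(key, packet, packet_number, expected_number):
    """Verify the ICV over the packet body and confirm the packet number
    is the next in sequence; replayed or out-of-order packets are
    rejected even when the ICV itself is valid."""
    body, icv = packet[:-ICV_LEN], packet[-ICV_LEN:]
    good_icv = hmac.compare_digest(
        hmac.new(key, body, hashlib.sha256).digest(), icv)
    return good_icv and packet_number == expected_number
```

Only when both checks pass would the endpoint write the payload to the address in the header.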
Such address may correspond to an address in the memory element 202.

[0041] At some other time, a read command is received (block 1064). The endpoint 200 checks the ICV (block 1066). If the ICV is correct, then the endpoint 200 retrieves the data from the address (in the memory element 202) indicated in the header (block 1068) and creates a packet with a header (block 1070). The packet may include an appropriate packet number. The endpoint 200 may then create a TLP prefix (block 1072). The endpoint 200 encrypts the payload if encryption is indicated and calculates an ICV for the packet (block 1074) and appends the ICV (block 1076) to the packet. The endpoint 200 then sends a secure completion packet with the payload to the root complex 310 (block 1078).

[0042] Again, it should be noted that the nature of bi-directional communication could invert the roles of the root complex 310 and the endpoint 200 with respect to the origin of a read/write command and the corresponding response.

[0043] Further, it should be noted that in instances where there is an intervening switch or bridge (e.g., the switch 108), aspects of the present disclosure are secure end-to-end such that such an intermediate switch does not impact the encryption. For example, the switch 108 may pass the packet through without checking or changing the data. All that the switch 108 has to evaluate is the header for the address so that the pass-through may occur, and the header is not encrypted. It should be appreciated that if there is no intervening switch, then exemplary aspects of the present disclosure are effectively link-based security (e.g., the link between host 102 to EP 104(1) is both end-to-end secure as well as link-based secure). Note that each link of a multi-step PCIE system (e.g., from host 102 to EP 106(1) through the switch 108) may be link-based secure, although in such case, each component may need to be authenticated. This structure requires the switches 108 to be encryption capable.
Legacy switches that do not have encryption capability may need to be replaced in such a system.

[0044] To implement the processes 1000A and 1000B, a new TLP prefix is defined and prepended to a packet. Likewise, an ICV or other authentication value calculated based at least in part on portions of the packet is appended to the packet. Exemplary modified packets are illustrated in Figures 5-7. In this regard, Figure 5 illustrates a secure write packet 500 that has a header 502, a payload 504 to be written to the endpoint 200 (or, if originating at the endpoint 200, to be written to the host 300), a TLP prefix 506, and an ICV 508. The TLP prefix 506 is prepended to the header 502, and the ICV 508 is appended after the payload 504. The header 502 is never encrypted as the information in the header 502 is required for routing purposes. The header 502 is well understood with fields defined by the PCIE specification.

[0045] It should be appreciated that the header 502 may be signed (but still not encrypted) as outlined in AES-GCM-128. Such signed headers may be referred to as “additional authenticated data” (AAD).

[0046] The TLP prefix 506 includes a TLP identifier field 510 which indicates that the TLP prefix 506 is a security prefix. Further, the TLP prefix 506 includes a key number (KN) bit 512, a long ICV (LI) bit 514, a payload encrypted (PE) bit 516 (alternatively referred to as a TLP encrypted (TE) bit), and a packet number field 518. It should be appreciated that TLP prefixes are defined in terms of eight-bit sections. The TLP identifier field 510 is eight bits, and the KN bit 512, the LI bit 514, and the PE bit 516 are another three bits. The packet number field 518 may be twenty-one (21) bits to provide an appropriately-sized TLP prefix 506.

[0047] The payload 504 may be encrypted if needed or desired. If the payload 504 is encrypted, this state is indicated by setting the PE bit 516.
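Using the field widths from paragraph [0046] — an eight-bit identifier, the KN, LI, and PE bits, and a twenty-one-bit packet number — the prefix packs exactly into a 32-bit word. The bit ordering chosen below is an illustrative assumption; the text does not specify it:

```python
def pack_tlp_prefix(identifier, kn, li, pe, packet_number):
    """Pack the secure TLP prefix fields into one 32-bit word:
    [identifier:8][KN:1][LI:1][PE:1][packet number:21]."""
    assert 0 <= identifier < (1 << 8)
    assert kn in (0, 1) and li in (0, 1) and pe in (0, 1)
    assert 0 <= packet_number < (1 << 21)
    return ((identifier << 24) | (kn << 23) | (li << 22) | (pe << 21)
            | packet_number)
```

The 8 + 1 + 1 + 1 + 21 = 32-bit total is what makes the prefix "appropriately sized" in eight-bit sections.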
If the data in the payload 504 merely needs to avoid tampering, and there is no concern about snooping, then the data in the payload 504 may not be encrypted. Forgoing encryption in this fashion may reduce the overall overhead that would otherwise be incurred encrypting at the root complex 310 and decrypting at the endpoint 200. Note that in an exemplary aspect, if the PE bit 516 is set in a read request, the return completion payload should be encrypted. Note that TLP data is DWORD aligned (e.g., 4 bytes). A block cypher may accept 16 bytes of data (e.g., block aligned). Thus, padding data may not be sent but may be appended by a security layer at the receiver to perform calculations.

[0048] The ICV 508 may be a long ICV having thirty-two (32) bytes or a short ICV having sixteen (16) bytes. The difference in length of the ICV 508 is denoted in the LI bit 514. As noted above, the ICV 508 may be calculated using an integrity cypher algorithm such as AES-GCM-128 and may be calculated across the secure TLP prefix, the TLP header, and the payload (i.e., it is calculated at least in part based on one or more portions of the packet).

[0049] Note that while specific fields and bits are contemplated, these bits and fields may be modified without departing from the scope of the present disclosure. For example, the LI bit 514 may be omitted if only one size ICV is permitted. Likewise, the LI bit 514 may more generically indicate the presence or absence of an authentication value (or a cryptographically-generated identifier) appended after the packet. While exemplary aspects of the present disclosure contemplate that the ICV may replace a cyclic redundancy check (CRC) field (e.g., the CRC field defined by the PCIE specification) in a packet, it may be possible to append an ICV or other authentication value after the CRC field.
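The alignment note above — DWORD-aligned TLP data versus a 16-byte block cypher — implies that the receiver's security layer pads the data before performing its calculations. A minimal sketch, assuming zero padding (the padding value is not specified in the text):

```python
def pad_to_block(data, block=16):
    """Append zero padding so DWORD-aligned TLP data fills complete
    16-byte cypher blocks; the padding is not sent on the link but is
    reconstructed by the receiver's security layer."""
    shortfall = (-len(data)) % block
    return data + b"\x00" * shortfall
```

Because the padding is deterministic, both sides compute identical blocks without the pad bytes ever crossing the link.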
It should be appreciated that in addition to detecting tampering, use of the ICV or other cryptographically-generated identifier may also detect bit errors such as is currently done with the CRC.

[0050] The KN bit 512 designates which key is being used. As is understood in the cryptography field, keys can expire after a certain amount of time. Rekeying is an understood process. However, because the present disclosure relies on a shared key, the endpoint may also be rekeyed when the first key expires. So that the endpoint knows which key is being used (e.g., before or after rekeying), the KN bit 512 may be toggled to indicate which key is being used.

[0051] The packet number field 518 contains a packet number which is a monotonically-increasing number that is used to prevent replay attacks. The packet number may also be used to help detect missing packets if needed or desired. In use, the receiver expects valid packets with an incrementing packet number. If a packet is received with a non-sequential packet number, the packet may be discarded, even if the ICV is valid. By discarding packets with duplicative or non-sequential packet numbers, replay attacks (reusing the same packet to achieve duplicative results) are avoided. If twenty-one bits does not fit for some reason, the packet number may be shortened or just the least significant bits sent.

[0052] Likewise, instead of counting packets with the packet number field 518, the packet number field 518 may refer to a TLP number, or some other element may be counted (e.g., sets of four packets or sets of three write commands) so long as it is monotonically increasing and able to be compared with a readily verifiable metric to defeat replay attacks.
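The discard rule in paragraph [0051] can be sketched as a receiver-side guard. The wrap of the 21-bit field is an assumption for this sketch, since the text elsewhere describes the field as the low bits of a larger counter:

```python
PACKET_NUMBER_BITS = 21


class ReplayGuard:
    """Accept only the next packet number in sequence; duplicated or
    non-sequential numbers are discarded even if the ICV is valid."""

    def __init__(self):
        self.expected = 0

    def accept(self, packet_number):
        if packet_number != self.expected:
            return False  # possible replay or out-of-order delivery
        self.expected = (self.expected + 1) % (1 << PACKET_NUMBER_BITS)
        return True
```

A replayed packet carries an already-consumed number, so it fails this check regardless of its cryptographic validity.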
It should be appreciated that the packet number may also be used to formulate an initialization vector (IV) as an input into a block cypher algorithm such as AES-GCM-128.

[0053] In an exemplary aspect, the packet number represents the twenty-one least significant bits of a larger counter (e.g., 50 bits) to represent how long a key may last before being refreshed. When the larger counter is about to overflow, the KN bit 512 may be toggled to indicate the second key is to be used. The larger the counter, the less frequently the key would have to be refreshed.

[0054] To prevent replay, the larger counter is incremented on both the root complex and the endpoint. It should be appreciated that if the counter is reset during a power cycle event, a replay attack may be enabled. Thus, in an exemplary aspect, the counter is maintained across power cycles. To this end, one side may store the last counter value in a secure non-volatile memory. Note that devices which exchange keys during a session start up can restart the counter. A configuration register may also be used to set the counter. As a further option, each type of PCIE TLP (posted, non-posted, completion) may have a separate counter.

[0055] Similarly, Figure 6 illustrates a secure read packet 600 that has a header 602, a TLP prefix 606, and an ICV 608. The TLP prefix 606 is prepended to the header 602, and the ICV 608 is appended after the header 602. As this is a read command, there is no payload. The header 602 is never encrypted as the information in the header 602 is required for routing purposes. The header 602 is well understood with fields defined by the PCIE specification.

[0056] It should be appreciated that the header 602 may be signed (but still not encrypted) as outlined in AES-GCM-128. Such signed headers may be referred to as “additional authenticated data” (AAD).

[0057] The TLP prefix 606 includes a TLP identifier field 610 which indicates that the TLP prefix 606 is a security prefix.
Further, the TLP prefix 606 includes a KN bit 612, an LI bit 614, a PE bit 616, and a packet number field 618. It should be appreciated that TLP prefixes are defined in terms of eight-bit sections. The TLP identifier field 610 is eight bits, and the KN bit 612, the LI bit 614, and the PE bit 616 are another three bits. The packet number field 618 may be twenty-one (21) bits to provide an appropriately-sized TLP prefix 606. The sub-portions of the TLP prefix 606 function as described above. The PE bit 616 indicates whether the returned payload should be encrypted.

[0058] It should be appreciated that, as with the TLP prefix 506 of Figure 5, the bits and fields of the TLP prefix 606 may be varied, renamed, or alternatively implemented without departing from the present disclosure.

[0059] Similarly, Figure 7 illustrates a secure completion packet 700 responsive to a read command. The secure completion packet 700 has a header 702, a payload 704 to be written to the endpoint, a TLP prefix 706, and an ICV 708. The TLP prefix 706 is prepended to the header 702, and the ICV 708 is appended after the payload 704. The header 702 is never encrypted as the information in the header 702 is required for routing purposes. The header 702 is well understood with fields defined by the PCIE specification.

[0060] It should be appreciated that the header 702 may be signed (but still not encrypted) as outlined in AES-GCM-128. Such signed headers may be referred to as “additional authenticated data” (AAD).

[0061] The TLP prefix 706 includes a TLP identifier field 710 which indicates that the TLP prefix 706 is a security prefix. Further, the TLP prefix 706 includes a KN bit 712, an LI bit 714, a PE bit 716, and a packet number field 718. It should be appreciated that TLP prefixes are defined in terms of eight-bit sections. The TLP identifier field 710 is eight bits, and the KN bit 712, the LI bit 714, and the PE bit 716 are another three bits.
The packet number field 718 may be twenty-one (21) bits to provide an appropriately-sized TLP prefix 706.

[0062] It should be appreciated that, as with the TLP prefix 506 of Figure 5, the bits and fields of the TLP prefix 706 may be varied, renamed, or alternatively implemented without departing from the present disclosure.

[0063] Figure 8 illustrates where in the stack aspects of secure signaling take place. In this regard, Figure 8 illustrates a system 800 having a transmitter 802 and a receiver 804. In the transmitter 802, at a transaction layer 806, the TLP header (e.g., 502, 602, 702) is generated along with the TLP prefix (e.g., 506, 606, 706), and the payload is optionally encrypted. Based on this, an ICV (e.g., 508, 608, 708) is calculated and appended. The combined packet is then passed to a data link layer 808 and to a physical layer 810, where the packet is sent over a PCIE link 812 to the receiver 804. At the receiver 804, the packet passes through a physical layer 814 and a data link layer 816 before reaching a transaction layer 818 where the payload is decrypted (including any padding), the ICV is validated, and the TLP header is evaluated to output the payload to an appropriate address. As noted above, in an end-to-end secure system, the intermediate switches 108 do not alter the TLP packets and are not required to decrypt or validate the data. A PCIE port may send secure or unsecure TLP packets based on whether a secure TLP prefix is prepended. It should be appreciated that a completion for a secure read request TLP must be returned with a secure TLP prefix.

[0064] It should be appreciated that an RC 110 which is generating secure requests should know which key and counter set it should use to protect the data. If there is only one endpoint 200 that is secure, then such determination is relatively simple. However, if plural endpoints (e.g., 104(1)-104(N)) are using secure messaging, the determination is non-trivial.
As described, a key and counter are an end-to-end attribute and would differ from PCIE link to PCIE link for a single RC 110. Thus, the RC 110 needs to be able to discern the destination endpoint. One solution would be to use software that would sniff data within the RC 110 to determine a destination and alert the PHY as to the destination. A second solution would be to define PCIE capabilities in which a PCIE driver would build a table of address ranges and the corresponding security attributes.

Secure memory map:

[0065] Note that the dashed line in the second row may indicate an unsecure endpoint, or if the address is not within the table, the unsecure nature may be inferred.

[0066] Note that an endpoint may have multiple entries corresponding to different memory regions of the same endpoint (e.g., for different functions) but would still use the same key as the endpoint is the same.

[0067] Some additional precautions may be present when a switch, such as the switch 108 of Figure 1, is present. Such a switch should be trusted and authenticated. Likewise, the switch 108 should block traffic received on a secure link from being sent on an unsecure link. A key management entity should maintain secure key-pairs through the whole PCIE topology. A secure PCIE link may be brought up sequentially starting with the link closest to the RC 110 and moving outward to assist in making sure the end-to-end link is secure (e.g., link 112(N+1) first, then link 114(1) of Figure 1). Note that there may be additional vulnerabilities for a switch 108 that are caused by internal vulnerabilities. For example, a packet going from endpoint 106(1) to endpoint 106(M) may pass through an internal conductor within the switch 108. If the data is encrypted end-to-end, then the data remains secure on the internal conductor.
However, if the security is link based, then the data may have been unencrypted at arrival at the switch 108, and then re-encrypted as it leaves the switch 108, leaving the data unprotected on the internal conductors. Alternatively, additional security measures may be provided to protect the data in such situations.

[0068] The security techniques for a PCIE system according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter.

[0069] In this regard, Figure 9 is a system-level block diagram of an exemplary mobile terminal 900 such as a smart phone, mobile computing device, tablet, or the like. The mobile terminal 900 includes an application processor 904 (sometimes referred to as a host) that communicates with a mass storage element 906 through a universal flash storage (UFS) bus 908. The application processor 904 may further be connected to a display 910 through a display serial interface (DSI) bus 912 and a camera 914 through a camera serial interface (CSI) bus 916.
Various audio elements such as a microphone 918, a speaker 920, and an audio codec 922 may be coupled to the application processor 904 through a serial low-power interchip multimedia bus (SLIMbus) 924. Additionally, the audio elements may communicate with each other through a SOUNDWIRE bus 926. A modem 928 may also be coupled to the SLIMbus 924 and/or the SOUNDWIRE bus 926. The modem 928 may further be connected to the application processor 904 through a PCIE bus 930 and/or a system power management interface (SPMI) bus 932. PCIE buses such as the PCIE bus 930 may benefit from exemplary aspects of the present disclosure.

[0070] With continued reference to Figure 9, the SPMI bus 932 may also be coupled to a wireless local area network (LAN or WLAN) IC (LAN IC or WLAN IC) 934, a power management integrated circuit (PMIC) 936, a companion IC (sometimes referred to as a bridge chip) 938, and a radio frequency IC (RFIC) 940. It should be appreciated that separate PCI buses 942 and 944 may also couple the application processor 904 to the companion IC 938 and the WLAN IC 934. The application processor 904 may further be connected to sensors 946 through a sensor bus 948. The modem 928 and the RFIC 940 may communicate using a bus 950.

[0071] With continued reference to Figure 9, the RFIC 940 may couple to one or more RFFE elements, such as an antenna tuner 952, a switch 954, and a power amplifier 956 through a radio frequency front end (RFFE) bus 958. Additionally, the RFIC 940 may couple to an envelope tracking power supply (ETPS) 960 through a bus 962, and the ETPS 960 may communicate with the power amplifier 956. Collectively, the RFFE elements, including the RFIC 940, may be considered an RFFE system 964. It should be appreciated that the RFFE bus 958 may be formed from a clock line and a data line (not illustrated).

[0072] It should be appreciated that designers may have different priorities with respect to security.
There are trade-offs for using link-based security versus end-to-end security. These differences are noted in the second appendix attached hereto.

[0073] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The devices described herein may be employed in any circuit, hardware component, IC, or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0074] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

[0075] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

[0076] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0077] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A nitride read only memory (NROM) cell has a nitride layer that is not located under the center of the transistor. The gate insulator layer, with the nitride layer, is comprised of two sections that each have structurally defined and separated charge trapping regions. A charge is stored on a particular trapping region in response to the direction that the transistor is operated. The two sections of the gate insulator separate outer regions of the polysilicon gate structure from the middle region.
1. A nitride read-only memory flash memory transistor, including: a substrate including a first source/drain region and a second source/drain region; a continuous oxide layer on the substrate, the continuous oxide layer covering the first source/drain region and the second source/drain region; a gate insulating layer coupled to and formed on top of a portion of the continuous oxide layer, the gate insulating layer including a first portion formed above the first source/drain region and an isolated second portion formed above the second source/drain region; and a gate electrode including a plurality of parts, namely an intermediate part coupled to the continuous oxide layer, a first outer layer part coupled to the first portion of the gate insulating layer, and a second outer layer part coupled to the second portion of the gate insulating layer, the gate insulating layer separating the intermediate part of the gate electrode from the first and second outer layer parts of the gate electrode. 2. The transistor according to claim 1, wherein the gate insulating layer comprises a composite oxide-nitride-oxide layer. 3. The transistor according to claim 1, wherein the gate insulating layer is an oxide-nitride-alumina composite layer, or an oxide-alumina-oxide composite layer, or an oxide-silicon oxycarbide-oxide composite layer. 4. The transistor according to claim 1, wherein the gate insulating layer is a non-composite layer containing silicon oxide formed by wet oxidation without annealing; or a silicon-rich oxide containing nano-scale silicon particles; or a silicon oxynitride layer; or a silicon-rich alumina insulator; or a silicon oxycarbide insulator; or a silicon oxide insulator containing nano-scale silicon carbide particles. 5. The transistor of claim 1, wherein the gate insulator is composed of two or more non-stoichiometric single layers of silicon, nitrogen, aluminum, titanium, tantalum, hafnium, lanthanum, or zirconium. 6. The
transistor of claim 1, wherein the first charge is stored in the first portion of the gate insulating layer and the second charge is stored in the second portion of the gate insulating layer.7.The transistor according to claim 1, further comprising an oxide filling layer coupled to the first and second portions of the gate insulating layer and at least a portion of the first and second outer layer portions of the gate electrode.8.The transistor of claim 1 further comprising metal contacts coupled to multiple portions of the gate electrode.9.The transistor of claim 1, wherein the substrate is p + material and the first and second source / drain regions are n + material.10.The transistor of claim 1, wherein the substrate is coupled to a negative bias that enhances hot electron injection.11.The transistor of claim 1, further comprising an oxide material coupled to the first and second outer layer portions of the gate electrode and the gate insulating layer portion not within the gate electrode.12.The transistor according to claim 1, wherein the transistor operates in a first source / drain region or a second source / drain region which is a source region in response to a working direction of the transistor.13.A method for manufacturing a flash memory cell of a nitride read-only memory, the method comprising:Forming first and second source / drain regions on both sides of the substrate and separated by the channel region;Forming a continuous oxide layer on the substrate including the first source / drain region and the second source / drain region and the channel region, covering the first source / drain region and the second Above the source / drain region;Forming a polysilicon intermediate gate region on the continuous oxide layer above the channel region;Forming a gate insulating layer on the continuous oxide layer and the middle gate region of polysilicon;Forming a polysilicon layer on the gate insulating layer;Etching the polysilicon layer so that the two outer 
gate regions remain in the polysilicon layer, thereby forming a gate electrode having an intermediate gate region and two outer gate regions isolated from the intermediate gate region by the gate insulating layer;Removing the top of the gate electrode to remove the gate insulating layer from the top of the gate electrode; andA contact coupled to each area of the gate electrode is formed on the gate electrode and the end of the gate insulating layer is retained.14.The method of claim 13, further comprising depositing an oxide filler on the flash memory cell of the nitride read-only memory.15.The method of claim 13, wherein forming the first source / drain region and the second source / drain region includes doping the substrate.16.The method of claim 13, further comprising etching the oxide layer before depositing the gate insulator to expose one of the first source / drain region and the second source / drain region formed on the substrate The channel region.17.The method of claim 13, wherein removing the top of the gate electrode includes planarizing using chemical mechanical polishing.18.A nitride read-only memory flash memory array storage device, including:A plurality of nitride read-only memory flash memory cells arranged in rows and columns, each flash memory cell including:A substrate including a first source / drain region and a second source / drain region;A continuous oxide layer on the substrate, the continuous oxide layer covering the first source / drain region and the second source / drain region;A gate insulating layer coupled and formed on two regions of the oxide layer, wherein the two regions of the oxide layer are each formed on the first source / drain region and the second source / drain region, the gate insulation The layer includes a first portion formed above the first source / drain region and an isolated second portion formed above the second source / drain region; andThe gate electrode includes a plurality of parts, namely a middle part 
coupled to the continuous oxide layer and a first outer layer part coupled to the first part of the gate insulating layer and a second outer layer part coupled to the second part of the gate insulating layer, The gate insulating layer thus isolates the middle portion of the gate electrode from the first and second outer layer portions of the gate electrode;Multiple word lines, each word line coupled to the gate electrode of the cell row; andA plurality of bit lines coupled to the column of cells, each bit line coupled to the first source / drain region of at least one nitride read-only memory flash memory cell in the column.19.The nitride read-only memory flash memory array storage device of claim 18, wherein the plurality of nitride read-only memory flash memory cells are configured in a NAND flash memory architecture.20.The nitride read-only memory flash memory array storage device of claim 18, wherein the plurality of nitride read-only memory flash memory cells are configured in a NOR flash memory architecture.21.An electronic system, including:A processor that generates control signals for the system; andA nitride read-only memory flash memory array storage device coupled to a processor operating in response to a control signal, the array storage device including:Multiple nitride read-only memory flash memory cells arranged in multiple rows and columns, each cell including:A substrate including a first source / drain region and a second source / drain region;A continuous oxide layer on the substrate, the continuous oxide layer covering the first source / drain region and the second source / drain region;A gate insulating layer coupled and formed on two regions of the oxide layer, wherein the two regions of the oxide layer are each formed on the first source / drain region and the second source / drain region, the gate insulation The layer includes a first portion formed above the first source / drain region and an isolated second portion formed above the second 
source / drain region; andThe gate electrode includes a plurality of parts, namely a middle part coupled to the continuous oxide layer and a first outer layer part coupled to the first part of the gate insulating layer and a second outer layer part coupled to the second part of the gate insulating layer, The gate insulating layer thus isolates the middle portion of the gate electrode from the first and second outer layer portions of the gate electrode;Multiple word lines, each word line coupled to the gate electrode of the cell row; andA plurality of bit lines coupled to the column of cells, each bit line coupled to the first source / drain region of at least one nitride read-only memory flash memory cell in the column.
NROM flash memory transistor and manufacturing method thereof, NROM flash memory array, and electronic system

Technical field

The present invention relates generally to memory devices, and more particularly to nitride read-only memory (NROM) flash memory devices.

Background

Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory today, including random access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), and flash memory.

Flash memory devices have become the mainstream non-volatile memory for a wide range of electronic applications. Flash memory devices generally use single-transistor memory cells that achieve high storage density, high reliability, and low power consumption. Common uses of flash memory include personal computers, personal digital assistants (PDAs), digital video cameras, and cellular phones. Program code and system data, such as a basic input/output system (BIOS), are typically stored in flash memory devices for use in personal computer systems.

One type of flash memory is nitride read only memory (NROM). NROM has some of the characteristics of flash memory but does not require the special fabrication processes of flash memory; NROM integrated circuits can be implemented using standard CMOS processing.

FIG. 1 shows a cross-sectional view of a typical prior art NROM memory cell with a channel length L greater than 100 nm. The cell is composed of a control gate 100 formed on top of an oxide-nitride-oxide (ONO) layer. This layer consists of an oxide layer 101 on top of a nitride layer 103; charge is stored on the nitride layer 103 to represent the various states of the cell. In one embodiment, the cell has trap regions 105, 106 that store two bits of data on the nitride layer 103.
The nitride layer 103 is deposited on another oxide layer 104 on the substrate.

Two source/drain regions 109, 111 are located at the two ends of the gate 100 and are connected to each other through the channel region 110 between them. The function of each source/drain region 109 or 111 (i.e., whether it acts as a source or a drain) depends on which of the bit regions 105 or 106 is being read or written. For example, in a read operation, if carriers enter at the left source/drain region 111 and exit from the right region 109, the left side is the source 111 and the right side is the drain 109, and the data bit charge is stored on the nitride 103 at the source terminal 111 in bit region 106.

As IC manufacturers have tried to increase the storage density of NROM devices, channel lengths have decreased. FIG. 2 shows a typical prior art planar NROM device with a channel length less than 100 nm. In this case, the channel length is so short that the bit trap regions 205, 206 overlap. The overlap can cause data write/read errors.

For the above reasons, and for other reasons that will become apparent to those skilled in the art upon reading and understanding this specification, there is a need in the art for a smaller multi-bit NROM device whose trap regions do not overlap.

Summary of the invention

The present invention addresses the above-mentioned problems of overlapping trap regions, as well as other problems, which will be better understood by reading and studying the following description.

The present invention comprehends nitride read-only memory (NROM) flash transistors. The transistor is composed of a substrate having a first source/drain region and a second source/drain region.
A continuous oxide layer is deposited on the substrate, overlying the first source/drain region and the second source/drain region.

A gate insulating layer is coupled to and formed on top of a portion of the continuous oxide layer. The gate insulating layer includes an isolated first portion formed above the first source/drain region and a second portion formed above the second source/drain region; the two portions are structurally separated by the middle portion of the polysilicon gate electrode. Each portion can store an isolated charge.

The middle portion of the gate electrode is isolated from the outer portions of the gate electrode by the gate insulating layer. The top of the gate electrode and the portion of the gate insulator deposited on the top of the gate electrode are planarized, and metal contacts are coupled to the three parts of the gate structure and to the ends of each portion of the gate insulator.

Other embodiments of the invention include methods and apparatus of varying scope.

Brief description of the drawings

FIG. 1 shows a cross-sectional view of a typical prior art NROM cell with a channel longer than 100 nm;
FIG. 2 shows a cross-sectional view of a typical prior art NROM cell with a channel shorter than 100 nm;
FIG. 3 shows a cross-sectional view of an embodiment of the NROM cell of the present invention;
FIG. 4 shows a charge isolation and distribution diagram of the present invention according to the embodiment of FIG. 3;
FIG. 5 shows a detailed cross-sectional view of a charge storage region according to the embodiment of FIG. 3;
FIG. 6 shows a cross-sectional view of an embodiment of the fabrication steps of the NROM cell of the present invention;
FIGS. 7-10 show cross-sectional views of embodiments of subsequent steps in fabricating the NROM cell of the present invention;
FIG. 11 shows a cross-sectional view of one embodiment of programming an NROM cell of the present invention using substrate-enhanced hot electron injection;
FIG. 12 shows a block diagram of an electronic system of the present invention.

Detailed description

In the following detailed description of the present invention, reference is made to the accompanying drawings, which form a part hereof and illustratively show specific embodiments in which the present invention may be practiced. In the drawings, like reference numerals indicate substantially similar parts throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present invention. Other embodiments may be used, and structural, logical, and electrical changes may be made, without departing from the scope of the invention. Therefore, the following detailed description should not be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and their equivalents.

FIG. 3 shows a cross-sectional view of one embodiment of the NROM cell of the present invention. The cell is composed of two charge storage regions 301, 302, which are described in more detail below in conjunction with FIG. 5.
In this embodiment, unlike the prior art, no nitride layer is provided below the center of the transistor channel.

The cell has a polysilicon gate structure 313-315 composed of a middle portion 315 and two outer portions 313, 314. Gate insulators are formed on both sides of the middle portion 315 of the gate structure so that the gate insulator separates the middle portion 315 from the two outer gate portions 313, 314. A control gate metal contact 312 is formed over all three parts 313-315 of the gate structure.

The middle gate portion 315 has only a single oxide insulator 320 and does not trap injected electrons as the NROM device structure does. In one embodiment, the gate insulator is a composite insulator having an oxide-nitride-oxide (ONO) structure, in which charge trapping occurs in the nitride layers 305, 306. In one embodiment, the top oxide layers 301, 302 are part of the oxide fillers 303, 304, respectively.

Other embodiments use gate insulators other than the ONO structure shown. These include oxide-nitride-alumina composite layers, oxide-alumina-oxide composite layers, oxide-silicon oxycarbide composite layers, and other composite layers.

In yet other embodiments, the gate insulator may include: a thicker-than-normal silicon oxide layer formed by wet oxidation without annealing; a silicon-rich oxide containing nanoscale silicon particles; a silicon oxynitride layer; a silicon-rich alumina insulator that is not part of a composite layer; a silicon oxycarbide insulator that is not part of a composite layer; a silicon oxide insulator containing nanoscale silicon carbide particles; and non-stoichiometric single layers of two or more commonly used insulator materials such as Si, N, Al, Ti, Ta, Hf, Zr, and La.

The embodiment of FIG. 3 also includes two source/drain regions 310 and 311.
In the described embodiment, these regions are n+ type semiconductor material, while the substrate is p+ type semiconductor material. In another embodiment, p+ type semiconductor material may be used for the source/drain regions with an n+ substrate.

The function of each source/drain region 310 or 311 depends on which of the bit regions 301, 302 is being read or written. For example, in a read operation, if carriers enter at the left source/drain region 311 and exit from the right region 310, the left side is the source 311 and the right side is the drain 310, and the data bit charge is stored on the nitride layer 306 at the source terminal 311 in bit region 302.

FIG. 4 shows an embodiment of a charge isolation and distribution diagram for the NROM cell embodiment of FIG. 3. The graph plots charge storage density in the vertical direction against distance along the cell in the horizontal direction. The channel length between the source/drain regions of FIG. 3 is denoted L.

The two charges stored in the NROM cell appear on the charge isolation and distribution diagram at positions consistent with the charge storage regions 301, 302 of FIG. 3. The diagram also shows that there is no charge 405 in the middle of the cell.

FIG. 5 shows a more detailed cross-sectional view of the charge storage region 302 of the embodiment of FIG. 3. This figure clearly shows the oxide 304 / nitride 306 / oxide 320 composite insulator on the left side of the NROM cell of FIG. 3. Also shown are the charge storage region 302, the source/drain region 311, and a portion of the polysilicon gate structure 313.

The embodiments above show, on each side, a first portion of the gate insulating layer that is substantially horizontal and a second portion that is substantially vertical and extends upward through the gate structure.
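The direction-dependent bit addressing described above for the read operation can be summarized in a small Python model. This is purely an illustrative sketch, not part of the patent; the class name and terminal labels simply reuse the reference numerals of FIG. 3:

```python
# Illustrative model of the two-bit NROM cell of FIG. 3: charge is
# trapped in the storage region nearest the drain during programming,
# and the cell is read in the reverse direction, so the bit sensed is
# the one stored at the terminal acting as the source.

class TwoBitNromCell:
    """Toy model: one trapped-charge flag per storage region."""

    def __init__(self):
        # region 301 sits at terminal 310, region 302 at terminal 311
        self.charge = {"301": False, "302": False}

    def program(self, drain):
        # Hot-electron injection traps charge near the drain terminal.
        region = "301" if drain == "310" else "302"
        self.charge[region] = True

    def read(self, source):
        # Reverse read: the bit sensed is the one at the source end.
        region = "301" if source == "310" else "302"
        return self.charge[region]

cell = TwoBitNromCell()
cell.program(drain="310")         # traps charge in region 301
assert cell.read(source="310")    # reading with 310 as source senses 301
assert not cell.read(source="311")
```

The model only captures the addressing convention: each physical terminal selects the storage region nearest to it, which is why the two bits remain independent as long as the trap regions do not overlap.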
However, the present invention does not limit the angle between the substantially horizontal portion and the substantially vertical portion. In other words, the "horizontal" and "vertical" portions need not be exactly horizontal or vertical. Nor is each side of the gate insulating layer required to be symmetrical with the other side.

FIG. 6 shows a cross-sectional view of one embodiment of the initial fabrication steps for the NROM cell of FIG. 3. A thick gate oxide 601 is grown on the substrate 600, and the source/drain regions 604, 605 are implanted. The polysilicon gate electrode 610 is then defined using conventional techniques well known in the art.

The gate oxide 601 in the regions 602, 603 outside the polysilicon gate region is then removed by an etching process to define the polysilicon gate structure 610. The oxide is then regrown to the new required thickness.

FIG. 7 shows the regrown oxide regions 720, 721 outside the polysilicon gate electrode. The structure is then covered with a composite insulator 701, 703 such as the nitride-based or other insulators described above.

FIG. 8 shows a cross-sectional view of the NROM cell with a polysilicon layer 801 deposited on top of the composite insulator of FIG. 7. The second polysilicon layer 801 is then directionally etched to leave only the sidewalls 901, 902 shown in FIG. 9. This provides a composite gate insulator 905 structure under the polysilicon gate and along the sidewalls 901, 902. A single gate oxide 910 lies below the central polysilicon gate region 903.

FIG. 10 shows the NROM cell with deposited silicon oxide fillers 1001, 1002. The top of the structure is planarized by chemical mechanical polishing (CMP). This process removes the insulator from the top 1005 of the central polysilicon gate. Metal contacts that selectively adhere to polysilicon are then deposited on top of the gate structure 1006-1008.
These metal contacts provide contact to all three gate regions 1006-1008.

In one embodiment, the NROM flash memory cell of the present invention is programmed by conventional tunneling, with a positive gate voltage applied relative to the substrate/p-well. In another embodiment, channel hot electron injection (HEI) may be used for programming; this embodiment likewise applies a conventional positive gate voltage relative to the substrate/p-well. Tunneling can be used to implement the erase operation.

By using HEI, the NROM device of the present invention provides two-bit storage like prior art NROM devices. Charge is stored near the drain, and the device is read in the opposite direction. Either end of the channel can serve as the drain, and charge is stored at either end of the channel near the surface of the n+ region.

FIG. 11 shows one embodiment of programming the NROM flash memory cell. In this embodiment, a negative substrate bias voltage VSUB is applied to the p-type substrate 1100. This bias increases the lateral surface field near the source/drain region 1101 or 1102 (depending on the direction in which the cell is being operated), thereby increasing the number of hot electrons. This substrate-enhanced hot electron (SEHE) injection embodiment allows a lower drain voltage during the programming operation. In one embodiment, the negative substrate bias is in the range of 0V to -3V. Other embodiments use other voltage ranges.

As is well known in the art, applying a drain voltage to the first source/drain region 1101 while grounding the second source/drain region 1102 results in hot electron injection into the gate insulator of the charge storage region 1105 closest to the drain region 1101. The second charge storage region 1106 is programmed by biasing the source/drain regions 1101, 1102 in the opposite manner.

For the erase operation, substrate-enhanced band-to-band tunneling induced hot hole injection (SEBBHH) can be used.
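The complementary biasing that selects which storage region is programmed can be sketched as a small bias table. This is an illustrative sketch only: the voltage levels are assumed placeholder values, not values from the patent (the text specifies only that the substrate bias is negative), and the function name is hypothetical:

```python
# Sketch of the SEHE programming bias selection of FIG. 11. Region 1105
# lies nearest terminal 1101 and region 1106 nearest terminal 1102; hot
# electrons are injected near whichever terminal receives the drain
# voltage, so that terminal selects the programmed region. Voltage
# levels below are assumed examples, not patent values.

def sehe_bias(target_region):
    """Return a bias table that programs the given storage region."""
    v_drain, v_gate, v_sub = 3.3, 5.0, -1.5   # assumed example levels
    if target_region == "1105":
        drain, source = "1101", "1102"
    elif target_region == "1106":
        drain, source = "1102", "1101"
    else:
        raise ValueError("unknown storage region")
    return {drain: v_drain, source: 0.0, "gate": v_gate, "substrate": v_sub}

bias = sehe_bias("1105")
assert bias["1101"] == 3.3 and bias["1102"] == 0.0
assert bias["substrate"] < 0   # negative substrate bias enhances injection
```

Swapping the drain and source assignments, as in `sehe_bias("1106")`, is exactly the opposite biasing described above for programming the second charge storage region.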
Both SEBBHH and SEHE are well-known techniques in the art and are not discussed further here.

FIG. 12 shows a functional block diagram of a storage device 1200 that incorporates the NROM flash memory cells of the present invention. The storage device 1200 is coupled to a processor 1210, which may be a microprocessor or any other type of control circuit. The storage device 1200 and the processor 1210 form part of an electronic system 1220. The storage device 1200 has been simplified to focus on those memory features that help in understanding the present invention.

The storage device includes an array of NROM flash memory cells 1230. In one embodiment, the memory cells are NROM flash memory cells and the memory array 1230 is arranged in banks of rows and columns. The control gates of each row of memory cells are coupled to a word line, and the drain and source connections of the memory cells are coupled to bit lines. As is well known in the art, how a cell connects to a bit line depends on whether the array is of NAND architecture or NOR architecture.

An address buffer circuit 1240 is provided to latch address signals on the address input connections A0-Ax 1242. The address signals are received and decoded by a row decoder 1244 and a column decoder 1246 to access the memory array 1230. Those skilled in the art will appreciate from this description that the number of address input connections depends on the density and architecture of the memory array 1230; that is, the number of addresses increases with increasing numbers of memory cells, banks, and blocks.

The storage device 1200 reads data in the memory array 1230 by sensing voltage or current changes on the columns of the memory array using sense/buffer circuitry 1250. In one embodiment, the sense/buffer circuitry is coupled to read and latch a row of data from the memory array 1230.
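The address path just described, in which a latched address is split between the row decoder and the column decoder, can be sketched in a few lines of Python. The field widths below are assumptions for a hypothetical 16-row by 16-column array, chosen only for illustration; the patent does not specify an array size:

```python
# Minimal sketch of the FIG. 12 address decoding path: an address
# latched by buffer 1240 is split into the fields driving the row
# decoder 1244 and the column decoder 1246. Field widths are assumed
# (4 + 4 bits for a hypothetical 16 x 16 array).

ROW_BITS = 4
COL_BITS = 4

def decode_address(addr):
    """Split a latched address into (row, column) decoder inputs."""
    if not 0 <= addr < (1 << (ROW_BITS + COL_BITS)):
        raise ValueError("address out of range")
    row = addr >> COL_BITS               # high bits drive the row decoder
    col = addr & ((1 << COL_BITS) - 1)   # low bits drive the column decoder
    return row, col

assert decode_address(0x00) == (0, 0)
assert decode_address(0xA5) == (0xA, 0x5)
```

As the text notes, a denser array (more cells, banks, or blocks) simply widens these fields, i.e., increases `ROW_BITS` and `COL_BITS` and hence the number of address input connections.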
Data input/output buffer circuitry 1260 may be included for bidirectional data communication with the processor 1210 over a plurality of data connections 1262. Write circuitry 1255 is provided to write data to the memory array.

Control circuitry 1270 decodes signals from the processor 1210 provided on the control connections 1272. These signals are used to control the operations of the memory array 1230, including data read, data write, and erase operations. The control circuitry 1270 may be a state machine, a sequencer, or some other type of controller.

Since the NROM memory cells of the present invention use a CMOS-compatible process, the storage device 1200 of FIG. 12 may be an embedded device with a CMOS processor.

The flash memory device shown in FIG. 12 has been simplified to facilitate a basic understanding of its memory features. A more detailed understanding of the internal circuitry and functions of flash memories is known to those skilled in the art.

In summary, the NROM flash transistor of the present invention provides self-aligned, structural charge isolation, which allows smaller cells to be manufactured without bit-region overlap. The cell provides a low initial threshold voltage, fast operation, low power consumption, and high storage density. The NROM cells can be used in NOR-type memory arrays, NAND-type memory arrays, or other memory array architectures.

Although specific embodiments have been illustrated and described herein, those skilled in the art will appreciate that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations of the present invention will be apparent to those skilled in the art. Accordingly, this application is intended to cover any adaptations or variations of the invention. It is manifestly intended that the invention be limited only by the following claims and their equivalents.
A package (300) includes a redistribution portion (302), a first portion (204), and a second portion (206). The first portion is coupled to the redistribution portion. The first portion includes a first switch (241) comprising a plurality of switch interconnects (245), and a first encapsulation layer (240) that at least partially encapsulates the first switch. The second portion is coupled to the first portion. The second portion includes a first plurality of filters (261). Each filter includes a plurality of filter interconnects (265). The second portion also includes a second encapsulation layer (260) that at least partially encapsulates the first plurality of filters. The first portion includes a second switch (243) positioned next to the first switch, where the first encapsulation layer at least partially encapsulates the second switch. The second portion includes a second plurality of filters (263) positioned next to the first plurality of filters, where the second encapsulation layer at least partially encapsulates the second plurality of filters.
CLAIMS

1. A package comprising:
a redistribution portion;
a first portion coupled to the redistribution portion, the first portion comprising:
a first switch comprising a plurality of switch interconnects; and
a first encapsulation layer at least partially encapsulating the first switch; and
a second portion coupled to the first portion, the second portion comprising:
a plurality of first filters, each first filter comprising a plurality of first filter interconnects; and
a second encapsulation layer at least partially encapsulating the plurality of first filters.

2. The package of claim 1, wherein the first portion further comprises a second switch positioned next to the first switch, wherein the first encapsulation layer at least partially encapsulates the second switch.

3. The package of claim 1, wherein the second portion further comprises a plurality of second filters positioned next to the plurality of first filters, wherein the second encapsulation layer at least partially encapsulates the plurality of second filters.

4. The package of claim 1, wherein two neighboring filters from the plurality of first filters have a spacing of about 100 microns (μm) or less.

5. The package of claim 1, wherein the redistribution portion comprises:
at least one dielectric layer; and
at least one redistribution interconnect.

6. The package of claim 5, wherein the at least one redistribution interconnect is configured to provide impedance matching between the first switch and at least one first filter from the plurality of first filters.

7. The package of claim 1, wherein the second portion further comprises a through encapsulation interconnect that travels through the second portion, the through encapsulation interconnect configured to provide an electrical path between the plurality of first filters and the redistribution portion.

8.
The package of claim 7, wherein the plurality of first filters and the first switch are electrically coupled to each other through at least one electrical path that includes the plurality of first switch interconnects, the redistribution portion, the through encapsulation interconnect and the plurality of first filter interconnects.

9. The package of claim 1, wherein the first switch is located between the redistribution portion and the plurality of first filters.

10. The package of claim 1, wherein the package is incorporated into a device selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.

11. An apparatus comprising:
a redistribution portion;
a first portion coupled to the redistribution portion, the first portion comprising:
a first switching means comprising a plurality of switch interconnects; and
a first encapsulation layer at least partially encapsulating the first switching means; and
a second portion coupled to the first portion, the second portion comprising:
a plurality of first filtering means, each first filtering means comprising a plurality of filter interconnects; and
a second encapsulation layer at least partially encapsulating the plurality of first filtering means.

12. The apparatus of claim 11, wherein the first portion further comprises a second switching means positioned next to the first switching means, wherein the first encapsulation layer at least partially encapsulates the second switching means.

13.
The apparatus of claim 11, wherein the second portion further comprises a plurality of second filtering means positioned next to the plurality of first filtering means, wherein the second encapsulation layer at least partially encapsulates the plurality of second filtering means.

14. The apparatus of claim 11, wherein two neighboring filtering means from the plurality of first filtering means have a spacing of about 100 microns (μm) or less.

15. The apparatus of claim 11, wherein the redistribution portion comprises:
at least one dielectric layer; and
at least one redistribution interconnect.

16. The apparatus of claim 15, wherein the at least one redistribution interconnect is configured to provide impedance matching between the first switching means and at least one first filtering means from the plurality of first filtering means.

17. The apparatus of claim 11, wherein the second portion further comprises a through encapsulation interconnect that travels through the second portion, the through encapsulation interconnect configured to provide an electrical path between the plurality of first filtering means and the redistribution portion.

18. The apparatus of claim 11, wherein the apparatus is incorporated into a device selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, an Internet of things (IoT) device, a laptop computer, a server, and a device in an automotive vehicle.

19.
A method for fabricating a package, comprising:
forming a redistribution portion;
forming a first portion and coupling the first portion to the redistribution portion, wherein forming the first portion comprises:
providing a first switch comprising a plurality of switch interconnects; and
forming a first encapsulation layer that at least partially encapsulates the first switch; and
forming a second portion and coupling the second portion to the first portion, wherein forming the second portion comprises:
providing a plurality of first filters, each first filter comprising a plurality of filter interconnects; and
forming a second encapsulation layer that at least partially encapsulates the plurality of first filters.

20. The method of claim 19, wherein forming the first portion further comprises providing a second switch next to the first switch, wherein the first encapsulation layer at least partially encapsulates the second switch.

21. The method of claim 19, wherein forming the second portion further comprises providing a plurality of second filters next to the plurality of first filters, wherein the second encapsulation layer at least partially encapsulates the plurality of second filters.

22. The method of claim 19, wherein forming the second portion further comprises providing a through encapsulation interconnect that travels through the second portion, the through encapsulation interconnect configured to provide an electrical path between the plurality of first filters and the redistribution portion.
PACKAGE COMPRISING SWITCHES AND FILTERS

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to and the benefit of Non-Provisional Application No. 15/235,790, filed in the U.S. Patent and Trademark Office on August 12, 2016, the entire content of which is incorporated herein by reference.

BACKGROUND

Field of the Disclosure

[0002] Various features relate generally to a package, and more specifically to a package that includes switches and filters.

Background

[0003] FIG. 1 illustrates a package that includes a substrate 102, a power amplifier (PA) 120, a switch 122, a filter 124 and an antenna switch 126. The power amplifier (PA) 120, the switch 122, the filter 124 and the antenna switch 126 are mounted on the substrate 102. The power amplifier (PA) 120, the switch 122, the filter 124 and the antenna switch 126 are all co-planar to each other on the substrate 102. The power amplifier (PA) 120, the switch 122, the filter 124 and the antenna switch 126 may be mounted over the substrate 102 using a surface mount process. The substrate 102 is mounted over a printed circuit board (PCB) 100. A duplexer 110 is also mounted over the PCB 100.

[0004] One downside to the power amplifier (PA) 120, the switch 122, the filter 124 and the antenna switch 126 being co-planar to each other is that the configuration takes up a lot of real estate on the substrate 102. As shown in FIG. 1, the power amplifier (PA) 120, the switch 122, the filter 124 and the antenna switch 126 are spread out over the substrate 102, resulting in a package that has a large surface area.

[0005] Another downside to the configuration of FIG. 1 is that the surface mount process that is used to couple the power amplifier (PA) 120, the switch 122, the filter 124 and the antenna switch 126 to the substrate 102 requires a relatively large spacing between components, which further increases the overall surface area of the package that includes the substrate 102.
[0006] It is desirable to reduce the size, height and/or spacing of devices and packages, so that these devices and packages can be placed in smaller devices. Ideally, such a device or package will have a better form factor and be cheaper to fabricate, while at the same time meeting the needs and/or requirements of mobile devices, Internet of things (IoT) devices, and/or wearable devices.

SUMMARY

[0007] Various features relate generally to a package, and more specifically to a package that includes switches and filters.

[0008] One example provides a package that includes a redistribution portion, a first portion, and a second portion. The first portion is coupled to the redistribution portion. The first portion includes a first switch comprising a plurality of switch interconnects, and a first encapsulation layer that at least partially encapsulates the first switch. The second portion is coupled to the first portion. The second portion includes a first plurality of filters, each filter comprising a plurality of filter interconnects. The second portion also includes a second encapsulation layer that at least partially encapsulates the first plurality of filters.

[0009] One example provides an apparatus that includes a redistribution portion, a first portion, and a second portion. The first portion is coupled to the redistribution portion. The first portion includes a first switching means comprising a plurality of switch interconnects, and a first encapsulation layer that at least partially encapsulates the first switching means. The second portion is coupled to the first portion. The second portion includes a first plurality of filtering means, each filtering means comprising a plurality of filter interconnects. The second portion also includes a second encapsulation layer that at least partially encapsulates the first plurality of filtering means.

[0010] Another example provides a method for fabricating a package. The method forms a redistribution portion.
The method forms a first portion and couples the first portion to the redistribution portion. Forming the first portion includes providing a first switch that includes a plurality of switch interconnects, and forming a first encapsulation layer that at least partially encapsulates the first switch. The method forms a second portion and couples the second portion to the first portion. Forming the second portion includes providing a first plurality of filters, each filter including a plurality of filter interconnects. The method forms a second encapsulation layer that at least partially encapsulates the first plurality of filters.

DRAWINGS

[0011] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout.

[0012] FIG. 1 illustrates a profile view of a package that includes a filter and a switch coupled to a printed circuit board (PCB).

[0013] FIG. 2 illustrates a profile view of a package that includes several filters and several switches, where filters are positioned over the switches.

[0014] FIG. 3 illustrates a profile view of another package that includes several filters and several switches, where filters are positioned over the switches.

[0015] FIG. 4 (which includes FIGS. 4A-4C) illustrates an example of a sequence for fabricating a package that includes several filters and several switches, where filters are positioned over the switches.

[0016] FIG. 5 (which includes FIGS. 5A-5C) illustrates an example of a sequence for fabricating a package that includes several filters and several switches, where filters are positioned over the switches.

[0017] FIG. 6 illustrates a flow diagram of an exemplary method for fabricating a package that includes several filters and several switches, where filters are positioned over the switches.

[0018] FIG.
7 illustrates various electronic devices that may include the various integrated devices, integrated device packages, semiconductor devices, dies, integrated circuits, and/or packages described herein.

DETAILED DESCRIPTION

[0019] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.

[0020] Some features pertain to a package that includes a redistribution portion, a first portion, and a second portion. The first portion is coupled to the redistribution portion. The first portion includes a first switch comprising a plurality of switch interconnects, and a first encapsulation layer that at least partially encapsulates the first switch. The second portion is coupled to the first portion. The second portion includes a first plurality of filters, each filter comprising a plurality of filter interconnects. The second portion also includes a second encapsulation layer that at least partially encapsulates the first plurality of filters. In some implementations, the first portion further includes a second switch positioned next to the first switch, where the first encapsulation layer at least partially encapsulates the second switch. In some implementations, the second portion further includes a second plurality of filters positioned next to the first plurality of filters, where the second encapsulation layer at least partially encapsulates the second plurality of filters.
In some implementations, the second portion further includes a through encapsulation interconnect that travels through the second portion. The through encapsulation interconnect is configured to provide an electrical path between the first plurality of filters and the redistribution portion.

[0021] In some implementations, the height of the package may be defined along the Z-direction of the package, which is shown in the figures of the present disclosure. In some implementations, the Z-direction of the package may be defined along an axis between a top portion and a bottom portion of the package. The terms top and bottom may be arbitrarily assigned; however, as an example, the top portion of the package may be a portion comprising an encapsulation layer, while a bottom portion of the package may be a portion comprising a redistribution portion or a plurality of solder balls. In some implementations, the top portion of the package may be a back side of the package, and the bottom portion of the package may be a front side of the package. The front side of the package may be an active side of the package. A top portion may be a higher portion relative to a lower portion. A bottom portion may be a lower portion relative to a higher portion. Further examples of top portions and bottom portions will be further described below. The X-Y directions of the package may refer to the lateral direction and/or footprint of the package. Examples of X-Y directions are shown in the figures of the present disclosure and/or further described below. In many of the figures of the present disclosure, the packages and their respective components are shown across an X-Z cross-section or X-Z plane. However, in some implementations, the packages and their representative components may be represented across a Y-Z cross-section or Y-Z plane.
[0022] In some implementations, an interconnect is an element or component of a device or package that allows or facilitates an electrical connection between two points, elements and/or components. In some implementations, an interconnect may include a trace, a via, a pad, a pillar, a redistribution metal layer, and/or an under bump metallization (UBM) layer. In some implementations, an interconnect is an electrically conductive material that may be configured to provide an electrical path for a signal (e.g., data signal, ground signal, power signal). An interconnect may be part of a circuit. An interconnect may include more than one element or component.

Exemplary Package Comprising Switches and Filters

[0023] FIG. 2 illustrates a package 200 coupled to a printed circuit board (PCB) 100 through a plurality of solder interconnects 210. As will be further described below, the package 200 includes a plurality of switches (e.g., means for switching, switching means) and a plurality of filters (e.g., means for filtering, filtering means). These switches and filters may be positioned co-planar and/or over each other in such a way as to minimize the overall size of the package 200. The spacing between at least some of the neighboring switches and/or neighboring filters may be about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring switches and/or neighboring filters may be about 50 microns (μm) or less. Although not shown, the package 200 may be electrically coupled to other components and/or devices, such as an integrated device (e.g., chip, die). The package 200 may be configured to provide radio frequency (RF) filters and switches.

[0024] The package 200 includes a redistribution portion 202, a first portion 204 and a second portion 206.
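The roughly 100 μm spacing discussed above can be put in perspective with a quick worst-case clearance check. In the sketch below, the nominal 100 μm die-to-die spacing comes from the description, while the ±15 μm per-die placement accuracy is a purely hypothetical assumption used only for illustration.

```python
# Worst-case clearance between two neighboring filter dies.
# NOMINAL_SPACING_UM reflects the ~100 um spacing discussed above;
# PLACEMENT_TOL_UM is an assumed pick-and-place accuracy, not a value
# from the disclosure.
NOMINAL_SPACING_UM = 100.0
PLACEMENT_TOL_UM = 15.0  # assumed +/- accuracy per placed die

# Each of the two neighboring dies can drift toward the other by the
# full tolerance, so the gap shrinks by twice that amount.
worst_case_gap_um = NOMINAL_SPACING_UM - 2 * PLACEMENT_TOL_UM
print(worst_case_gap_um)  # 70.0
```

Under these assumed numbers, roughly 70 μm of clearance remains in the worst case, which is why the fabrication process must keep interconnect alignment under control and within tolerances at such pitches.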
The redistribution portion 202 includes at least one dielectric layer 220, a plurality of first redistribution interconnects 223, a plurality of second redistribution interconnects 225 and a plurality of third redistribution interconnects 227. The plurality of first redistribution interconnects 223 may include traces and/or pads. The plurality of second redistribution interconnects 225 may include vias. The plurality of third redistribution interconnects 227 may include pads. The plurality of first redistribution interconnects 223 is coupled to the plurality of second redistribution interconnects 225. The plurality of second redistribution interconnects 225 is coupled to the plurality of third redistribution interconnects 227. The plurality of third redistribution interconnects 227 is coupled to the plurality of solder interconnects 210.

[0025] FIG. 2 illustrates that the first portion 204 is coupled to the redistribution portion 202. The first portion 204 may be a switching portion. The first portion 204 includes a first encapsulation layer 240, a first switch 241 (e.g., means for first switching, first switching means), a second switch 243 (e.g., means for second switching, second switching means), a plurality of first switch interconnects 245, a plurality of second switch interconnects 247 and a plurality of through encapsulation interconnects 249. The first encapsulation layer 240 at least partially encapsulates the first switch 241, the second switch 243, the plurality of first switch interconnects 245, the plurality of second switch interconnects 247 and the plurality of through encapsulation interconnects 249. The plurality of first switch interconnects 245 and the plurality of through encapsulation interconnects 249 are coupled to the plurality of first redistribution interconnects 223.
The plurality of first switch interconnects 245 and the plurality of second switch interconnects 247 are coupled to the plurality of through encapsulation interconnects 249 through the plurality of first redistribution interconnects 223. The plurality of through encapsulation interconnects 249 travels entirely through the first encapsulation layer 240. The plurality of through encapsulation interconnects 249 may include interconnect posts (e.g., copper (Cu) posts).

[0026] The first switch 241 is substantially co-planar to the second switch 243 in the first portion 204. However, in some implementations, the first switch 241 and the second switch 243 may be positioned differently in the first portion 204.

[0027] FIG. 2 illustrates that the second portion 206 is coupled to the first portion 204. The second portion 206 may be a filtering portion. The second portion 206 includes a second encapsulation layer 260, a plurality of first filters 261, a plurality of second filters 263, a plurality of first filter interconnects 265, a plurality of second filter interconnects 267, a passivation layer 262 and a plurality of interconnects 269.

[0028] The second encapsulation layer 260 at least partially encapsulates the plurality of first filters 261 (e.g., means for first filtering, first filtering means), the plurality of second filters 263 (e.g., means for second filtering, second filtering means), the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267. The plurality of first filters 261 is coupled to the plurality of interconnects 269 through the plurality of first filter interconnects 265. The plurality of second filters 263 is coupled to the plurality of interconnects 269 through the plurality of second filter interconnects 267. The plurality of interconnects 269 is coupled to the plurality of through encapsulation interconnects 249. The passivation layer 262 at least partially covers the plurality of interconnects 269.
The plurality of first filters 261 are positioned substantially over the first switch 241. The plurality of second filters 263 are positioned substantially over the second switch 243.

[0029] As shown in FIG. 2, at least some of the first filters from the plurality of first filters 261 are positioned in the second portion 206 such that the first filters are substantially co-planar to each other. In some implementations, at least some of the neighboring first filters from the plurality of first filters 261 have a spacing that is about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring first filters may be about 50 microns (μm) or less.

[0030] At least some of the second filters from the plurality of second filters 263 are positioned in the second portion 206 such that the second filters are substantially co-planar to each other. In some implementations, at least some of the neighboring second filters from the plurality of second filters 263 have a spacing that is about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring second filters may be about 50 microns (μm) or less.

[0031] In some implementations, the small spacing is enabled through a fabrication process that allows filters (e.g., means for filtering, filtering means) to be placed close to each other while still being able to keep the alignment of interconnects under control and within tolerances. The small spacing further enables a package 200 that includes a small form factor.

[0032] Another advantage of positioning the switches and filters close to each other in the package is that no impedance matching may be required (due to their proximity to each other), in some implementations.
In instances where impedance matching may be desired, some of the interconnects between the switches and filters can be configured for impedance matching, instead of having a separate device or component to provide impedance matching between the switches and filters. For example, some of the plurality of through encapsulation interconnects 249, the plurality of first redistribution interconnects 223, and/or the plurality of second redistribution interconnects 225 may be configured to provide impedance matching between the filters (e.g., first filter) and switches (e.g., first switch 241), thus bypassing the need for a separate impedance matching device or component.

[0033] In some implementations, some interconnects from the plurality of through encapsulation interconnects 249, the plurality of first redistribution interconnects 223, and/or the plurality of second redistribution interconnects 225 may be configured to provide one or more first impedance matching (e.g., means for first impedance matching) between the plurality of first filters 261 and the first switch 241, and/or some interconnects from the plurality of through encapsulation interconnects 249, the plurality of first redistribution interconnects 223, and/or the plurality of second redistribution interconnects 225 may be configured to provide one or more second impedance matching (e.g., means for second impedance matching) between the plurality of second filters 263 and the second switch 243.

[0034] In some implementations, the package 200 may include an adhesive layer 208, which is optional. The adhesive layer 208 is coupled to the second encapsulation layer 260. The adhesive layer 208 may cover the plurality of first filters 261 and the plurality of second filters 263.
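To make the impedance-matching role of such interconnects concrete, the sketch below computes ideal component values for a single L-section match between two purely resistive ports. The 50-ohm and 10-ohm port impedances and the 2 GHz frequency are hypothetical illustration values, not parameters of the disclosed package, and real interconnect losses and parasitics are ignored.

```python
import math

def l_match(r_high, r_low, freq_hz):
    """Ideal L-network (series inductor toward the low-resistance port,
    shunt capacitor across the high-resistance port) matching two purely
    resistive ports. Returns the network Q and ideal component values.
    """
    q = math.sqrt(r_high / r_low - 1)   # loaded quality factor
    x_series = q * r_low                # required series reactance
    x_shunt = r_high / q                # required shunt reactance
    w = 2 * math.pi * freq_hz
    return {"Q": q, "L_henry": x_series / w, "C_farad": 1 / (w * x_shunt)}

# Hypothetical example: a 10-ohm filter port matched to a 50-ohm switch
# port at 2 GHz.
network = l_match(50.0, 10.0, 2e9)
```

With these assumed values the network Q works out to 2, with an inductance of roughly 1.6 nH and a capacitance of roughly 3.2 pF, component values small enough that short package interconnects could plausibly realize them.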
In some implementations, the adhesive layer 208 is a result of the fabrication process that fabricates the package 200.

[0035] It is noted that different implementations may include different numbers of switches and filters (e.g., one switch and several filters). Thus, the package 200 of FIG. 2 is merely exemplary, and different implementations may have other configurations and/or combinations of switches and filters.

Exemplary Package Comprising Switches and Filters

[0036] FIG. 3 illustrates another configuration of a package that includes switches and filters. More specifically, FIG. 3 illustrates a package 300 that includes switches and filters. The package 300 is similar to the package 200 of FIG. 2. The package 300 includes components similar to those of the package 200. The package 300 is coupled to the PCB 100 through the plurality of solder interconnects 210. Although not shown, the package 300 may be electrically coupled to other components and/or devices, such as an integrated device (e.g., chip, die). The package 300 may be configured to provide radio frequency (RF) filters and switches.

[0037] The package 300 includes a redistribution portion 302, the first portion 204 and the second portion 206. The package 300 also includes the first switch 241, the second switch 243, the plurality of first filters 261 and the plurality of second filters 263. The redistribution portion 302 is coupled to the first portion 204. The first portion 204 is coupled to the second portion 206. The redistribution portion 302 includes at least one dielectric layer 220, a plurality of first redistribution interconnects 323, a plurality of second redistribution interconnects 325 and a plurality of under bump metallization (UBM) layers 327.
The plurality of first redistribution interconnects 323, the plurality of second redistribution interconnects 325 and the plurality of under bump metallization (UBM) layers 327 may include portions that are U shaped and/or V shaped.

[0038] The plurality of first redistribution interconnects 323 is coupled to the plurality of second redistribution interconnects 325. The plurality of second redistribution interconnects 325 is coupled to the plurality of under bump metallization (UBM) layers 327. The plurality of UBM layers 327 is coupled to the plurality of solder interconnects 210.

[0039] The plurality of first redistribution interconnects 323 is coupled to the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247. The plurality of first redistribution interconnects 323 is coupled to the plurality of through encapsulation interconnects 249.

[0040] FIG. 3 illustrates that at least some of the first filters from the plurality of first filters 261 are positioned in the second portion 206 such that the first filters are substantially co-planar to each other. In some implementations, at least some of the neighboring first filters from the plurality of first filters 261 have a spacing that is about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring first filters may be about 50 microns (μm) or less.

[0041] FIG. 3 also illustrates that at least some of the second filters from the plurality of second filters 263 are positioned in the second portion 206 such that the second filters are substantially co-planar to each other. In some implementations, at least some of the neighboring second filters from the plurality of second filters 263 have a spacing that is about 100 microns (μm) or less.
In some implementations, the spacing between at least some of the neighboring second filters may be about 50 microns (μm) or less.

[0042] In some implementations, the small spacing is enabled through a fabrication process that allows filters (e.g., means for filtering, filtering means) to be placed close to each other while still being able to keep the alignment of interconnects under control and within tolerances. The small spacing further enables a package 300 that includes a small form factor.

[0043] As mentioned above, another advantage of positioning the switches and filters close to each other in the package is that no impedance matching may be required (due to their proximity to each other), in some implementations. In instances where impedance matching may be desired, some of the interconnects between the switches and filters can be configured for impedance matching, instead of having a separate device or component to provide impedance matching between the switches and filters. For example, some of the plurality of through encapsulation interconnects 249, the plurality of first redistribution interconnects 323, and/or the plurality of second redistribution interconnects 325 may be configured to provide impedance matching between the filters (e.g., first filter) and switches (e.g., first switch 241), thus bypassing the need for a separate impedance matching device or component.

[0044] In some implementations, some interconnects from the plurality of through encapsulation interconnects 249, the plurality of first redistribution interconnects 323, and/or the plurality of second redistribution interconnects 325 may be configured to provide one or more first impedance matching (e.g., means for first impedance matching) between the plurality of first filters 261 and the first switch 241, and/or some interconnects from the plurality of through encapsulation interconnects 249, the plurality of first redistribution interconnects 323, and/or the plurality of second
redistribution interconnects 325 may be configured to provide one or more second impedance matching (e.g., means for second impedance matching) between the plurality of second filters 263 and the second switch 243.

[0045] It is noted that different implementations may include different numbers of switches and filters (e.g., one switch and several filters). Thus, the package 300 of FIG. 3 is merely exemplary, and different implementations may have other configurations and/or combinations of switches and filters.

[0046] Having described various examples of packages that include switches and filters, various processes and methods for fabricating a package that includes switches and filters will now be described.

Exemplary Sequence for Fabricating a Package Comprising Switches and Filters

[0047] In some implementations, providing / fabricating a package that includes switches and filters includes several processes. FIG. 4 (which includes FIGS. 4A-4C) illustrates an exemplary sequence for providing / fabricating a package that includes switches and filters. In some implementations, the sequence of FIGS. 4A-4C may be used to fabricate the package that includes switches and filters of FIG. 2 and/or other packages described in the present disclosure. However, for the purpose of simplification, FIGS. 4A-4C will be described in the context of fabricating a package of FIG. 2. In particular, FIGS. 4A-4C will be described in the context of fabricating the package 200 of FIG. 2.

[0048] It should be noted that the sequence of FIGS. 4A-4C may combine one or more stages in order to simplify and/or clarify the sequence for providing a package. In some implementations, the order of the processes may be changed or modified.

[0049] Stage 1, as shown in FIG. 4A, illustrates a state after a carrier 400 and an adhesive layer 208 are provided. The adhesive layer 208 is formed over the carrier 400. Different implementations may use different materials for the carrier 400.
In some implementations, the carrier 400 includes glass and/or silicon.

[0050] Stage 2 illustrates a state after the plurality of first filters 261 and the plurality of second filters 263 are placed over the adhesive layer 208 using a pick and place process. In some implementations, the filters are placed such that at least some of the neighboring filters (from the plurality of first filters 261, the plurality of second filters 263) have a spacing that is about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring filters may be about 50 microns (μm) or less.

[0051] Stage 3 illustrates a state after the second encapsulation layer 260 is formed over the adhesive layer 208, the plurality of first filters 261, the plurality of second filters 263, the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267. The second encapsulation layer 260 may include a mold compound and/or epoxy fill. In some implementations, the second encapsulation layer 260 may be formed so as to at least partially encapsulate the plurality of first filters 261, the plurality of second filters 263, the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267. In some implementations, the second encapsulation layer 260 is formed over the plurality of first filters 261, the plurality of second filters 263, the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267, and portions of the second encapsulation layer 260 are removed (e.g., ground).

[0052] Stage 4 illustrates a state after the plurality of interconnects 269 is formed over the second encapsulation layer 260. The plurality of interconnects 269 is formed so as to couple to the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267.
In some implementations, the plurality of interconnects 269 is formed using a plating process (e.g., Damascene, Semi Additive Process (SAP)).

[0053] Stage 5 illustrates a state after the passivation layer 262 is formed over the second encapsulation layer 260 and the plurality of interconnects 269. In some implementations, stage 5 illustrates the second portion 206 of a package 200.

[0054] Stage 6 illustrates a state after the plurality of through encapsulation interconnects 249 is formed over the plurality of interconnects 269. In some implementations, the plurality of through encapsulation interconnects 249 is formed by removing portions of the passivation layer 262 and using a plating process to form the plurality of through encapsulation interconnects 249. The plurality of through encapsulation interconnects 249 may include copper (Cu) posts.

[0055] Stage 7, as shown in FIG. 4B, illustrates a state after the first switch 241 and the second switch 243 are placed over the passivation layer 262.

[0056] Stage 8 illustrates a state after the first encapsulation layer 240 is formed over the passivation layer 262, the first switch 241, the second switch 243, the plurality of first switch interconnects 245, the plurality of second switch interconnects 247 and the plurality of through encapsulation interconnects 249. The first encapsulation layer 240 may include a mold compound and/or epoxy fill. In some implementations, the first encapsulation layer 240 may be formed so as to at least partially encapsulate the first switch 241, the second switch 243, the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247.
In some implementations, the first encapsulation layer 240 is formed over the first switch 241, the second switch 243, the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247, and portions of the first encapsulation layer 240 are removed (e.g., ground).

[0057] Stage 9 illustrates a state after the plurality of first redistribution interconnects 223 is formed over the first encapsulation layer 240. The plurality of first redistribution interconnects 223 is formed so as to couple to the plurality of through encapsulation interconnects 249, the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247. A plating process may be used to form the plurality of first redistribution interconnects 223.

[0058] Stage 10 illustrates a state after the at least one dielectric layer 220 is formed over the first encapsulation layer 240 and the plurality of first redistribution interconnects 223.

[0059] Stage 11, as shown in FIG. 4C, illustrates a state after a plurality of cavities 420 is formed in the at least one dielectric layer 220.

[0060] Stage 12 illustrates a state after the plurality of second redistribution interconnects 225 is formed in the plurality of cavities 420, and the plurality of third redistribution interconnects 227 is formed over the at least one dielectric layer 220. A plating process may be used to form the plurality of second redistribution interconnects 225 and the plurality of third redistribution interconnects 227.

[0061] Stage 13 illustrates a state after the plurality of solder interconnects 210 is provided over the plurality of third redistribution interconnects 227.

[0062] Stage 14 illustrates a state after the carrier 400 is removed (e.g., ground) from the package 200.
In some implementations, the adhesive layer 208 is also removed (e.g., ground) from the package 200.

[0063] In some implementations, several packages are concurrently fabricated on a wafer, and a singulation process is performed to cut the wafer into individual packages.

Exemplary Sequence for Fabricating a Package Comprising Switches and Filters

[0064] In some implementations, providing / fabricating a package that includes switches and filters includes several processes. FIG. 5 (which includes FIGS. 5A-5C) illustrates an exemplary sequence for providing / fabricating a package that includes switches and filters. In some implementations, the sequence of FIGS. 5A-5C may be used to fabricate the package that includes switches and filters of FIG. 3 and/or other packages described in the present disclosure. However, for the purpose of simplification, FIGS. 5A-5C will be described in the context of fabricating a package of FIG. 3. In particular, FIGS. 5A-5C will be described in the context of fabricating the package 300 of FIG. 3.

[0065] It should be noted that the sequence of FIGS. 5A-5C may combine one or more stages in order to simplify and/or clarify the sequence for providing a package. In some implementations, the order of the processes may be changed or modified.

[0066] Stage 1, as shown in FIG. 5A, illustrates a state after a carrier 400 and an adhesive layer 208 are provided. The adhesive layer 208 is formed over the carrier 400. Different implementations may use different materials for the carrier 400. In some implementations, the carrier 400 includes glass and/or silicon.

[0067] Stage 2 illustrates a state after the plurality of first filters 261 and the plurality of second filters 263 are placed over the adhesive layer 208 using a pick and place process.
In some implementations, the filters are placed such that at least some of the neighboring filters (from the plurality of first filters 261, the plurality of second filters 263) have a spacing that is about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring filters may be about 50 microns (μm) or less.[0068] Stage 3 illustrates a state after the second encapsulation layer 260 is formed over the adhesive layer 208, the plurality of first filters 261, the plurality of second filters 263, the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267. The second encapsulation layer 260 may include a mold compound and/or epoxy fill. In some implementations, the second encapsulation layer 260 may be formed such as to at least partially encapsulate the plurality of first filters 261, the plurality of second filters 263, the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267. In some implementations, the second encapsulation layer 260 is formed over the plurality of first filters 261, the plurality of second filters 263, the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267 and portions of the second encapsulation layer 260 are removed (e.g., grinded).[0069] Stage 4 illustrates a state after the plurality of interconnects 269 is formed over the second encapsulation layer 260. The plurality of interconnects 269 is formed such as to couple to the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267. In some implementations, the plurality of interconnects 269 is formed using a plating process (e.g., Damascene, Semi Additive Process (SAP)).[0070] Stage 5 illustrates a state after the passivation layer 262 is formed over the second encapsulation layer 260 and the plurality of interconnects 269.
In some implementations, stage 5 illustrates the second portion 206 of a package 200.[0071] Stage 6 illustrates a state after the plurality of through encapsulation interconnects 249 is formed over the plurality of interconnects 269. In some implementations, the plurality of through encapsulation interconnects 249 is formed by removing portions of the passivation layer 262 and using a plating process to form the plurality of through encapsulation interconnects 249. The plurality of through encapsulation interconnects 249 may include copper (Cu) posts.[0072] Stage 7, as shown in FIG. 5B, illustrates a state after the first switch 241 and the second switch 243 are placed over the passivation layer 262.[0073] Stage 8 illustrates a state after the first encapsulation layer 240 is formed over the passivation layer 262, the first switch 241, the second switch 243, the plurality of first switch interconnects 245, the plurality of second switch interconnects 247 and the plurality of through encapsulation interconnects 249. The first encapsulation layer 240 may include a mold compound and/or epoxy fill. In some implementations, the first encapsulation layer 240 may be formed such as to at least partially encapsulate the first switch 241, the second switch 243, the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247. In some implementations, the first encapsulation layer 240 is formed over the first switch 241, the second switch 243, the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247 and portions of the first encapsulation layer 240 are removed (e.g., grinded).[0074] Stage 9 illustrates a state after a dielectric layer 520 and the plurality of first redistribution interconnects 323 are formed over the first encapsulation layer 240.
The plurality of first redistribution interconnects 323 is formed such as to couple to the plurality of through encapsulation interconnects 249, the plurality of first switch interconnects 245 and the plurality of second switch interconnects 247. A plating process may be used to form the plurality of first redistribution interconnects 323.[0075] Stage 10 illustrates a state after a dielectric layer 522 is formed over the dielectric layer 520 and the plurality of first redistribution interconnects 323. In some implementations, the dielectric layer 520 and the dielectric layer 522 may represent the at least one dielectric layer 220.[0076] Stage 11, as shown in FIG. 5C, illustrates a state after the plurality of second redistribution interconnects 325 is formed over the dielectric layer 522 and the plurality of first redistribution interconnects 323. A plating process may be used to form the plurality of second redistribution interconnects 325.[0077] Stage 12 illustrates a state after the plurality of UBM layers 327 are formed over the plurality of second redistribution interconnects 325. A plating process may be used to form the plurality of UBM layers 327.[0078] Stage 13 illustrates a state after the plurality of solder interconnects 210 is provided over the plurality of UBM layers 327.[0079] Stage 14 illustrates a state after the carrier 400 is removed (e.g., grinded) from the package 300. In some implementations, the adhesive layer 208 is also removed (e.g., grinded) from the package 300.[0080] In some implementations, several first packages are concurrently fabricated on a wafer, and a singulation process is performed to cut the wafer into individual packages. Exemplary Method for Fabricating a Package Comprising Switches and Filters[0081] In some implementations, providing / fabricating a package that includes switches and filters includes several processes. FIG.
6 illustrates an exemplary flow diagram of a method for fabricating a package that includes switches and filters. In some implementations, the method of FIG. 6 may be used to fabricate the package of FIGS. 2-3 and/or other packages described in the present disclosure. However, for the purpose of simplification, FIG. 6 will be described in the context of fabricating the package of FIG. 2.[0082] It should be noted that the flow diagram of FIG. 6 may combine one or more processes in order to simplify and/or clarify the method for providing a package. In some implementations, the order of the processes may be changed or modified.[0083] The method provides (at 605) a carrier (e.g., carrier 400). The carrier may also include an adhesive layer (e.g., adhesive layer 208). In some implementations, the adhesive layer is formed over the carrier. Different implementations may use different materials for the carrier. In some implementations, the carrier may include glass and/or silicon.[0084] The method couples (at 610) a plurality of filters (e.g., 261, 263) to the carrier (e.g., 400). In some implementations, a pick and place process is used to couple the filters to the carrier, which may include the adhesive layer. In some implementations, the filters are placed such that at least some of the neighboring filters (from the plurality of first filters 261, the plurality of second filters 263) have a spacing that is about 100 microns (μm) or less. In some implementations, the spacing between at least some of the neighboring filters may be about 50 microns (μm) or less.[0085] The method forms (at 615) a second encapsulation layer (e.g., second encapsulation layer 260) over the adhesive layer, the filters (e.g., the plurality of first filters 261, the plurality of second filters 263) and the filter interconnects (e.g., the plurality of first filter interconnects 265 and the plurality of second filter interconnects 267).
The second encapsulation layer may include a mold compound and/or epoxy fill.[0086] The method forms (at 620) a plurality of interconnects in and over the second encapsulation layer 260. The plurality of interconnects may include the plurality of interconnects 269 and the plurality of through encapsulation interconnects 249. The plurality of through encapsulation interconnects 249 may include copper (Cu) posts.[0087] The method provides (at 625) switches (e.g., first switch 241, the second switch 243) over the second encapsulation layer. In some implementations, providing the switches includes providing the switches over a passivation layer (e.g., passivation layer 262) located over the second encapsulation layer 260.[0088] The method forms (at 630) a first encapsulation layer (e.g., first encapsulation layer 240) over the passivation layer, the switches (e.g., first switch 241, the second switch 243), the switch interconnects (e.g., plurality of first switch interconnects 245, the plurality of second switch interconnects 247) and the plurality of through encapsulation interconnects 249. The first encapsulation layer may include a mold compound and/or epoxy fill.[0089] The method forms (at 635) a redistribution portion over the first encapsulation layer. Different implementations may form the redistribution portion differently. In some implementations, forming a redistribution portion includes forming at least one dielectric layer and forming at least one redistribution interconnect. Examples of forming redistribution portions are illustrated and described in stages 9-13 of FIGS. 4B-4C, and stages 9-13 of FIGS. 5B-5C.[0090] The method provides (at 640) a plurality of solder interconnects (e.g., solder balls, solder interconnects 210) to the redistribution portion, and decouples (at 640) the carrier. In some implementations, the adhesive layer is also decoupled.Exemplary Electronic Devices[0091] FIG.
7 illustrates various electronic devices that may be integrated with any of the aforementioned packages, integrated devices, semiconductor devices, integrated circuits, dies, interposers, or package-on-package (PoP) devices. For example, a mobile phone device 702, a laptop computer device 704, a fixed location terminal device 706, and a wearable device 708 may include an integrated device 700 as described herein. The integrated device 700 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, package-on-package devices described herein. The devices 702, 704, 706, 708 illustrated in FIG. 7 are merely exemplary. Other electronic devices may also feature the integrated device 700 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices (e.g., watch, glasses), Internet of things (IoT) devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), or any other device that stores or retrieves data or computer instructions, or any combination thereof.[0092] One or more of the components, processes, features, and/or functions illustrated in FIGS. 2, 3, 4A-4C, 5A-5C, 6, and/or 7 may be rearranged and/or combined into a single component, process, feature or function or embodied in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGS.
2, 3, 4A-4C, 5A-5C, 6, and/or 7 and their corresponding description in the present disclosure are not limited to dies and/or ICs. In some implementations, FIGS. 2, 3, 4A-4C, 5A-5C, 6, and/or 7 and their corresponding description may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an integrated circuit (IC) package, a wafer, a semiconductor device, a package on package (PoP) device, and/or an interposer.[0093] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other.[0094] Also, it is noted that various disclosures contained herein may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed.[0095] The various features of the disclosure described herein can be implemented in different systems without departing from the disclosure.
It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art.
Elongated features may be incorporated at least partially in an alignment region. The alignment region may be defined by a plurality of alignment features aligned along a first axis. A long axis of the elongated features may be neither parallel nor perpendicular to the first axis. The alignment region may further include another plurality of alignment features aligned along a second axis that is not parallel to the first axis. The second axis may be neither parallel nor perpendicular to the long axis.
1.A method for improving alignment, comprising: forming a plurality of elongated features on a semiconductor component, each of the elongated features having an associated long dimension substantially along a long axis and an associated short dimension, the associated long dimension being greater than the associated short dimension; and forming a plurality of alignment features on the semiconductor component, the plurality of alignment features being substantially aligned along a long axis, the plurality of alignment features defining an alignment zone, the alignment zone being bordered by a first outer alignment feature and a second outer alignment feature and extending downward, wherein a portion of at least one of the plurality of elongated features is included in the alignment zone; and wherein: the long axis of the plurality of elongated features and the long axis of the alignment features are neither parallel nor perpendicular to each other, and the elongated features are sized such that they would interact with light, the interaction resulting in interference with detection of the alignment features if the elongated features were aligned parallel to the long axis of the alignment features.2.The method of claim 1, wherein the elongated features include solid modeled features formed on a substrate.3.The method of claim 1, wherein one or more of the plurality of elongated features is located at least partially between the first alignment feature and the second alignment feature.4.The method of claim 1, wherein the plurality of alignment features are formed in a current layer, and wherein one or more of the plurality of elongated features are formed in the current layer.5.The method of claim 1, wherein the elongated features are linear features.6.The method of claim 1, wherein the alignment feature is configured to determine alignment parameters of a lithography system.7.The method of claim 6, wherein the alignment feature is included in a biaxial alignment mark.8.The method
of claim 1, wherein the alignment feature is included in a uniaxial alignment mark.9.The method of claim 1, wherein the alignment feature is configured to determine overlay parameters.10.The method of claim 1, wherein the semiconductor component includes one of a mask, a photomask, and a substrate.11.The method of claim 1, wherein the long axes of the plurality of alignment features are at an angle θ to the long axes of the plurality of elongated features, and wherein θ is in a range from two degrees to eighty-eight degrees.12.The method of claim 1, wherein the plurality of alignment features have an average line width L, and wherein the long axis of the plurality of alignment features is at an angle θ to the long axis of the plurality of elongated features, and, for a specific signal-to-noise ratio S and a specific field height H, θ is between zero degrees plus Δθm and ninety degrees minus Δθm, where Δθm = cos-1(LS/H).13.A method of aligning semiconductor components, comprising: emitting light toward an alignment zone defined by a plurality of elongated alignment features each having a long dimension along a first axis and a short dimension, wherein the alignment zone includes a plurality of elongated features, the elongated features each having an associated long dimension along a long axis and an associated short dimension, wherein the long axis of the elongated features and the first axis of the alignment features are neither parallel nor perpendicular to each other, wherein the emitted light interacts with the plurality of alignment features and with at least one of the plurality of elongated features during alignment, and wherein the elongated features are sized such that they may interact with the light, the interaction possibly resulting in interference with the detection of the alignment features if the elongated features were aligned parallel to the first axis; receiving light interacting with the plurality of alignment features and
light interacting with the plurality of elongated features as received light; and determining an alignment parameter based on the received light.14.The method of claim 13, wherein the received light is reflected from at least one of the plurality of alignment features.15.The method of claim 13, wherein the received light includes diffracted light scattered by at least one of the plurality of alignment features.16.The method of claim 15, wherein the diffracted light includes at least one non-zero-order diffracted light.17.The method of claim 13, wherein the first axis and the long axes of the plurality of elongated features are at an angle θ to each other, and wherein θ is between two degrees and eighty-eight degrees.18.The method of claim 17, wherein the plurality of elongated features are formed in the same layer as the plurality of alignment features in an integrated circuit.19.The method of claim 13, wherein the alignment parameter indicates the alignment of a lithography system.20.The method of claim 13, wherein the alignment parameter represents the overlay between a first layer and a second layer of a circuit structure.21.The method of claim 13, wherein the elongated features include solid modeled features.22.The method of claim 13, wherein the alignment features are included in at least one of an alignment mark and an overlay structure.23.The method of claim 13, wherein the plurality of alignment features have an average line width L, and wherein the first axis is at an angle θ to the long axis of the plurality of elongated features, and, for a specific signal-to-noise ratio S and a specific field height H, θ is between zero degrees plus Δθm and ninety degrees minus Δθm, where Δθm = cos-1(LS/H).24.A device for improving alignment, comprising: a plurality of alignment features aligned along a first axis on a semiconductor component; and a plurality of elongated features in an alignment zone defined by the plurality of alignment features, the elongated
features having a long axis that is neither parallel nor perpendicular to the first axis, wherein the plurality of elongated features are sized such that they interact with light, the interaction resulting in interference with detection of the alignment features if the elongated features are aligned parallel to the first axis.25.The device of claim 24, further comprising a plurality of additional alignment features aligned along a second axis, the second axis being neither parallel nor perpendicular to the first axis.26.The apparatus of claim 25, wherein the second axis is neither parallel to the long axis nor perpendicular to the long axis.27.The apparatus of claim 24, wherein the alignment zone extends from an outer edge of a first outer alignment feature on a current layer to an outer edge of a second outer alignment feature, the alignment zone extending down to one or more previous layers.28.The device of claim 27, wherein one or more elongated features are at least partially contained in the alignment zone on the current layer.29.The device of claim 27, wherein one or more elongated features are at least partially contained in the alignment zone on a previous layer.30.The device of claim 24, wherein one or more of the elongated features have a length and a width, the length being at least three times the width.31.The device of claim 24, wherein one or more of the elongated features are linear.32.The device of claim 31, wherein the plurality of alignment features are linear.33.The device of claim 24, wherein the long axis of the elongated features forms an angle θ with the first axis, and wherein θ is between two degrees and eighty-eight degrees.34.The device of claim 24, wherein the plurality of alignment features have an average line width L, and wherein the first axis is at an angle θ to the long axis of the plurality of elongated features, and, for a specific signal-to-noise ratio S and a
specific field height H, θ is between zero degrees plus Δθm and ninety degrees minus Δθm, where Δθm = cos-1(LS/H).35.The apparatus of claim 24, wherein the plurality of alignment features are included in an alignment mark.36.The device of claim 24, wherein the plurality of alignment features are included in an overlay structure.37.The apparatus of claim 24, wherein the semiconductor component includes one of a mask, a photomask, and a semiconductor substrate.38.The apparatus of claim 24, further comprising a lithography system, and wherein the semiconductor component is included in the lithography system.
Angled elongated features for improved alignment process integration

Background

Integrated circuits can be manufactured by forming a series of patterned layers. One process that can be used in the manufacture of integrated circuits is the chemical mechanical polishing (CMP) process. The CMP process uses chemical and physical interactions between the polishing system and the surface of a substrate (e.g., a wafer) to improve the flatness of the surface. One concern in the CMP process is polishing the wafer uniformly over its entire surface to obtain the desired flatness. However, regions of the substrate with more features are generally polished at a different rate than regions with fewer features. To reduce polishing unevenness, specific features called "dummification" features can be added. FIG. 1 shows a solid modeled grid 110 including regularly arranged square features 120. These features can provide a more uniform feature density, but are not required by the actual circuit design. Solid modeling can therefore improve the uniformity of the CMP process; for example, the CMP process can be improved by more closely matching the density of the solid modeled area to its surroundings. However, the features 120 turn out to be problematic when used near alignment features. Alignment features are generally used by lithography systems to determine proper alignment with a previous layer, so that a new layer is patterned with the correct spatial relationship to the previously patterned layer. Alignment features are detected using bright field (video) alignment or dark field (diffraction) alignment. With either of these schemes, features located near the alignment features (such as the solid modeled features 120) can interact with the alignment light and prevent correct detection of the alignment features.
As a result, solid modeling is generally omitted in areas near the alignment features.

Brief Description

FIG. 1 shows a grid of solid modeled features.
FIG. 2 shows alignment features for uniaxial alignment.
FIG. 3A shows an alignment zone having alignment features such as those shown in FIG. 2, with square solid modeled features contained in the alignment zone.
FIG. 3B shows a graph of normalized simulated contrast based on a configuration such as that shown in FIG. 3A.
FIG. 4A shows alignment features in areas that are not solid modeled, according to the prior art.
FIG. 4B shows a graph of normalized simulated contrast based on a configuration such as that shown in FIG. 4A.
FIG. 5A shows hexagonal features that can provide improved integration of alignment and manufacturing processes according to one implementation.
FIG. 5B shows a graph of normalized simulated contrast based on the configuration shown in FIG. 5A.
FIG. 6A shows an implementation that includes angled elongated features.
FIG. 6B shows a graph of normalized simulated contrast based on the configuration shown in FIG. 6A.
FIG. 7A shows simulated bright-field contrast signals modeled for solid modeled features of different densities.
FIGS. 7B and 7C show two implementations of solid modeled features of different densities incorporated in the alignment zone.
FIG. 7D shows simulated bright-field contrast signals for the configurations of FIGS. 7B and 7C.
FIGS. 8A and 8B show two implementations of solid modeled features at different relative angles to the alignment features.
FIG. 8C shows simulated bright-field contrast signals for the configurations of FIGS. 8A and 8B.
FIG. 9 shows an implementation of a biaxial alignment mark.
FIG. 10 shows a zonal overlay mark with angled features at least partially contained in the alignment zone.
Like reference symbols in the various figures refer to like elements.

Detailed Description

The systems and techniques described herein may allow for improved integration of alignment and manufacturing processes. FIG.
2 shows an example of alignment features 230A to 230C (e.g., grooves) located near square solid modeled features 220. The alignment features 230A to 230C can be used to align a lithography system so that successive layers are patterned in the correct spatial relationship. The alignment features 230A to 230C have a line width L (which may be between about 0.1 microns and 0.4 microns, or more), and may be separated by spaces having a width S of about 4 microns to about 20 microns. Of course, many other line and space widths can be used. During the alignment process, light is scanned along one or more measurement axes. The light interacts with the features 230A to 230C and is detected in a detector. Other features in the vicinity of the alignment features can also interact with the alignment light, making the detection of the alignment features more difficult. The alignment features 230A to 230C may define an alignment zone 238 that spans between the outer edges 231A and 231C of the features 230A and 230C, and that is further bounded by a line extending from the top 232A of the feature 230A to the top 232C of the feature 230C and a line extending from the bottom 233A of the feature 230A to the bottom 233C of the feature 230C. The alignment zone 238 extends to the previous layers, as well as the layer in which the alignment features are formed. Features other than the alignment features (in the current layer, or in a previous layer) within the alignment zone 238 may interact with the alignment light and thus interfere with the detection of the alignment features during the alignment process. In some implementations, an extended alignment region 235 can be defined. The extended alignment region 235 is bounded at the bottom and top by extensions of the bottom and top boundaries of the alignment zone 238, and is bounded by the line 236 on the left and the line 237 on the right.
Line 236 may be about S to 2S away from the outer edge 231A, and line 237 may be about S to 2S away from the outer edge 231C. The extended alignment region 235 also extends to the previous layers. Features within the extended alignment region 235 may also interact with the alignment light and make it more difficult to detect the alignment features. For example, features in the portion of the region 235 between the line 236 and the outer edge 231A may interfere with the detection of the edge of the alignment mark. Bright-field (video) or dark-field (diffraction) alignment can be used to achieve alignment. In bright-field alignment, the alignment features are illuminated, and the detected image is used to determine alignment. In dark-field alignment, coherent light (e.g., light from a laser source) is incident on the alignment features. The resulting diffraction pattern is examined and used to determine the alignment of the lithography system. Alignment marks may be referred to as uniaxial or biaxial alignment marks. A uniaxial mark is used to align the lithography system in a single direction (e.g., the x or y direction). In order to align the system in both the x and y directions (or, equivalently, in two non-parallel directions that span the alignment plane), two uniaxial marks can be used. A biaxial alignment mark can be used to align the lithography system in two directions (e.g., the x and y directions, or other directions spanning the alignment plane). FIG. 3A shows an example where the elongated alignment features include uniaxial bright-field alignment grooves 330A to 330C, and where solid modeled features 320 are used near the alignment features. In FIG. 3A, the bright areas represent lines or raised areas, and the dark areas represent recessed areas such as holes or trenches. Note that the term "nearby" applies not only to solid modeled features on the same layer as the alignment features, but also to solid modeled features in a previous layer.
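As a rough numeric illustration of the geometry described above, the sketch below computes the widths of the alignment zone and of the extended alignment region. The helper names, the choice of three equally spaced grooves (matching FIG. 2), and the specific sample values are assumptions for illustration; only the ranges of L and S come from the text.

```python
def alignment_zone_width(line_width, space_width, num_features=3):
    """Width spanned by equally spaced alignment grooves, measured between
    the outer edges of the outermost features (e.g., 231A to 231C)."""
    return num_features * line_width + (num_features - 1) * space_width

def extended_region_width(line_width, space_width, num_features=3, margin_factor=1.0):
    """Extended alignment region: the alignment zone plus a margin of about
    S to 2S (margin_factor between 1 and 2) beyond each outer edge."""
    zone = alignment_zone_width(line_width, space_width, num_features)
    return zone + 2 * margin_factor * space_width

# Sample values from the stated ranges: L = 0.4 micron lines, S = 10 micron spaces.
zone = alignment_zone_width(0.4, 10.0)                        # 3*0.4 + 2*10 = 21.2 microns
extended = extended_region_width(0.4, 10.0, margin_factor=2)  # 21.2 + 2*20 = 61.2 microns
```

The extended region is several times wider than the marks themselves, which is why omitting solid modeling there (as in FIG. 4A) leaves a sizable unpatterned area.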
A solid modeled feature is in the vicinity of an alignment feature if it is placed so that, during the alignment process, it interacts with the alignment light and the resulting light can be received by a detector configured to detect the alignment features. For example, the solid modeled features 320 are included in the alignment zone 338 (and outside of the zone 338). The solid modeled features 320 may be on the same layer as the alignment trenches 330A to 330C, or on a different (e.g., previous) layer. The solid modeled features 320 in the alignment zone 338 may cause the contrast to change, which interferes with the ability to detect the alignment features. An example is shown in FIG. 3B, which shows a bright-field contrast signal simulation of three alignment trenches, such as the trenches 330A to 330C of FIG. 3A, superimposed on a 50% density square solid modeled grid. The signals generated by the solid modeled grid make it more difficult to detect the position of the alignment mark than in an alignment region without solid modeled features. FIGS. 4A and 4B show one solution to the above problem. FIG. 4A shows an extended alignment region 435 without solid modeled features. Note that in the implementation of FIG. 4A, the region 435 is larger than the alignment zone 438, which is defined similarly to the alignment zone 238 of FIG. 2. That is, solid modeling is omitted for a region larger than the region defined by the alignment features themselves. FIG. 4B shows a bright-field contrast signal simulation obtained by combining the image of FIG. 4A in the y direction. As shown in FIG. 4B, the effect of the solid modeled regions can be reduced or eliminated by omitting solid modeling near the alignment features. Although this allows easier detection of the alignment features, it can create process integration problems due to process variation.
For example, a CMP process may polish region 435 more than the surrounding region, and more than the interface between region 435 and the surrounding portion of the wafer, resulting in depressions and other defects in region 435.

FIG. 5A shows an implementation in which generally hexagonal features 522 are used for solid modeling. Using hexagonal features 522 instead of square features, such as feature 110 of FIG. 1, can reduce the contrast signal generated by the solid modeled features relative to that generated by alignment features such as trenches 530A, 530B, and 530C. FIG. 5A shows hexagonal solid modeled features with 63% pattern density (shown as white) and 37% hole density (shown as gray). As shown in FIG. 5A, the measurement axes are the x and y axes.

FIG. 5B shows a graph of normalized simulated contrast for the configuration of FIG. 5A. Compared with FIG. 3B, the configuration of FIG. 5B generates an alternating signal smaller than that generated by the 50% density square grid. It may therefore be easier to detect alignment features using hexagonal features than square features. However, some alternating signal is still generated with hexagonal features. In addition, as the density of the hexagonal features increases, the amplitude of the generated alternating signal may decrease. Therefore, incorporating hexagonal features in the alignment region may not be ideal in some applications.

FIG. 6A illustrates an implementation using multiple angled elongated features 626 that can provide improved process integration without unduly sacrificing the detectability of the alignment features. Note that although the features 626 can be used for solid modeling, the following description applies to other features that may be located near the alignment features.
However, in the following discussion, the features 626 are referred to as solid modeled features because they can be used for solid modeling.

The solid modeled features 626 are elongated: that is, their long dimension (e.g., length) is greater than their short dimension (e.g., width). For example, the length of an elongated solid modeled feature may be at least three times its width; the ratio of the long dimension to the short dimension can of course be greater, for example, 10 to 1. The solid modeled features may be linear, in which case the solid modeling may be referred to as line/space solid modeling.

At least a portion of one of the plurality of elongated features can be included in the alignment region. That is, at least a portion of the solid modeled features 626 may be included in an alignment region such as region 638 of FIG. 6A, which is defined similarly to region 238 of FIG. 2. The alignment region 638 includes a plurality of elongated (e.g., linear) alignment features, such as the illustrated features 630A to 630C, each having a long axis. The alignment features 630A to 630C can be used to align the lithography system during a lithography process, or to determine overlay parameters. As shown in FIG. 6A, the measurement axes are the x and y axes.

The long axes of the alignment features and the long axes of the elongated features are at an angle θ with respect to each other. The angle θ is neither zero nor ninety degrees: that is, the long axes of the alignment features and the long axes of the elongated features are neither perpendicular nor parallel to each other.

As θ approaches zero or ninety degrees, the detected signal-to-noise ratio decreases. The minimum acceptable value of θ therefore depends on the field of view and the line or space width. For a desired signal-to-noise ratio S, a line width L, and a field height H, the minimum acceptable difference Δθm from zero or ninety degrees is given by Δθm = cos⁻¹(LS/H).
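The minimum-angle relationship above can be evaluated directly. A minimal sketch, using hypothetical values for the line width L, desired signal-to-noise ratio S, and field height H (the formula is only meaningful when LS/H ≤ 1):

```python
import math

def min_angle_offset_deg(line_width, snr, field_height):
    """Minimum acceptable difference (degrees) of the relative angle from
    zero or ninety degrees, per the formula above: acos(L * S / H)."""
    ratio = line_width * snr / field_height
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("L*S/H must lie in [0, 1] for the formula to apply")
    return math.degrees(math.acos(ratio))

# Hypothetical values: line width L = 1 unit, target SNR S = 5, field height H = 10 units.
print(min_angle_offset_deg(1, 5, 10))  # acos(0.5), i.e. about 60 degrees
```

As LS/H approaches 1, the computed Δθm shrinks toward zero, matching the observation in the text that a Δθm of about two degrees corresponds to an acceptable signal-to-noise ratio in typical cases.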
In general, a relative angle between about two degrees and about eighty-eight degrees (a Δθm of about two degrees) can provide an acceptable signal-to-noise ratio. However, for some implementations, a wider range of angles can be used.

FIG. 6B shows the simulated bright-field contrast signal obtained with the alignment features 630A to 630C and the angled solid modeled features 626 arranged as shown in FIG. 6A. In contrast to the alternating signals generated with square solid modeled features, the background contrast signal generated by the angled solid modeled features is generally constant. The signal can therefore be significantly amplified without unduly sacrificing signal quality, which allows the use of alignment features that generate relatively weak signals.

Referring again to FIG. 6A, the width of one of the illustrated solid modeled features 626 is denoted by L, and the width of a particular space between two consecutive solid modeled features 626 is denoted by S. Although the line widths shown in FIG. 6A are all equal, they need not be (for example, for i lines, a different value Li can be used for each line). Similarly, the widths of the spaces can vary. Although the widths of the lines and spaces can be varied, the line density is generally selected to provide a desired feature density. For example, the line density can be selected so that the total feature density near the alignment features matches the pattern density of the surrounding layer closely enough to achieve a desired level of flatness.

Note that both the feature density near the alignment features and the pattern density are generally discussed in terms of a particular window size. That is, the feature density is the percentage of a window covered by features (rather than by the spaces between features).
The window size is selected to be large enough that the determined density correctly reflects the overall density, while being small enough to reflect spatial variations in the feature density.

FIG. 7A shows simulated bright-field contrast signals, without alignment features, for angled elongated features such as the features 626 of FIG. 6A at different line/space densities (17%, 33%, 50%, 67%, and 83% line density). FIG. 7A shows that the signal generated by the angled elongated features during the alignment process is generally constant. The signal can therefore be amplified without significantly degrading the ability to detect alignment features.

FIG. 7B shows an implementation with a line density of 17%, and FIG. 7C shows an implementation with a line density of 83%. FIG. 7D shows the simulated bright-field contrast signals for the implementations of FIGS. 7B and 7C. Although the normalized signal amplitude of the alignment features decreases as the line density increases, because the effect of the solid modeled features is generally constant, the signal can be amplified to increase the signal level. The ability to detect the alignment features is therefore not unduly sacrificed.

Different relative angles can be used. FIGS. 8A and 8B show implementations in which the relative angle between the long axes of the alignment features and the long axes of the elongated features is 63.4 degrees and 26.6 degrees, respectively. FIG. 8C shows that the two implementations produce substantially similar contrast signals due to the elongated features. In addition, although the x position is shifted, the normalized signal amplitude of the alignment features is the same.

In some implementations, biaxial marks may be used. A biaxial mark generally allows the same mark to be used to determine alignment along two axes spanning the alignment plane. FIG. 9 shows an example of a Nikon biaxial x/y mark with angled solid modeled features 926.
FIG. 9 includes multiple vertical alignment features 930V and multiple horizontal alignment features 930H. The alignment region 938 extends, for example, from a first outer edge of the alignment features 930V to a second outer edge of the alignment features 930V. The alignment region 938 also extends down from the current layer, which includes the alignment features, to one or more previous layers. The angled features 926 are at an angle θ to the features 930V, and at an angle (90° − θ) to the features 930H. The angle θ is neither zero nor ninety degrees; that is, the long axes of the angled elongated features are neither parallel nor perpendicular to the long axes of the features 930V or 930H. Note that the term "parallel" means that the vector cross product is essentially zero, and the term "perpendicular" means that the vector dot product is essentially zero. As discussed above, the features need not be in the same plane.

Another type of alignment feature is the overlay feature. The purpose of an overlay measurement is to determine how well successive layers are aligned. In addition to being used for aligning the lithography system, angled solid modeled features such as the features 626 of FIG. 6A can be used for overlay measurements. Overlay measurements are generally obtained using registration tools such as those made by KLA-Tencor.

FIG. 10 shows an implementation using an overlay mark such as a KLA-Tencor Advanced Imaging Metrology (AIM) overlay mark. The overlay mark includes a plurality of alignment features 1030 in an alignment region 1038 and can be patterned in a different layer above the layer that includes the angled elongated features 1026.

Angled elongated features can also be used with other types of alignment features. For example, angled elongated features can be used with a biaxial zoned alignment scheme. Similar to the features 626 of FIG. 6A, the features 1026 are angled.
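The parallel/perpendicular test noted above (a vanishing cross product means parallel axes; a vanishing dot product means perpendicular axes) can be sketched for the 2-D case as follows. The tolerance value and the example angle are assumptions of this sketch:

```python
import math

def is_parallel(u, v, tol=1e-9):
    # 2-D cross product (z component); approximately zero means parallel axes
    return abs(u[0] * v[1] - u[1] * v[0]) < tol

def is_perpendicular(u, v, tol=1e-9):
    # Dot product; approximately zero means perpendicular axes
    return abs(u[0] * v[0] + u[1] * v[1]) < tol

# An angled feature axis at theta = 30 degrees is neither parallel nor
# perpendicular to a vertical alignment-feature axis (0, 1).
theta = math.radians(30)
axis = (math.sin(theta), math.cos(theta))
print(is_parallel(axis, (0, 1)), is_perpendicular(axis, (0, 1)))  # False False
```

For any θ that is neither zero nor ninety degrees, both tests return False, which is the condition imposed on the angled features 926.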
Non-angled features could be used for solid modeling with uniaxial marks, and could be used with zoned marks similar to the overlay mark shown in FIG. 10. However, using angled features with zoned overlay marks, as shown in FIG. 10, or with zoned alignment marks can eliminate or reduce the glitches generated at the zone boundaries, which can occur with non-angled features.

The alignment features described above can be used as follows. For implementations in which alignment features are used to align a lithography system, light may be directed onto one or more elongated alignment features (e.g., multiple linear alignment features), where angled elongated solid modeled features are located near the alignment features. The light interacts with both the alignment features and the solid modeled features. However, because of the shapes and relative orientations of the alignment features and the solid modeled features, the received light corresponding to the solid modeled features is generally a constant background signal.

The received light is then analyzed to determine the alignment state of the lithography system. A position error of a portion of the lithography system relative to the alignment marks on the substrate can be determined and corrected by the lithography system, to within acceptable limits, during exposure of the wafer.

For implementations in which the alignment features are used to determine overlay, light may be directed onto one or more elongated alignment features (e.g., elongated alignment features included in an overlay mark), where the angled elongated solid modeled features are located near the alignment features. Again, the light interacts with both the alignment features and the solid modeled features, but the effect of the solid modeled features is constant.
The received light can then be analyzed and the overlay determined.

Various implementations have been described; however, it should be understood that various modifications can be made without departing from the spirit and scope of the invention. For example, some variation in the angle and shape of the solid modeled features can be tolerated. In general, there is a desired signal-to-noise ratio, and some noise due to the solid modeled features can be tolerated. In addition, there is an acceptable range of line/space densities for a specific layer design.

Likewise, although the techniques above are described with reference to specific "solid modeled" features, it should be understood that these techniques can be used with any semiconductor features. Furthermore, although the above description discusses solid modeled and alignment features patterned on a wafer, they can also be incorporated into one or more semiconductor components such as masks, reticles, substrates, and so on. In some implementations, angled solid modeled features can be used with dark-field alignment schemes. Accordingly, other implementations fall within the scope of the appended claims.
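As a concrete illustration of the line/space density discussed above, the feature density of a line/space pattern within a window is the total line width divided by the total window width. A minimal sketch with hypothetical line and space widths:

```python
def line_space_density(line_widths, space_widths):
    """Feature density of a line/space pattern within a window:
    total line width divided by total window width (lines plus spaces)."""
    total_lines = sum(line_widths)
    total = total_lines + sum(space_widths)
    return total_lines / total

# Equal lines and spaces give 50% density; widening the lines raises it.
print(line_space_density([1, 1, 1], [1, 1, 1]))  # 0.5
print(line_space_density([5, 5], [1, 1]))        # about 0.83, comparable to the 83% case
```

Varying the individual widths Li and Si while holding this ratio near the surrounding pattern density is what allows the desired level of flatness to be maintained.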
PROBLEM TO BE SOLVED: To provide a graphics, media, and computing device having a tiled architecture composed of multiple tiles of smaller graphics devices.

SOLUTION: In a data processing system, a work distribution infrastructure enables scaled workload dispatch across a variable number of tiles. Work items can be submitted to any one or more of the tiles, and workloads are able to span multiple tiles. Additionally, upon completion of a work item, graphics, media, and/or compute engines within the device can readily acquire new work items for execution with minimal latency.

SELECTED DRAWING: Figure 15
1. A graphics processor comprising: a first tile of a graphics processing engine; a second tile of the graphics processing engine; and an interface between a host system and the graphics processor, the interface to receive a set of commands for a workload having a first partition and a second partition, send the set of commands to the first tile of the graphics processing engine, and send the set of commands to the second tile of the graphics processing engine; wherein the first tile of the graphics processing engine is to read a first partition identifier associated with the first partition from a first hardware context and conditionally execute commands of the first partition while bypassing commands of the second partition; wherein the second tile of the graphics processing engine is to read a second partition identifier associated with the second partition from a second hardware context and conditionally execute commands of the second partition while bypassing commands of the first partition; and wherein the interface is further to receive a command associating the first hardware context with the first tile of the graphics processing engine.

2. The graphics processor of claim 1, wherein the interface to the host system is further to receive a command to set the first hardware context based on a first logical render context.

3. The graphics processor of claim 2, wherein the interface to the host system is further to receive a command associating the second hardware context with the second tile of the graphics processing engine.

4. The graphics processor of claim 3, wherein the interface to the host system is further to receive a command to set the second hardware context based on a second logical render context.

5. The graphics processor of claim 4, wherein the interface receives the set of commands for the workload via a memory buffer containing commands to be executed for the workload.

6. The graphics processor of claim 5, wherein the first hardware context includes a first offset associated with the beginning of the first partition in the memory buffer, and the second hardware context includes a second offset associated with the beginning of the second partition in the memory buffer.

7. The graphics processor of claim 6, wherein the first tile of the graphics processing engine initiates execution of the commands of the first partition with a command stored at the first offset in the memory buffer.

8. The graphics processor of claim 7, wherein the second tile of the graphics processing engine initiates execution of the commands of the second partition with a command stored at the second offset in the memory buffer.

9. The graphics processor of claim 8, wherein the first tile of the graphics processing engine is synchronized with the second tile of the graphics processing engine when execution of the first partition and the second partition is completed.

10. A computer program product having instructions for causing one or more processors to perform operations, the operations comprising: generating a set of commands for a workload to be executed by a graphics processor having multiple tiles of a graphics processing engine; dividing the set of commands into a first partition and a second partition; associating a first partition identifier identifying the first partition with a first render context; associating a second partition identifier identifying the second partition with a second render context; submitting the first partition and the second partition to each of a first graphics processing engine tile and a second graphics processing engine tile of the multiple tiles of the graphics processing engine; executing the first partition with the first graphics processing engine tile; and executing the second partition with the second graphics processing engine tile; the operations further comprising: initializing the first render context to define an execution state used to execute the first partition; and initializing the second render context to define an execution state used to execute the second partition.

11. The computer program product of claim 10, wherein, before submitting the first partition and the second partition, the operations further comprise: assigning the first partition identifier to the first partition; and assigning the second partition identifier to the second partition.

12. The computer program product of claim 10, wherein submitting the first partition and the second partition to each of the first graphics processing engine tile and the second graphics processing engine tile comprises submitting a batch buffer containing the commands of the first partition and the second partition.

13. The computer program product of claim 12, wherein, before executing the first partition and the second partition, the operations further comprise assigning the first render context to the first graphics processing engine tile and assigning the second render context to the second graphics processing engine tile.

14. The computer program product of claim 13, wherein the first render context includes a first offset relative to the start of the first partition in the batch buffer, and the second render context includes a second offset relative to the start of the second partition in the batch buffer.

15. The computer program product of claim 14, wherein the batch buffer includes synchronization commands at the ends of the first partition and the second partition.

16. A method of executing commands in a distributed graphics processor, the method comprising: receiving a set of commands at a graphics processor, the set of commands representing a workload having a first partition and a second partition, wherein the graphics processor includes multiple tiles of a graphics processing engine; reading, by a first tile of the graphics processing engine, a first partition identifier associated with the first partition from a first hardware context; reading, by a second tile of the graphics processing engine, a second partition identifier associated with the second partition from a second hardware context; configuring the first tile of the graphics processing engine and the second tile of the graphics processing engine to conditionally execute commands having a partition identifier associated with the respective tile; executing the commands of the first partition on the first tile of the graphics processing engine while bypassing the commands of the second partition; and executing the commands of the second partition on the second tile of the graphics processing engine while bypassing the commands of the first partition; wherein receiving the set of commands includes receiving a command associating the first tile of the graphics processing engine with the first hardware context and receiving a command associating the second tile of the graphics processing engine with the second hardware context.

17. The method of claim 16, further comprising: receiving a trigger to transition execution of the first partition from the first tile of the graphics processing engine to a third tile of the graphics processing engine before completing execution of the first partition; and executing at least a portion of the first partition by the third tile of the graphics processing engine.

18. The method of claim 17, further comprising migrating execution of the first partition from the first tile of the graphics processing engine to the third tile of the graphics processing engine by atomically reassigning the partition identifier of the first partition.

19. A system comprising means for performing the method according to any one of claims 16 to 18.

20. A storage medium storing the computer program according to claim 10.
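The conditional, partition-identifier-based execution recited in the claims above can be illustrated with a simplified software sketch. All names and the tagged-command representation are illustrative assumptions, not the claimed hardware interface:

```python
# Each command in the shared (unified) buffer is tagged with the partition
# identifier of the partition it belongs to.
command_buffer = [
    ("draw_a", 1), ("draw_b", 1),   # first partition
    ("draw_c", 2), ("draw_d", 2),   # second partition
]

def execute_on_tile(buffer, partition_id):
    """A tile walks the whole unified buffer but executes only commands
    whose partition identifier matches the identifier read from its
    hardware context, bypassing all other commands."""
    executed = []
    for cmd, pid in buffer:
        if pid == partition_id:
            executed.append(cmd)   # "execute" the command
        # else: bypass the command
    return executed

tile0 = execute_on_tile(command_buffer, 1)
tile1 = execute_on_tile(command_buffer, 2)
print(tile0, tile1)  # ['draw_a', 'draw_b'] ['draw_c', 'draw_d']
```

Both tiles receive the same set of commands, yet each executes a disjoint partition, which is the distribution behavior the independent claims describe.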
[0001] Computing systems may include graphics processors that perform graphics processing and compute workloads such as linear interpolation, tessellation, rasterization, texture mapping, and depth testing in parallel. Traditionally, graphics processors have used fixed-function computational units to process graphics data. However, modern graphics processors include programmable portions, enabling such processors to support a wider variety of operations for processing vertex and fragment data, as well as general-purpose parallel compute workloads. Such processors typically include an interface through which programmable workloads can be scheduled for execution on the processor.

[0002] So that the above-described features of the present embodiments may be understood in detail, a more specific description of the embodiments briefly summarized above is given by reference to embodiments, some of which are illustrated in the accompanying drawings.

[0003] FIG. 1 is a block diagram of a processing system according to an embodiment. [0004] FIG. 2 is a block diagram of a processor according to an embodiment. [0005] FIG. 3 is a block diagram of a graphics processor according to an embodiment. [0006] FIG. 4 is a block diagram of a graphics processing engine of a graphics processor according to some embodiments. [0007] FIG. 5 is a block diagram of the hardware logic of a graphics processor core in accordance with some embodiments described herein. [0008] FIGS. 6A-6B illustrate thread execution logic including an array of processing elements used in a graphics processor core, according to embodiments described herein. [0009] FIG. 7 is a block diagram illustrating a graphics processor instruction format according to some embodiments. [0010] FIG.
8 is a block diagram of a graphics processor according to another embodiment. [0011] FIGS. 9A-9B illustrate graphics processor command formats and command sequences according to some embodiments. [0012] FIG. 10 illustrates an example graphics software architecture for a data processing system according to some embodiments. [0013] FIG. 11A is a block diagram illustrating an IP core development system according to an embodiment. [0014] FIG. 11B illustrates a side cross-sectional view of an integrated circuit package assembly according to some embodiments described herein. [0015] FIG. 12 is a block diagram illustrating an exemplary system-on-a-chip integrated circuit according to an embodiment. [0016] FIGS. 13A-13B are block diagrams illustrating example graphics processors for use within an SoC according to embodiments described herein. [0017] FIGS. 14A-14B illustrate further example graphics processor logic in accordance with embodiments described herein. [0018] FIG. 15 is a block diagram of a data processing system according to an embodiment. [0019] FIGS. 16A-16C illustrate a graphics processing system that performs multi-tile work scheduling according to embodiments. [0020] FIG. 17 illustrates a tile work distribution and scheduling system according to embodiments described herein. [0021] FIG.
18 illustrates a system that enables load balancing on a multi-tile graphics processing system according to an embodiment. [0022] FIG. 19 depicts a flow diagram of a multi-tile workload scheduling method according to an embodiment. [0023] FIG. 20 depicts a flow diagram of a method for executing a multi-tile workload according to an embodiment. [0024] FIG. 21 depicts a flow diagram of a method for migrating workloads between tiles according to an embodiment. [0025] FIG. 22 is a block diagram of a computing device that includes a graphics processor, according to an embodiment.

[0026] Embodiments described herein provide graphics, media, and compute devices that have a tiled architecture composed of multiple tiles of smaller graphics devices. Such devices can be scaled to include more or fewer tiles depending on the power and/or performance targets of the device. The scaled devices described herein may utilize a specially adapted work distribution infrastructure to enable efficient distribution of workloads across multiple tiles. The work distribution infrastructure described herein enables scaled workload dispatch across a variable number of tiles. Work items can be submitted to any one or more of the tiles, and workloads can be spread across multiple tiles. Additionally, upon completion of a work item, graphics, media, and/or compute engines within the device can easily obtain new work items to execute with minimal latency.

[0027] In graphics, media, and/or compute devices known in the art, one or more software layers are used to distribute work items to the various engines within the device. The software can monitor the load on the various engines and attempt to distribute or redistribute the workload efficiently among them. Such software may run on one or more host processors (e.g., CPU cores) of a data processing system or computing device that includes the graphics, media, and/or compute device.
Such software can be part of a driver or device-support framework. However, relying on host software to monitor and distribute workloads introduces various inefficiencies: command buffers must be repacketized, which consumes extra CPU cycles, adds latency, and increases the power consumed by device operation.

[0028] One embodiment provides a work scheduling and submission infrastructure in which software can create a unified command buffer for a workload, including the workload distribution configuration. Software can then submit work items directly to a tile, and a local hardware scheduler within the tile can schedule the workload to the appropriate engines within the tile. Each engine is capable of executing the same command buffer. When an engine is ready to perform a new work item, the engine can dynamically and atomically acquire the next chunk (e.g., partition) of work to perform. In one embodiment, the unified command buffer includes a synchronization command that waits at the end of execution of the distributed workload.

[0029] During operation, an application or user-mode graphics driver (UMD) may submit workload commands in a format that facilitates distributed execution. The workload commands are inserted into a command buffer that has a unified command buffer format. Commands in the command buffer are divided into partitions so that execution can be distributed across multiple tiles. The engines in the graphics, media, and compute device include a mechanism to atomically obtain a workload partition to execute and are capable of executing the commands associated with that partition. No intervention by the device's high-level scheduler is required to monitor engine execution status: rather than partitions being pushed to the engines by a high-level scheduler, the engines can obtain work partitions as needed.

[0030] For purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the various embodiments described below.
However, it will be apparent to one of ordinary skill in the art that the embodiments may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the underlying principles and to provide a better understanding of the embodiments. Although some of the embodiments below are described with reference to a graphics processor, the techniques and teachings described herein can be applied to various types of circuits or semiconductor devices, including general-purpose processing devices or graphics processing devices. Reference herein to "an embodiment" or "one embodiment" indicates that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one such embodiment. However, the appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

[0031] In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, cooperate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled to each other.

[0032] In the following description, FIGS. 1-14 provide an overview of an exemplary data processing system and the graphics processor logic that incorporates or relates to the various embodiments, while FIGS. 15-22 provide specific details of the various embodiments.
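The atomic partition-acquisition scheme described in paragraphs [0028]-[0029] above, in which engines pull the next unexecuted partition rather than having a high-level scheduler push work to them, can be sketched in software as follows. This is a minimal illustration only: the `WorkQueue` class, the thread-per-tile model, and the Python lock standing in for the hardware atomic-acquire mechanism are all assumptions of this sketch.

```python
import threading

class WorkQueue:
    """Sketch of engines atomically acquiring the next workload partition;
    the lock stands in for the hardware atomic-acquire mechanism."""
    def __init__(self, partitions):
        self._partitions = list(partitions)
        self._next = 0
        self._lock = threading.Lock()

    def acquire_next(self):
        # Atomically hand out the next unexecuted partition, or None when done.
        with self._lock:
            if self._next >= len(self._partitions):
                return None
            partition = self._partitions[self._next]
            self._next += 1
            return partition

queue = WorkQueue(["partition-0", "partition-1", "partition-2"])
results = []

def engine(name):
    # Each "tile" repeatedly pulls work until the queue is exhausted.
    while (partition := queue.acquire_next()) is not None:
        results.append((name, partition))  # "execute" the partition

threads = [threading.Thread(target=engine, args=(f"tile{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(p for _, p in results))  # every partition executed exactly once
```

Because acquisition is atomic, each partition is executed exactly once regardless of which tile happens to pick it up, which is also what makes the migration described later (atomically reassigning a partition identifier to a different tile) possible.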
Some aspects of the embodiments below are described with reference to a graphics processor, while other aspects are described with reference to a general-purpose processor, such as a central processing unit (CPU). Similar techniques and teachings can be applied to other types of circuits or semiconductor devices, including but not limited to many-integrated-core processors, GPU clusters, or one or more instances of a field-programmable gate array. In general, the teachings are applicable to any processor or machine that manipulates or processes image data (e.g., samples, pixels), vertex data, or geometric data.

[0042] System Overview FIG. 1 is a block diagram of a processing system 100 according to an embodiment. In various embodiments, system 100 includes one or more processors 102 and one or more graphics processors 108, and may be a single-processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 102 or processor cores 107. In one embodiment, system 100 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.

[0043] In one embodiment, system 100 can include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile game console, a handheld game console, or an online game console. In some embodiments, system 100 is a mobile phone, smart phone, tablet computing device, or mobile Internet device. Processing system 100 may also include, be coupled with, or be integrated within a wearable device, such as a smart watch wearable device, smart glasses device, augmented reality device, or virtual reality device.
In some embodiments, the processing system 100 is a television or set top box device having one or more processors 102 and a graphical interface generated by one or more graphics processors 108.

[0044] In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, the instruction set 109 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). Multiple processor cores 107 may each process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. The processor core 107 may also include other processing devices, such as a digital signal processor (DSP).

[0045] In some embodiments, the processor 102 includes cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among the processor cores 107 using known cache coherency techniques. A register file 106 is additionally included in the processor 102, and may include different types of registers (e.g., integer registers, floating point registers, status registers, and an instruction pointer register) for storing different types of data.
Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.

[0046] In some embodiments, the one or more processors 102 are coupled with one or more interface buses 110 to transmit communication signals, such as address, data, or control signals, between the processors 102 and other components in the system 100. The interface bus 110, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor buses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory buses, or other types of interface buses. In one embodiment, the processor 102 includes an integrated memory controller 116 and a platform controller hub 130. The memory controller 116 facilitates communication between a memory device and other components of the system 100, while the platform controller hub (PCH) 130 provides connections to I/O devices via a local I/O bus.

[0047] The memory device 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment, the memory device 120 can operate as system memory for the system 100, to store data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. The memory controller 116 also couples with an optional external graphics processor 112, which may communicate with the one or more graphics processors 108 in the processor 102 to perform graphics and media operations. In some embodiments, a display device 111 can connect to the processor 102. The display device 111 can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.).
In one embodiment, the display device 111 can be a head mounted display (HMD), such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.

[0048] In some embodiments, the platform controller hub 130 enables peripherals to connect to the memory device 120 and the processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a network controller 134, a firmware interface 128, a wireless transceiver 126, touch sensors 125, and a data storage device 124 (e.g., hard disk drive, flash memory, etc.). The data storage device 124 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). The touch sensors 125 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 126 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver, such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. The firmware interface 128 enables communication with system firmware and can be, for example, a unified extensible firmware interface (UEFI). The network controller 134 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 110. The audio controller 146, in one embodiment, is a multi-channel high-resolution audio controller. In one embodiment, the system 100 includes an optional legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 130 can also connect to one or more Universal Serial Bus (USB) controllers 142 to connect input devices, such as keyboard and mouse 143 combinations, a camera 144, or other USB input devices.

[0049] It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used.
For example, an instance of the memory controller 116 and the platform controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 112. In one embodiment, the platform controller hub 130 and/or the memory controller 116 may be external to the one or more processors 102. For example, the system 100 can include an external memory controller 116 and platform controller hub 130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processors 102.

[0050] FIG. 2 is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Those elements of FIG. 2 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The processor 200 can include additional cores up to and including additional core 202N, represented by the dashed boxes. Each of the processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206.

[0051] The internal cache units 204A-204N and the shared cache units 206 represent a cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.

[0052] In some embodiments, the processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210.
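The cache hierarchy described in paragraph [0051], per-core internal caches backed by shared mid-level caches and a last level cache before external memory, can be illustrated with a short lookup model. The following is a minimal Python sketch, not a description of the actual hardware: the level names, the fill-on-miss policy, and the `read` helper are all assumptions made for illustration only.

```python
# Toy model of a multi-level cache lookup: check each level in order
# (L1 first, LLC last); on a miss everywhere, fetch from external
# memory and fill all levels on the way back.

class CacheLevel:
    def __init__(self, name):
        self.name = name
        self.lines = {}          # address -> cached data

    def lookup(self, addr):
        return self.lines.get(addr)

def read(hierarchy, memory, addr):
    """Walk the hierarchy from the innermost level down to the LLC."""
    for level in hierarchy:
        data = level.lookup(addr)
        if data is not None:
            return data, level.name        # cache hit at this level
    # Missed in every level: fetch from external memory, fill caches.
    data = memory[addr]
    for level in hierarchy:
        level.lines[addr] = data
    return data, "memory"

l1, l2, llc = CacheLevel("L1"), CacheLevel("L2"), CacheLevel("LLC")
hierarchy = [l1, l2, llc]
memory = {0x1000: 42}

print(read(hierarchy, memory, 0x1000))    # first access misses everywhere
print(read(hierarchy, memory, 0x1000))    # second access hits in L1
```

The model deliberately omits coherency; the document's note that coherency logic maintains consistency between cache units 206 and 204A-204N is a separate mechanism not sketched here.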
The one or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI or PCI Express buses. The system agent core 210 provides management functionality for the various processor components. In some embodiments, the system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).

[0053] In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, the system agent core 210 includes components for coordinating and operating the cores 202A-202N during multi-threaded processing. The system agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of the processor cores 202A-202N and the graphics processor 208.

[0054] In some embodiments, the processor 200 additionally includes a graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206, and with the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 to drive graphics processor output to one or more coupled displays. In some embodiments, the display controller 211 may be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208.

[0055] In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art.
In some embodiments, the graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.

[0056] The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and the graphics processor 208 use the embedded memory module 218 as a shared Last Level Cache.

[0057] In some embodiments, the processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, the processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of the processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, the processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more power cores having lower power consumption. Additionally, the processor 200 can be implemented on one or more chips, or as an SoC integrated circuit having the illustrated components, in addition to other components.

[0058] FIG. 3 is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor, and with commands placed into the processor memory. In some embodiments, the graphics processor 300 includes a memory interface 314 to access memory.
The memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

[0059] In some embodiments, the graphics processor 300 also includes a display controller 302 to drive display output data to a display device 320. The display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 320 can be an internal or external display device. In one embodiment, the display device 320 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, the graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).

[0060] In some embodiments, the graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 310. In some embodiments, the GPE 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

[0061] In some embodiments, the GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.).
The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/media subsystem 315. While the 3D pipeline 312 can be used to perform media operations, an embodiment of the GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

[0062] In some embodiments, the media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration, in place of, or on behalf of, the video codec engine 306. In some embodiments, the media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on the 3D/media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/media subsystem 315.

[0063] In some embodiments, the 3D/media subsystem 315 includes logic for executing threads spawned by the 3D pipeline 312 and the media pipeline 316. In one embodiment, the pipelines send thread execution requests to the 3D/media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, the 3D/media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

[0064] Graphics Processing Engine

FIG. 4 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 410 is a version of the GPE 310 shown in FIG. 3. Elements of FIG.
4 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 312 and media pipeline 316 of FIG. 3 are illustrated. The media pipeline 316 is optional in some embodiments of the GPE 410 and may not be explicitly included within the GPE 410. For example, and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 410.

[0065] In some embodiments, the GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipeline 316. In some embodiments, the command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, the command streamer 403 receives commands from the memory and sends the commands to the 3D pipeline 312 and/or the media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and the media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as, but not limited to, vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and the media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 414. In one embodiment, the graphics core array 414 includes one or more blocks of graphics cores (e.g., graphics core 415A, graphics core 415B), each block including one or more graphics cores.
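The command fetch described in paragraph [0065], in which a command streamer reads commands from a ring buffer and hands them to the 3D or media pipeline, can be sketched as a simple producer/consumer loop. This is a hedged Python illustration: the fixed ring size, the dictionary command format, and the `command_streamer` function are invented for the example and are not part of the described hardware interface.

```python
# Minimal ring-buffer model: software submits commands at the tail,
# the command streamer fetches from the head and dispatches each
# command to its target pipeline.

class RingBuffer:
    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0            # next slot the streamer will read
        self.tail = 0            # next slot software will write

    def submit(self, command):
        next_tail = (self.tail + 1) % len(self.slots)
        if next_tail == self.head:
            raise RuntimeError("ring full")
        self.slots[self.tail] = command
        self.tail = next_tail

    def fetch(self):
        if self.head == self.tail:
            return None          # ring is empty
        command = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        return command

def command_streamer(ring, pipelines):
    """Drain the ring, routing each command to the named pipeline."""
    while (cmd := ring.fetch()) is not None:
        pipelines[cmd["target"]].append(cmd["opcode"])

ring = RingBuffer(8)
ring.submit({"target": "3d", "opcode": "DRAW"})
ring.submit({"target": "media", "opcode": "DECODE"})
pipelines = {"3d": [], "media": []}
command_streamer(ring, pipelines)
print(pipelines)   # {'3d': ['DRAW'], 'media': ['DECODE']}
```

Batch command buffers, mentioned in the same paragraph, could be modeled as a submitted command whose payload is itself a list of commands; that indirection is omitted here for brevity.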
Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic.

[0066] In various embodiments, the 3D pipeline 312 includes fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 414. The graphics core array 414 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics cores 415A-415B of the graphics core array 414 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

[0067] In some embodiments, the graphics core array 414 also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with the general-purpose logic within the processor core 107 of FIG. 1 or the cores 202A-202N of FIG. 2.

[0068] Output data generated by threads executing on the graphics core array 414 can be output to memory in a unified return buffer (URB) 418. The URB 418 can store data for multiple threads. In some embodiments, the URB 418 may be used to send data between different threads executing on the graphics core array 414.
In some embodiments, the URB 418 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 420.

[0069] In some embodiments, the graphics core array 414 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of the GPE 410. In one embodiment, the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

[0070] The graphics core array 414 couples with shared function logic 420 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 420 are hardware logic units that provide specialized supplemental functionality to the graphics core array 414. In various embodiments, the shared function logic 420 includes, but is not limited to, sampler 421, math 422, and inter-thread communication (ITC) 423 logic. Additionally, some embodiments implement one or more caches 425 within the shared function logic 420.

[0071] A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within the graphics core array 414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 420 and shared among the execution resources within the graphics core array 414. The precise set of functions that are shared between the graphics core array 414 and included within the graphics core array 414 varies across embodiments. In some embodiments, specific shared functions within the shared function logic 420 that are used extensively by the graphics core array 414 may be included within shared function logic 416 within the graphics core array 414.
In various embodiments, the shared function logic 416 within the graphics core array 414 can include some or all logic within the shared function logic 420. In one embodiment, all logic elements within the shared function logic 420 may be duplicated within the shared function logic 416 of the graphics core array 414. In one embodiment, the shared function logic 420 is excluded in favor of the shared function logic 416 within the graphics core array 414.

[0072] FIG. 5 is a block diagram of hardware logic of a graphics processor core 500, according to some embodiments described herein. Elements of FIG. 5 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The illustrated graphics processor core 500, in some embodiments, is included within the graphics core array 414 of FIG. 4. The graphics processor core 500, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 500 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 500 can include a fixed function block 530 coupled with multiple sub-cores 501A-501F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.

[0073] In some embodiments, the fixed function block 530 includes a geometry/fixed function pipeline 536 that can be shared by all sub-cores in the graphics processor core 500, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline 536 includes a 3D fixed function pipeline (e.g., 3D pipeline 312 of FIGS.
3 and 4), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers, such as the unified return buffer 418 of FIG. 4.

[0074] In one embodiment, the fixed function block 530 also includes a graphics SoC interface 537, a graphics microcontroller 538, and a media pipeline 539. The graphics SoC interface 537 provides an interface between the graphics core 500 and other processor cores within a system-on-a-chip integrated circuit. The graphics microcontroller 538 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core 500, including thread dispatch, scheduling, and pre-emption. The media pipeline 539 (e.g., the media pipeline 316 of FIGS. 3 and 4) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline 539 implements media operations via requests to compute or sampling logic within the sub-cores 501A-501F.

[0075] In one embodiment, the SoC interface 537 enables the graphics core 500 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 537 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics core 500 and CPUs within the SoC. The SoC interface 537 can also implement power management controls for the graphics core 500 and enable an interface between a clock domain of the graphics core 500 and other clock domains within the SoC.
In one embodiment, the SoC interface 537 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 539 when media operations are to be performed, or to a geometry and fixed function pipeline (e.g., the geometry and fixed function pipeline 536, the geometry and fixed function pipeline 514) when graphics processing operations are to be performed.

[0076] The graphics microcontroller 538 can be configured to perform various scheduling and management tasks for the graphics core 500. In one embodiment, the graphics microcontroller 538 can perform graphics and/or compute workload scheduling on the various graphics parallel engines within the execution unit (EU) arrays 502A-502F, 504A-504F within the sub-cores 501A-501F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics core 500 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting the workload to a command streamer, pre-empting existing workloads running on an engine, monitoring the progress of a workload, and notifying host software when a workload is complete. In one embodiment, the graphics microcontroller 538 can also facilitate low-power or idle states for the graphics core 500, providing the graphics core 500 with the ability to save and restore registers within the graphics core 500 across low-power state transitions, independently of the graphics driver software and/or operating system on the system.

[0077] The graphics core 500 may have more or fewer than the illustrated sub-cores 501A-501F, up to N modular sub-cores.
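The doorbell-style submission flow in paragraph [0076], in which host software submits a workload and rings a doorbell, after which a scheduler picks the next workload, runs it, and reports completion, can be sketched as a small queue model. The `Engine` class, method names, and workload strings below are hypothetical, chosen only to make the control flow concrete; the actual hardware/firmware interface is not described in this level of detail by the text.

```python
# Toy doorbell/scheduler model: ring_doorbell() enqueues a workload,
# schedule_next() picks the next one, "runs" it, and records completion
# so the host can be notified.
from collections import deque

class Engine:
    def __init__(self, name):
        self.name = name
        self.queue = deque()     # workloads submitted via the doorbell
        self.completed = []      # completion log visible to host software

    def ring_doorbell(self, workload):
        self.queue.append(workload)

    def schedule_next(self):
        """Determine which workload to run next and mark it complete."""
        if not self.queue:
            return None
        workload = self.queue.popleft()
        self.completed.append(workload)   # host notification stand-in
        return workload

render = Engine("render")
render.ring_doorbell("frame-0")
render.ring_doorbell("frame-1")
print(render.schedule_next())   # earliest submission runs first
```

Pre-emption and progress monitoring, also listed among the scheduling operations, would require interruptible workloads and are deliberately left out of this sketch.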
For each set of N sub-cores, the graphics core 500 also includes shared function logic 510, shared and/or cache memory 512, a geometry/fixed function pipeline 514, as well as additional fixed function logic 516 to accelerate various graphics and compute processing operations. The shared function logic 510 can include logic units associated with the shared function logic 420 of FIG. 4 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of the N sub-cores within the graphics core 500. The shared and/or cache memory 512 can be a last level cache for the set of N sub-cores 501A-501F within the graphics core 500, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 514 can be included instead of the geometry/fixed function pipeline 536 within the fixed function block 530 and can include the same or similar logic units.

[0078] In one embodiment, the graphics core 500 includes additional fixed function logic 516 that can include various fixed function acceleration logic for use by the graphics core 500. In one embodiment, the additional fixed function logic 516 includes an additional geometry pipeline for use in position-only shading. In position-only shading, two geometry pipelines exist: the full geometry pipeline within the geometry/fixed function pipelines 514, 536, and a cull pipeline, which is an additional geometry pipeline that may be included within the additional fixed function logic 516. In one embodiment, the cull pipeline is a trimmed-down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances.
For example, and in one embodiment, the cull pipeline logic within the additional fixed function logic 516 can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attributes of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline uses the generated critical results to compute visibility information for all the triangles, irrespective of whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) consumes the visibility information to skip the culled triangles, shading only the visible triangles that are finally passed to the rasterization phase.

[0079] In one embodiment, the additional fixed function logic 516 can also include machine learning acceleration logic, such as fixed function matrix multiplication logic, for implementations that include optimizations for machine learning training or inferencing.

[0080] Each graphics sub-core 501A-501F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by a graphics pipeline, media pipeline, or shader programs. The graphics sub-cores 501A-501F include multiple EU arrays 502A-502F, 504A-504F, thread dispatch and inter-thread communication (TD/IC) logic 503A-503F, a 3D (e.g., texture) sampler 505A-505F, a media sampler 506A-506F, a shader processor 507A-507F, and shared local memory (SLM) 508A-508F. The EU arrays 502A-502F, 504A-504F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs.
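The two-pass position-only shading scheme described in paragraph [0078] above, a cull pass that shades only vertex positions to produce per-triangle visibility, followed by a replay pass that skips the culled triangles, can be modeled in a few lines. This is a rough sketch under stated assumptions: the signed-area back-face test stands in for whatever culling criteria the hardware actually applies, and the function names are invented for illustration.

```python
# Cull pass: compute visibility from positions only.
# Replay pass: shade only the triangles the cull pass marked visible.

def signed_area(tri):
    """Twice the signed area of a 2D triangle (positive if CCW)."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def cull_pass(triangles):
    """Fetch and shade position attributes only; record visibility
    for every triangle, culled or not (as paragraph [0078] notes)."""
    return [signed_area(t) > 0 for t in triangles]

def replay_pass(triangles, visibility):
    """Full pipeline: skip culled triangles, shade the survivors."""
    return [t for t, visible in zip(triangles, visibility) if visible]

tris = [
    [(0, 0), (1, 0), (0, 1)],   # counter-clockwise: kept
    [(0, 0), (0, 1), (1, 0)],   # clockwise (back-facing): culled
]
visibility = cull_pass(tris)
shaded = replay_pass(tris, visibility)
```

In hardware the two passes run as separate contexts in parallel; the sequential calls here only show the data dependency (visibility flows from the cull pass to the replay pass), not the concurrency.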
The TD/IC logic 503A-503F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D samplers 505A-505F can read texture or other 3D graphics related data into memory. The 3D samplers can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media samplers 506A-506F can perform similar read operations based on the type and format associated with media data. In one embodiment, each graphics sub-core 501A-501F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 501A-501F can make use of the shared local memory 508A-508F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.

[0081] Execution Units

FIGS. 6A-6B illustrate thread execution logic 600 including an array of processing elements employed in a graphics processor core according to embodiments described herein. Elements of FIGS. 6A-6B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. FIG. 6A illustrates an overview of the thread execution logic 600, which can include a variant of the hardware logic illustrated with each sub-core 501A-501F of FIG. 5. FIG. 6B shows exemplary internal details of an execution unit.

[0082] As illustrated in FIG. 6A, in some embodiments, the thread execution logic 600 includes a shader processor 602, a thread dispatcher 604, an instruction cache 606, a scalable execution unit array including a plurality of execution units 608A-608N, a sampler 610, a data cache 612, and a data port 614.
In one embodiment, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., execution units 608A, 608B, 608C, 608D, through 608N-1 and 608N) based on the computational requirements of a workload. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 600 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 606, data port 614, sampler 610, and execution units 608A-608N. In some embodiments, each execution unit (e.g., 608A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 608A-608N is scalable to include any number of individual execution units.[0083] In some embodiments, the execution units 608A-608N are used primarily to execute shader programs. A shader processor 602 can process the various shader programs and dispatch threads of execution associated with the shader programs via a thread dispatcher 604. In one embodiment, the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the execution units 608A-608N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing.
In some embodiments, thread dispatcher 604 can also process runtime thread spawning requests from the executing shader programs.[0084] In some embodiments, the execution units 608A-608N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the execution units 608A-608N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 608A-608N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.[0085] Each execution unit in execution units 608A-608N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction.
An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 608A-608N support integer and floating-point data types.[0086] The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.[0087] In one embodiment, one or more execution units can be combined into a fused execution unit group 609A-609N having thread control logic (607A-607N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 609A-609N includes at least two execution units. For example, fused execution unit 609A includes a first EU 608A, a second EU 608B, and thread control logic 607A that is common to the first EU 608A and the second EU 608B.
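The packed-data interpretation of a 256-bit vector register described above (QW, DW, W, or B element sizes) can be sketched as follows; the function name and example register value are illustrative, not part of the architecture:

```python
def unpack_register(reg256: int, elem_bits: int):
    """Split a 256-bit register value into packed data elements.

    elem_bits selects the interpretation: 64 (QW), 32 (DW), 16 (W), or 8 (B),
    yielding 4, 8, 16, or 32 elements respectively.
    """
    assert elem_bits in (64, 32, 16, 8)
    count = 256 // elem_bits
    mask = (1 << elem_bits) - 1
    return [(reg256 >> (i * elem_bits)) & mask for i in range(count)]

# A register holding the bytes 0, 1, 2, ..., 31 (little-endian element order).
reg = int.from_bytes(bytes(range(32)), "little")
assert len(unpack_register(reg, 64)) == 4    # four Quad-Words
assert len(unpack_register(reg, 32)) == 8    # eight Double Words
assert unpack_register(reg, 8)[:4] == [0, 1, 2, 3]
```

The same register bits are reinterpreted rather than converted; only the element boundaries change.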
The thread control logic 607A controls threads executed on the fused graphics execution unit 609A, allowing each EU within the fused execution units 609A-609N to execute using a common instruction pointer register.[0088] One or more internal instruction caches (e.g., 606) are included in the thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, a sampler 610 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 610 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.[0089] During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 600 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 602 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor 602 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 602 dispatches threads to an execution unit (e.g., 608A) via thread dispatcher 604. In some embodiments, shader processor 602 uses texture sampling logic in the sampler 610 to access texture data in texture maps stored in memory.
Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.[0090] In some embodiments, the data port 614 provides a memory access mechanism for the thread execution logic 600 to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port 614 includes or couples to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.[0091] As illustrated in FIG. 6B, a graphics execution unit 608 can include an instruction fetch unit 637, a general register file array (GRF) 624, an architectural register file array (ARF) 626, a thread arbiter 622, a send unit 630, a branch unit 632, a set of SIMD floating point units (FPUs) 634, and in one embodiment a set of dedicated integer SIMD ALUs 635. The GRF 624 and ARF 626 include the set of general register files and architecture register files, respectively, associated with each simultaneous hardware thread that may be active in the graphics execution unit 608. In one embodiment, per-thread architectural state is maintained in the ARF 626, while data used during thread execution is stored in the GRF 624. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF 626.[0092] In one embodiment, the graphics execution unit 608 has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across the logic used to execute multiple simultaneous threads.[0093] In one embodiment, the graphics execution unit 608 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 622 of the graphics execution unit 608 can dispatch the instructions to one of the send unit 630, branch unit 632, or SIMD FPU(s) 634 for execution. Each execution thread can access 128 general-purpose registers within the GRF 624, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In one embodiment, each execution unit thread has access to 4 Kbytes within the GRF 624, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment, up to seven threads can execute simultaneously, although the number of threads per execution unit can also vary according to embodiments. In an embodiment in which seven threads may access 4 Kbytes, the GRF 624 can store a total of 28 Kbytes. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.[0094] In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by the message passing send unit 630. In one embodiment, branch instructions are dispatched to a dedicated branch unit 632 to facilitate SIMD divergence and eventual convergence.[0095] In one embodiment, the graphics execution unit 608 includes one or more SIMD floating point units (FPUs) 634 to perform floating-point operations. In one embodiment, the FPU(s) 634 also support integer computation. In one embodiment, the FPU(s) 634 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations.
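The register-file sizing described above follows from simple arithmetic, sketched here using the figures given in the text (128 registers of 32 bytes per thread, up to seven threads):

```python
# Figures from the text: per-thread GRF allocation and the total GRF capacity.
REGISTERS_PER_THREAD = 128
BYTES_PER_REGISTER = 32   # accessible as a SIMD 8-element vector of 32-bit elements
MAX_THREADS = 7

bytes_per_thread = REGISTERS_PER_THREAD * BYTES_PER_REGISTER
total_grf_bytes = bytes_per_thread * MAX_THREADS

assert bytes_per_thread == 4 * 1024          # 4 Kbytes per thread
assert total_grf_bytes == 28 * 1024          # 28 Kbytes total GRF
assert BYTES_PER_REGISTER == 8 * (32 // 8)   # 8 elements of 32 bits each
```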
In one embodiment, at least one of the FPUs provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In some embodiments, a set of 8-bit integer SIMD ALUs 635 are also present, and may be specifically optimized to perform operations associated with machine learning computations.[0096] In one embodiment, arrays of multiple instances of the graphics execution unit 608 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment, the execution unit 608 can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the graphics execution unit 608 is executed on a different channel.[0097] FIG. 7 is a block diagram illustrating a graphics processor instruction format according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a subset of the instructions. In some embodiments, the instruction format 700 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.[0098] In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730.
The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 710.[0099] For each format, instruction opcode 712 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, an instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 710, an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, the exec-size field 716 is not available for use in the 64-bit compact instruction format 730.[0100] Some execution unit instructions have up to three operands, including two source operands, src0 720 and src1 722, and one destination 718. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands.
An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.[0101] In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.[0102] In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.[0103] In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.[0104] In some embodiments, instructions are grouped based on opcode 712 bit-fields to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example.
In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, the move and logic group 742 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 748 performs the arithmetic operations in parallel across data channels. A vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands.[0105] Graphics Pipeline FIG. 8 is a block diagram of another embodiment of a graphics processor 800. Elements of FIG. 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.[0106] In some embodiments, graphics processor 800 includes a geometry pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 800 via a ring interconnect 802.
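The opcode grouping of FIG. 7 described above, where the upper bits of an 8-bit opcode select the group, can be sketched as follows. The dictionary and function are illustrative only; the text distinguishes groups by bits 4-6 (with move and logic sharing five MSBs), which this sketch approximates by testing the full high nibble:

```python
# High-nibble patterns from the grouping above; move and logic share the top
# five bits, so they appear as two adjacent nibble values here.
OPCODE_GROUPS = {
    0b0000: "move/logic 742 (mov)",
    0b0001: "move/logic 742 (logic)",
    0b0010: "flow control 744",
    0b0011: "miscellaneous 746",
    0b0100: "parallel math 748",
    0b0101: "vector math 750",
}

def opcode_group(opcode: int) -> str:
    """Classify an 8-bit opcode by its upper bits, per opcode decode 740."""
    return OPCODE_GROUPS.get((opcode >> 4) & 0xF, "unknown")

assert opcode_group(0x20) == "flow control 744"    # e.g., jump
assert opcode_group(0x40) == "parallel math 748"   # e.g., add
assert opcode_group(0x50) == "vector math 750"     # e.g., dp4
```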
In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from the ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of the geometry pipeline 820 or the media pipeline 830.[0107] In some embodiments, command streamer 803 directs the operation of a vertex fetcher 805 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex-processing instructions by dispatching execution threads to execution units 852A-852B via a thread dispatcher 831.[0108] In some embodiments, execution units 852A-852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 852A-852B have an attached L1 cache 851 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.[0109] In some embodiments, geometry pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of hull shader 811 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline 820.
In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) can be bypassed.[0110] In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to execution units 852A-852B, or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.[0111] Before rasterization, a clipper 829 can process the vertex data. The clipper 829 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into their per-pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, an application can bypass the rasterizer and depth test component 873 and access un-rasterized vertex data via a stream out unit 823.[0112] The graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing among the major components of the processor. In some embodiments, execution units 852A-852B and associated logic units (e.g., L1 cache 851, sampler 854, texture cache 858, etc.) interconnect via a data port 856 to perform memory access and communicate with the render output pipeline components of the processor.
In some embodiments, sampler 854, caches 851, 858 and execution units 852A-852B each have separate memory access paths. In one embodiment, the texture cache 858 can also be configured as a sampler cache.[0113] In some embodiments, render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments. A pixel operations component 877 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing the sharing of data without the use of main system memory.[0114] In some embodiments, graphics processor media pipeline 830 includes a media engine 837 and a video front-end 834. In some embodiments, video front-end 834 receives pipeline commands from the command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, video front-end 834 processes media commands before sending the commands to the media engine 837. In some embodiments, media engine 837 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 850 via thread dispatcher 831.[0115] In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, display engine 840 is external to processor 800 and couples with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric.
In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.[0116] In some embodiments, the geometry pipeline 820 and media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.[0117] Graphics Pipeline Programming FIG. 9A is a block diagram illustrating a graphics processor command format according to some embodiments. FIG. 9B is a block diagram illustrating a graphics processor command sequence 910 according to an embodiment. The solid lined boxes in FIG. 9A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 900 of FIG.
9A includes data fields to identify a client 902, a command operation code (opcode) 904, and data 906 for the command. A sub-opcode 905 and a command size 908 are also included in some commands.[0118] In some embodiments, client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, the sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in data field 906. For some commands an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.[0119] The flow diagram in FIG. 9B illustrates an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence.
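The client/opcode dispatch described for command format 900 can be sketched minimally as follows; the dataclass, field types, and the lambda client unit are assumptions for illustration, not the actual command encoding:

```python
from dataclasses import dataclass

@dataclass
class GfxCommand:
    client: str      # client 902, e.g., "3D", "media", "2D", "render"
    opcode: int      # command operation code 904
    sub_opcode: int  # sub-opcode 905 (0 if absent)
    data: bytes      # command data 906

def route_command(cmd: GfxCommand, client_units: dict):
    """Examine the client field and hand the command to the matching client
    unit, as the command parser described above does."""
    unit = client_units.get(cmd.client)
    if unit is None:
        raise ValueError(f"no client unit for {cmd.client!r}")
    return unit(cmd.opcode, cmd.sub_opcode, cmd.data)

# A stand-in 3D client unit that just echoes the decoded fields.
units = {"3D": lambda op, sub, data: ("3D", op, sub)}
assert route_command(GfxCommand("3D", 0x7B, 0x00, b""), units) == ("3D", 0x7B, 0)
```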
Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in at least partially concurrent fashion.[0120] In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked "dirty" can be flushed to memory. In some embodiments, pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low power state.[0121] In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.[0122] In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, pipeline control command 914 configures the pipeline state for the active pipeline.
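The ordering constraint above (a pipeline flush 912 immediately before a pipeline switch via pipeline select 913, and a select issued only once per context) can be sketched as a small sequence builder. The command names are taken from the text; the class itself is illustrative, not a driver API:

```python
class CommandSequence:
    """Builds a command list that flushes before each pipeline switch and
    skips redundant selects of the already-active pipeline."""

    def __init__(self):
        self.commands = []
        self.active_pipeline = None

    def select_pipeline(self, pipeline: str):
        if pipeline != self.active_pipeline:
            self.commands.append("PIPELINE_FLUSH_912")   # flush pending work first
            self.commands.append(f"PIPELINE_SELECT_913({pipeline})")
            self.active_pipeline = pipeline

    def emit(self, command: str):
        self.commands.append(command)

seq = CommandSequence()
seq.select_pipeline("3D")
seq.emit("PIPELINE_CONTROL_914")
seq.select_pipeline("3D")      # same pipeline: no duplicate select in this context
seq.select_pipeline("media")   # switching pipelines triggers another flush

assert seq.commands[0] == "PIPELINE_FLUSH_912"
assert seq.commands.count("PIPELINE_FLUSH_912") == 2
```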
In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.[0123] In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.[0124] The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with the 3D pipeline state 930 or the media pipeline 924 beginning at the media pipeline state 940.[0125] The commands to configure the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements, if those elements will not be used.[0126] In some embodiments, a 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline.
The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 922 dispatches shader execution threads to graphics processor execution units.[0127] In some embodiments, 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a "go" or "kick" command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.[0128] In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general purpose processing cores.
In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using compute shader programs that are not explicitly related to the rendering of graphics primitives.[0129] In some embodiments, media pipeline 924 is configured in a similar manner as the 3D pipeline 922. A set of commands to configure the media pipeline state 940 are dispatched or placed into a command sequence before the media object commands 942. In some embodiments, commands for the media pipeline state 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, commands for the media pipeline state 940 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.[0130] In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., a register write). Output from media pipeline 924 may then be post processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.[0131] Graphics Software Architecture FIG. 10 illustrates an example graphics software architecture for a data processing system 1000, according to some embodiments.
In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general purpose processor cores 1034. The graphics application 1010 and operating system 1020 each execute in the system memory 1050 of the data processing system.[0132] In some embodiments, 3D graphics application 1010 contains one or more shader programs including shader instructions 1012. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 1014 in a machine language suitable for execution by the general purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.[0133] In some embodiments, operating system 1020 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 1020 can support a graphics API 1022, such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010.
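The API-dependent compilation paths described in the surrounding paragraphs (HLSL through the front-end shader compiler, GLSL handed to the user mode graphics driver, and an intermediate SPIR-style representation for Vulkan) can be summarized in one small dispatch function. This is a sketch only; the stage labels are illustrative strings, not real driver entry points:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical enumeration of the compile paths described above. */
enum gfx_api { API_DIRECT3D, API_OPENGL, API_VULKAN };

/* Returns the first compilation stage a shader takes for a given API:
 *  - Direct3D: HLSL goes through the OS front-end compiler 1024
 *  - OpenGL:   GLSL is handed to the user mode graphics driver 1026
 *  - Vulkan:   shaders arrive as SPIR intermediate code            */
static const char *first_shader_stage(enum gfx_api api)
{
    switch (api) {
    case API_DIRECT3D: return "frontend-compiler";
    case API_OPENGL:   return "user-mode-driver";
    case API_VULKAN:   return "intermediate-spir";
    }
    return "unknown";
}
```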
In some embodiments, shader instructions 1012 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.[0134] In some embodiments, user mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to the user mode graphics driver 1026 for compilation. In some embodiments, user mode graphics driver 1026 uses operating system kernel mode functions 1028 to communicate with a kernel mode graphics driver 1029. In some embodiments, kernel mode graphics driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.[0135] IP Core Implementation One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.[0136] FIG.
11A is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations, according to an embodiment. The IP core development system 1100 may be used to generate modular, reusable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 1110 can be used to design, test, and verify the behavior of the IP core using a simulation model 1112. The simulation model 1112 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 1115 can then be created or synthesized from the simulation model 1112. The RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.[0137] The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility 1165 using non-volatile memory 1140 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted over a wired connection 1150 or wireless connection 1160 (e.g., via the Internet).
The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.[0138] FIG. 11B illustrates a cross-section side view of an integrated circuit package assembly 1170, according to some embodiments described herein. The integrated circuit package assembly 1170 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 1170 includes multiple units of hardware logic 1172, 1174 connected to a substrate 1180. The logic 1172, 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 1172, 1174 can be implemented within a semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the logic 1172, 1174 and the substrate 1180, and can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 1172, 1174. In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. The package substrate 1180 may include other suitable types of substrates in other embodiments. The package assembly 1170 can be connected to other electrical devices via a package interconnect 1183.
The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.[0139] In some embodiments, the units of logic 1172, 1174 are electrically coupled with a bridge 1182 that is configured to route electrical signals between the logic 1172, 1174. The bridge 1182 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1182 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 1172, 1174.[0140] Although two units of logic 1172, 1174 and a bridge 1182 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. Where the logic is included on a single die, the bridge 1182 may be omitted, and the one or more dies may be connected by zero or more bridges. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.[0141] Exemplary System-on-Chip Integrated Circuits FIGS. 12-14 illustrate example integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.[0142] FIG. 12 is a block diagram illustrating an example system-on-chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment.
The example integrated circuit 1200 includes one or more application processors 1205 (e.g., CPUs) and at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. The integrated circuit 1200 includes peripheral or bus logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.[0143] FIGS. 13A-13B are block diagrams illustrating example graphics processors for use within an SoC, according to embodiments described herein. FIG. 13A illustrates an example graphics processor 1310 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 13B illustrates an additional example graphics processor 1340 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. The graphics processor 1310 of FIG. 13A is an example of a low power graphics processor core. The graphics processor 1340 of FIG. 13B is an example of a higher performance graphics processor core. Each of the graphics processors 1310, 1340 may be a variation of the graphics processor of FIG.[0144] As shown in FIG.
13A, graphics processor 1310 includes a vertex processor 1305 and one or more fragment processors 1315A-1315N (e.g., 1315A, 1315B, 1315C, 1315D, through 1315N-1, and 1315N). Graphics processor 1310 can execute different shader programs via separate logic, such that the vertex processor 1305 is optimized to execute operations for vertex shader programs, while the one or more fragment processors 1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 1305 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processors 1315A-1315N use the primitive and vertex data generated by the vertex processor 1305 to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processors 1315A-1315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API.[0145] Graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, caches 1325A-1325B, and circuit interconnects 1330A-1330B. The one or more MMUs 1320A-1320B provide virtual to physical address mapping for the vertex processor 1305 and/or fragment processors 1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more caches 1325A-1325B. In one embodiment, the one or more MMUs 1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processors 1205, image processor 1215, and/or video processor 1220 of FIG. 12, such that each processor 1205-1220 can participate in a shared or unified virtual memory system.
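As a toy model of the address mapping an MMU such as 1320A-1320B provides, the sketch below translates a virtual address through a single-level page table; sharing one such table among processors is the essence of a unified virtual memory system. The page size and table layout are assumptions made purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12                 /* 4 KiB pages (assumed)          */
#define NUM_PAGES  16

/* A toy single-level page table standing in for an MMU. Several
 * processors sharing one table models a unified address space. */
struct toy_mmu {
    uint64_t page_frame[NUM_PAGES];   /* virtual page -> physical frame */
};

/* Split the virtual address into page number and offset, then swap
 * the page number for the mapped physical frame number. */
static uint64_t mmu_translate(const struct toy_mmu *m, uint64_t vaddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    return (m->page_frame[vpn] << PAGE_SHIFT) | offset;
}
```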
One or more circuit interconnects 1330A-1330B enable graphics processor 1310 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments.[0146] As shown in FIG. 13B, graphics processor 1340 includes the one or more MMUs 1320A-1320B, caches 1325A-1325B, and circuit interconnects 1330A-1330B of the graphics processor 1310 of FIG. 13A. Graphics processor 1340 includes one or more shader cores 1355A-1355N (e.g., 1355A, 1355B, 1355C, 1355D, 1355E, 1355F, through 1355N-1, and 1355N), which provide a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor 1340 includes an inter-core task manager 1345, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1355A-1355N, and a tiling unit 1358 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within the scene or to optimize use of internal caches.[0147] FIGS. 14A-14B illustrate additional example graphics processor logic, according to embodiments described herein. FIG. 14A shows a graphics core 1400, which may be included within the graphics processor 1210 of FIG. 12 and may be a unified shader core 1355A-1355N as in FIG. 13B. FIG. 14B shows a highly parallel general purpose graphics processing unit 1430 suitable for deployment on a multi-chip module.[0148] As shown in FIG.
14A, the graphics core 1400 includes a shared instruction cache 1402, a texture unit 1418, and a cache/shared memory 1420 that are common to the execution resources within the graphics core 1400. The graphics core 1400 can include multiple slices 1401A-1401N or a partition for each core, and a graphics processor can include multiple instances of the graphics core 1400. The slices 1401A-1401N can include support logic including a local instruction cache 1404A-1404N, a thread scheduler 1406A-1406N, a thread dispatcher 1408A-1408N, and a set of registers 1410A-1410N. To perform logic operations, the slices 1401A-1401N can include a set of additional function units (AFUs 1412A-1412N), floating-point units (FPUs 1414A-1414N), integer arithmetic logic units (ALUs 1416A-1416N), address computational units (ACUs 1413A-1413N), double-precision floating-point units (DPFPUs 1415A-1415N), and matrix processing units (MPUs 1417A-1417N).[0149] Some of the computational units operate at a specific precision. For example, the FPUs 1414A-1414N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while the DPFPUs 1415A-1415N perform double-precision (64-bit) floating point operations. The ALUs 1416A-1416N can perform variable-precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed-precision operations. The MPUs 1417A-1417N can also be configured for mixed-precision matrix operations, including half-precision floating point and 8-bit integer operations. The MPUs 1417A-1417N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix to matrix multiplication (GEMM).
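For reference, the GEMM operation that the MPUs accelerate is, in scalar form, the triple loop below. A hardware MPU performs the same computation on matrix tiles at mixed precision; this naive single-precision C version only defines the semantics:

```c
#include <assert.h>

/* Naive general matrix-matrix multiply (GEMM), C = A * B, for n x n
 * row-major float matrices. This is the operation MPUs accelerate. */
static void gemm(int n, const float *a, const float *b, float *c)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            float acc = 0.0f;
            for (int k = 0; k < n; k++)
                acc += a[i * n + k] * b[k * n + j];
            c[i * n + j] = acc;
        }
}
```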
The AFUs 1412A-1412N can perform additional logic operations not supported by the floating-point or integer units, including trigonometric operations (e.g., sine, cosine, etc.).[0150] As shown in FIG. 14B, a general-purpose processing unit (GPGPU) 1430 can be configured to enable highly parallel compute operations to be performed by an array of graphics processing units. Additionally, the GPGPU 1430 can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks. The GPGPU 1430 includes a host interface 1432 to enable a connection with a host processor. In one embodiment, the host interface 1432 is a PCI Express interface. However, the host interface can also be a vendor-specific communications interface or communications fabric. The GPGPU 1430 receives commands from the host processor and uses a global scheduler 1434 to distribute execution threads associated with those commands to a set of compute clusters 1436A-1436H. The compute clusters 1436A-1436H share a cache memory 1438. The cache memory 1438 can serve as a higher-level cache for the cache memories within the compute clusters 1436A-1436H.[0151] The GPGPU 1430 includes memory 1434A-1434B coupled with the compute clusters 1436A-1436H via a set of memory controllers 1442A-1442B. In various embodiments, the memory 1434A-1434B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory.[0152] In one embodiment, each of the compute clusters 1436A-1436H includes a set of graphics cores, such as the graphics core 1400 of FIG. 14A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations.
For example, and in one embodiment, at least a subset of the floating point units in each of the compute clusters 1436A-1436H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating point units can be configured to perform 64-bit floating point operations.[0153] Multiple instances of the GPGPU 1430 can be configured to operate as a compute cluster. The communication mechanism used by the compute clusters for synchronization and data exchange varies across embodiments. In one embodiment, the multiple instances of the GPGPU 1430 communicate over the host interface 1432. In one embodiment, the GPGPU 1430 includes an I/O hub 1439 that couples the GPGPU 1430 with a GPU link 1440 that enables a direct connection to other instances of the GPGPU. In one embodiment, the GPU link 1440 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 1430. In one embodiment, the GPU link 1440 couples with a high speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In one embodiment, the multiple instances of the GPGPU 1430 are located in separate data processing systems and communicate via a network device that is accessible via the host interface 1432. In one embodiment, the GPU link 1440 can be configured to enable a connection to a host processor in addition to, or as an alternative to, the host interface 1432.[0154] While the illustrated configuration of the GPGPU 1430 can be configured to train neural networks, one embodiment provides an alternate configuration of the GPGPU 1430 that can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU 1430 includes fewer of the compute clusters 1436A-1436H relative to the training configuration.
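The training-versus-inferencing sizing described above can be captured in a trivial configuration rule. The halving ratio below is purely illustrative; the text only states that an inferencing configuration includes fewer compute clusters than a training configuration:

```c
#include <assert.h>

/* Hypothetical sizing rule for a GPGPU build: an inferencing part
 * keeps fewer compute clusters than a training part. The exact
 * ratio (half) is an assumption for illustration. */
enum gpgpu_mode { MODE_TRAINING, MODE_INFERENCING };

static int compute_cluster_count(enum gpgpu_mode mode, int max_clusters)
{
    return mode == MODE_TRAINING ? max_clusters : max_clusters / 2;
}
```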
Additionally, the memory technology associated with the memory 1434A-1434B may differ between inferencing and training configurations, with higher bandwidth memory technologies devoted to training configurations. In one embodiment, the inferencing configuration of the GPGPU 1430 can support inferencing-specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.[0155] Workload Scheduling and Distribution on Distributed Graphics Devices Embodiments described herein provide for graphics, media, and compute devices having a tiled architecture composed of multiple tiles of smaller graphics devices. Such devices can be scaled to include a larger or smaller number of tiles depending on the power and/or performance targets for the device. Such scaled devices can make use of a tailored work distribution infrastructure to enable efficient distribution of workloads across the multiple tiles. The work distribution infrastructure described herein enables workload dispatch to be scaled across a variable number of tiles. Work items can be submitted to any one or more of the tiles, and workloads are able to span multiple tiles. Additionally, upon completion of a work item, graphics, media, and/or compute engines within the device can easily acquire new work items for execution with minimal latency.[0156] FIG. 15 is a block diagram of a data processing system 1500, according to an embodiment. The data processing system 1500 is a heterogeneous processing system having a processor 1502, unified memory 1510, and a GPGPU 1520 that includes machine learning acceleration logic. The processor 1502 and the GPGPU 1520 can be any of the processors and GPGPU/parallel processors as described herein. The processor 1502 can execute instructions for a compiler 1515 stored in system memory 1512.
The compiler 1515 executes on the processor 1502 to compile source code 1514A into compiled code 1514B. The compiled code 1514B can include instructions that may be executed by the processor 1502 and/or instructions that may be executed by the GPGPU 1520. During compilation, the compiler 1515 can perform operations to insert metadata, including hints as to the level of data parallelism present in the compiled code 1514B and/or hints regarding the data locality associated with threads to be dispatched based on the compiled code 1514B. The compiler 1515 can include the information necessary to perform such operations, or the operations can be performed with the assistance of a runtime library 1516. The runtime library 1516 can also assist the compiler 1515 in the compilation of the source code 1514A, and can include instructions that are linked at runtime with the compiled code 1514B to facilitate execution of the compiled instructions on the GPGPU 1520.[0157] The unified memory 1510 represents a unified address space that can be accessed by the processor 1502 and the GPGPU 1520. In addition to the system memory 1512, the unified memory can include GPGPU memory 1518. The GPGPU memory 1518 is memory within an address space of the GPGPU 1520 and can include some or all of the system memory 1512. In one embodiment, the GPGPU memory 1518 can also include at least a portion of any memory dedicated for use exclusively by the GPGPU 1520. In one embodiment, compiled code 1514B stored in system memory 1512 can be mapped into GPGPU memory 1518 for access by the GPGPU 1520.[0158] The GPGPU 1520 includes multiple engine block tiles 1524A-1524N, which can include one or more of the variety of compute units or execution elements described herein. In one embodiment, the GPGPU 1520 additionally includes a matrix accelerator 1523, which can include one or more special function compute units that are designed to accelerate a subset of matrix operations (e.g., dot product, etc.).
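The parallelism and locality hints described above, and one way a runtime such as the runtime library 1516 might act on them when choosing a tile, can be sketched as follows. The struct fields and the placement policy are assumptions made for illustration, not an actual metadata format:

```c
#include <assert.h>

/* Hypothetical metadata record a compiler could attach to compiled
 * code: a parallelism estimate and a locality hint. Field names
 * are illustrative only. */
struct code_metadata {
    int data_parallel_width;  /* e.g. SIMD lanes usable by the kernel   */
    int locality_tile;        /* preferred tile for locality, -1 = any  */
};

/* A simple placement decision the runtime might make: honor the
 * locality hint when present, otherwise fall back to round-robin. */
static int pick_tile(const struct code_metadata *md, int next_rr, int num_tiles)
{
    if (md->locality_tile >= 0 && md->locality_tile < num_tiles)
        return md->locality_tile;
    return next_rr % num_tiles;
}
```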
The GPGPU 1520 can also include a set of resources that can be shared by the engine block tiles 1524A-1524N, including but not limited to a set of global registers 1525, a power and performance module 1526, and a shared cache 1527. In one embodiment, the global registers 1525 include directly and indirectly accessible registers. The power and performance module 1526 can be configured to adjust power delivery and clock frequencies for the engine block tiles 1524A-1524N to manage the power consumption of components within the engine block tiles 1524A-1524N. For example, in one embodiment, power delivered to components within the engine block tiles 1524A-1524N can be dynamically switched (e.g., gated) based on the power or performance targets of the data processing system 1500. In various embodiments, the shared cache 1527 can include an instruction cache and/or a lower-level data cache.[0159] In one embodiment, each engine block tile 1524A-1524N includes a set of graphics processing engines that can operate independently or in coordination to execute multiple workloads or a single distributed workload. Each tile contains a variety of engines that perform different activities. The various engines can process commands provided in batch buffers, which are memory buffers containing batches of commands, and can execute those commands using execution units within the engine block tiles 1524A-1524N. Software executing on the host processor can submit work items to a global scheduler 1522, which can distribute the various work items to one or more of the engine block tiles 1524A-1524N. Alternatively, software can submit work items directly to a tile, and hardware scheduling within the tile can schedule the workload to the appropriate engines within the tile.[0001] FIGS. 16A-16C illustrate a graphics processing system 1600 that implements multi-tile work scheduling, according to an embodiment. FIG. 16A shows an overview of the graphics processing system 1600, according to an embodiment. FIG.
16B shows an example of a system graphics interface 1602. FIG. 16C shows an example of an engine block tile 1605.[0002] As shown in FIG. 16A, the graphics processing system 1600 includes an application and/or graphics driver (app/driver 1601) that can route workloads 1604A-1604D to one or more engine block tiles 1605A-1605D, which can be similar to, or a variant of, the engine block tiles 1524A-1524N of FIG. 15. The workloads 1604A-1604D can be portions of the same workload and/or separate workloads. The workloads 1604A-1604D can execute in concert with one another or independently, depending on the relationship (or lack thereof) between the workloads. The application of the app/driver 1601 can be any application capable of or configured to submit a workload to the graphics processing system. The driver can be a user mode driver that enables applications to submit workloads via an associated kernel mode driver.[0003] In some embodiments and/or implementations, a global scheduler (e.g., global scheduler 1522 of FIG. 15) dispatches workloads to the engine block tiles. In other embodiments, workloads can be submitted directly to the engine block tiles 1605A-1605D via doorbells 1603A-1603D in system graphics interfaces 1602A-1602D, each associated with a respective engine block tile.[0004] The system graphics interface 1602A-1602D associated with each engine block tile 1605A-1605D provides an interface between the host system and the graphics system in which the engine block tiles reside. The system graphics interfaces 1602A-1602D contain graphics device logic that presents the graphics processing system as a device to the host system, including PCIe configuration space data that enables communication with the graphics device over a PCIe bus. In other embodiments, various other types of host or device interface buses can be used, such as processor interface buses (e.g., processor ring or mesh buses, CoreLink or AMBA buses, etc.), or other types of mesh or fabric interfaces, such as NVLink. In one embodiment, interrupt generation is handled via the system graphics interfaces 1602A-1602D. For the purpose of interrupt generation, one of the system graphics interfaces 1602A-1602D can serve as a master interface for the graphics system. In one embodiment, the system graphics interfaces 1602A-1602D can perform host address space to local address space translation for the graphics processing system.[0005] In one embodiment, each of the system graphics interfaces 1602A-1602D includes a doorbell 1603A-1603D through which the workloads 1604A-1604D can be submitted. In one embodiment, each doorbell 1603A-1603D is a doorbell block that supports 256 doorbells, although a doorbell block can include any number of doorbells. Doorbells can be assigned to applications, and the affinity of an application to a tile can be managed in the graphics driver. An application can be assigned one or more doorbells, and the doorbell assignments can be cross-tile assignments. To submit a workload, the application or driver software can ring the appropriate doorbell based on the type of work the application is submitting. Because the software rings the doorbell associated with a tile, scheduling can be managed locally within the tile. For example, the system graphics interface associated with a tile can use a local scheduler within the tile to schedule the requested workload to a local engine.[0006] FIG. 16B illustrates a system graphics interface 1602, according to an embodiment. The system graphics interface 1602 includes an interrupt unit 1612, a device interface 1614, a doorbell 1603, a system/device address translator 1616, and a batch buffer submitter 1618. The interrupt unit 1612 can be configured as a remote or master interrupt unit, and can send a message signaled interrupt (MSI) that is generated according to a value stored in an interrupt register of the interrupt unit 1612.
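The doorbell submission path described above (software rings a doorbell in a 256-entry doorbell block, and the tile's local scheduler picks up the pending request) can be sketched as follows. The data layout is hypothetical; real doorbells are memory-mapped device registers:

```c
#include <assert.h>
#include <stdint.h>

#define DOORBELLS_PER_TILE 256   /* block of 256 doorbells, as above */

/* Toy doorbell block: ringing a doorbell records a pending work
 * request; the tile's local scheduler consumes it. The layout is
 * an assumption -- real doorbells are device registers. */
struct doorbell_block {
    uint8_t pending[DOORBELLS_PER_TILE];
};

/* Submission side: mark a doorbell as rung. */
static int ring_doorbell(struct doorbell_block *db, int idx)
{
    if (idx < 0 || idx >= DOORBELLS_PER_TILE)
        return -1;                /* invalid doorbell index */
    db->pending[idx] = 1;
    return 0;
}

/* Local scheduler side: find and clear one pending doorbell. */
static int next_pending(struct doorbell_block *db)
{
    for (int i = 0; i < DOORBELLS_PER_TILE; i++)
        if (db->pending[i]) { db->pending[i] = 0; return i; }
    return -1;                    /* nothing pending */
}
```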
Device interface 1614 includes hardware that allows the graphics system, either as a whole or as individual tiles, to be presented as a device on an interface bus, such as (but not limited to) a PCIe bus. Doorbell 1603 is one of a plurality of doorbell interfaces through which a workload 1604 can be submitted, where workload 1604 may be any of workloads 1604A-1604D of FIG. 16A. Doorbell 1603 may be a doorbell structure or register that can be used to notify the associated engine block tile that a work request is available for processing. In one embodiment, the work request is provided in the form of a buffer of batched commands (e.g., a batch buffer). Batch buffers can be processed via batch buffer submitter 1618. In one embodiment, batch buffer submitter 1618 may use system/device address translator 1616 to translate system addresses into device-local addresses of the engine block tiles. Batch buffer commands can then be submitted to the associated engine block tile.

[0007] FIG. 16C shows an engine block tile 1605 that can receive a workload from an application or driver via a system graphics interface. Engine block tile 1605 includes multiple engines capable of processing commands received from a host system. The engines can perform various operations, and the instructions underlying those commands can be executed by one or more blocks of execution units 629A-629N. Engine block tile 1605 also includes a scheduler 621, which is a local scheduler for engine block tile 1605 and which schedules commands for processing by the various engines and/or by execution units 629A-629N.
In one embodiment, the engines of the engine block tile include a render command streamer (RCS 623), a position-only command streamer (POCS 624), a compute command streamer (CCS 625), a copy engine 622, and one or more media engine slices 626, which include one or more video command streamers (VCSn 627) for performing video decoding operations and one or more video encoding command streamers (VECSn 628) for performing video encoding operations. An input batch buffer may contain commands that are processed by any one or more of the engines shown, as well as by other engines not shown.

[0008] Embodiments described herein allow an application or graphics driver to explicitly submit workloads that span multiple tiles. Furthermore, load balancing between tiles can be performed after the workload is submitted. In one embodiment, to enable cross-tile workloads, the same batch buffer, containing a superset of the work items to be performed, is submitted to each tile to be included within a tile work group. All commands are thus submitted to all tiles, even though a given tile is not intended to execute all of the submitted workload; instead, each tile executes a subset of the submitted workload. In one embodiment, a given tile executes a particular subset of the workload based on an identifier provided to the hardware context associated with the tile.

[0009] FIG. 17 illustrates a tile work distribution and scheduling system 1700, according to embodiments described herein. Tile work distribution and scheduling system 1700 allows a workload to be distributed across multiple GPUs 1730A-1730D, where each of the multiple GPUs has an engine block tile as with engine block tiles 1605A-1605D of FIG. 16A.
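The cross-tile submission model of paragraph [0008], in which every tile receives the same superset batch buffer but executes only the subset matching its identifier, can be sketched as follows. The tuple encoding of commands and the identifier values are illustrative assumptions, not the hardware format.

```python
# Hypothetical model: one shared batch buffer (a superset of work items)
# is submitted to every tile in the work group; each tile filters it by
# the identifier provided in its hardware context.

batch_buffer = [
    ("X", "draw_0"), ("X", "draw_1"),  # commands for virtual engine X
    ("Y", "dispatch_0"),               # commands for virtual engine Y
    ("Z", "copy_0"),                   # commands for virtual engine Z
]

def execute_subset(buffer, tile_identifier):
    # Every tile sees the full buffer, but executes only its own subset.
    return [cmd for ident, cmd in buffer if ident == tile_identifier]

# The same buffer is handed to all three tiles in the work group.
executed = {tid: execute_subset(batch_buffer, tid) for tid in ("X", "Y", "Z")}
```

Note that the buffer itself is identical for all tiles; only the per-tile identifier changes what each tile actually runs, which is what makes later re-balancing possible without rebuilding the buffer.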
GPUs 1730A-1730D may be enumerated as one or more devices based on the configuration of their respective system graphics interfaces.

[0010] To enable execution of GPU-spanning workloads, individual hardware contexts 1720A-1720D may be created, each associated with a respective GPU 1730A-1730D. In one embodiment, the hardware contexts and a batch buffer 1708 may be created by a user mode driver running on a host system processor. Batch buffer 1708 contains commands that define the execution state in which the commands are to be executed, as well as GPGPU walker commands that cause the GPUs to dispatch threads of execution to execute the workload. Each hardware context 1720A-1720D defines further execution state for its respective GPU 1730A-1730D. In one embodiment, the execution state defined within a hardware context may specify a tile group offset (TG_OFFSET) and a tile group step (TG_STEP) for each GPU. The tile group offset specifies the starting position within batch buffer 1708 for the commands executed by each GPU. The tile group step can specify the number of partitions for the workload. A batch buffer start command is inserted into the command ring buffer 1710A-1710D associated with each GPU 1730A-1730D. GPUs 1730A-1730D execute their associated commands and enter wait states 1702A-1702D upon completion of those commands. In one embodiment, the wait states 1702A-1702D are entered based on an explicit semaphore wait or another synchronization/wait command inserted at the end of the commands to each GPU. In other embodiments, an automatic synchronization system is used to synchronize GPUs 1730A-1730D.

[0011] FIG. 18 illustrates a system 1800 that enables load balancing in a multi-tile graphics processing system according to an embodiment. The same batch buffer 1810 may contain commands 1812A-1812C that are provided to multiple virtual engines within a set of physical engine block tiles 1822A-1822C.
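One way to picture how the per-GPU execution state of paragraph [0010] could partition a shared batch buffer is sketched below. The strided interpretation of TG_OFFSET/TG_STEP is an assumption for illustration only; the patent specifies that the offset gives a starting position and the step gives the number of partitions, without fixing the exact partitioning scheme.

```python
# Illustrative sketch (assumed semantics): TG_OFFSET gives each GPU its
# starting position in the shared batch buffer, and TG_STEP (the number
# of partitions) is used here as a stride across the buffer.

def partition_commands(batch_buffer, tg_offset, tg_step):
    # Start at this GPU's offset, then stride by the partition count.
    return batch_buffer[tg_offset::tg_step]

batch_buffer = [f"cmd{i}" for i in range(8)]
num_gpus = 4  # TG_STEP: the workload is split into four partitions

per_gpu = {gpu: partition_commands(batch_buffer, gpu, num_gpus)
           for gpu in range(num_gpus)}
```

Under this reading, every GPU consumes the same buffer but lands on a disjoint slice of it, so the union of all per-GPU slices covers the whole workload exactly once.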
The physical engines within engine block tiles 1822A-1822C may be virtualized, so that commands reference virtual engines that may physically reside within any physical tile. In one embodiment, the application/driver 1601 can divide the workload among N sets of virtual engines by generating N local render context addresses (LRCAs). Each LRCA 1802A-1802C includes a workload partition identifier (WPARID 1801) that identifies the workload partition with which the LRCA is associated. Each LRCA can be submitted to a different engine within physical engine block tiles 1822A-1822C to enable parallel execution of the workload. During execution of the workload, an engine uses the WPARID provided in the LRCA to identify the portion of the workload to be executed.

[0012] For example, LRCA 1802A associated with physical tile 1822A may be assigned a WPARID of X. LRCA 1802A may reference a portion of batch buffer 1810 containing commands to be executed by virtual engine X, which may be associated with physical tile 0 (physical engine tile 1822A). Physical tile 0 may then execute virtual engine X commands 1812A as workload 1814A. LRCA 1802B may reference a portion of batch buffer 1810 containing commands to be executed by virtual engine Y, which may be associated with physical tile 1 (physical engine tile 1822B). Physical tile 1 may then execute virtual engine Y commands 1812B as workload 1814B. LRCA 1802C may reference a portion of batch buffer 1810 containing commands to be executed by virtual engine Z, which may be associated with physical tile 2 (physical engine tile 1822C). Physical tile 2 may then execute virtual engine Z commands 1812C as workload 1814C.

[0013] In one embodiment, WPARID 1801 is a whitelisted parameter that the user mode driver can modify to enable dynamic load balancing. Changes to a WPARID can be performed while the workload is running. The changes are performed atomically to prevent different concurrent engines from claiming ownership of the same WPARID.
In one embodiment, the WPARID is saved and restored as part of saving and restoring the hardware context.

[0014] Dynamically obtaining the WPARID via the LRCA allows a context to be transparently migrated to any physical tile. During execution, the same subset of commands intended for a given WPARID will be executed regardless of the physical tile on which the workload is executed. In one embodiment, multiple LRCAs can be submitted to the same engine to serialize command execution. For example, if virtual engine X commands 1812A and virtual engine Y commands 1812B are both submitted to physical engine tile 1822B, both sets of commands are executed in a serialized manner.

[0015] In one embodiment, execution of virtual engine commands is handled within physical engine tiles 1822A-1822C using predicated execution logic. Instructions and/or commands to an engine can be conditionally executed when the portion of the batch buffer provided to the hardware matches a WPARID provided by an LRCA associated with the set of physical tiles and/or virtual engines. In one embodiment, when conditional execution is enabled, a tile automatically bypasses execution of commands that do not match the WPARID provided by the LRCA, so a separate batch buffer start position may not be needed. In one embodiment, when dynamic load balancing is performed by migrating a WPARID between tiles, the receiving tile resets its conditional execution unit and rescans the batch buffer for commands to execute for the newly assigned WPARID. If a WPARID is deleted, a tile stops executing commands associated with the deleted WPARID.

[0016] FIG. 19 depicts a flow diagram of a method 1900 for multi-tile workload scheduling, according to one embodiment. Method 1900 can be performed by an application or driver of a host processing system that includes a multi-tile, multi-core, or multi-GPU graphics processing system as described herein.
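The dynamic load balancing of paragraphs [0013]-[0015] hinges on atomically reassigning a WPARID so that two engines can never claim the same partition at once. A minimal sketch, using a lock as a stand-in for the hardware's atomic update (the `WparidTable` class and its methods are hypothetical names, not the real driver interface):

```python
import threading

# Hypothetical sketch: a table mapping each WPARID to its owning tile,
# updated atomically so concurrent engines cannot both claim a WPARID.

class WparidTable:
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = {}  # WPARID -> owning tile id

    def assign(self, wparid, tile_id):
        with self._lock:                  # atomic update: no two engines
            self._owner[wparid] = tile_id # can claim the same WPARID

    def owner(self, wparid):
        with self._lock:
            return self._owner.get(wparid)


table = WparidTable()
table.assign("X", 0)  # tile 0 initially executes partition X
table.assign("X", 3)  # migrate partition X to tile 3 mid-workload
```

After the second `assign`, tile 3 would rescan the batch buffer for commands matching WPARID "X" while tile 0 stops executing them, mirroring the rescan-on-migration behavior described above.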
Similar techniques can be applied to multi-GPU or multi-core GPUs, where the multiple GPUs or multiple GPU cores are treated as tiles of a graphics processing engine, as in the multi-tile graphics processor architecture described herein. For example, a multi-tile GPU can be configured to present itself to the host processing system as a single graphics processor or as one or more graphics processor devices, depending on the configuration of the system graphics interface.

[0017] As shown at block 1902, in one embodiment, method 1900 includes generating a set of commands for a workload to be executed by a graphics processor having multiple graphics processing engine tiles. At block 1904, the operations divide the set of commands into a first partition and a second partition. At block 1906, the operations may assign a first partition identifier to the first partition and a second partition identifier to the second partition. At block 1908, the operations additionally associate the first partition identifier with a first hardware context associated with a first graphics processing engine tile of the multiple graphics processing engine tiles, and associate the second partition identifier with a second hardware context associated with a second graphics processing engine tile of the multiple graphics processing engine tiles. At block 1910, the operations submit the first partition and the second partition to the first graphics processing engine tile and the second graphics processing engine tile, respectively. At block 1912, the operations additionally execute the first partition via the first graphics processing engine tile and execute the second partition via the second graphics processing engine tile.

[0018] Partitioning of a group of commands can be performed as shown in FIGS. 17 and 18.
Commands for each partition can be loaded into a batch buffer, and the same batch buffer can be submitted to each tile of the graphics processing engine. Each tile's hardware context can set a batch buffer start position that corresponds to the start of the commands processed by that tile. In one embodiment, a partition identifier may also be associated with each partition of commands. Tiles can be set to conditionally execute the commands in the batch buffer belonging to their associated partition while bypassing the commands of other partitions.

[0019] FIG. 20 depicts a flow diagram of a method 2000 of executing a multi-tile workload, according to one embodiment. Method 2000 can be performed by a multi-tile, multi-core, or multi-GPU graphics processing system as described herein. Similar techniques can be applied to multi-GPU or multi-core GPUs, where the multiple GPUs or multiple GPU cores are treated as tiles of a graphics processing engine, as in the multi-tile graphics processor architecture described above.

[0020] As shown at block 2002, method 2000 includes a graphics processor receiving a set of commands. The set of commands represents a workload having a first partition and a second partition, and the graphics processor includes multiple graphics processing engine tiles. At block 2004, the graphics processor may associate a first tile of the graphics processing engine with a first hardware context and associate a second tile of the graphics processing engine with a second hardware context. At block 2006, the graphics processor then reads, by the first tile of the graphics processing engine, a first partition identifier from the first hardware context, and reads, by the second tile of the graphics processing engine, a second partition identifier from the second hardware context.
The first partition identifier is associated with the first partition, and the second partition identifier is associated with the second partition. At block 2008, the graphics processor can configure the first tile of the graphics processing engine and the second tile of the graphics processing engine to conditionally execute commands having the partition identifier associated with the respective tile. At block 2010, the graphics processor then executes the commands of the first partition on the first tile of the graphics processing engine while bypassing the commands of the second partition, and executes the commands of the second partition on the second tile of the graphics processing engine while bypassing the commands of the first partition.

[0021] FIG. 21 depicts a flow diagram of a method 2100 for migrating workloads between tiles, according to one embodiment. Method 2100 can be performed by a multi-tile, multi-core, or multi-GPU graphics processing system as described herein. Similar techniques can be applied to multi-GPU or multi-core GPUs, where the multiple GPUs or multiple GPU cores are treated as tiles of a graphics processing engine, as in the multi-tile graphics processor architecture described above.

[0022] As shown at block 2102, method 2100 includes a graphics processor receiving a set of commands. The set of commands represents a workload having a first partition and a second partition. At block 2104, the graphics processor may configure a first tile of the graphics processing engine to execute the first partition. At block 2106, the graphics processor may configure a second tile of the graphics processing engine to execute the second partition.
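The conditional-execution steps of method 2000 (each tile reads a partition identifier from its hardware context, then executes matching commands and bypasses the rest) can be sketched as a small function. The dictionary-based hardware context and tuple-encoded commands are illustrative assumptions.

```python
# Hypothetical sketch of method 2000's per-tile execution: read the
# partition identifier from the tile's hardware context, then execute
# matching commands while bypassing commands of other partitions.

def run_tile(commands, hardware_context):
    partition_id = hardware_context["partition_id"]  # cf. block 2006
    executed, bypassed = [], []
    for cmd_partition, cmd in commands:              # cf. blocks 2008/2010
        if cmd_partition == partition_id:
            executed.append(cmd)
        else:
            bypassed.append(cmd)
    return executed, bypassed


commands = [(1, "a"), (2, "b"), (1, "c")]
tile1_exec, tile1_skip = run_tile(commands, {"partition_id": 1})
tile2_exec, tile2_skip = run_tile(commands, {"partition_id": 2})
```

Both tiles walk the same command list; the disjoint `executed` sets show how the full workload is covered without either tile needing a private buffer.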
At block 2108, before execution of the first partition is complete, the graphics processor can receive a trigger to transfer execution of the first partition from the first tile of the graphics processing engine to a third tile of the graphics processing engine. The graphics processor may then migrate the first partition to the third tile and execute at least a portion of the first partition with the third tile of the graphics processing engine, as shown at block 2110. In one embodiment, the trigger that causes execution of the first partition to move to the third tile includes an atomic update of the partition identifier (WPARID) that reassigns execution of the first partition from the first tile to the third tile. The third tile will then conditionally execute the batch buffer commands associated with the WPARID of the first partition, while the first tile no longer executes the batch buffer commands associated with that WPARID.

[0033] FIG. 22 is a block diagram of a computing device 2200 that includes a graphics processor 2204, according to one embodiment. Computing device 2200 may be, or may be included in, a computing device described herein, such as the data processing system 100 of FIG. 1. Computing device 2200 can also be, or be included in, a communication device such as a set-top box (e.g., an Internet-based cable television set-top box), a Global Positioning System (GPS)-based device, and the like. Computing device 2200 may also be, or be included in, a mobile computing device such as a cellular phone, smartphone, personal digital assistant (PDA), tablet computer, laptop computer, e-reader, smart TV, television platform, wearable device (e.g., eyeglasses, watches, bracelets, smart cards, jewelry, clothing, etc.), or media player.
For example, in one embodiment, computing device 2200 is a mobile computing device that uses an integrated circuit ("IC"), such as a system on a chip ("SoC" or "SOC"), to integrate various hardware and/or software components of computing device 2200 on a single chip.

[0135] Computing device 2200 includes a graphics processor 2204. Graphics processor 2204 represents any graphics processor described herein. The graphics processor includes one or more graphics engines, graphics processor cores, and other graphics execution resources described herein. Such graphics execution resources can be provided in forms including, but not limited to, execution units, shader engines, fragment processors, vertex processors, streaming multiprocessors, graphics processor clusters, or any collection of computing resources suitable for processing graphics and image resources.

[0136] In one embodiment, graphics processor 2204 includes a cache 2214, which may be a single cache or may be divided into multiple segments of cache memory, including any number of L1, L2, L3, or L4 caches, render caches, depth caches, sampler caches, and/or shader unit caches. Graphics processor 2204 also includes multiple GPGPU engine tiles 2240, each tile including multiple graphics processor engines and execution units, such as shown in FIG. 16C. GPGPU engine tiles 2240 may be similar or identical in architecture. Each GPGPU engine tile may include a virtualized set of graphics processor engines. The virtualized engines allow command streams to be constructed that are agnostic to the physical GPGPU tile that executes the commands. Additionally, commands can be dynamically migrated between virtual engines within or across physical tiles. In various embodiments, each GPGPU engine tile 2240 may include a barrier/synchronization unit 2242 and a conditional execution unit 2244. Barrier/synchronization unit 2242 may be used to synchronize GPGPU engine tiles 2240 upon completion of a workload partition.
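The tile-synchronization role of barrier/synchronization unit 2242 can be illustrated with a software analogue. This is a loose model only, using Python's `threading.Barrier` as a stand-in for the hardware semaphore/barrier mechanism; the worker function and thread-per-tile mapping are illustrative assumptions.

```python
import threading

# Loose software analogue: each tile finishes its workload partition and
# then rendezvouses at a barrier, mirroring the wait states entered upon
# completion of a partition (cf. wait states 1702A-1702D).

NUM_TILES = 4
barrier = threading.Barrier(NUM_TILES)
completed = []
lock = threading.Lock()

def tile_worker(tile_id):
    # ... execute this tile's workload partition here ...
    with lock:
        completed.append(tile_id)  # record completion of this partition
    barrier.wait()                 # block until all tiles have finished

threads = [threading.Thread(target=tile_worker, args=(i,))
           for i in range(NUM_TILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No thread passes the barrier until every partition is complete, which is the property the hardware unit provides before the next phase of work (or a context switch) proceeds.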
Conditional execution unit 2244 can be used to enable conditional and/or predicated execution of batch buffer commands based on a match between a workload partition identifier associated with a command and the workload partition identifier associated with a GPGPU engine tile.

[0137] As illustrated, in one embodiment, in addition to graphics processor 2204, computing device 2200 may further include any number and type of hardware components and/or software components, including, but not limited to, an application processor 2206, memory 2208, and input/output (I/O) sources 2210. Application processor 2206 can interact with a hardware graphics pipeline and share graphics pipeline functionality, as described with reference to FIG. 3. Processed data is stored in buffers within the hardware graphics pipeline, and state information is stored in memory 2208. The resulting data can be transferred to a display controller for output via a display device, such as display device 323 of FIG. 3. The display device may be of various types, such as a cathode ray tube (CRT), thin film transistor (TFT), liquid crystal display (LCD), or organic light emitting diode (OLED) array, and may be configured to display information to a user through a graphical user interface.

[0138] Application processor 2206 may include one or more processors, such as processor 102 of FIG. 1, and may be the central processing unit (CPU) used, at least in part, to execute an operating system (OS) 2202 for computing device 2200. OS 2202 can serve as an interface between the hardware and/or physical resources of computing device 2200 and one or more users. OS 2202 may include graphics driver logic 2222, such as a user mode driver (UMD 2223) and a kernel mode driver (KMD 2224), which may be variants of the user mode graphics driver 1026 and/or kernel mode graphics driver 1029 of FIG. 10. UMD 2223 can interface with applications running on the computing device and allow those applications to submit workloads across the multiple GPGPU engine tiles 2240 of graphics processor 2204.
[0139] In some embodiments, graphics processor 2204 may exist as part of application processor 2206 (such as part of a physical CPU package), in which case at least a portion of memory 2208 may be shared by application processor 2206 and graphics processor 2204, although at least a portion of memory 2208 may be exclusive to graphics processor 2204, or graphics processor 2204 may have a separate store of memory. Memory 2208 may include a pre-allocated region of a buffer (e.g., a frame buffer); however, it should be understood by those skilled in the art that embodiments are not so limited, and any memory accessible to the underlying graphics pipeline may be used. Memory 2208 may include various forms of random access memory (RAM) (e.g., SDRAM, SRAM, etc.) and can serve applications that make use of graphics processor 2204 to render a desktop or 3D graphics scene. A memory controller may be used to access data in memory 2208 and forward the data to graphics processor 2204 for graphics pipeline processing. Memory 2208 may be made available to other components within computing device 2200. For example, any data (e.g., input graphics data) received from the various I/O sources 2210 of computing device 2200 can be temporarily queued into memory 2208 prior to its use by one or more processors (e.g., application processor 2206). Similarly, data that a software program determines should be sent from computing device 2200 to an external entity through one of the computing system interfaces, or stored in an internal storage element, is often temporarily queued in memory 2208 before being transmitted or stored.

[0140] The I/O sources may include devices such as touch screens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, ports, connectors, network devices, and the like.
Additionally, I/O sources 2210 may include one or more I/O devices implemented to transfer data to and/or from computing device 2200 (e.g., a networking adapter) or for non-volatile storage within computing device 2200. User input devices, including alphanumeric and other keys, may be used to communicate information and command selections to graphics processor 2204. Another type of user input device is a cursor control device, such as a mouse, trackball, touch screen, touch pad, or cursor direction keys, used to communicate direction information and command selections to the GPU and to control cursor movement on the display. The camera and microphone arrays of computing device 2200 may be employed to observe gestures, record audio and video, and receive and transmit visual and audio commands.

[0141] I/O sources 2210 configured as network interfaces can provide access to a network, such as a LAN, a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth®, a cloud network, a cellular or mobile network (e.g., third generation (3G), fourth generation (4G), etc.), an intranet, the Internet, and the like. The network interface may include, for example, a wireless network interface having one or more antennas. The network interface may also include, for example, a wired network interface to communicate with remote devices via a network cable, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.

[0142] The network interface may provide access to a LAN, for example, by conforming to IEEE 802.11 standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported.
In addition to, or instead of, communication via wireless LAN standards, the network interface may provide wireless communications using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, and/or any other type of wireless communication protocol.

[0143] It is to be appreciated that a lesser or more equipped system than the examples described above may be preferred for certain implementations. Therefore, the configuration of computing device 2200 may vary from implementation to implementation depending on numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. Examples include (without limitation) a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a workstation, a minicomputer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, consumer electronics, programmable consumer electronics, a television, a digital television, a set-top box, a wireless access point, a base station, a subscriber station, a mobile subscriber center, a radio network controller, a router, a hub, a gateway, a bridge, a switch, a machine, or combinations thereof.

[0144] Embodiments may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parent board, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA).
The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware.

[0145] Embodiments may be provided, for example, as a computer program product, which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines, such as a computer, a network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments described herein. Machine-readable media may include, but are not limited to, floppy diskettes, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs, RAMs, EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.

[0146] Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more modulated data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection).

[0147] The following clauses and/or examples pertain to specific embodiments or examples thereof. Specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined, with some features included and others excluded, to suit a variety of different applications. Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method, or an apparatus or system according to embodiments and examples described herein. Various components may be means for performing the described operations or functions.

[0148] One embodiment provides a graphics processor that includes a first graphics processing engine tile, a second graphics processing engine tile, and an interface between a host system and the graphics processor. The interface can be configured to receive a set of commands for a workload having a first partition and a second partition, send the set of commands to the first graphics processing engine tile, and send the set of commands to the second graphics processing engine tile. The first graphics processing engine tile can read a first partition identifier from a first hardware context, the first partition identifier being associated with the first partition. The first tile can then conditionally execute the commands of the first partition while bypassing the commands of the second partition. The second graphics processing engine tile can read a second partition identifier from a second hardware context, the second partition identifier being associated with the second partition.
The second tile can then conditionally execute the commands of the second partition while bypassing the commands of the first partition.

[0149] One embodiment provides a non-transitory machine-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: generating a set of commands for a workload to be executed by a graphics processor having multiple graphics processing engine tiles; dividing the set of commands into a first partition and a second partition; associating a first partition identifier that identifies the first partition with a first rendering context; associating a second partition identifier that identifies the second partition with a second rendering context; submitting the first partition and the second partition to a first graphics processing engine tile and a second graphics processing engine tile, respectively; executing the first partition via the first graphics processing engine tile; and executing the second partition via the second graphics processing engine tile.

[0150] One embodiment provides a data processing system that includes a graphics processor including a first graphics processing engine tile and a second graphics processing engine tile. The graphics processor can receive a set of commands for a workload having a first partition and a second partition, configure the first graphics processing engine tile to execute the first partition, and configure the second graphics processing engine tile to execute the second partition concurrently with execution of the first partition.
In one embodiment, prior to completion of execution of the first partition, the graphics processor can receive a trigger to transfer execution of the first partition from the first graphics processing engine tile to a third graphics processing engine tile, and the third graphics processing engine tile can execute at least a portion of the first partition. Transferring execution of the first partition includes atomically reassigning a partition identifier of the first partition from the first graphics processing engine tile to the third graphics processing engine tile. In one embodiment, the first partition is migrated before execution of the first partition begins.

[0151] One embodiment provides a method comprising: receiving a set of commands at a graphics processor, the set of commands representing a workload having a first partition and a second partition, the graphics processor including multiple graphics processing engine tiles; reading, by a first tile of the graphics processing engine, a first partition identifier associated with the first partition from a first hardware context; reading, by a second tile of the graphics processing engine, a second partition identifier associated with the second partition from a second hardware context; configuring the first tile of the graphics processing engine and the second tile of the graphics processing engine to conditionally execute commands having the partition identifier associated with the respective tile; executing the commands of the first partition on the first tile of the graphics processing engine while bypassing the commands of the second partition; and executing the commands of the second partition on the second tile of the graphics processing engine while bypassing the commands of the first partition.
In one implementation, the method further includes associating the first tile of the graphics processing engine with the first hardware context and associating the second tile of the graphics processing engine with the second hardware context.

[0152] Those skilled in the art will understand from the foregoing description that the broad techniques of the embodiments may be implemented in a variety of forms. Thus, while the embodiments have been described with respect to specific examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to those skilled in the art upon study of the drawings, the specification, and the following claims.
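The partition-conditional execution scheme described in paragraphs [0149]-[0151] — each tile reads a partition identifier from its hardware context and executes only the commands tagged with that identifier, bypassing the rest — can be sketched as a toy model. This is an illustrative sketch only, not the hardware implementation; all class, field, and function names (`Tile`, `submit`, etc.) are hypothetical.

```python
# Minimal model of partition-conditional command execution across tiles.
# Each tile is bound to a hardware context carrying a partition identifier;
# when a combined command stream is submitted to every tile, a tile executes
# only the commands tagged with its own partition ID and bypasses the rest.
# Illustrative sketch only; names are hypothetical, not a real GPU API.

class Tile:
    def __init__(self, name, partition_id):
        self.name = name
        self.partition_id = partition_id  # read from the tile's hardware context
        self.executed = []

    def run(self, commands):
        for partition_id, command in commands:
            if partition_id == self.partition_id:  # conditional execution
                self.executed.append(command)      # execute own partition
            # else: bypass the command

def submit(workload, tiles):
    # The same command stream is made visible to every tile.
    for tile in tiles:
        tile.run(workload)

# Workload divided into two partitions, tagged with partition identifiers.
workload = [(1, "draw_a"), (2, "draw_b"), (1, "draw_c"), (2, "draw_d")]

tile0 = Tile("tile0", partition_id=1)  # associated with first rendering context
tile1 = Tile("tile1", partition_id=2)  # associated with second rendering context
submit(workload, [tile0, tile1])

print(tile0.executed)  # commands of the first partition only
print(tile1.executed)  # commands of the second partition only
```

In this model, transferring execution of a partition to a third tile, as in paragraph [0150], would amount to atomically reassigning a `partition_id` from one `Tile` to another before the stream is run.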
The invention relates to integrated assemblies, methods of forming integrated assemblies, and a NAND memory array. Some embodiments include a NAND memory array having a vertical stack of alternating insulative levels and wordline levels. The wordline levels have primary regions of a first vertical thickness, and have terminal projections of a second vertical thickness which is greater than the first vertical thickness. The terminal projections include control gate regions. Charge-blocking regions are adjacent the control gate regions, and are vertically spaced from one another. Charge-storage regions are adjacent the charge-blocking regions and are vertically spaced from one another. Gate-dielectric material is adjacent the charge-storage regions. Channel material is adjacent the gate-dielectric material. Some embodiments include methods of forming integrated assemblies.
1. An integrated structure, comprising:
a vertical stack of alternating insulating levels and conductive levels;
the conductive levels having main regions of a first vertical thickness and end protrusions of a second vertical thickness, the second vertical thickness being greater than the first vertical thickness;
charge-blocking material arranged in first segments of a vertical stack, the first segments being along the conductive levels and adjacent the end protrusions, the first segments being vertically spaced from one another by first gaps;
charge-storage material arranged in second segments of a vertical stack, the second segments being along the conductive levels and adjacent the first segments, the second segments being vertically spaced from one another by second gaps;
gate-dielectric material adjacent the charge-storage material; and
channel material adjacent the gate-dielectric material.

2. The integrated structure of claim 1, wherein each of the second segments has a vertical length greater than the second vertical thickness.

3. The integrated structure of claim 1, wherein each conductive level includes a conductive core surrounded by an outer conductive layer, the conductive core comprising a different composition than the outer conductive layer.

4. The integrated structure of claim 3, further comprising a high-k dielectric material between the outer conductive layers of the conductive levels and the first segments of the charge-blocking material.

5. The integrated structure of claim 4, wherein the high-k dielectric material includes one or more of HfO, HfSiO, ZrO and ZrSiO, where the chemical formulas indicate primary constituents rather than specific stoichiometries.

6. The integrated structure of claim 5, wherein regions of the high-k dielectric material are above and below the end protrusions of the conductive levels; wherein a third vertical thickness is defined to include the second vertical thickness together with the thicknesses of said regions of the high-k dielectric material; and wherein each of the second segments has a vertical length which is substantially the same as the third vertical thickness.

7. The integrated structure of claim 1, wherein the channel material is planar along the vertical stack.

8. The integrated structure of claim 1, wherein the second segments are planar along the first segments.

9. The integrated structure of claim 1, wherein the insulating levels include voids.

10. The integrated structure of claim 1, wherein the insulating levels do not include voids.

11. The integrated structure of claim 1, wherein a void is within one or more of the end protrusions.

12. The integrated structure of claim 1, wherein no void is within any of the end protrusions.

13. The integrated structure of claim 1, wherein the first segments of the vertical stack of the charge-blocking material are of a single homogeneous composition.

14. The integrated structure of claim 1, wherein the first segments of the vertical stack of the charge-blocking material comprise laminates of two or more different compositions, the compositions joining one another along vertically-extending interfaces.

15. The integrated structure of claim 14, wherein one of the two or more different compositions includes silicon oxynitride, and wherein another of the two or more different compositions includes silicon dioxide.

16. A NAND memory array, comprising:
a vertical stack of alternating insulative levels and wordline levels;
the wordline levels having main regions of a first vertical thickness and end protrusions of a second vertical thickness, the second vertical thickness being greater than the first vertical thickness, the end protrusions including control gate regions;
charge-blocking regions adjacent the control gate regions and vertically spaced from one another;
charge-storage regions adjacent the charge-blocking regions and vertically spaced from one another;
gate-dielectric material adjacent the charge-storage regions; and
channel material extending vertically along the vertical stack and adjacent the gate-dielectric material.

17. The NAND memory array of claim 16, wherein each wordline level includes a conductive core surrounded by an outer conductive layer, the conductive core comprising a different composition than the outer conductive layer; and wherein insulative material is between the outer conductive layer and the charge-blocking regions.

18. The NAND memory array of claim 17, wherein the conductive cores include one or more metals, wherein the outer conductive layers include metal nitride, and wherein the insulative material is a high-k material.

19. The NAND memory array of claim 18, wherein:
the conductive cores include tungsten;
the outer conductive layers include titanium nitride; and
the insulative material includes one or more of HfO, HfSiO, ZrO and ZrSiO, where the chemical formulas indicate primary constituents rather than specific stoichiometries.

20. The NAND memory array of claim 16, wherein the charge-storage regions include charge-trapping material.

21. The NAND memory array of claim 20, wherein the charge-storage regions include silicon nitride.

22. The NAND memory array of claim 16, wherein voids are within the end protrusions.

23. A method of forming an integrated structure, comprising:
forming a vertical stack of alternating first and second levels, the first levels comprising a first material and the second levels comprising a second material;
recessing the first levels relative to the second levels, the second levels having projecting ends extending beyond the recessed first levels, the ends having surfaces of the second material, the recessed first levels having surfaces of the first material;
selectively forming a third material along the second material relative to the first material, the third material extending around the ends of the second levels to widen the ends, the widened ends being vertically spaced from one another by gaps;
forming a fourth material within the gaps, the third and fourth materials having outer surfaces which together form vertical edges, inner surfaces of the fourth material being adjacent the surfaces of the first material;
forming charge-storage material to extend vertically along the vertical edges;
forming gate-dielectric material to extend vertically along the charge-storage material;
forming channel material to extend vertically along the gate-dielectric material;
removing the second and third materials to leave first voids;
forming conductive levels within the first voids, the conductive levels having main regions of a first vertical thickness and end protrusions of a second vertical thickness, the second vertical thickness being greater than the first vertical thickness;
removing the first and fourth materials to leave second voids; and
extending the second voids through the charge-storage material to subdivide the charge-storage material into vertically-spaced segments.

24. The method of claim 23, further comprising at least partially filling the second voids with insulative material.

25. The method of claim 23, wherein the conductive levels only partially fill the first voids, and wherein regions of the first voids remain as voids within the end protrusions of the conductive levels.

26. The method of claim 23, further comprising forming charge-blocking material to extend along the third material; and wherein said forming of the charge-storage material comprises forming the charge-storage material to extend vertically along the charge-blocking material.

27. The method of claim 23, further comprising forming charge-blocking material to extend along the third material; and wherein:
said forming of the charge-storage material comprises forming the charge-storage material to extend vertically along the charge-blocking material;
the vertically-spaced segments of the charge-storage material are vertically-spaced second segments; and
said extending of the second voids comprises extending the second voids through the charge-blocking material to subdivide the charge-blocking material into vertically-spaced first segments.

28. The method of claim 23, further comprising forming a fifth material around the widened ends prior to forming the fourth material.

29. The method of claim 28, wherein the fifth material includes one or both of silicon oxynitride and silicon dioxide.

30. The method of claim 28, wherein the fifth material is formed to span the gaps and to extend along the surfaces of the first material.

31. The method of claim 28, wherein the charge-storage material is formed directly against the fifth material.

32. The method of claim 28, wherein the fifth material is a charge-blocking material, and wherein additional charge-blocking material is formed along the fifth material prior to forming the charge-storage material.

33. The method of claim 23, wherein the second and third materials are compositionally the same as one another.

34. The method of claim 33, wherein the second and third materials comprise silicon nitride, and wherein the first material comprises silicon dioxide.

35. The method of claim 34, wherein the fourth material consists essentially of silicon.
Integrated structure, method for forming integrated structure, and NAND memory array

Technical field

An integrated assembly (such as an integrated NAND memory) having vertically spaced channel material segments, and methods of forming the integrated assembly.

Background

Memory provides data storage for electronic systems. Flash memory is one type of memory, and is widely used in modern computers and devices. For example, modern personal computers may have their BIOS stored on a flash memory chip. As another example, it is increasingly common for computers and other devices to utilize flash memory in solid state drives in place of conventional hard disk drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade devices for enhanced features.

NAND may be a basic architecture of flash memory, and may be configured to comprise vertically stacked memory cells.

Before describing NAND in detail, it may be helpful to describe more generally the relationship of a memory array within an integrated arrangement. FIG. 1 shows a block diagram of a prior art device 1000 which includes: a memory array 1002 having a plurality of memory cells 1003 arranged in rows and columns; access lines 1004 (e.g., word lines conducting signals WL0 to WLm); and first data lines 1006 (e.g., bit lines conducting signals BL0 to BLn). The access lines 1004 and the first data lines 1006 may be used to transfer information to and from the memory cells 1003. A row decoder 1007 and a column decoder 1008 decode address signals A0 to AX on address lines 1009 to determine which of the memory cells 1003 are to be accessed. A sense amplifier circuit 1015 operates to determine the values of information read from the memory cells 1003.
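The decoding step just described — address signals selecting one word line through the row decoder and one bit line through the column decoder, with the sense amplifier reading the cell at their intersection — can be sketched as a toy model. The bit widths and helper names below are invented for illustration; the actual device decodes signals A0-AX in hardware.

```python
# Toy model of row/column address decoding in a memory array:
# the high-order address bits drive the row decoder (word line select)
# and the low-order bits drive the column decoder (bit line select).
# Illustrative sketch only; not the circuit of FIG. 1.

ROWS, COLS = 4, 4  # a tiny 4x4 array of cells

def row_decoder(address):
    return (address >> 2) & 0b11   # upper 2 bits -> word line WL0..WL3

def column_decoder(address):
    return address & 0b11          # lower 2 bits -> bit line BL0..BL3

def access(array, address):
    wl = row_decoder(address)
    bl = column_decoder(address)
    return array[wl][bl]           # the cell sensed for this address

# Each cell here just stores its own (row, column) coordinates.
memory = [[(r, c) for c in range(COLS)] for r in range(ROWS)]

print(access(memory, 0b0110))  # WL1, BL2 -> cell (1, 2)
```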
The I/O circuit 1017 transfers values of information between the memory array 1002 and input/output (I/O) lines 1005. Signals DQ0 to DQN on the I/O lines 1005 can represent values of information read from, or to be written into, the memory cells 1003. Other devices can communicate with the device 1000 through the I/O lines 1005, the address lines 1009, or the control lines 1020. A memory control unit 1018 controls memory operations to be performed on the memory cells 1003 utilizing signals on the control lines 1020. The device 1000 can receive supply voltage signals Vcc and Vss on a first supply line 1030 and a second supply line 1032, respectively. The device 1000 includes a select circuit 1040 and the input/output (I/O) circuit 1017. The select circuit 1040 can respond, via the I/O circuit 1017, to signals CSEL1 to CSELn to select signals on the first data lines 1006 and second data lines 1013 that can represent the values of information to be read from or to be programmed into the memory cells 1003. The column decoder 1008 can selectively activate the CSEL1 to CSELn signals based on the A0 to AX address signals on the address lines 1009. The select circuit 1040 can select signals on the first data lines 1006 and the second data lines 1013 to provide communication between the memory array 1002 and the I/O circuit 1017 during read and programming operations.

The memory array 1002 of FIG. 1 may be a NAND memory array, and FIG. 2 shows a block diagram of a three-dimensional NAND memory device 200 which may be utilized for the memory array 1002 of FIG. 1. The device 200 includes a plurality of strings of charge storage devices. In a first direction (Z-Z'), each string of charge storage devices may comprise, for example, thirty-two charge storage devices stacked over one another, with each charge storage device corresponding to one of, for example, thirty-two tiers (e.g., Tier0 to Tier31).
The charge storage devices of a respective string may share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge storage devices is formed. In a second direction (X-X'), each first group of, for example, sixteen first groups of the plurality of strings may comprise, for example, eight strings sharing a plurality (e.g., thirty-two) of access lines (i.e., "global control gate (CG) lines", also known as word lines, WLs). Each of the access lines may couple to the charge storage devices of a tier. When each charge storage device comprises a cell capable of storing two bits of information, the charge storage devices coupled by the same access line (and thus corresponding to the same tier) may be logically grouped into two pages, such as P0/P32, P1/P33, P2/P34, and so on. In a third direction (Y-Y'), each second group of, for example, eight second groups of the plurality of strings may comprise sixteen strings coupled by a corresponding one of eight data lines. The size of a memory block may comprise 1,024 pages and a total of about 16 MB (e.g., 16 WLs x 32 tiers x 2 bits = 1,024 pages/block; block size = 1,024 pages x 16 KB/page = 16 MB). The number of the strings, tiers, access lines, data lines, first groups, second groups, and/or pages may be greater or smaller than those shown in FIG. 2.

FIG. 3 shows a cross-sectional view of a memory block 300 of the 3D NAND memory device 200 of FIG. 2 in the X-X' direction. The memory block 300 may include, for example, fifteen strings of charge storage devices in one of the sixteen first groups of strings described with respect to FIG. 2. The strings of the memory block 300 may be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile column I, tile column j and tile column K, with each subset (e.g., tile column) comprising a "partial block" of the memory block 300.
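The block-size arithmetic quoted above (16 WLs x 32 tiers x 2 bits = 1,024 pages/block; 1,024 pages x 16 KB/page = 16 MB) can be verified with a short computation:

```python
# Verifying the block-size arithmetic from the text:
# pages per block, and total block size for the example geometry.

word_lines = 16        # access lines shared by the first groups of strings
tiers = 32             # thirty-two tiers per string
bits_per_cell = 2      # two logical pages per word line/tier for 2-bit cells
page_size_kb = 16      # 16 KB per page

pages_per_block = word_lines * tiers * bits_per_cell
block_size_mb = pages_per_block * page_size_kb // 1024

print(pages_per_block)  # 1024 pages per block
print(block_size_mb)    # 16 MB per block
```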
A global drain-side select gate (SGD) line 340 may be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 may be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346, with each sub-SGD line corresponding to a respective subset (e.g., tile column), via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336. Each of the sub-SGD drivers 332, 334, 336 may concurrently couple or cut off the SGDs of the strings of a corresponding partial block (e.g., tile column) independently of the SGDs of strings of other partial blocks. A global source-side select gate (SGS) line 360 may be coupled to the SGSs of the plurality of strings. For example, the global SGS line 360 may be coupled to a plurality of sub-SGS lines 362, 364, 366, with each sub-SGS line corresponding to a respective subset (e.g., tile column), via a corresponding one of a plurality of sub-SGS drivers 322, 324, 326. Each of the sub-SGS drivers 322, 324, 326 may concurrently couple or cut off the SGSs of the strings of a corresponding partial block (e.g., tile column) independently of the SGSs of strings of other partial blocks. A global access line (e.g., global CG line) 350 may couple the charge storage devices corresponding to a respective tier of each of the plurality of strings. Each global CG line (e.g., the global CG line 350) may be coupled to a plurality of sub-access lines (e.g., sub-CG lines) 352, 354, 356 via a corresponding one of a plurality of sub-string drivers 312, 314, 316. Each of the sub-string drivers may concurrently couple or cut off the charge storage devices corresponding to a respective partial block and/or tier independently of the charge storage devices of other partial blocks and/or other tiers.
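The fan-out just described — one global select line distributed to per-tile-column sub-lines through drivers that each couple or cut off their own partial block independently — can be modeled as a simple mapping. This is an illustrative sketch only; the class names and the 3.3 V figure are hypothetical, not taken from the text.

```python
# Toy model of a global select line (e.g., the global SGD line) fanned out
# to per-tile-column sub-lines through independent sub-drivers: each driver
# couples or cuts off its own partial block without affecting the others.
# Illustrative sketch only; not the driver circuitry of FIG. 3.

class SubDriver:
    def __init__(self, tile_column):
        self.tile_column = tile_column
        self.coupled = False  # whether the sub-line follows the global line

def drive(global_level, drivers):
    # A sub-line sees the global level only while its driver couples it.
    return {d.tile_column: (global_level if d.coupled else 0.0)
            for d in drivers}

drivers = [SubDriver("I"), SubDriver("J"), SubDriver("K")]
drivers[1].coupled = True  # select only tile column J

sub_levels = drive(3.3, drivers)  # hypothetical 3.3 V global level
print(sub_levels)  # only column J is driven; I and K are cut off
```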
The charge storage devices corresponding to a respective subset (e.g., partial block) and a respective tier may comprise a "partial tier" (e.g., a single "tile") of charge storage devices. The strings corresponding to a respective subset (e.g., partial block) may be coupled to a corresponding one of sub-sources 372, 374, 376 (e.g., "tile sources"), with each sub-source being coupled to a respective power source.

The NAND memory device 200 is alternatively described with reference to the schematic illustration of FIG. 4.

The memory array 200 includes word lines 2021 to 202N and bit lines 2281 to 228M. The memory array 200 also includes NAND strings 2061 to 206M. Each NAND string includes charge storage transistors 2081 to 208N. The charge storage transistors may use floating gate material (e.g., polysilicon) to store charge, or may use charge-trapping material (such as, for example, silicon nitride, metallic nanodots, etc.) to store charge.

The charge storage transistors 208 are located at intersections of word lines 202 and strings 206. The charge storage transistors 208 represent non-volatile memory cells for storage of data. The charge storage transistors 208 of each NAND string 206 are connected in series source-to-drain between a source select device (e.g., source-side select gate, SGS) 210 and a drain select device (e.g., drain-side select gate, SGD) 212. Each source select device 210 is located at an intersection of a string 206 and a source select line 214, while each drain select device 212 is located at an intersection of a string 206 and a drain select line 215. The select devices 210 and 212 may be any suitable access devices, and are generically illustrated with boxes in FIG. 4.

A source of each source select device 210 is connected to a common source line 216. A drain of each source select device 210 is connected to the source of the first charge storage transistor 208 of the corresponding NAND string 206.
For example, the drain of the source select device 2101 is connected to the source of the charge storage transistor 2081 of the corresponding NAND string 2061. The source select devices 210 are connected to the source select line 214.

The drain of each drain select device 212 is connected to a bit line (i.e., digit line) 228 at a drain contact. For example, the drain of the drain select device 2121 is connected to the bit line 2281. The source of each drain select device 212 is connected to the drain of the last charge storage transistor 208 of the corresponding NAND string 206. For example, the source of the drain select device 2121 is connected to the drain of the charge storage transistor 208N of the corresponding NAND string 2061.

The charge storage transistors 208 include a source 230, a drain 232, a charge storage region 234 and a control gate 236. The control gates 236 of the charge storage transistors 208 are coupled to the word lines 202. A column of the charge storage transistors 208 comprises those transistors within a NAND string 206 coupled to a given bit line 228.
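The string topology described above can be captured in a small data model: each string chains its storage transistors source-to-drain between an SGS device and an SGD device, the transistors coupled to a given bit line forming a column and the transistors coupled to a given word line forming a row. This is an illustrative sketch with invented names and sizes.

```python
# Small data model of the NAND string wiring described above: each string's
# charge storage transistors are chained source-to-drain between a source
# select device (SGS, tied to the common source line) and a drain select
# device (SGD, tied to that string's bit line). Illustrative sketch only.

N_TIERS = 4    # transistors per string (word lines WL0..WL3)
N_STRINGS = 3  # strings (bit lines BL0..BL2)

def build_array():
    strings = []
    for bl in range(N_STRINGS):
        # One string: SGS -> WL0 cell -> ... -> WL3 cell -> SGD -> bit line.
        chain = (["SGS"]
                 + [f"cell(WL{wl},BL{bl})" for wl in range(N_TIERS)]
                 + ["SGD"])
        strings.append(chain)
    return strings

def row(strings, wl):
    # A row: the transistors commonly coupled to one word line.
    return [s[1 + wl] for s in strings]

def column(strings, bl):
    # A column: the transistors of the string coupled to one bit line.
    return strings[bl][1:-1]

array = build_array()
print(row(array, 2))     # the WL2 transistor of every string
print(column(array, 1))  # all transistors of the BL1 string
```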
A row of the charge storage transistors 208 comprises those transistors commonly coupled to a given word line 202.

It is desirable to develop improved NAND architectures, and improved methods for fabricating NAND architectures.

Summary of the invention

In one aspect, the present application provides an integrated structure comprising: a vertical stack of alternating insulating levels and conductive levels; the conductive levels having main regions of a first vertical thickness and end protrusions of a second vertical thickness, the second vertical thickness being greater than the first vertical thickness; charge-blocking material arranged in first segments of a vertical stack, the first segments being along the conductive levels and adjacent the end protrusions, and being vertically spaced from one another by first gaps; charge-storage material arranged in second segments of a vertical stack, the second segments being along the conductive levels and adjacent the first segments, and being vertically spaced from one another by second gaps; gate-dielectric material adjacent the charge-storage material; and channel material adjacent the gate-dielectric material.

In another aspect, the present application provides a NAND memory array comprising: a vertical stack of alternating insulative levels and wordline levels; the wordline levels having main regions of a first vertical thickness and end protrusions of a second vertical thickness, the second vertical thickness being greater than the first vertical thickness, the end protrusions including control gate regions; charge-blocking regions adjacent the control gate regions and vertically spaced from one another; charge-storage regions adjacent the charge-blocking regions and vertically spaced from one another; gate-dielectric material adjacent the charge-storage regions; and channel material extending vertically along the vertical stack and adjacent the gate-dielectric material.

In yet another aspect, the present application provides a method of forming an integrated structure, comprising: forming a vertical stack of alternating first and second levels, the first levels comprising a first material and the second levels comprising a second material; recessing the first levels relative to the second levels, the second levels having projecting ends extending beyond the recessed first levels, the ends having surfaces of the second material and the recessed first levels having surfaces of the first material; selectively forming a third material along the second material relative to the first material, the third material extending around the ends of the second levels to widen the ends, the widened ends being vertically spaced from one another by gaps; forming a fourth material within the gaps, the third and fourth materials having outer surfaces which together form vertical edges, inner surfaces of the fourth material being adjacent the surfaces of the first material; forming charge-storage material to extend vertically along the vertical edges; forming gate-dielectric material to extend vertically along the charge-storage material; forming channel material to extend vertically along the gate-dielectric material; removing the second and third materials to leave first voids; forming conductive levels within the first voids, the conductive levels having main regions of a first vertical thickness and end protrusions of a second vertical thickness, the second vertical thickness being greater than the first vertical thickness; removing the first and fourth materials to leave second voids; and extending the second voids through the charge-storage material to subdivide the charge-storage material into vertically-spaced segments.

Description of the drawings

Figure 1 shows a block diagram of a prior art memory device having a memory
array containing memory cells.

Figure 2 shows a schematic diagram of the prior art memory array of Figure 1 in the form of a 3D NAND memory device.

Figure 3 shows a cross-sectional view of the prior art 3D NAND memory device of Figure 2 in the X-X' direction.

Figure 4 is a schematic diagram of a prior art NAND memory array.

Figure 5 is a diagrammatic cross-sectional side view of a region of an integrated assembly comprising an example NAND memory array.

Figure 5A is a diagrammatic top view of a portion of the integrated assembly of Figure 5.

Figures 6-10 are diagrammatic cross-sectional side views of regions of integrated assemblies comprising example NAND memory arrays.

Figure 11 is a diagrammatic cross-sectional side view of an integrated assembly at an example process stage of an example method for forming an example memory array.

Figures 12-18 are diagrammatic cross-sectional side views of the region of the integrated assembly of Figure 11 shown at example sequential process stages following the process stage of Figure 11.

Figure 18A is a diagrammatic cross-sectional side view of the region of the integrated assembly of Figure 11 shown at an example process stage alternative to the illustrated process stage of Figure 18.

Figures 19-22 are diagrammatic cross-sectional side views of the region of the integrated assembly of Figure 11 shown at example sequential process stages following the process stage of Figure 18.

Figure 23 is a diagrammatic cross-sectional side view of an integrated assembly at an example process stage of an example method for forming an example memory array.

Figure 23A is a diagrammatic cross-sectional side view of an integrated assembly at an example process stage alternative to the illustrated process stage of Figure 23.

Figures 24 and 25 are diagrammatic cross-sectional side views of regions of the integrated assembly of Figure 23 shown at example sequential process stages following the process stage of Figure 23.

Figure 25A is a diagrammatic cross-sectional side view of the region of the integrated assembly of Figure 23 shown at an example process stage alternative to the illustrated process stage of Figure 25.

Figures 26-30 are diagrammatic cross-sectional side views of regions of the integrated assembly of Figure 23 shown at example sequential process stages following the process stage of Figure 25.

Detailed description

Operation of NAND memory cells comprises movement of charge between a channel material and a charge-storage material. For example, programming of a NAND memory cell may comprise moving charge (i.e., electrons) from the channel material into the charge-storage material, and then storing the charge within the charge-storage material. Erasing of the NAND memory cell may comprise moving holes into the charge-storage material to recombine with the electrons stored in the charge-storage material, and thereby releasing charge from the charge-storage material. The charge-storage material may comprise charge-trapping material (e.g., silicon nitride, metal dots, etc.). A problem with conventional NAND can be that the charge-trapping material extends across multiple memory cells of a memory array, which can lead to charge migration from one memory cell to another. Such charge migration may lead to data retention problems. Some embodiments include NAND architectures having breaks in the charge-trapping material in regions between memory cells, and such breaks may impede migration of charge between the memory cells. Example embodiments are described with reference to FIGS. 5-30.

Referring to FIG. 5, a construction (i.e., assembly, architecture, etc.) 10 includes a vertical stack 12 of alternating first levels 14 and second levels 16. The first levels 14 are insulative levels, and the second levels 16 are conductive levels.

The conductive levels 16 are memory cell levels (also referred to herein as wordline levels) of a NAND configuration.
The NAND configuration includes strings of memory cells (i.e., NAND strings), with the number of memory cells in the strings being determined by the number of vertically stacked levels 16. The NAND strings may comprise any suitable number of memory cell levels. For instance, the NAND strings may have 8 memory cell levels, 16 memory cell levels, 32 memory cell levels, 64 memory cell levels, 512 memory cell levels, 1024 memory cell levels, etc. The vertical stack 12 is indicated to extend vertically beyond the illustrated region to show that there may be more vertically stacked levels than those specifically illustrated in the diagram of FIG. 5.

The stack 12 is shown to be supported over a base 18. The base 18 may comprise semiconductor material; and may, for example, comprise, consist essentially of, or consist of monocrystalline silicon. The base 18 may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductor wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some applications, the base 18 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.

A gap is provided between the stack 12 and the base 18 to indicate that other components and materials may be provided between the stack 12 and the base 18.
Such other components and materials may comprise additional levels of the stack, source line levels, source-side select gates (SGSs), etc.

The insulative levels 14 comprise an insulative material 20. The insulative material 20 may comprise any suitable composition; and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

The conductive levels 16 comprise conductive regions 22. The conductive regions comprise an inner conductive material 24 and an outer conductive material 26. The inner conductive material 24 may be considered to be configured as a conductive core 25, and the outer conductive material 26 may be considered to be configured as an outer conductive layer 27 surrounding the conductive core.

The conductive materials 24 and 26 may comprise any suitable electrically conductive compositions, such as one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). The conductive materials 24 and 26 are compositionally different from one another. In some embodiments, the core material 24 may comprise one or more metals (e.g., may comprise tungsten), and the outer conductive material 26 may comprise one or more metal nitrides (e.g., may comprise titanium nitride).

A dielectric material 28 is along the outer conductive material 26. The dielectric material 28 may be a dielectric-barrier material, and may comprise any suitable composition. In some embodiments, the dielectric material 28 comprises high-k material, where the term "high-k" means a dielectric constant greater than that of silicon dioxide.
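Since "high-k" here simply means a dielectric constant greater than that of silicon dioxide (k of roughly 3.9), the classification can be expressed as a one-line check. The dielectric constants below are rough, commonly quoted approximations included only for illustration.

```python
# "High-k" means a dielectric constant greater than that of SiO2 (k ~ 3.9).
# The constants below are rough textbook approximations, for illustration only.

K_SIO2 = 3.9

approx_k = {
    "SiO2": 3.9,    # the reference material itself
    "Si3N4": 7.0,
    "Al2O3": 9.0,
    "HfO2": 25.0,
    "ZrO2": 25.0,
}

def is_high_k(material):
    return approx_k[material] > K_SIO2

high_k_materials = sorted(m for m in approx_k if is_high_k(m))
print(high_k_materials)  # SiO2 itself is excluded
```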
In some embodiments, the dielectric material 28 may comprise, consist essentially of, or consist of one or more of AlO, HfO, HfSiO, ZrO, and ZrSiO, where the chemical formulas indicate primary constituents rather than specific stoichiometries. In some embodiments, it may be advantageous, due to processing constraints described below, to utilize a high-k material other than aluminum oxide (AlO) as the dielectric material 28. In such embodiments, it may be advantageous for the dielectric material 28 to include one or more of hafnium oxide (HfO), hafnium silicate (HfSiO), zirconium oxide (ZrO), and zirconium silicate (ZrSiO).

The conductive levels (word line levels) 16 have main regions 30 of a first vertical thickness T1, and have end protrusions 32 of a second vertical thickness T2, with the second vertical thickness being greater than the first vertical thickness. In some embodiments, the second vertical thickness T2 is greater than the first vertical thickness T1 by an amount within a range of from about 10% to about 70%. In the illustrated embodiment, the main regions 30 are substantially vertically centered relative to the end protrusions 32.

The charge blocking material 34 is along the end protrusions 32. The charge blocking material 34 is arranged in vertically stacked segments 36. The segments 36 are vertically spaced from one another by gaps 39. The charge blocking material 34 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon oxynitride (SiON) and silicon dioxide (SiO2).

The segments 36 of the charge blocking material 34 are adjacent the dielectric barrier material 28, and are spaced from the conductive material 26 of the end protrusions 32 by the dielectric barrier material 28.

The charge storage material 38 is adjacent the charge blocking material, and is arranged in vertically stacked segments 40. The segments 36 and 40 may be referred to as first and second segments, respectively, to distinguish them from one another.

The second segments 40 (i.e., the segments of the charge storage material 38) are vertically spaced from one another by gaps 41. In some embodiments, the gaps 39 and 41 may be referred to as first and second gaps to distinguish them from one another. In some embodiments, the gaps 41 may be considered to be extensions of the gaps 39.

The charge storage material 38 may comprise any suitable composition. In some embodiments, the charge storage material 38 may comprise charge-trapping material, such as silicon nitride, silicon oxynitride, conductive nanodots, etc. For example, in some embodiments the charge storage material 38 may comprise, consist essentially of, or consist of silicon nitride. In alternative embodiments, the charge storage material 38 may be configured to include floating gate material (e.g., polysilicon).

The gate dielectric material (i.e., tunneling material) 42 is adjacent the charge storage material 38. The gate dielectric material 42 may comprise any suitable composition. In some embodiments, the gate dielectric material 42 may comprise, for example, one or more of silicon dioxide, silicon nitride, silicon oxynitride, aluminum oxide, hafnium oxide, zirconium oxide, etc.
The gate dielectric material 42 may be bandgap-engineered to achieve desired electrical properties, and accordingly may comprise a combination of two or more different materials.

The channel material 44 is adjacent the gate dielectric material 42, and extends vertically along the stack 12. The channel material 44 comprises semiconductor material, and may comprise any suitable composition or combination of compositions. For example, the channel material 44 may comprise one or more of silicon, germanium, III/V semiconductor materials (e.g., gallium phosphide), semiconductor oxides, etc., where the term III/V semiconductor material refers to semiconductor materials comprising elements from groups III and V of the periodic table (with groups III and V being old nomenclature, now referred to as groups 13 and 15). In some embodiments, the channel material 44 may comprise, consist essentially of, or consist of silicon.

The insulating material 46 is adjacent the channel material 44. The insulating material 46 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

FIG. 5A shows a top view of a region of the assembly 10, and shows that the channel material 44 may be configured as an annular ring surrounding the insulating material 46. Such configuration of the channel material may be considered to comprise a hollow channel configuration, with the insulating material 46 being disposed within the "hollow" of the annular-ring-shaped channel configuration. In other embodiments (not shown), the channel material may be configured in a solid pillar configuration.

Referring again to FIG. 5, the conductive levels 16 may be considered to comprise control gate regions 48 proximate the channel material 44, and to comprise word line regions 50 adjacent the control gate regions.
In the illustrated embodiment, the control gate regions 48 comprise at least portions of the end protrusions 32.

The control gate regions 48, the dielectric barrier material 28, the charge blocking material 34, the charge storage material 38, the gate dielectric material 42, and the channel material 44 are incorporated into NAND memory cells 52. The illustrated NAND memory cells 52 form a portion of a vertically extending string of memory cells. Such string may be representative of a large number of substantially identical NAND strings formed during fabrication of a NAND memory array (with the term "substantially identical" meaning identical to within reasonable tolerances of fabrication and measurement).

In the illustrated embodiment of FIG. 5, the segments 40 of the charge storage material 38 have a vertical thickness T3 which is greater than the vertical thickness T2 of the conductive end protrusions 32, with the vertical thickness T3 being about the same as a vertical thickness through the dielectric barrier material 28 and the conductive protrusions 32. In some embodiments, the vertical thickness of the segments 40 of the charge storage material 38 may be less than shown in FIG. 5 due to some etching of the charge storage material during formation of the segments 40. In other embodiments, the vertical thickness of the segments 40 may be greater than shown in FIG. 5. In some embodiments, the thickness T3 of the charge storage material segments 40 may be considered to be tailored to approximately match the vertical thickness of the conductive protrusions 32.

Notably, in the configuration of FIG. 5 the channel material 44 is "flat" (i.e., is of substantially continuous vertical thickness and is substantially vertically straight) rather than being wavy. The flat channel material may positively affect string current as compared to the non-flat configurations of some conventional designs.
In some embodiments, the configuration of the channel material 44 may be referred to as a "flat configuration." Notably, the segments 40 of the charge storage material 38 are also "flat," and may each be considered to be in a "flat configuration." The flat segments 40 may have more uniform charge distribution than non-flat segments of charge storage material.

In operation, the charge storage material 38 may be configured to store information in the memory cells 52. The value of the information stored in an individual memory cell (with the term "value" representing one bit or multiple bits) may be based on the amount of charge (e.g., the number of electrons) stored in the charge storage region of the memory cell. The amount of charge within an individual charge storage region may be controlled (e.g., increased or decreased) based, at least in part, on the value of voltage applied to the associated gate 48, and/or based on the value of voltage applied to the channel material 44.

The tunneling material 42 forms tunneling regions of the memory cells 52. Such tunneling regions may be configured to allow desired migration (e.g., transportation) of charge (e.g., electrons) between the charge storage material 38 and the channel material 44. The tunneling regions may be configured (i.e., engineered) to achieve selection criteria, such as, but not limited to, an equivalent oxide thickness (EOT). The EOT quantifies the electrical properties (such as capacitance) of the tunneling regions in terms of a representative physical thickness.
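As an aside not found in the disclosure itself, the EOT relationship can be sketched numerically: a dielectric of physical thickness t and relative permittivity κ has the same capacitance density as a silicon dioxide layer of thickness t · (κ_SiO2 / κ), with κ_SiO2 ≈ 3.9. The function name and the hafnium oxide permittivity used below are illustrative assumptions.

```python
# Hedged sketch: equivalent oxide thickness (EOT) of a dielectric layer.
# Capacitance density scales as k/t, so the SiO2 thickness with the same
# capacitance density as a layer of thickness t and permittivity k is
# EOT = t * k_SiO2 / k.
K_SIO2 = 3.9  # relative permittivity of silicon dioxide

def eot_nm(physical_thickness_nm, k):
    """Return the equivalent oxide thickness, in nanometers."""
    return physical_thickness_nm * K_SIO2 / k

# Illustrative (assumed) values: 5 nm of hafnium oxide with k of about 25.
print(round(eot_nm(5.0, 25.0), 2))  # -> 0.78
```

As the sketch shows, a physically thicker high-k layer can present the same capacitance as a much thinner silicon dioxide layer, which is the engineering trade-off behind the EOT criterion.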
For example, EOT may be defined as the thickness of a theoretical silicon dioxide layer that would be required to have the same capacitance density as a given dielectric, ignoring leakage current and reliability considerations.

The charge blocking material 34 is adjacent the charge storage material 38, and may provide a mechanism to block charge from flowing from the charge storage material 38 to the associated gates 48.

The dielectric barrier material 28 is provided between the charge blocking material 34 and the associated gates 48, and may be utilized to inhibit back-tunneling of charge carriers from the gates 48 toward the charge storage material 38. In some embodiments, the dielectric barrier material 28 may be considered to form dielectric barriers within the memory cells 52.

The embodiment of FIG. 5 has insulating material 20 extending across the insulative levels 14. In other embodiments, there may be voids within the insulative levels. For example, FIG. 6 shows an assembly 10a similar to the assembly 10 of FIG. 5, but comprising voids 54 within the insulative levels 14. In the illustrated embodiment, the voids 54 are terminated by the insulating material 20. The voids 54 may be filled with air or any other suitable gas. An advantage of having the voids 54 within the insulative levels is that such may alleviate capacitive coupling between vertically adjacent materials, if such coupling is found to be problematic. In the illustrated embodiment, the voids 54 extend into the gaps 41 between the vertically stacked segments 40 of the charge storage material 38.

Voids may also be present within the end protrusions 32, as shown relative to a void 56 within the example assembly 10b of FIG. 7. The voids 56 may result from the processes utilized to form the conductive materials 24 and 26, as described in more detail below. Although the embodiment of FIG. 7 shows the voids 56 within each of the end protrusions 32, in other embodiments the voids 56 may be within only some of the end protrusions 32 rather than within all of the end protrusions. However, it may be advantageous for the electrical properties of all of the end protrusions to be substantially the same as one another, and accordingly it may be advantageous for all of the end protrusions to be substantially physically identical to one another. Thus, if the voids 56 are formed, it may be advantageous for the voids 56 to be within all of the protrusions 32, with the void within each of the end protrusions having substantially the same size and shape as the voids within the other end protrusions.

In some embodiments, both of the voids 54 and 56 may be present, as shown relative to an assembly 10c of FIG. 8.

In the embodiment of FIG. 5, the segments 36 of the charge blocking material 34 are along single edges of the end protrusions 32. In other embodiments, the segments 36 may partially surround the end protrusions 32, as shown relative to an assembly 10d of FIG. 9. Notably, the charge storage segments 40 of FIG. 9 have a vertical thickness T4 which is larger than the vertical thicknesses T2 and T3.

In the embodiments of FIGS. 5-9, the segments 36 of the charge blocking material 34 comprise only a single homogeneous composition. In other embodiments, the segments may comprise laminates of two or more different compositions. For example, FIG. 10 shows an assembly 10e in which the charge blocking material 34 comprises a laminate of two different compositions 34a and 34b, with the compositions 34a and 34b joining to one another along a vertically extending interface 57.

The compositions 34a and 34b may comprise any suitable materials.
In some embodiments, one of the compositions may comprise, consist essentially of, or consist of silicon oxynitride, and the other may comprise, consist essentially of, or consist of silicon dioxide.

The embodiments of FIGS. 9 and 10 show the protrusions 32 without the voids 56 (FIG. 7), and show the insulative levels 14 without the voids 54 (FIG. 8). In other embodiments, assemblies similar to those of FIGS. 9 and 10 may be formed to comprise the voids 54 and/or the voids 56.

The assemblies described above may be formed utilizing any suitable methods. Example methods are described with reference to FIGS. 11-30.

Referring to FIG. 11, the construction 10 comprises a vertical stack 12 of alternating first levels 14 and second levels 16. The first levels 14 comprise a first material 60, and the second levels 16 comprise a second material 62. The first and second materials may comprise any suitable compositions, and are of different compositions relative to one another. In some embodiments, the first material 60 may comprise, consist essentially of, or consist of silicon dioxide, and the second material 62 may comprise, consist essentially of, or consist of silicon nitride. The second levels 16 will eventually become the word line levels described above with reference to, for example, FIG. 5. The levels 14 and 16 may be of any suitable thicknesses at the processing stage of FIG. 11, and may be the same thickness as one another or different thicknesses relative to one another. In some embodiments, the levels 14 and 16 may have vertical thicknesses within a range of from about 10 nanometers (nm) to about 400 nm. In some embodiments, the levels 14 and 16 may have thicknesses within a range of from about 10 nm to about 50 nm.

Referring to FIG. 12, an opening 64 is formed to extend through the stack 12.
The opening has sidewalls 65 extending along the first and second materials 60 and 62.

Referring to FIG. 13, the first levels 14 are recessed relative to the second levels 16 along the sidewalls 65 of the opening 64. After the recessing, the second levels 16 have projecting ends 66 which extend inwardly beyond the recessed first levels 14. The ends 66 have surfaces 67 of the second material 62. The recessed first levels 14 have surfaces 69 of the first material 60. Cavities (gaps) 68 are vertically between the ends 66. The surfaces 69 may be considered to be along inner edges of the cavities 68.

Referring to FIG. 14, a third material 70 is formed selectively along the second material 62 relative to the first material 60. Accordingly, the material 70 is formed selectively along the surfaces 67 relative to the surfaces 69. The material 70 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon nitride. Accordingly, the third material 70 may comprise a same composition as the second material 62.

Any suitable processing may be utilized to form the material 70 selectively along the second material 62 relative to the first material 60. In some embodiments, a blocking material (also referred to herein as a poisoning material) may be formed selectively along the first material 60 relative to the second material 62 to preclude subsequent formation of the material 70 along surfaces of the first material 60, and then the material 70 may be formed with a suitable deposition process (e.g., atomic layer deposition, chemical vapor deposition, etc.).
The blocking material may comprise any suitable composition, and in some embodiments may comprise one or more of N,N-dimethylaminotrimethylsilane, bis(N,N-dimethylamino)dimethylsilane, ethylenediamine, 1-trimethylsilylpyrrolidine, 1-trimethylsilylpyrrole, 3,5-dimethyl-1-trimethylsilyl, and R1-(C-OH)-R2, where R1 and R2 are organic moieties.

The third material 70 wraps around the ends 66 of the second levels 16 to widen such ends. The widened ends are vertically spaced from one another by remaining regions of the gaps 68.

The material 70 may be formed to any suitable thickness, and in some embodiments may be formed to a thickness within a range of from about 1 nm to about 10 nm. In some embodiments, the thickness of the material 70 may be utilized to tailor the vertical thickness T2 of the conductive protrusions 32 (FIG. 5).

Referring to FIG. 15, a fourth material 72 is formed within the gaps 68 (FIG. 14). The fourth material 72 may comprise any suitable composition, and in some embodiments may comprise, consist essentially of, or consist of silicon. For example, the fourth material 72 may comprise polysilicon.

The fourth material 72 has inner surfaces 71 which are adjacent (along) the surfaces 69 of the first material 60.

The third and fourth materials 70 and 72 have outer edges which together form vertical edges 73 along the sidewalls of the opening 64.

Referring to FIG. 16, the charge blocking material 34 is formed along the vertical edges 73, the charge storage material 38 is formed along the charge blocking material, the gate dielectric material 42 is formed along the charge storage material, the channel material 44 is formed along the gate dielectric material, and the insulating material 46 is formed to fill a remaining interior portion of the opening 64. In some embodiments, the materials 34, 38, 42, 44, and 46 may be considered to be formed adjacent one another.
In some embodiments, the charge storage material 38 may be considered to be formed along the vertical edges 73, and to be spaced from such vertical edges by the charge blocking material 34. In some embodiments, the materials 34, 38, 42, 44, and 46 may be considered to extend vertically through the stack 12.

Referring to FIG. 17, the second material 62 and the third material 70 (FIG. 16) are removed to leave voids 74. The voids 74 may be referred to as first voids to distinguish them from other voids formed at a later process stage.

Referring to FIG. 18, the dielectric barrier material 28, the conductive material 26, and the conductive material 24 are formed within the voids 74 (FIG. 17). Accordingly, the levels 16 become conductive levels analogous to those described above with reference to FIG. 5. The conductive levels have main regions 30 of the first vertical thickness T1, and have end protrusions 32 of the second vertical thickness T2. In the illustrated embodiment, the conductive material 24 entirely fills the end protrusions 32 to form a configuration analogous to that described above with reference to FIG. 5. In other embodiments, the conductive material 24 may only partially fill the end protrusions 32 to leave voids (or keyholes) 56 within the end protrusions 32, as shown in FIG. 18A.

Referring to FIG. 19, the construction 10 is shown at a process stage subsequent to that of FIG. 18. The first material 60 (FIG. 18) is removed to form voids 76 along the levels 14.

Referring to FIG. 20, the fourth material 72 (FIG. 19) is removed to extend the voids 76. An advantage of utilizing one or more of hafnium oxide, zirconium oxide, hafnium silicate, and zirconium silicate as the material 28 may be that such materials can be resistant to the etching conditions utilized to form and extend the voids 76.
Aluminum oxide may not be sufficiently resistant to such etching conditions to be suitable for utilization as the material 28 (unless the aluminum oxide is in a laminate having, outwardly of the aluminum oxide, one or more of hafnium oxide, zirconium oxide, hafnium silicate, and zirconium silicate to protect the aluminum oxide).

The voids 76 of FIG. 20 may be referred to as second voids to distinguish them from the first voids 74 of FIG. 17.

Referring to FIG. 21, the second voids 76 are extended through the charge blocking material 34 and the charge storage material 38 to subdivide such materials into the segments 36 and 40, respectively. In some embodiments (not shown), the voids 76 may also extend through the gate dielectric material 42.

Referring to FIG. 22, the voids 76 (FIG. 21) are filled with the insulating material 20 to form a configuration analogous to that described above with reference to FIG. 5. In other embodiments, the voids 76 may remain at least partially open (i.e., gas-filled) to form configurations analogous to those described above with reference to FIGS. 6 and 8.

Another example method for forming an example integrated assembly is described with reference to FIGS. 23-30.

Referring to FIG. 23, the construction 10 is shown at a process stage which may follow that of FIG. 14. The charge blocking material 34 is formed along the third material 70. In some embodiments, the third material 70 comprises silicon nitride, and the charge blocking material 34 comprises silicon oxynitride (and/or silicon dioxide) formed by oxidizing the third material 70. In some embodiments, the material 34 may be referred to as a fifth material.
The third material 70 may be considered to form widened ends around the protrusions of the material 62, and the fifth material 34 may be considered to be formed around such widened ends.

The material 34 narrows the gaps 68.

The material 34 may be formed to any suitable thickness, and in some embodiments may be formed to a thickness within a range of from about 1 nm to about 5 nm.

In some embodiments, the material 34 (the fifth material) may be formed with a deposition process, and may be formed to extend across the surfaces 69 within the gaps 68 as well as along the material 70, as shown in FIG. 23A.

Referring to FIG. 24, the assembly 10 is shown at a process stage subsequent to that of FIG. 23. The fourth material 72 is formed within the narrowed gaps 68 (FIG. 23). The vertical edges 73 extend along the materials 34 and 72.

Referring to FIG. 25, the charge storage material 38 is formed along the vertical edges 73, the gate dielectric material 42 is formed along the charge storage material, the channel material 44 is formed along the gate dielectric material, and the insulating material 46 is formed to fill a remaining portion of the opening 64.

In some embodiments, the material 34 of FIG. 24 may be a first charge blocking material 34a, and a second charge blocking material 34b may be deposited at a subsequent process stage. For example, FIG. 25A shows the assembly 10 at a process stage alternative to that described above with reference to FIG. 25. The second charge blocking material 34b is formed along the vertical edges 73, and then the charge storage material 38, the gate dielectric material 42, the channel material 44, and the insulating material 46 are formed. The assembly of FIG. 25A may be utilized to form a configuration analogous to that described above with reference to FIG. 10.

Referring to FIG. 26, the assembly 10 is shown at a process stage subsequent to that of FIG. 25. The second material 62 and the third material 70 (FIG. 25) are removed and replaced with the materials 24, 26, and 28.
Such removal and replacement may utilize processing analogous to that described above with reference to FIGS. 17 and 18.

Referring to FIG. 27, the first material 60 (FIG. 26) is removed to form the second voids 76 along the levels 14. In the illustrated embodiment, some of the material 34 is removed during the etch utilized to remove the material 60. The material 34 may or may not be removed during such etch, depending on the relative compositions of the materials 34 and 60, and depending on the etch conditions utilized.

Referring to FIG. 28, the fourth material 72 (FIG. 27) is removed to extend the voids 76.

Referring to FIG. 29, the voids 76 are extended through the charge storage material 38 to subdivide such material into the segments 40.

Referring to FIG. 30, the voids 76 (FIG. 29) are filled with the insulating material 20 to form a configuration analogous to those described above. In other embodiments, the voids 76 may remain at least partially open (i.e., gas-filled) to form configurations analogous to those described above with reference to FIGS. 6 and 8.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate), and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms "dielectric" and "insulative" may be utilized to describe materials having insulative electrical properties, and are considered synonymous in this disclosure. Utilization of the term "dielectric" in some instances and the term "insulative" (or "electrically insulative") in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The terms "electrically connected" and "electrically coupled" may both be utilized in this disclosure, and are considered synonymous. Utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow.

The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings or are rotated relative to such orientation.

The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings.

When a structure is referred to above as being "on," "adjacent to," or "against" another structure, it can be directly on the other structure, or intervening structures may also be present.
In contrast, when a structure is referred to as being "directly on," "directly adjacent to," or "directly against" another structure, there are no intervening structures present. The terms "directly under," "directly over," etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment.

Structures (e.g., layers, materials, etc.) may be referred to as "extending vertically" to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically extending structures may or may not extend substantially orthogonally relative to an upper surface of the base.

Some embodiments include an integrated structure comprising a vertical stack of alternating insulative levels and conductive levels. The conductive levels have main regions of a first vertical thickness, and have end protrusions of a second vertical thickness, with the second vertical thickness being greater than the first vertical thickness. Charge blocking material is arranged in vertically stacked first segments. The first segments are along the conductive levels and adjacent the end protrusions. The first segments are vertically spaced from one another by first gaps. Charge storage material is arranged in vertically stacked second segments. The second segments are along the conductive levels and adjacent the first segments. The second segments are vertically spaced from one another by second gaps. Gate dielectric material is adjacent the charge storage material. Channel material is adjacent the gate dielectric material. The channel material extends vertically along the vertical stack.

Some embodiments include a NAND memory array having a vertical stack of alternating insulative levels and word line levels.
The word line levels have main regions of a first vertical thickness, and have end protrusions of a second vertical thickness, with the second vertical thickness being greater than the first vertical thickness. The end protrusions include control gate regions. Charge blocking regions are adjacent the control gate regions and are vertically spaced from one another. Charge storage regions are adjacent the charge blocking regions and are vertically spaced from one another. Gate dielectric material is adjacent the charge storage regions. Channel material extends vertically along the vertical stack and is adjacent the gate dielectric material.

Some embodiments include a method of forming an integrated structure. A vertical stack of alternating first and second levels is formed. The first levels comprise a first material, and the second levels comprise a second material. The first levels are recessed relative to the second levels. The second levels have projecting ends which extend beyond the recessed first levels. The ends have surfaces of the second material. The recessed first levels have surfaces of the first material. A third material is formed selectively along the second material relative to the first material. The third material extends around the ends of the second levels to widen the ends. The widened ends are vertically spaced from one another by gaps. A fourth material is formed within the gaps. The third and fourth materials have outer surfaces which form vertical edges. Inner surfaces of the fourth material are adjacent the surfaces of the first material. Charge storage material is formed to extend vertically along the vertical edges. Gate dielectric material is formed to extend vertically along the charge storage material. Channel material is formed to extend vertically along the gate dielectric material. The second and third materials are removed to leave first voids.
Conductive levels are formed within the first voids. The conductive levels have main regions of a first vertical thickness, and have end protrusions of a second vertical thickness, with the second vertical thickness being greater than the first vertical thickness. The first and fourth materials are removed to leave second voids. The second voids are extended through the charge storage material to subdivide the charge storage material into vertically spaced segments.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
The present disclosure relates to peer-to-peer link sharing for upstream communications from XPUs to a host processor. A processor unit includes a first controller to couple to a host processing unit over a first link; a second controller to couple to a second processor unit over a second link, the second processor unit being coupled to the host processing unit over a third link; and circuitry to determine whether to send a cache coherency request to the host processing unit over the first link, or over the second link via the second processor unit.
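The path-selection decision described above can be illustrated with a hedged sketch. The class name, threshold, and credit model below are illustrative assumptions rather than the disclosed implementation; the sketch only captures the idea that, when available upstream bandwidth on the direct link (approximated here by link credits) runs low, the request is forwarded to the peer processor unit over the second link instead.

```python
# Hedged sketch of credit-based path selection for upstream requests.
# Names and the threshold are illustrative assumptions, not the actual design.

class LinkPathSelector:
    def __init__(self, direct_credits, peer_credits, min_credits=4):
        self.direct_credits = direct_credits  # credits remaining on the first (direct) link
        self.peer_credits = peer_credits      # credits remaining on the second (peer) link
        self.min_credits = min_credits        # threshold for "enough upstream bandwidth"

    def choose_path(self):
        """Return 'direct' to send over the first link to the host, or
        'peer' to forward over the second link via the peer processor unit."""
        if self.direct_credits >= self.min_credits:
            return "direct"
        if self.peer_credits >= self.min_credits:
            return "peer"
        return "direct"  # fall back: queue on the direct link

sel = LinkPathSelector(direct_credits=2, peer_credits=10)
print(sel.choose_path())  # -> peer
```

In a real design, the credit counts would be maintained by the link controllers, and other signals (e.g., a raw upstream bandwidth metric, as the claims below suggest) could feed the same decision.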
1. A processor unit comprising:
a first controller to be coupled to a host processing unit via a first link;
a second controller to be coupled to a second processor unit via a second link, wherein the second processor unit is coupled to the host processing unit via a third link; and
circuitry to determine whether to send a cache coherency request to the host processing unit over the first link or over the second link via the second processor unit.
2. The processor unit of claim 1, wherein the first link and the third link are each links according to a Compute Express Link (CXL) protocol.
3. The processor unit of claim 1, wherein the circuitry is to determine whether to send the cache coherency request over the first link or over the second link based on an amount of available upstream bandwidth on the first link.
4. The processor unit of claim 3, wherein the circuitry is to determine the amount of available upstream bandwidth on the first link based on a number of available link credits.
5. The processor unit of claim 3, wherein the circuitry is to determine the amount of available upstream bandwidth on the first link based on a raw upstream bandwidth metric.
6. The processor unit of claim 1, wherein the circuitry is to determine whether to send the cache coherency request over the first link or over the second link based on an amount of available bandwidth on the second link.
7. The processor unit of claim 1, wherein the circuitry is to determine whether to send the cache coherency request over the first link or over the second link based on an amount of available upstream bandwidth on the third link.
8. The processor unit of claim 7, wherein the circuitry is to determine the amount of available upstream bandwidth on the third link based on a number of host-bound requests received by the processor unit from the second processor unit, wherein the processor unit sends the host-bound requests to the host processing unit over the first link.
9. 
The processor unit of claim 1, further comprising second circuitry to:
track memory requests received from the second processor unit for memory of the host processing unit; and
respond to snoop requests from the host processing unit associated with such memory.
10. The processor unit of claim 1, wherein the processor unit and the second processor unit are each a graphics processing unit.
11. A method comprising:
communicating, by a first processor unit, with a host processing unit over a first link;
communicating, by the first processor unit, with a second processor unit over a second link, wherein the second processor unit is coupled to the host processing unit via a third link; and
determining whether to send a cache coherency request to the host processing unit over the first link or over the second link via the second processor unit.
12. The method of claim 11, further comprising determining whether to send the cache coherency request over the first link or over the second link based on an amount of available upstream bandwidth on the first link.
13. The method of claim 12, further comprising determining the amount of available upstream bandwidth on the first link based on a number of available link credits.
14. The method of claim 12, further comprising determining the amount of available upstream bandwidth on the first link based on a raw upstream bandwidth metric.
15. The method of claim 11, further comprising determining whether to send the cache coherency request over the first link or over the second link based on an amount of available bandwidth on the second link.
16. The method of claim 11, further comprising determining whether to send the cache coherency request over the first link or over the second link based on an amount of available upstream bandwidth on the third link.
17. 
The method of claim 11, further comprising:
determining an amount of available upstream bandwidth on the third link based on a number of host-bound requests received from the second processor unit; and
sending the host-bound requests to the host processing unit over the first link.
18. The method of claim 11, further comprising:
tracking memory requests received from the second processor unit for memory of the host processing unit; and
responding to snoop requests from the host processing unit associated with such memory.
19. The method of claim 11, wherein the first link and the third link are each links according to a Compute Express Link (CXL) protocol.
20. The method of claim 11, wherein the first processor unit and the second processor unit are each a graphics processing unit.
21. A system comprising means for performing the method of any one of claims 11-20.
22. A computer program product comprising instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 11-20.
Peer-to-peer link sharing for upstream communication from XPUs to a host processor

Technical Field
The present disclosure relates generally to the field of computer development and, more particularly, to peer-to-peer link sharing for upstream communications from a processor unit (XPU) to a host processor.

Background
Disaggregation of hosts, memory, and processor units (XPUs) across multiple servers is a way to build systems that deliver high performance in a cost-effective and power-efficient manner.

Summary
According to an embodiment of the present disclosure, there is provided a processor unit comprising: a first controller to be coupled to a host processing unit via a first link; a second controller to be coupled to a second processor unit via a second link, wherein the second processor unit is coupled to the host processing unit via a third link; and circuitry to determine whether to send a cache coherency request to the host processing unit over the first link or over the second link via the second processor unit.

According to an embodiment of the present disclosure, there is provided a method comprising: communicating, by a first processor unit, with a host processing unit over a first link; communicating, by the first processor unit, with a second processor unit over a second link, wherein the second processor unit is coupled to the host processing unit via a third link; and determining whether to send a cache coherency request to the host processing unit over the first link or over the second link via the second processor unit.

According to an embodiment of the present disclosure, there is provided a system including means for performing the method described above.

According to an embodiment of the present disclosure, there is provided a computer program product comprising instructions which, when executed by a processor, cause the 
processor to perform the method described above.

Brief Description of the Drawings
FIG. 1 is a block diagram of a computing system for peer-to-peer link sharing for upstream communications from a processor unit (XPU) to a host processor, according to various embodiments.
FIG. 2 is a block diagram illustrating upstream communication according to various embodiments.
FIG. 3 is a block diagram of the architecture of an XPU according to various embodiments.
FIG. 4 is a chart for determining when to enable peer-to-peer link sharing for upstream communications from an XPU to a host processor, according to various embodiments.
FIG. 5 is a flow diagram for peer-to-peer link sharing for upstream communication from an XPU to a host processor, according to various embodiments.
FIG. 6 illustrates a block diagram of components found in a computing system according to various embodiments.
FIG. 7 illustrates a block diagram of another computing system according to various embodiments.
Like reference numbers and designations in the various figures indicate like elements.

Detailed Description
FIG. 1 is a block diagram of a computing system 100 providing peer-to-peer link sharing for upstream communication from processor units (XPUs) 102 (e.g., 102A to 102F) to a host (104A or 104B) that includes a processor (e.g., a CPU), according to various embodiments. In the depicted embodiment, each XPU 102 is coupled to every other XPU 102 via a peer-to-peer link 106. The XPUs 102 are also each coupled to host 104A or 104B via respective host links 108. In the depicted embodiment, XPUs 102A, 102B, and 102F are each connected to host 104A via respective host links 108, and XPUs 102C, 102D, and 102E are each connected to host 104B via respective host links 108. 
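The depicted topology might be represented as follows. This is an illustrative sketch only; the data structures and the `peers_sharing_host` helper are assumptions introduced here, not part of the disclosure, and mirror the reference numerals used in the text.

```python
# Illustrative representation of the FIG. 1 topology: six XPUs fully
# connected by peer links 106, each attached to one host by a host link 108.

XPUS = ["102A", "102B", "102C", "102D", "102E", "102F"]
HOST_OF = {"102A": "104A", "102B": "104A", "102F": "104A",
           "102C": "104B", "102D": "104B", "102E": "104B"}

# Peer links 106: every unordered pair of XPUs (fully connected mesh).
PEER_LINKS = {(a, b) for i, a in enumerate(XPUS) for b in XPUS[i + 1:]}

def peers_sharing_host(xpu):
    """Peer XPUs attached to the same host, whose host links 108 could be borrowed."""
    return [p for p in XPUS if p != xpu and HOST_OF[p] == HOST_OF[xpu]]
```

For instance, XPU 102A could borrow upstream bandwidth toward host 104A only through peers 102B and 102F in this arrangement.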
Any other suitable connectivity arrangement is contemplated in various embodiments, such as one or more XPUs connected to only a subset of the other XPUs, one or more XPUs not connected to any host 104, each XPU having a dedicated link 108 to each host 104, multiple links (e.g., 106, 108) between any pair of components, or other suitable connectivity. Hosts 104A and 104B may also be connected to each other via a link 110. In some embodiments, system 100 is a cache-coherent system that provides coherency of shared data across the XPUs 102 and one or more hosts 104, where the data may be stored in multiple local caches of the XPUs and/or hosts 104.

XPU deployments in various environments (e.g., data centers) for segments such as artificial intelligence (AI) training and high-performance computing (HPC), among others, may include multi-XPU scale-up systems (e.g., system 100) in which each host processor (e.g., a central processing unit (CPU)) can host multiple XPU devices that are also directly attached to each other.

In some segments (e.g., HPC), where a workload performs a substantial amount of computation on both the processor (e.g., CPU) of the host (e.g., 104) and the XPU cluster (e.g., XPUs 102) and there is a large amount of shared memory access, higher link bandwidth between the XPU cluster and the host processor is advantageous. Accesses to such shared memory residing on (or otherwise accessible through) the host 104 may be bursty across the XPUs 102, such that at any point in time one XPU's utilization of its link to the host (e.g., host link 108) may differ considerably from another XPU's utilization of its link to the host.

Various embodiments of the present disclosure provide the XPUs 102 in a multi-XPU scale-up system with the ability to share upstream bandwidth to the host 104 in order to achieve higher average upstream bandwidth to the memory of the host 104. 
For example, some embodiments allow an XPU 102 to access host memory via a peer XPU's upstream link to the host 104 in addition to its own upstream link to the host 104. An XPU 102 may share its upstream bandwidth to the host 104 with more than one peer XPU. Various embodiments may allow an XPU 102 to determine when to opportunistically utilize the upstream bandwidth of one or more of its peer XPUs so as to use the available bandwidth efficiently. Various embodiments allow XPUs that are part of a scale-up cluster to dynamically borrow bandwidth and achieve higher bandwidth toward host processor memory, especially when such accesses are bursty in nature. Various embodiments may provide particular benefits in use cases such as HPC, where there is frequent communication via shared memory between a graphics processing unit (GPU) (a GPU being one example of an XPU) and the CPU, and where the memory footprint is generally larger in host 104 memory than in individual XPU 102 memory.

Processor unit 102 may include any suitable processing or storage device, such as a hardware accelerator, GPU, field-programmable gate array, neural network processing unit, artificial intelligence processing unit, inference engine, data processing unit, infrastructure processing unit, I/O device, or other suitable computing device capable of communicating with other XPUs 102 and one or more hosts 104.

Host 104 may include any electronic device capable of performing computing operations (e.g., processing or storage operations) and communicating with one or more XPUs 102 over a link. In various embodiments, host 104 may include a processor, such as a CPU or other processor unit. In some embodiments, the host may also include supporting architecture, such as a BIOS, memory, or I/O services. In some embodiments, host 104 may include a server.

Host link 108 may be a link according to any suitable protocol that enables communication between an XPU 102 and a host 104. 
A link may refer to a logical connection between computing devices and may be defined by a number of lanes (e.g., 16, 8, or 4; denoted x16, x8, or x4). In some embodiments, each lane may include a transmit path and a receive path (each path comprising a unidirectional differential pair). Other embodiments may have other arrangements.

In various embodiments, host link 108 is a Peripheral Component Interconnect Express (PCIe) link (e.g., as defined in the PCIe 5.0 Base Specification or another suitable PCIe specification). In various embodiments, host link 108 may be a link that enables cache coherency between the XPU 102 and the host 104. For example, in some embodiments, host link 108 is a Compute Express Link™ (CXL) (e.g., as defined in the CXL 2.0 specification or another suitable CXL specification). CXL is a protocol for connections between a device (e.g., XPU 102) and a processor (e.g., CPU) of host 104 over a PCIe link. CXL provides the benefit of shared coherent cacheable memory between a device (e.g., XPU 102) and a host (e.g., 104). In one implementation, the bandwidth achieved on a CXL link is similar to that achievable on a PCIe link (e.g., 64 GBps on an x16 Gen5 phy).

Traffic sent by the XPU 102 to the host 104 over CXL can be sent through the CXL.cache channel or the CXL.io channel. CXL.cache traffic allows coherent cacheability semantics (providing coherency between host 104 and XPU 102 memory), while CXL.io uses conventional PCIe semantics to provide a non-coherent load/store interface to devices. The host 104 can also use a third channel (CXL.mem) to communicate with memory using memory semantics. In various embodiments, upstream traffic sent from an XPU 102 to the host 104 (whether directly over the XPU's own host link 108 or through another XPU) includes cache-coherent transactions, such as reads of or writes to the memory of the host 104. 
The CXL.cache protocol (or another link protocol) may define the interaction between the XPU 102 and the host 104 as a number of requests, where each request has at least one associated response message and sometimes a data transfer (e.g., a 64-byte memory data line).

Peer-to-peer links 106 may likewise be links according to any suitable protocol that enables communication between peer XPUs 102. In various embodiments, a peer-to-peer link 106 also supports cache coherency between the XPUs 102. In some embodiments, the peer-to-peer link 106 is a high-bandwidth scale-up link, such as an Xe link. A link 106 may include a high-bandwidth SERDES option and may provide high-bandwidth communication between peer XPUs over a natively wide link or over multiple links.

FIG. 2 is a block diagram illustrating upstream communication according to various embodiments. The figure depicts XPU 102A sending upstream communications to host 104A. XPU 102A is coupled to host 104A via host link 108A and to peer XPU 102B via peer link 106. XPU 102B is coupled to host 104A via host link 108B.

XPU 102A may typically send communications (e.g., requests to write or read memory controlled by host 104A, such as CXL.cache traffic) to host 104A via data path 202 over host link 108A. However, in the depicted embodiment, XPU 102A may also send such communications to host 104A over data path 204, which includes peer link 106 and the host link 108B of the peer XPU. Thus, XPU 102A can send a communication to peer XPU 102B via peer link 106, and the peer XPU 102B can forward that communication to host 104A via host link 108B.

FIG. 3 is a block diagram of an architecture 300 of an XPU 102 in accordance with various embodiments. In various embodiments, each XPU 102 in system 100 may have some or all of the components depicted. In a scale-up system (e.g., as shown in FIG. 1), each XPU 102 can access three different types of system memory components via three different data paths. 
First, an XPU 102 can access its own local memory (e.g., device memory 314 or other local memory coupled to the XPU 102) via its internal memory fabric. Second, an XPU 102 may access a peer XPU's memory (e.g., the peer XPU's device memory 314) via a peer-to-peer link 106. Finally, an XPU 102 may access memory of the host 104 (e.g., memory resident on host 104 or other memory accessible via host 104) via its host link 108.

The device memory 314 or the memory of the host 104 can be any suitable type of memory, such as double data rate (DDR), low-power double data rate (LPDDR), high bandwidth memory (HBM), or other appropriate memory. In some embodiments, a high-bandwidth network on chip (NoC) on the XPU 102 may allow the XPU 102 to achieve the desired memory bandwidth. In various embodiments, device memory 314 may be centralized or distributed across the XPU 102.

The XPU engine 302 may generate cache-coherent requests (e.g., requests to read or write memory in the address space of system 100 in a cache-coherent manner). In some embodiments, XPU engine 302 may execute a thread that accesses memory, and a request may be generated in response to the thread. The request may be passed to a memory management unit (MMU) 304. The MMU 304 may manage memory owned by the XPU 102 and may perform logical-to-physical address translation for the request. If the MMU 304 cannot perform the address translation (e.g., because the memory is located at host 104), the XPU 102 may send an address translation request to another entity (e.g., an IOMMU of host 104) and receive a physical address in response. The MMU 304 may also include or be coupled to an address translation cache (ATC), which may cache logical-to-physical address translations received by the XPU 102 (e.g., in a manner similar to a translation lookaside buffer).

A host/device memory demultiplexer (demux) 306 can determine the location of the requested memory and route the request accordingly. 
If the memory is located within (or otherwise owned by) the XPU 102 or a peer XPU, the request is routed to local/remote address demultiplexer 308. The local/remote address demultiplexer 308 determines (e.g., by range comparison) whether the memory is local to the XPU 102 or owned by a peer XPU. If the memory is local to the XPU 102, the request is routed through multiplexer (mux) 310 to device memory 314. If the memory is owned by a peer XPU, the request is routed by the local/remote address demultiplexer 308 to the peer link controller 318 and over the peer link 106 to the peer XPU. Multiplexer 310 also routes memory requests received from peer XPUs over peer link 106 and memory requests received from host 104 over link 108 to device memory 314.

To enable an XPU 102 to borrow from the larger pool of aggregated upstream bandwidth to the hosts 104 (via the host links 108 of other XPUs) available to the XPU scale-up cluster, the architecture 300 of the XPU 102 includes a path that allows outbound traffic destined for host 104 (e.g., memory requests such as CXL.cache traffic) to be conditionally diverted to the peer link controller 318 and peer link 106 (for transmission to the host via a peer XPU). Architecture 300 also includes a path that allows inbound traffic from the peer link 106 and peer link controller 318 to be conditionally diverted to the host link controller 316 and host link 108 (e.g., when the XPU is forwarding traffic to host 104 on behalf of another XPU). This path may pass through demultiplexer 312, and traffic on it may be marked with tag 326.

Controllers 316 and 318 may include any suitable circuitry to set up and control communications over the respective links. In some embodiments, host link controller 316 may include a CXL.cache controller.

Requests generated by the XPU engine 302 that are destined for host 104 may pass through request diversion circuitry 322. 
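The address-based routing described above (demux 306, demux 308, and mux 310) might be sketched as follows. This is a minimal illustration: the address windows and return labels are assumptions invented here for clarity; the patent does not specify how the range comparison is configured.

```python
# Hypothetical address windows for the range comparison performed by
# local/remote demux 308; actual windows are implementation-specific.
LOCAL_RANGE = range(0x0000, 0x4000)   # assumed local device-memory window
PEER_RANGE = range(0x4000, 0x8000)    # assumed peer-XPU memory window

def route_request(addr):
    """Return the component a memory request at physical address addr is routed to."""
    if addr in LOCAL_RANGE:
        return "device_memory_314"          # via mux 310
    if addr in PEER_RANGE:
        return "peer_link_controller_318"   # via demux 308, over peer link 106
    return "host_link_controller_316"       # via demux 306, over host link 108
```

The point of the sketch is only that routing is a pure function of the physical address once translation by the MMU 304 has completed.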
Under normal conditions (e.g., when host link 108 has sufficient bandwidth available), upstream requests to host 104 may be sent from request diversion circuitry 322 to host link controller 316 and over host link 108 to host 104. When the available upstream bandwidth on link 108 is low, traffic destined for host 104 may instead be routed through another XPU: the traffic is routed by request diversion circuitry 322 to peer link controller 318, and a tag 324 may be applied to the traffic to indicate that it is host-bound. In various embodiments, the decision made by request diversion circuitry 322 as to whether to route upstream traffic to host link 108 or to peer link 106 may be based on available upstream bandwidth information tracked by bandwidth monitor 320. Further details on this determination are described below.

A peer XPU receiving traffic from another XPU on one of its peer links 106 can inspect (e.g., using demux 312) that traffic to determine whether it is destined for the host (e.g., whether it carries the host-bound tag 324). If this tag is set, the peer XPU sends the inbound traffic out on its own host link 108 toward host 104 (instead of sending it to its device memory 314, as is done for standard requests received from another XPU via link 106). If the tag is not set, the XPU determines that the request is from a peer XPU for a portion of its own memory and sends the request to device memory 314.

The XPU 102 may support any relevant protocol translation or tunneling so that requests destined for the host can be sent over the peer-to-peer link 106. In various embodiments, the protocol used to convey traffic to the host (e.g., CXL.cache) may be source-ordered, enabling the flow described herein without imposing additional ordering requirements on the peer-to-peer link 106.

Various embodiments may include any suitable circuitry to implement coherent semantics in system 100. 
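The tag-based diversion and forwarding described above can be sketched as two small functions, one for each side of the peer link. The dictionary-based request representation and the `to_host` field standing in for tag 324 are illustrative assumptions, not the disclosed hardware interface.

```python
def divert_request(request, own_upstream_bw_high):
    """Request diversion (322): choose the outbound path for a host-bound request."""
    if own_upstream_bw_high:
        return ("host_link_108", request)
    # Low bandwidth on our own host link: mark the request (tag 324) and
    # send it over a peer link so the peer forwards it to the host.
    tagged = dict(request, to_host=True)
    return ("peer_link_106", tagged)

def peer_inbound(request):
    """What a peer XPU does with traffic arriving on a peer link (demux 312)."""
    if request.get("to_host"):
        return "host_link_108"     # forward toward the host on behalf of the peer
    return "device_memory_314"     # ordinary request for this XPU's own memory
```

Note that the untagged case and the tagged case are distinguished solely by the tag, which is what lets source-ordered host traffic share the peer link with ordinary peer-memory accesses.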
For example, host 104 may include a snoop filter to track possible caching, by the XPUs 102, of memory lines from host 104's memory. The snoop filter of the host 104 may track memory accesses to determine whether memory has been modified, or to notify a device when a cached memory line should be invalidated (e.g., because it has been modified by the host 104 or another XPU 102).

As noted above, traffic to host 104 (e.g., CXL.cache traffic) may be source-ordered. In various embodiments, the host's snoop filter tracks caching per host link 108 connected to the host 104 (e.g., the CXL 2.0 specification allows only one CXL.cache-enabled device per link). Thus, a cacheable request from an XPU 102 will be marked by the host's snoop filter as being cached behind the link used to receive the request. The host's snoop filter therefore may not distinguish which XPU 102 has cached a memory line, but only tracks the specific host link 108 on which the request for that memory line was received. Accordingly, if a request from one XPU is received through another XPU, the host's snoop filter may not be able to tell which XPU has cached the memory line identified in the request.

Because the host's snoop filter tracks caching per host link 108, any subsequent snoops from the host issued as a consequence of cacheable requests sent by an XPU via a peer XPU will be sent by the host over the peer XPU's host link 108. In various embodiments, such snoops may be handled based on the XPU's caching model for host 104's memory.

In some embodiments, the XPU's cache for memory accessed from the host 104 is placed near the host link controller 316 (sometimes referred to as a shallow cache model). Such caching can be used, for example, in popular use cases such as in-place device atomic support for system memory (e.g., where the XPU primarily uses cacheable semantics for atomic operations it issues and non-cacheable/ReadCurrent semantics for other requests). 
In this case, all memory access requests (e.g., CXL.cache requests) sent by an XPU over its host link 108 (including those it sends on behalf of a peer XPU) are cached at this XPU. The XPU itself can then serve all snoops it receives from the host over the host link. In some embodiments, the XPU responds to a snoop by indicating the state of the memory line in the XPU's cache and/or may return data to the host 104 in a provided data buffer.

In various embodiments, the cache of memory accessed from the host 104 is arranged deeper within the XPU (e.g., the cache may be in other device memory, which may include a peer XPU's memory). In this case, when an XPU receives a snoop over its host link 108, the XPU determines whether the memory line is cached by itself or by a peer XPU. In various embodiments, an XPU may maintain a snoop filter (e.g., near host link 108) that tracks any cacheable accesses that peer XPUs have made to host 104 over this host link. If this snoop filter indicates that a particular snoop from the host 104 needs to be sent to a peer XPU (e.g., because the peer XPU has cached the memory line), the XPU converts the host snoop into the snoop semantics used over the peer link and sends it to the peer XPU (e.g., semantics similar to those the XPU might use for snoops associated with its own memory cached by a peer XPU).

FIG. 4 depicts a chart 400 for determining when to enable peer-to-peer link sharing for upstream communications from an XPU 102 to the host 104 in accordance with various embodiments. As previously described, request diversion circuitry 322 may determine whether an XPU's request to host 104 should be sent to host 104 via the XPU's own host link 108 or via another XPU over its peer link 106.

In various embodiments, this determination may be based on one or more metrics of bandwidth availability tracked by bandwidth monitor 320. 
Metrics of bandwidth availability that may affect the decision of whether to send a request to the host through a peer XPU may include one or more of the following: bandwidth availability on the XPU's own host link 108, bandwidth availability on the peer link 106 to the peer XPU, and bandwidth availability on the host link 108 of the peer XPU.

In various embodiments, the determination of bandwidth availability on the XPU's own host link 108 may be based on raw bandwidth utilization in the upstream direction on the host link 108. For example, bandwidth monitor 320 may determine bandwidth utilization at a particular point in time, average bandwidth utilization over a period of time, or another raw bandwidth utilization metric based on the number of observed requests and/or the sizes of the requests. In some embodiments, bandwidth availability on the XPU's host link 108 is based on available credits in the upstream direction of the host link 108 (e.g., link-layer credits, such as link-layer credits on the CXL.cache channel). To determine this, bandwidth monitor 320 may track available upstream credits (e.g., CXL.cache credits). A low number of available credits indicates backpressure on the link 108 due to insufficient available bandwidth. In some embodiments, each channel (e.g., a CXL.cache channel) sends messages using credits and collects credits back from the recipient of each message (the recipient may return a credit when it has processed the message). By tracking available credits over time, bandwidth monitor 320 can estimate the available upstream bandwidth on host link 108.

Bandwidth availability on the peer link 106 to a peer XPU may be determined in any suitable manner, such as any of the manners described above for availability on the host link 108 (e.g., based on raw bandwidth utilization or available credits).

The bandwidth availability in the upstream direction of the corresponding peer XPU's host link 108 may be tracked in any suitable manner. 
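As an aside, the credit-tracking estimate described above for the XPU's own host link can be sketched as a small monitor. This is a hedged illustration, assuming a simple moving-average policy and a fixed low-watermark fraction; the disclosure does not prescribe either.

```python
class CreditMonitor:
    """Estimates upstream bandwidth headroom from free link-layer credits (bandwidth monitor 320)."""

    def __init__(self, total_credits, low_fraction=0.25):
        # Below this average of free credits, the link is assumed backpressured.
        self.low_watermark = total_credits * low_fraction
        self.samples = []

    def sample(self, free_credits):
        """Record the number of free upstream credits observed at one instant."""
        self.samples.append(free_credits)

    def upstream_bw_high(self):
        # Few free credits over the window implies backpressure on the link,
        # i.e., insufficient available upstream bandwidth.
        if not self.samples:
            return True
        return sum(self.samples) / len(self.samples) > self.low_watermark
```

The same structure could be reused for the peer link 106, since the text notes peer-link availability may be determined in the same manner.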
In some embodiments, bandwidth monitor 320 may determine this availability based on the rate at which the peer XPU sends its own traffic to the host through this XPU. If the peer XPU has sufficient bandwidth available on its own host link 108, the peer XPU will not send its traffic to the host through this XPU (or will send very little traffic through this XPU). If the amount of bandwidth available to a peer XPU on its own host link 108 is low, it may begin sending a portion of its requests to the host through one or more peer XPUs. If the XPU detects that a particular peer XPU has sufficient upstream bandwidth available, the XPU can send requests through that peer XPU at a reasonable rate. If the XPU detects that the amount of upstream bandwidth available to a particular peer XPU is low, the XPU may send requests at a lower rate. Thus, an XPU can detect the available upstream bandwidth of a peer XPU based on the number of host-bound requests it receives from that peer XPU. In an alternative embodiment, XPUs 102 may periodically send messages to each other indicating the amount of bandwidth available on their respective host links 108.

Chart 400 depicts an example scheme that may be implemented by request diversion circuitry 322 of an XPU to determine whether to send a host-bound request to host 104 via the XPU's own host link 108 or via a peer XPU. Chart 400 includes various values for the XPU's own available upstream bandwidth on its host link 108, the available bandwidth on the XPU's peer link to a specific peer XPU, and the specific peer XPU's available upstream bandwidth on its host link. 
If the XPU's own upstream available bandwidth is low, but the available bandwidth to the peer XPU's peer link and the peer XPU's upstream available bandwidth on its host link 108 is high, then the XPU can Wait for the XPU to send traffic to the host at a reasonable rate to increase the bandwidth for the XPU to send requests to the host. If the XPU's own upstream available bandwidth is low, and either (or both) of the available bandwidth on the peer-to-peer link to the peer XPU or the upstream available bandwidth of the peer XPU on its host link 108 ) is lower, then the XPU can send traffic to the host via the peer XPU at a very low rate (so as not to overwhelm the other XPU's host link).In various embodiments, the rate at which an XPU sends requests to a host through a peer XPU may decrease as a detected decrease in the available bandwidth of the associated peer link and/or the peer XPU's host link. In various embodiments, an XPU may track the available bandwidth of each of its peer XPUs, and send requests to the host at a different rate for each peer XPU based on the available bandwidth.5 is a flow for peer-to-peer link sharing for uplink communication from an XPU to a host processor, according to various embodiments. The process begins by determining whether the upstream available bandwidth of the XPU 102 is high. If the available bandwidth is higher, the XPU may send the request to the host at 504 via the XPU's host link 108 . As long as the upstream available bandwidth remains high, the XPU can continue to send requests over its own host link. Once it is determined that the uplink available bandwidth is no longer high, the process moves to 506 .At 506, it is determined whether substantial upstream bandwidth is available at the peer XPU. If so, the XPU 102 can send some requests to the host via its own host link 108 at 508 and other requests to the host at a reasonable rate via the peer XPU at 510 . 
If substantial upstream bandwidth is not available at the peer XPU, the XPU 102 may send some requests to the host via its own host link 108 at 512 and other requests at a low rate via the peer XPU at 514. The flow can then return to 502.

In some embodiments, the determination at 506 and the subsequent operations may be performed for each peer XPU. In various embodiments, an XPU may monitor the available upstream bandwidth of multiple peer XPUs, select the peer XPU with the greatest available upstream bandwidth, and send host-bound traffic to the selected peer XPU. An XPU may distribute host-bound traffic across any number of peer XPUs in any suitable ratio (e.g., based on their available upstream bandwidth). The available bandwidth between the XPU and its peer XPUs can also be used to determine how much host-bound traffic to send to each peer XPU.

The flows depicted in the figures herein are merely representative of operations that may occur in particular embodiments. In other embodiments, additional operations may be performed by components of the various systems described herein. Various embodiments of the present disclosure contemplate any suitable signaling mechanisms to achieve the functionality described herein. Some operations illustrated in the figures may be repeated, combined, modified, or omitted where appropriate. Furthermore, operations may be performed in any suitable order without departing from the scope of particular embodiments.

Numerous specific details are set forth herein, such as examples of specific types of processors and system configurations, specific hardware structures, and specific architectural and microarchitectural details, in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present disclosure. 
In other instances, well-known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expressions of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of computer systems, have not been described in detail in order to avoid unnecessarily obscuring the present disclosure. Any portion of the systems or components described herein may be included within a device capable of transmitting and/or receiving data. For example, any portion of system 100 may be included in a computing device, such as host 104 or XPU 102, either of which may comprise a processor, a system-on-chip (SoC), or other suitable circuitry. In some embodiments, a host may comprise any suitable computing system operable to connect to a peripheral device and to transmit data to and/or receive data from the peripheral device. A host may comprise one or more processors and one or more ports. A host may comprise or be coupled to any other suitable circuitry, such as memory, interconnects, one or more communication controllers, or other suitable circuitry. Although the embodiments herein are described with reference to particular integrated circuits, such as those in computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from the features described herein.
For example, the disclosed embodiments are not limited to a particular host device or peripheral device, but may be applied to any suitable host or peripheral device, such as desktop computer systems, server computer systems, handheld devices, tablets, other thin notebooks, system-on-chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Furthermore, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations. FIGS. 6 and 7 depict example systems in which various embodiments described herein may be implemented. For example, XPU 102 or host 104 may include any one or more of the components depicted in FIG. 6 or FIG. 7. Referring now to FIG. 6, a block diagram of components present in a computer system that may function as host 104 or an XPU is depicted, according to some embodiments. As shown in FIG. 6, system 600 includes any combination of components. These components may be implemented as ICs, portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in a computer system, or as components otherwise incorporated within a chassis of the computer system. Note also that the block diagram of FIG. 6 is intended to show a high-level view of many components of the computer system.
However, it is understood that some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may occur in other implementations. As a result, the disclosure described above may be implemented in any portion of one or more of the interconnects illustrated or described below. As seen in FIG. 6, a processor 610, in one embodiment, includes a microprocessor, multi-core processor, multithreaded processor, ultra-low-voltage processor, embedded processor, or other known processing element. In the illustrated implementation, processor 610 acts as a main processing unit and central hub for communication with many of the various components of the system 600. As one example, processor 610 is implemented as a system on a chip (SoC). As a specific illustrative example, processor 610 includes an Architecture Core™-based processor such as an i3, i5, i7, or another such processor available from Intel Corporation of Santa Clara, California. However, other low-power processors, such as those available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, California, a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, California, or an ARM-based design licensed from ARM Holdings, Ltd. or a customer, licensee, or adopter thereof, may instead be present in other embodiments, such as an Apple A5/A6 processor, a Qualcomm Snapdragon processor, or a TI OMAP processor. Note that many of the customer versions of such processors are modified and varied; however, they may support or recognize a specific instruction set that performs defined algorithms as set forth by the processor licensor. Here, the microarchitectural implementation may vary, but the architectural function of the processor is typically consistent.
Certain details regarding the architecture and operation of processor 610 in one implementation will be discussed further below to provide an illustrative example. Processor 610, in one embodiment, communicates with a system memory 615. As an illustrative example, the memory can, in an embodiment, be implemented via multiple memory devices to provide for a given amount of system memory. As examples, the memory can be in accordance with a Joint Electron Devices Engineering Council (JEDEC) low power double data rate (LPDDR)-based design, such as according to JEDEC JESD 209-2E (published April 2009), or a next-generation LPDDR standard to be referred to as LPDDR3 or LPDDR4 that will offer extensions to LPDDR2 to increase bandwidth. In various implementations, the individual memory devices may be of different package types, such as single die package (SDP), dual die package (DDP), or quad die package (QDP). These devices, in some embodiments, are directly soldered onto a motherboard to provide a lower-profile solution, while in other embodiments the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Of course, other memory implementations are possible, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties, including but not limited to microDIMMs and MiniDIMMs. In a particular illustrative embodiment, the memory is sized between 2GB and 16GB and may be configured as a DDR3LM package or an LPDDR2 or LPDDR3 memory that is soldered onto the motherboard via a ball grid array (BGA). A mass storage device 620 may also couple to processor 610 to provide for persistent storage of information such as data, applications, one or more operating systems, and so forth. In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via an SSD.
However, in other embodiments, the mass storage may primarily be implemented using a hard disk drive (HDD), with a smaller amount of SSD storage to act as an SSD cache to enable non-volatile storage of context state and other such information so that fast power-up can occur on re-initiation of system activities. Also shown in FIG. 6, a flash memory device 622 may be coupled to processor 610, e.g., via a serial peripheral interface (SPI). The flash memory device may provide for non-volatile storage of system software, including basic input/output software (BIOS) as well as other firmware of the system. In various embodiments, the mass storage of the system is implemented by an SSD alone, or as a disk, optical, or other drive with an SSD cache. In some embodiments, the mass storage is implemented as an SSD or as an HDD along with a restore (RST) cache module. In various implementations, the HDD provides for storage of between 320GB and 4 terabytes (TB) and upward, while the RST cache is implemented with an SSD having a capacity of 24GB-256GB. Note that such an SSD cache may be configured as a single level cache (SLC) or multi-level cache (MLC) option to provide an appropriate level of responsiveness. In an SSD-only option, the module may be accommodated in various locations, such as in an mSATA or NGFF slot. As an example, the SSD has a capacity ranging from 120GB to 1TB. Various input/output (IO) devices may be present within system 600. Specifically shown in the embodiment of FIG. 6 is a display 624, which may be a high-definition LCD or LED panel configured within a lid portion of the chassis. This display panel may also provide for a touch screen 625, e.g., adapted externally over the display panel, such that via a user's interaction with this touch screen, user inputs can be provided to the system to enable desired operations, e.g., with regard to the display of information, accessing of information, and so forth.
In one embodiment, display 624 may be coupled to processor 610 via a display interconnect that can be implemented as a high-performance graphics interconnect. Touch screen 625 may be coupled to processor 610 via another interconnect, which in an embodiment can be an I2C interconnect. As further shown in FIG. 6, in addition to touch screen 625, user input by way of touch can also occur via a touch pad 630, which may be configured within the chassis and may also be coupled to the same I2C interconnect as touch screen 625. The display panel may operate in multiple modes. In a first mode, the display panel can be arranged in a transparent state in which the display panel is transparent to visible light. In various embodiments, the majority of the display panel may be a display, except for a bezel around the periphery. When the system is operated in a notebook mode and the display panel is operated in a transparent state, a user may view information that is presented on the display panel while also being able to view objects behind the display. In addition, information displayed on the display panel may be viewed by a user positioned behind the display. Alternatively, the operating state of the display panel can be an opaque state in which visible light does not transmit through the display panel. In a tablet mode, the system is folded shut such that the back display surface of the display panel comes to rest in a position such that it faces outwardly towards a user, when the bottom surface of the base panel is rested on a surface or held by the user. In the tablet mode of operation, the back display surface performs the role of a display and user interface, as this surface may have touch screen functionality and may perform other known functions of a conventional touch screen device, such as a tablet device. To this end, the display panel may include a transparency-adjusting layer that is disposed between a touch screen layer and a front display surface.
In some embodiments, the transparency-adjusting layer may be an electrochromic (EC) layer, an LCD layer, or a combination of EC and LCD layers. In various embodiments, the display can be of different sizes, e.g., an 11.6" or a 13.3" screen, and may have a 16:9 aspect ratio and at least 300 nits brightness. Also, the display may be of full high definition (HD) resolution (at least 1920 x 1080p), be compatible with an embedded display port (eDP), and be a low-power panel with panel self-refresh. As to touch screen capabilities, the system may provide for a display multi-touch panel that is multi-touch capacitive and capable of at least 5 fingers. And in some embodiments, the display may be 10-finger capable. In one embodiment, the touch screen is accommodated within damage- and scratch-resistant glass and coating (e.g., Gorilla Glass™ or Gorilla Glass 2™) for low friction to reduce "finger burn" and avoid "finger skipping". To provide for an enhanced touch experience and responsiveness, the touch panel, in some implementations, has multi-touch functionality, such as less than 2 frames (30Hz) per static view during pinch zoom, and single-touch functionality of less than 1cm per frame (30Hz) with 200ms lag (finger to pointer). In some implementations, the display supports edge-to-edge glass with a minimal screen bezel that is also flush with the panel surface, and limited IO interference when using multi-touch. Various sensors may be present within the system and may be coupled to processor 610 in different manners for sensing computations and other purposes. Certain inertial and environmental sensors may couple to processor 610 through a sensor hub 640, e.g., via an I2C interconnect. In the embodiment shown in FIG. 6, these sensors may include an accelerometer 641, an ambient light sensor (ALS) 642, a compass 643, and a gyroscope 644.
Other environmental sensors may include one or more thermal sensors 646, which in some embodiments couple to processor 610 via a system management bus (SMBus). Using the various inertial and environmental sensors present in a platform, many different use cases may be realized. These use cases enable advanced computing operations, including perceptual computing, and also allow for enhancements with regard to power management/battery life, security, and system responsiveness. For example, with regard to power management/battery life issues, based at least in part on information from an ambient light sensor, the ambient light conditions in a location of the platform are determined and the intensity of the display is controlled accordingly. Thus, the power consumed in operating the display is reduced in certain light conditions. As to security operations, based on context information obtained from the sensors, such as location information, it may be determined whether a user is allowed to access certain secure documents. For example, a user may be permitted to access such documents at a work place or a home location. However, the user is prevented from accessing such documents when the platform is present at a public location. This determination, in one embodiment, is based on location information, e.g., determined via a GPS sensor or camera recognition of landmarks. Other security operations may include providing for pairing of devices within a close range of each other, e.g., a portable platform as described herein and a user's desktop computer, mobile phone, or so forth. Certain sharing, in some implementations, is realized via near field communication when these devices are so paired. However, when the devices are out of range, such sharing may be disabled. Furthermore, when pairing a platform as described herein with a smartphone, an alarm may be configured to be triggered when the devices move more than a predetermined distance from each other while in a public location.
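The ambient-light-driven display control described above is a simple mapping from sensor reading to backlight intensity. The function below is a hypothetical sketch: the lux range, minimum level, and linear ramp are illustrative assumptions, not values from the disclosure.

```python
def backlight_level(ambient_lux, min_pct=10, max_pct=100, full_lux=1000):
    """Map an ambient light sensor reading (in lux) to a backlight
    percentage: dim the panel in dark surroundings to save power, and
    brighten it in daylight for readability.

    The linear ramp and the min/max/full_lux defaults are illustrative
    assumptions only.
    """
    lux = max(0, min(ambient_lux, full_lux))  # clamp to the sensor range
    return min_pct + (max_pct - min_pct) * lux / full_lux
```

A platform would typically apply hysteresis or smoothing on top of such a mapping so the backlight does not flicker as the sensor reading fluctuates.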
In contrast, when these paired devices are in a safe location, e.g., a work place or home location, the devices may exceed this predetermined limit without triggering such an alarm. Responsiveness may also be enhanced using the sensor information. For example, even when a platform is in a low-power state, the sensors may still be enabled to run at a relatively low frequency. Accordingly, any changes in a location of the platform, e.g., as determined by inertial sensors, a GPS sensor, or so forth, are determined. If no such changes have been registered, a faster connection to a previous wireless hub, such as a Wi-Fi™ access point or similar wireless enabler, occurs, as there is no need to scan for available wireless network resources in this case. Thus, a greater level of responsiveness when waking from a low-power state is achieved. It is to be understood that many other use cases may be enabled using sensor information obtained via the integrated sensors within a platform as described herein, and the above examples are only for purposes of illustration. Using a system as described herein, a perceptual computing system may allow for the addition of alternative input modalities, including gesture recognition, and enable the system to sense user operations and intent. In some embodiments, one or more infrared or other heat-sensing elements, or any other element for sensing the presence or movement of a user, may be present. Such sensing elements may include multiple different elements working together, working in sequence, or both. For example, sensing elements include elements that provide initial sensing, such as light or sound projection, followed by sensing for gesture detection by, for example, an ultrasonic time-of-flight camera or a patterned light camera. Also, in some embodiments, the system includes a light generator to produce an illuminated line.
In some embodiments, this line provides a visual cue regarding a virtual boundary, i.e., an imaginary or virtual location in space, where action of the user to pass or break through the virtual boundary or plane is interpreted as an intent to engage with the computing system. In some embodiments, the illuminated line may change colors as the computing system transitions into different states with regard to the user. The illuminated line may be used to provide a visual cue to the user of a virtual boundary in space, and may be used by the system to determine transitions in state of the computer with regard to the user, including determining when the user wishes to engage with the computer. In some embodiments, the computer senses user position and operates to interpret movement of a hand of the user through the virtual boundary as a gesture indicating an intention of the user to engage with the computer. In some embodiments, upon the user passing through the virtual line or plane, the light generated by the light generator may change, thereby providing visual feedback to the user that the user has entered an area for providing gestures to provide input to the computer. Display screens may provide visual indications of transitions of state of the computing system with regard to a user. In some embodiments, a first screen is provided in a first state in which the presence of a user is sensed by the system, such as through use of one or more of the sensing elements. In some implementations, the system acts to sense user identity, such as by facial recognition. Here, transition to a second screen may be provided in a second state, in which the computing system has recognized the user identity, where this second screen provides visual feedback to the user that the user has transitioned into a new state.
Transition to a third screen may occur in a third state in which the user has confirmed recognition of the user. In some embodiments, the computing system may use a transition mechanism to determine a location of a virtual boundary for a user, where the location of the virtual boundary may vary with user and context. The computing system may generate a light, such as an illuminated line, to indicate a virtual boundary for engaging with the system. In some embodiments, the computing system may be in a waiting state, and the light may be produced in a first color. The computing system may detect whether the user has reached past the virtual boundary, such as by sensing the presence and movement of the user using sensing elements. In some embodiments, if the user has been detected as having crossed the virtual boundary (e.g., the hands of the user being closer to the computing system than the virtual boundary line), the computing system may transition to a state for receiving gesture inputs from the user, where a mechanism to indicate the transition may include the light indicating the virtual boundary changing to a second color. In some embodiments, the computing system may then determine whether gesture movement is detected. If gesture movement is detected, the computing system may proceed with a gesture recognition process, which may include the use of data from a gesture data library, which may reside in memory in the computing device or may be otherwise accessed by the computing device. If a gesture of the user is recognized, the computing system may perform a function in response to the input and return to receive additional gestures if the user is within the virtual boundary.
In some embodiments, if the gesture is not recognized, the computing system may transition into an error state, where a mechanism to indicate the error state may include the light indicating the virtual boundary changing to a third color, with the system returning to receive additional gestures if the user is within the virtual boundary for engaging with the computing system. As mentioned above, in other embodiments the system can be configured as a convertible tablet system that can be used in at least two different modes: a tablet mode and a notebook mode. The convertible system may have two panels, namely a display panel and a base panel, such that in the tablet mode the two panels are disposed in a stack on top of one another. In the tablet mode, the display panel faces outwardly and may provide touch screen functionality as found in conventional tablets. In the notebook mode, the two panels may be arranged in an open clamshell configuration. In various embodiments, the accelerometer may be a 3-axis accelerometer having data rates of at least 50Hz. A gyroscope may also be included, which can be a 3-axis gyroscope. In addition, an e-compass/magnetometer may be present. Also, one or more proximity sensors may be provided (e.g., for lid open, to sense when a person is in proximity (or not) to the system and adjust power/performance to extend battery life). Sensor fusion capability, including the accelerometer, gyroscope, and compass, may provide enhanced features for some OSs. In addition, via a sensor hub having a real-time clock (RTC), a wake-from-sensors mechanism may be realized to receive sensor input while the remainder of the system is in a low-power state. In some embodiments, an internal lid/display open switch or sensor indicates when the lid is closed/open, and can be used to place the system into connected standby or to automatically wake the system from connected standby.
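The virtual-boundary engagement behavior described above amounts to a small state machine: a waiting state with a first light color, an engaged state entered when the user crosses the boundary (second color), and an error state on an unrecognized gesture (third color). The sketch below is illustrative only; the state names, color labels, and gesture library contents are hypothetical, not taken from the disclosure.

```python
# Hypothetical sketch of the virtual-boundary state machine described above.
WAIT, ENGAGED, ERROR = "wait", "engaged", "error"
BOUNDARY_COLOR = {WAIT: "color1", ENGAGED: "color2", ERROR: "color3"}

# Assumed contents of the gesture data library.
GESTURE_DB = {"swipe_left", "swipe_right", "pinch"}

class GestureUI:
    def __init__(self):
        self.state = WAIT

    def light_color(self):
        # The illuminated line's color signals the current state to the user.
        return BOUNDARY_COLOR[self.state]

    def on_sensor(self, hand_inside_boundary, gesture=None):
        """Process one sensing-element update; returns a recognized
        gesture name when a function should be performed, else None."""
        if self.state == WAIT:
            if hand_inside_boundary:
                self.state = ENGAGED  # user crossed the virtual boundary
        elif self.state in (ENGAGED, ERROR):
            if not hand_inside_boundary:
                self.state = WAIT     # user withdrew past the boundary
            elif gesture is not None:
                if gesture in GESTURE_DB:
                    self.state = ENGAGED  # recognized: act, keep listening
                    return gesture
                self.state = ERROR        # unrecognized gesture
        return None
```

From the error state, the machine returns to receiving gestures as long as the user stays within the boundary, mirroring the recovery behavior described above.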
Other system sensors may include ACPI sensors for internal processor, memory, and skin temperature monitoring to enable changes to processor and system operating states based on sensed parameters. Also seen in FIG. 6, various peripheral devices may couple to processor 610. In the embodiment shown, various components can be coupled through an embedded controller (EC) 635. Such components may include a keyboard 636 (e.g., coupled via a PS2 interface), a fan 637, and a thermal sensor 639. In some embodiments, touch pad 630 may also couple to EC 635 via a PS2 interface. In addition, a security processor such as a trusted platform module (TPM) 638 in accordance with the Trusted Computing Group (TCG) TPM Specification Version 1.2, dated Oct. 2, 2003, may also couple to processor 610 via this LPC interconnect. However, understand the scope of the present disclosure is not limited in this regard, and secure processing and storage of secure information may be in another protected location, such as a static random access memory (SRAM) in a security coprocessor, or as encrypted data blocks that are only decrypted when protected by a secure enclave (SE) processor mode. In a particular implementation, peripheral ports may include a high definition media interface (HDMI) connector (which can be of different form factors such as full size, mini, or micro) and one or more USB ports, such as full-size external ports in accordance with the Universal Serial Bus (USB) Revision 3.2 Specification (September 2017), with at least one powered for charging of USB devices (such as smartphones) when the system is in connected standby state and is plugged into AC wall power. In addition, one or more Thunderbolt™ ports can be provided. Other ports may include an externally accessible card reader, such as a full-size SD-XC card reader and/or a SIM card reader for WWAN (e.g., an 8-pin card reader).
For audio, a 3.5mm jack with stereo sound and microphone capability (e.g., combination functionality) can be present, with support for jack detection (e.g., headphone-only support using a microphone in the lid, or headphones with a microphone in the cable). In some embodiments, this jack can be re-taskable between stereo headphone and stereo microphone input. Also, a power jack can be provided for coupling to an AC outlet. System 600 can communicate with external devices in a variety of manners, including wirelessly. In the embodiment shown in FIG. 6, various wireless modules are present, each of which can correspond to a radio configured for a particular wireless communication protocol. One manner for wireless communication in a short range, such as a near field, may be via a near field communication (NFC) unit 645, which may communicate, in one embodiment, with processor 610 via the SMBus. Note that via this NFC unit 645, devices in close proximity to each other can communicate. For example, a user can enable system 600 to communicate with another portable device, such as the user's smartphone, by adapting the two devices together in close relation and enabling the transfer of information such as identification information, payment information, or data such as image data. Wireless power transfer can also be performed using an NFC system. Using the NFC unit described herein, a user can implement near field coupling functions, such as near field communication and wireless power transfer (WPT). More specifically, embodiments provide devices with strategically shaped and placed ferrite materials to provide for better coupling of the coils. Each coil has an inductance associated with it, which can be selected in conjunction with the resistive, capacitive, and other features of the system to enable a common resonant frequency for the system. As further seen in FIG. 6, additional wireless units can include other short-range wireless engines, including a WLAN unit 650 and a Bluetooth unit 652.
Using WLAN unit 650, Wi-Fi™ communications in accordance with a given Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard can be realized, while via Bluetooth unit 652, short-range communications via a Bluetooth protocol can occur. These units may communicate with processor 610 via, e.g., a USB link or a universal asynchronous receiver transmitter (UART) link. Alternatively, these units may couple to processor 610 via an interconnect according to a Peripheral Component Interconnect Express™ (PCIe™) protocol, e.g., in accordance with the PCI Express™ Specification Base Specification version 3.0 (published January 17, 2007), or another such protocol, such as a serial data input/output (SDIO) standard. Of course, the actual physical connection between these peripheral devices, which can be configured on one or more add-in cards, can be by way of the NGFF connectors adapted to a motherboard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, can occur via a WWAN unit 656, which in turn may couple to a subscriber identity module (SIM) 657. In addition, to enable receipt and use of location information, a GPS module 655 may also be present. Note that in the embodiment shown in FIG. 6, WWAN unit 656 and an integrated capture device such as a camera module 654 may communicate via a given USB protocol, e.g., a USB 2.0 or 3.0 link, or a UART or I2C protocol. Again, the actual physical connection of these units can be via adaptation of an NGFF add-in card to an NGFF connector configured on the motherboard. In a particular embodiment, wireless functionality can be provided modularly, e.g., with a WiFi™ 802.11ac solution (e.g., an add-in card that is backward compatible with IEEE 802.11abgn) with support for Windows 8 CS. This card can be configured in an internal slot (e.g., via an NGFF adapter).
An additional module may provide for Bluetooth capability (e.g., Bluetooth 4.0 with backward compatibility) as well as Intel® Wireless Display functionality. In addition, NFC support may be provided via a separate device or multi-function device, and can be positioned, as an example, in a front right portion of the chassis for easy access. A still additional module may be a WWAN device that can provide support for 3G/4G/LTE and GPS. This module can be implemented in an internal (e.g., NGFF) slot. Integrated antenna support can be provided for WiFi™, Bluetooth, WWAN, NFC, and GPS, enabling seamless transition from WiFi™ to WWAN radios, wireless gigabit (WiGig) in accordance with the Wireless Gigabit Specification (July 2010), and vice versa. As described above, an integrated camera can be incorporated in the lid. As one example, this camera can be a high-resolution camera, e.g., having a resolution of at least 2.0 megapixels (MP) and extending to 6.0 MP and beyond. To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP) 660, which may couple to processor 610 via a high definition audio (HDA) link. Similarly, DSP 660 may communicate with an integrated coder/decoder (CODEC) and amplifier 662 that in turn may couple to output speakers 663, which may be implemented within the chassis. Similarly, amplifier and CODEC 662 can be coupled to receive audio inputs from a microphone 665, which in an embodiment can be implemented via dual array microphones (such as a digital microphone array) to provide for high-quality audio inputs, to enable voice-activated control of various operations within the system. Note also that audio outputs can be provided from amplifier/CODEC 662 to a headphone jack 664. Although shown with these particular components in the embodiment of FIG.
6, understand that the scope of the present disclosure is not limited in this regard. In a particular embodiment, the digital audio codec and amplifier are capable of driving the stereo headphone jack, stereo microphone jack, an internal microphone array, and stereo speakers. In different implementations, the codec can be integrated into an audio DSP or coupled via an HD audio path to a peripheral controller hub (PCH). In some implementations, in addition to integrated stereo speakers, one or more bass speakers can be provided, and the speaker solution can support DTS audio. In some embodiments, processor 610 may be powered by an external voltage regulator (VR) and multiple internal voltage regulators that are integrated inside the processor die, referred to as fully integrated voltage regulators (FIVRs). The use of multiple FIVRs in the processor enables the grouping of components into separate power planes, such that power is regulated and supplied by the FIVR to only those components in the group. During power management, a given power plane of one FIVR may be powered down or off when the processor is placed into a certain low power state, while another power plane of another FIVR remains active, or fully powered. Power control in the processor can lead to enhanced power savings. For example, power can be dynamically allocated between cores, individual cores can change frequency/voltage, and multiple deep low power states can be provided to enable very low power consumption. In addition, dynamic control of the cores or independent core portions can provide for reduced power consumption by powering off components when they are not being used. In different implementations, a security module such as a TPM can be integrated into a processor or can be a discrete device, such as a TPM 2.0 device.
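The power-plane grouping described above can be modeled simply: each FIVR supplies one plane, and a plane's regulator can be powered down only when every component in its group is idle. The plane names and component groupings below are hypothetical, for illustration only.

```python
# Illustrative model of per-FIVR power planes; names are hypothetical.
PLANES = {
    "core_plane": ["core0", "core1"],
    "graphics_plane": ["gpu"],
    "io_plane": ["usb", "display_link"],
}

def planes_to_gate(active_components):
    """Return the set of planes whose components are all idle; each such
    plane's FIVR can be powered down independently, while the FIVRs of
    planes with active components remain fully powered."""
    active = set(active_components)
    return {plane for plane, comps in PLANES.items()
            if not (active & set(comps))}
```

The point of the grouping is independence: gating `graphics_plane` while `core_plane` stays active is exactly the behavior the passage describes for a processor entering a partial low-power state.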
With an integrated security module (also referred to as Platform Trust Technology (PTT)), BIOS/firmware can be enabled to expose certain hardware features for certain security features, including secure instructions, secure boot, Intel® Anti-Theft Technology, Intel® Identity Protection Technology, Intel® Trusted Execution Technology (TxT), and Intel® Manageability Engine Technology, along with secure user interfaces such as a secure keyboard and display. Turning next to FIG. 7, another block diagram of an example computing system that may function as host 104 or XPU 102 is depicted in accordance with certain embodiments. As a specific illustrative example, SoC 700 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. A UE often connects to a base station or node, which potentially corresponds in nature to a mobile station (MS) in a GSM network. Here, SoC 700 includes 2 cores, 706 and 707. Similar to the discussion above, cores 706 and 707 may conform to an instruction set architecture, such as an Intel® Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 706 and 707 are coupled to cache control 708 that is associated with bus interface unit 709 and L2 cache 710 to communicate with other parts of system 700.
Interconnect 712 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which may implement one or more aspects of the described disclosure.

Interconnect 712 provides communication channels to other components, such as to a Subscriber Identity Module (SIM) 730 to interface with a SIM card, to a boot ROM 735 to hold boot code for execution by cores 706 and 707 to initialize and boot SoC 700, to an SDRAM controller 740 to interface with external memory (e.g., DRAM 760), to a flash controller 745 to interface with non-volatile memory (e.g., flash memory 765), to peripheral control 750 (e.g., a serial peripheral interface) to interface with peripherals, to video codec 720 and video interface 725 to display and receive input (e.g., touch-enabled input), to GPU 715 to perform graphics-related computations, and so on. Any of these interfaces may incorporate aspects of the disclosure described herein.

Additionally, the system illustrates peripherals for communication, such as a Bluetooth module 770, a 3G modem 775, GPS 780, and WiFi 785. Note that, as described above, a UE includes a radio for communication, so not all of these peripheral communication modules are required; however, in a UE some form of radio is included for external communication.

Designs can go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of ways. First, as is useful in simulation, the hardware may be represented using a hardware description language (HDL) or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model.
Using conventional semiconductor fabrication techniques, the data representing the hardware model may be data specifying the presence or absence of various features on different mask layers for the masks used to produce the integrated circuit. In some implementations, such data may be stored in a database file format, such as Graphic Data System II (GDS II), Open Artwork System Interchange Standard (OASIS), or a similar format.

In some implementations, software-based hardware models, HDL, and other functional description language objects may include register transfer language (RTL) files, among other examples. Such objects may be machine-parsable, such that a design tool can accept an HDL object (or model), parse the HDL object for properties of the described hardware, and determine a physical circuit and/or on-chip layout from the object. The output of the design tool can be used to fabricate a physical device. For example, a design tool can determine from HDL objects the configuration of various hardware and/or firmware elements, such as bus widths, registers (including size and type), memory blocks, physical link paths, architectural topology, and other properties of the system modeled in the HDL object. Design tools may include tools for determining the topology and architectural configuration of a system on chip (SoC) and other hardware devices. In some cases, HDL objects can be used as a basis for developing models and design files that can be used by manufacturing equipment to manufacture the described hardware. Indeed, the HDL objects themselves can be provided as input to manufacturing system software to cause manufacture of the described hardware.

In any representation of the design, the data may be stored in any form of machine-readable medium. A memory or a magnetic or optical storage device, such as a disc, may be a machine-readable medium to store information transmitted via an optical or electrical wave modulated or otherwise generated to transmit such information.
When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communications provider or network provider may store, at least temporarily, on a tangible machine-readable medium an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, in one embodiment, reference to a module refers to the hardware that is specifically configured to recognize and/or execute the code to be held on the non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And, as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors or registers, or other hardware, such as programmable logic devices.

Use of the phrase "to" or "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task.
In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that, during operation, the 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "capable of" and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way as to enable use of the apparatus, logic, hardware, and/or element in a specified manner. As above, note that use of "to," "capable of," or "operable to," in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represent binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash memory cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems are used.
For example, the decimal number ten may also be represented as the binary value 1010 and the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any non-transitory mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system.
For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; or other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media from which such information may be received.

Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or a tangible machine-readable storage device used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
Thus, computer-readable media includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

Example 1 includes a processor unit comprising: a first controller coupled to a host processing unit via a first link; a second controller coupled to a second processor unit via a second link, wherein the second processor unit is coupled to the host processing unit via a third link; and circuitry to determine whether to send a cache coherency request to the host processing unit over the first link or via the second processor unit over the second link.

Example 2 includes the subject matter of Example 1, and wherein each of the first link and the third link is a link according to a Compute Express Link (CXL) protocol.

Example 3 includes the subject matter of any one of Examples 1 and 2, and wherein the circuitry is to determine whether to send the cache coherency request over the first link or the second link based on an amount of available uplink bandwidth on the first link.

Example 4 includes the subject matter of any one of Examples 1-3, and wherein the circuitry is to determine the amount of available uplink bandwidth on the first link based on a number of available link credits.

Example 5 includes the subject matter of any one of Examples 1-4, and wherein the circuitry is to determine the amount of available uplink bandwidth on the first link based on a raw uplink bandwidth metric.

Example 6 includes the subject matter of any one of Examples 1-5, and wherein the circuitry is to determine whether to send the cache coherency request over the first link or the second link based on an amount of available bandwidth on the second link.

Example 7 includes the subject matter of any one of Examples 1-6, and wherein the circuitry is to determine whether to send the cache coherency request over the first link or the second link based on an amount of available uplink bandwidth on the third link.

Example 8 includes the subject matter of any one of Examples 1-7, and wherein the circuitry is to determine the amount of available uplink bandwidth on the third link based on a number of requests to the host received by the processor unit from the second processor unit, wherein the processor unit sends the requests to the host to the host processing unit over the first link.

Example 9 includes the subject matter of any one of Examples 1-8, and further comprising second circuitry to track memory requests received from the second processor unit for memory of the host processing unit; and to respond to snoop requests from the host processing unit associated with such memory.

Example 10 includes the subject matter of any one of Examples 1-9, and wherein the processor unit and the second processor unit are each a graphics processing unit.

Example 11 includes a method comprising: communicating, by a first processor unit, with a host processing unit over a first link; communicating, by the first processor unit, with a second processor unit over a second link, wherein the second processor unit is coupled to the host processing unit via a third link; and determining whether to send a cache coherency request to the host processing unit over the first link or via the second processor unit over the second link.

Example 12 includes the subject matter of Example 11, and wherein each of the first link and the third link is a link
according to a Compute Express Link (CXL) protocol.

Example 13 includes the subject matter of any one of Examples 11 and 12, and further includes determining whether to send the cache coherency request over the first link or the second link based on an amount of available uplink bandwidth on the first link.

Example 14 includes the subject matter of any one of Examples 11-13, and further includes determining the amount of available uplink bandwidth on the first link based on a number of available link credits.

Example 15 includes the subject matter of any one of Examples 11-14, and further includes determining the amount of available uplink bandwidth on the first link based on a raw uplink bandwidth metric.

Example 16 includes the subject matter of any one of Examples 11-15, and further comprising determining whether to send the cache coherency request over the first link or the second link based on an amount of available bandwidth on the second link.

Example 17 includes the subject matter of any one of Examples 11-16, and further comprising determining whether to send the cache coherency request over the first link or the second link based on an amount of available uplink bandwidth on the third link.

Example 18 includes the subject matter of any one of Examples 11-17, and further comprising determining the amount of available uplink bandwidth on the third link based on a number of requests to the host received by the processor unit from the second processor unit, wherein the processor unit sends the requests to the host to the host processing unit over the first link.

Example 19 includes the subject matter of any one of Examples 11-18, and further comprising tracking memory requests received from the second processor unit for memory of the host processing unit; and responding to snoop requests from the host processing unit associated with such memory.

Example 20 includes the subject matter of any one of Examples 11-19, and wherein each of the processor unit and the second processor unit is a graphics processing unit.

Example 21 includes a system comprising: a host processor unit; and a plurality of processor units, a processor unit of the plurality of processor units coupled to the host processor unit via a first link and coupled via a plurality of second links to other processor units of the plurality of processor units, the other processor units coupled to the host processor unit via a plurality of third links; and wherein the processor unit is to determine whether to send a cache coherency request to the host processing unit over the first link or via one of the other processor units over one of the second links.

Example 22 includes the subject matter of Example 21, and wherein each of the first link and the third links is a link according to a Compute Express Link (CXL) protocol.

Example 23 includes the subject matter of any one of Examples 21 and 22, and wherein the processor unit is to determine whether to send the cache coherency request over the first link or one of the second links based on an amount of available uplink bandwidth on the first link.

Example 24 includes the subject matter of any one of Examples 21-23, and wherein the processor unit is to determine the amount of available uplink bandwidth on the first link based on a number of available link credits.

Example 25 includes the subject matter of any one of Examples 21-24, and wherein the processor unit is to determine the amount of
available uplink bandwidth on the first link based on a raw uplink bandwidth metric.

Example 26 includes the subject matter of any one of Examples 21-25, and wherein the processor unit is to determine whether to send the cache coherency request over the first link or one of the second links based on an amount of available bandwidth on the second links.

Example 27 includes the subject matter of any one of Examples 21-26, and wherein the processor unit is to determine whether to send the cache coherency request over the first link or one of the second links based on an amount of available uplink bandwidth on the plurality of third links.

Example 28 includes the subject matter of any one of Examples 21-27, and wherein the processor unit is to determine an amount of available uplink bandwidth on a third link based on a number of requests to the host received by the processor unit from a second processor unit, wherein the processor unit sends the requests to the host to the host processing unit over the first link.

Example 29 includes the subject matter of any one of Examples 21-28, and wherein the processor unit is to send a plurality of cache coherency requests to the host processor unit via a first plurality of the other processor units.

Example 30 includes the subject matter of any one of Examples 21-29, and wherein the processor unit is to track memory requests received from a second processor unit of the plurality of processor units, the memory requests being for memory of the host processing unit; and to respond to snoop requests from the host processing unit associated with such memory.

Example 31 includes the subject matter of any one of Examples 21-30, and wherein each of the processor unit and the second processor unit is a graphics processing unit.

Example 32 includes the subject matter of any one of Examples 21-31, and wherein the processor unit is to track memory requests received from a second processor unit of the plurality of processor units, the memory requests being for memory of the host processing unit; and to respond to snoop requests from the host processing unit associated with such memory.

Example 33 includes at least one non-transitory machine-accessible storage medium having instructions stored thereon that, when executed on a machine, cause the machine to perform the method of any one of Examples 11-20.

Example 34 includes a system comprising means for performing the method of any one of Examples 11-20.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
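The link-selection behavior recited in Examples 1-8 (and their method and system counterparts) can be illustrated with a small software sketch. This is not the patent's implementation: the function name, the credit threshold, and the simple preference policy are all hypothetical, standing in for circuitry that weighs available uplink bandwidth (e.g., link credits) when deciding whether a cache coherency request goes directly to the host or is routed through the peer processor unit.

```python
# Hypothetical sketch of credit-based link selection for a cache
# coherency request (cf. Examples 3-4). All names and the threshold
# policy are illustrative assumptions, not the claimed circuitry.

def pick_link(direct_credits: int, peer_credits: int, min_credits: int = 4) -> str:
    """Return which link to use for the next cache coherency request.

    direct_credits: link credits currently available on the direct
        (first) link to the host processing unit.
    peer_credits: credits available on the peer (second) link that
        reaches the host through the second processor unit.
    min_credits: threshold below which the direct link is treated as
        congested (an assumed tuning parameter).
    """
    # Prefer the direct link while it has headroom; otherwise fall back
    # to routing through the peer processor unit if it looks better.
    if direct_credits >= min_credits:
        return "direct"
    if peer_credits > direct_credits:
        return "peer"
    return "direct"

print(pick_link(8, 2))  # direct link has headroom
print(pick_link(1, 6))  # direct link congested, peer has more credits
```

In hardware, the credit counts would come from the flow-control state of each link rather than function arguments; the point of the sketch is only the decision structure.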
A processing device includes a sorting module, which adds to each of a plurality of elements a position value of a corresponding position in a register set, resulting in a plurality of transformed elements in corresponding positions. The plurality of elements include a plurality of bits. The sorting module compares each of the plurality of transformed elements to itself and to one another. The sorting module also assigns one of an enabled or disabled indicator to each of the plurality of the transformed elements based on the comparison. The sorting module further counts a number of the enabled indicators assigned to each of the plurality of the transformed elements to generate a sorted sequence of the plurality of elements.
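A minimal software model of the scheme described above may help. The hardware performs these comparisons in parallel, but the sequence of steps is the same: shift each element left and append its register position to break ties between equal values, compare every transformed element against every other, and count the enabled indicators to obtain each element's rank in the sorted sequence. The function below is an illustrative sketch, not the claimed circuit.

```python
# Software sketch of the rank-sort scheme: position values break ties,
# and each element's rank is the count of "enabled" comparison
# indicators (here, the strictly-smaller comparisons).

def rank_sort(elements):
    n = len(elements)
    pos_bits = max(1, (n - 1).bit_length())  # bits needed for a position value
    # Transform: shift each element left and add its position in the
    # register set, so equal values become distinct.
    transformed = [(v << pos_bits) + i for i, v in enumerate(elements)]
    ranks = []
    for a in transformed:
        # One enabled indicator per transformed element that compares
        # smaller; the count is this element's ascending-order position.
        enabled = sum(1 for b in transformed if b < a)
        ranks.append(enabled)
    # Scatter the original elements to their ranks.
    out = [0] * n
    for i, r in enumerate(ranks):
        out[r] = elements[i]
    return out

print(rank_sort([7, 3, 3, 9]))  # [3, 3, 7, 9]
```

Note that the self-comparison required by the scheme is harmless here: an element is never strictly smaller than itself, so it contributes no enabled indicator to its own count, and the appended position values guarantee every rank is unique even for duplicate inputs.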
1. A processing device comprising:
a sorting module to:
add a position value of a corresponding position in a register set to each of a plurality of elements, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements includes a plurality of bits;
compare each of the plurality of transformed elements to itself and to one another;
assign one of an enable indicator or a disable indicator to each of the plurality of transformed elements based on the comparison; and
count a number of the enable indicators assigned to each of the plurality of transformed elements to generate a sorted sequence of the plurality of elements.
2. The processing device of claim 1, wherein the sorting module is to shift each of the plurality of elements in each of the respective positions left by a set of bits when a value of at least one of the plurality of elements is the same as a value of another one of the plurality of elements.
3. The processing device of claim 1, wherein the sorted sequence includes a count of the number of enable indicators, and the sorting module is to generate the sorted sequence of the plurality of elements in one of an ascending or a descending order.
4. The processing device of claim 3, wherein the sorting module to compare each of the plurality of transformed elements further comprises the sorting module to perform a less-than operation.
5. The processing device of claim 3, wherein the sorting module to compare each of the plurality of transformed elements further comprises the sorting module to perform a greater-than operation.
6. The processing device of claim 1, wherein the sorting module is to generate at least a first set of the sorted sequence of a plurality of sorted elements and a second set of the sorted sequence of the plurality of sorted elements.
7. The
processing device of claim 6, further comprising a merge module coupled to the sorting module, wherein the merge module is to:
divide the first set of the sorted sequence into a first half and divide the second set of the sorted sequence into a second half, wherein the first half comprises the plurality of sorted elements of the first set of the sorted sequence, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence;
compare each of the plurality of sorted elements in the first half with each of the plurality of sorted elements in the second half, and compare each of the plurality of sorted elements in the second half with each of the plurality of sorted elements in the first half, to generate a third set of the plurality of sorted elements in a certain order; and
generate the position values of the respective positions of each of the plurality of elements in the third set as a merged sorted sequence.
8. The processing device of claim 6, further comprising a merge module coupled to the sorting module, wherein the merge module is to:
identify a plurality of sets of the plurality of sorted elements from the first set of the sorted sequence and identify another plurality of sets of the plurality of sorted elements from the second set of the sorted sequence;
compare each of the plurality of sorted elements in each of the identified plurality of sets from the first set of the sorted sequence with each of the plurality of sorted elements in each of the identified other plurality of sets from the second set of the sorted sequence;
select a sorted element from each of the identified plurality of sets from the first set of the sorted sequence based on the comparison;
compare each of the plurality of sorted elements in each of the identified other plurality of sets from the second set of the sorted sequence with each of the plurality of sorted elements in each of the identified plurality of sets from the first set of the sorted sequence; and
select a sorted element from each of the identified other plurality of sets from the second set of the sorted sequence based on the comparison.
9. The processing device of claim 8, wherein the merge module is to:
combine the selected sorted elements from each of the identified plurality of sets from the first set of the sorted sequence with the selected sorted elements from each of the identified other plurality of sets from the second set of the sorted sequence to generate a combined sequence comprising the combined selected sorted elements; and
place the combined selected sorted elements in a sequence to generate a merged sorted sequence.
10. The processing device of claim 9, wherein the merge module is to generate the merged sorted sequence in one of an ascending or a descending order.
11. A system on a chip (SoC) comprising:
a memory; and
a processing device communicatively coupled to the memory, the processing device comprising:
a sorting module to:
add a position value of a corresponding position in a register set to each of a plurality of elements, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements includes a plurality of bits;
compare each of the plurality of transformed elements to itself and to one another;
assign one of an enable indicator or a disable indicator to each of the plurality of transformed elements based on the comparison; and
count a number of the enable indicators assigned to each of the plurality of transformed elements to generate a sorted sequence of the plurality of elements.
12. The SoC of claim 11, wherein the sorting module is configured such that, when a value of at least one of the plurality of elements is the same as a value of another one of the plurality of elements, each of the
plurality of elements in each of the respective positions is shifted left by a set of bits.
13. The SoC of claim 11, wherein the sorted sequence includes a count of the number of enable indicators, and the sorting module is to generate the sorted sequence of the plurality of elements in one of an ascending or a descending order.
14. The SoC of claim 13, wherein the processing device further comprises a merge module coupled to the sorting module, wherein the merge module is to:
divide a first set of the sorted sequence into a first half and divide a second set of the sorted sequence into a second half, wherein the first half comprises the plurality of sorted elements of the first set of the sorted sequence, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence;
compare each of the plurality of sorted elements in the first half with each of the plurality of sorted elements in the second half, and compare each of the plurality of sorted elements in the second half with each of the plurality of sorted elements in the first half, to generate a third set of the plurality of sorted elements in a certain order; and
generate the position values of the respective positions of each of the plurality of elements in the third set as a merged sorted sequence.
15. The SoC of claim 13, wherein the processing device further comprises a merge module coupled to the sorting module, wherein the merge module is to:
identify a plurality of sets of the plurality of sorted elements from a first set of the sorted sequence and identify another plurality of sets of the plurality of sorted elements from a second set of the sorted sequence;
compare each of the plurality of sorted elements in each of the identified plurality of sets from the first set of the sorted sequence with each of the plurality of sorted elements in each of the identified other plurality of sets from the second set of the sorted sequence;
select a sorted element from each of the identified plurality of sets from the first set of the sorted sequence based on the comparison;
compare each of the plurality of sorted elements in each of the identified other plurality of sets from the second set of the sorted sequence with each of the plurality of sorted elements in each of the identified plurality of sets from the first set of the sorted sequence;
select a sorted element from each of the identified other plurality of sets from the second set of the sorted sequence based on the comparison;
combine the selected sorted elements from each of the identified plurality of sets from the first set of the sorted sequence with the selected sorted elements from each of the identified other plurality of sets from the second set of the sorted sequence to generate a combined sequence comprising the combined selected sorted elements; and
place the combined selected sorted elements in a sequence to generate a merged sorted sequence.
16. A method comprising:
adding a position value of a corresponding position in a register set to each of a plurality of elements, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements includes a plurality of bits;
comparing each of the plurality of transformed elements to itself and to one another;
assigning one of an enable indicator or a disable indicator to each of the plurality of transformed elements based on the comparison; and
counting a number of the enable indicators assigned to each of the plurality of transformed elements to generate a sorted sequence of the plurality of elements.
17. The method of claim 16, wherein the sorted sequence comprises a count of the number of enable indicators.
18. The method of claim 16, wherein the sorted sequence of the plurality of elements is generated
in one of an ascending or descending order.19.The method of claim 16 further comprising:Shifting each of the plurality of elements to each of the respective positions when a value of at least one of the plurality of elements is the same as a value of the other of the plurality of elements The left position in the middle;20.The method of claim 16 further comprising: generating at least a first set of said ordered sequences of said plurality of sorted elements and said sorted sequence of said plurality of sorted elements Second Group.21.The method of claim 20, further comprising:Dividing the first set of the sorted sequence into a first half and dividing the second set of the sorted sequence into a second half, wherein the first half includes the Sorting the plurality of sorted elements of the first set of sequences, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence;Comparing each of the plurality of sorted elements in the first half with each of the plurality of sorted elements in the second half and placing the second half Each of the plurality of sorted elements in the comparison with each of the plurality of sorted elements in the first half to generate the plurality of sorted elements in a certain order The third group of sequences;The position values ​​of the respective locations of each of the plurality of elements in the third set of the sequence are generated as a merged sorted sequence.22.The method of claim 20, further comprising:Identifying a plurality of the plurality of sorted elements from the first set of the sorted sequences and identifying another one of the plurality of sorted elements from the second set of the sorted sequences a;Each of the plurality of sorted elements in each of the identified plurality of sets of the first sequence of the sorted sequence from the second set of the sorted sequence Each of the plurality of sorted elements in each of the identified plurality of groups is 
compared;Selecting the sorted element from each of the identified plurality of sets from the first sequence of the sorted sequence based on the comparing;Each of the plurality of sorted elements in each of the identified additional plurality of sets of the second sequence from the sorted sequence and the first from the sorted sequence Comparing each of the plurality of sorted elements in each of the identified plurality of sets of groups;Selecting the sorted element from each of the identified additional plurality of sets from the second sequence of the sorted sequence based on the comparing;Selecting the selected ordered elements of each of the identified plurality of sets of the first sequence from the sorted sequence from the identified additional plurality of sets of the second sequence from the sorted sequence The selected ordered elements of each of the groups are combined to generate a merged sequence comprising the selected selected ordered elements;The combined selected ordered elements are placed in a sequence to generate a merged ordered sequence.23.The method of claim 22 wherein said merged ordered sequences are generated in one of ascending or descending order.24.At least one machine readable medium comprising a plurality of instructions responsive to being executed on a computing device to cause the computing device to perform the method of any one of claims 16-23.25.An apparatus comprising means for performing the method of any one of claims 16 to 23.
Sorting data and merging sorted data in an instruction set architecture

Technical field

Embodiments described herein relate generally to processing devices and, more particularly, to sorting data and merging sorted data in an instruction set architecture of a processing device.

Background

Sorting is an important kernel in many widely used computer applications. In databases, sorting helps order data, create indexes, and perform binary searches. Sorting facilitates statistical applications, including finding closest pairs, determining the uniqueness of elements, finding the kth largest element, and identifying outliers. Sorting is used in physical simulations, for example, to find convex hulls that facilitate collision detection. Sorting is also used in big data applications, especially graph analytics, where it is used to order the key/value pairs that make up an output vector during vertex programming. Merge sort is a very widely used sorting implementation; the key kernel within merge sort is merging two sorted sequences.

Drawings

The present disclosure will be more fully understood from the following detailed description of the embodiments of the disclosure.
However, the drawings are not to be considered as limiting the disclosure to the specific embodiments shown:

FIG. 1 is a block diagram of one embodiment of a computing system including a processing device that implements an instruction set architecture environment;

FIG. 2 is a block diagram illustrating a sorting module for implementing an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 3 is an example of sorting performed in an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 4 is a flow diagram illustrating a method for sorting in an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 5 is a block diagram illustrating a merge module for implementing an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 6 is an example of a merge performed in an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 7 is a flow diagram illustrating a method for merging in an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 8 is a block diagram illustrating a merge module for implementing an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 9 is an example of a merge performed in an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 10 is a flow diagram illustrating a method for merging in an instruction set architecture execution environment, in accordance with an embodiment of the present disclosure;

FIG. 11A is a block diagram illustrating a micro-architecture for a processor in which one embodiment of the present disclosure may be used;

FIG. 11B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented in accordance with at least one embodiment of the present disclosure;

FIG. 12 is a block diagram of a micro-architecture for a processor, in accordance with one embodiment of the present disclosure;

FIG. 13 is a block diagram illustrating a system in which embodiments of the present disclosure may be used;

FIG. 14 is a block diagram of a system in which an embodiment of the present disclosure may operate;

FIG. 15 is a block diagram of a system in which an embodiment of the present disclosure may operate;

FIG. 16 is a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present disclosure;

FIG. 17 is a block diagram of an embodiment of an SoC design in accordance with the present disclosure;

FIG. 18 illustrates a block diagram of one embodiment of a computer system; and

FIG. 19 illustrates a block diagram of a machine in the form of a computing system in accordance with the present disclosure.

Detailed description

Disclosed herein are embodiments for providing an instruction set architecture environment for sorting data in a computing system and merging the sorted data.

Existing data sorting mechanisms are implemented in software that sorts data elements stored in registers of the computing system. Such mechanisms take multiple cycles or instructions to sort each data element. For example, sorting one data element may take at least 15 cycles and 12 instructions. Sorting data in a big data application can therefore consume a large number of instructions and cycles, which is time consuming.

Embodiments of the present disclosure overcome the above problems by sorting data elements using hardware logic such as crossbar logic, counting logic, and permutation logic.
In one embodiment, when the value of one of the n unsorted elements (in a register) is the same as the value of another of the n unsorted elements, the crossbar logic shifts the unsorted elements left by a certain number of bits, adds a position value to each of the left-shifted n elements to generate n transformed elements, and compares each of the n transformed elements with the other transformed elements. In one embodiment, the counting logic generates a sequence of the n elements in relative order based on the comparison, and the permutation logic permutes the generated sequence of n elements and outputs the permuted sequence of the n sorted elements to the register. In one embodiment, the crossbar logic takes 4 cycles, the counting logic takes 1 cycle, and the permutation logic takes 1 cycle. Therefore, a total of 6 cycles can be used to sort 16 data elements. Thus, embodiments of the present disclosure speed up the sorting of data elements by at least 35 times compared to existing data sorting mechanisms.

Existing sorted-data merge mechanisms are implemented in software that merges the sorted data elements stored in registers of the computing system. Such mechanisms merge one data element at a time, which can take multiple cycles or instructions. For example, for 16 elements, merging may take approximately 15 cycles and 12 instructions per merged element.

Embodiments of the present disclosure overcome the above-described problems of merging sorted data elements by implementing hardware logic for performing merge operations, such as partitioning logic, location payload logic, bitonic logic, sorting payload logic, and permutation logic. In one embodiment, the partitioning logic divides the two groups of the sorted input sequences of n elements (in the register) into two halves: a lower half and an upper half.
In one embodiment, the location payload logic attaches a location identifier to both the upper and lower halves as a payload for each of the n elements. In one embodiment, the bitonic logic merges each of the n elements in the upper half with each of the n elements in the lower half. The sorting payload logic can then use the position of each of the merged sorted n elements to generate a sorted merged sequence of n elements. In one embodiment, the permutation logic permutes the generated sorted merged sequence of n elements and outputs the permuted sorted merged sequence of the n sorted elements to a register. For 16 elements, embodiments of the present disclosure can merge the data elements in 2 cycles or instructions (i.e., by merging 8 elements per cycle), resulting in 8 times the execution speed of existing sorted-data merge mechanisms.

Alternatively, embodiments of the present disclosure may overcome the above-described problems of merging sorted data elements by implementing other hardware logic, such as identification logic, bitonic logic, mask logic, sort mask logic, and permutation logic. In one embodiment, the identification logic identifies a plurality of the n elements from the first of the two sets of sorted input sequences of n elements (in the register) and identifies a plurality of the n elements from the second input sequence. The bitonic logic may first compare each of the identified elements in each of the plurality of groups from the first sequence with each of the identified elements in each of the plurality of groups from the second sequence. Additionally, the bitonic logic compares each of the identified elements from each of the plurality of groups of the second sequence with each of the identified elements from each of the plurality of groups of the first sequence.
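The halve-and-compare merge described above can be sketched in Python as follows. This is a behavioral model, not the hardware implementation; the function names are ours, and the model omits the position payload that the hardware carries alongside each element. Concatenating the first sorted sequence with the reverse of the second yields a bitonic sequence, which a recursive compare-exchange network then sorts:

```python
def bitonic_merge(seq):
    """Recursively sort a bitonic sequence with pairwise compare-exchanges."""
    n = len(seq)
    if n == 1:
        return seq
    half = n // 2
    for i in range(half):
        # compare-exchange element i of the lower half with element i of the upper half
        if seq[i] > seq[i + half]:
            seq[i], seq[i + half] = seq[i + half], seq[i]
    return bitonic_merge(seq[:half]) + bitonic_merge(seq[half:])

def merge_sorted(lower, upper):
    """Merge two sorted sequences of equal power-of-two length."""
    # reversing the second sequence makes the concatenation bitonic
    return bitonic_merge(lower + upper[::-1])

print(merge_sorted([1, 3, 5, 7], [2, 4, 6, 8]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The first round of compare-exchanges corresponds to merging each element of the upper half with each element of the lower half in one step; in hardware, all compare-exchanges of a round occur in parallel, which is what allows 8 elements to be merged per cycle.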
In one embodiment, the mask logic selects the identified elements from each of the plurality of groups from the first sequence based on the first comparison, and further selects the identified elements from each of the plurality of groups from the second sequence based on the second comparison. The mask logic can then merge the selected identified elements from each of the plurality of groups from the first sequence with the selected identified elements from each of the plurality of groups from the second sequence. In one embodiment, the sort mask logic sorts the merged selected identified elements to generate a sorted merged sequence of n elements. In one embodiment, the permutation logic permutes the generated sorted merged sequence of n elements and outputs the permuted sorted merged sequence of the n sorted elements to a register.

Thus, in contrast to previous solutions that merge one data element at a time, embodiments of the present disclosure increase the processing speed associated with merging data elements by merging multiple data elements at the same time. Further, embodiments of the present disclosure merge multiple sorted input sequences to produce a globally sorted output.

FIG. 1 is a block diagram of a computing system 100 that implements an instruction set architecture (ISA) of a processing device. Some examples of computing system 100 may include, but are not limited to, computing devices with extensive processing capabilities, such as personal computers (PCs), server computers, personal digital assistants (PDAs), smart phones, laptop computers, netbook computers, tablet devices, and/or any machine capable of executing (sequentially or otherwise) a set of instructions specifying the actions to be taken by that machine.

Computing system 100 can include, for example, processing device 105 for processing operations for computing system 100.
Processing device 105 may include one or more processing devices (also referred to as processors) located in separate components or, alternatively, one or more processing cores embodied in a single integrated circuit (IC) arranged, for example, in a system-on-a-chip (SoC) configuration. In some embodiments, the processing device is a general purpose processing device. For example, processing device 105 includes a processing device of the type commonly used as a central processing unit (CPU). In other embodiments, the processing device can be a special-purpose processing device. Examples of special-purpose processors include, but are not limited to, co-processing devices, graphics processing devices, communication processing devices, network processing devices, cryptographic processing devices, embedded processing devices, digital signal processing devices (DSPs), and the like. The processing device 105 can be connected to a socket. In some embodiments, if there are multiple processing devices, the processing devices can be connected to the same socket or to different sockets.

Computing system 100 can include one or more different applications 150 that are executed by processing device 105. Instructions for implementing the applications 150 (i.e., computer executable programs) may be executed in the processing device 105. The instructions may include, but are not limited to, add operations, shift operations, compare operations, count operations, conversion operations, permutation operations, and shuffle operations.

Although processing device 105 and application 150 are depicted in FIG. 1 as single distinct components, these components can be implemented together in a single device or in various combinations of multiple different devices operating together.
Examples of such devices may include, but are not limited to, servers, mainframe computers, networked computers, processor-based devices, and similar types of systems and devices.

Processing device 105 may include modules such as sorting module 110, merge module 120, and one or more registers 140a through 140n. In one embodiment, a module is a hardware component, such as hardware circuitry that performs certain operations. A module can be a self-contained component that interacts with other components in a processing device of a computer system.

The sorting module 110 can execute instructions corresponding to the application 150. The instructions may include program code for causing the sorting module 110 to sort a sequence of n data elements (elements), each of which has a particular number of bits. In particular, the instructions cause sorting module 110 to perform activities such as, but not limited to: reading/retrieving the sequence of n unsorted elements from their respective locations in registers 140a-140n; shifting the unsorted elements left by log(n) bits; adding the value of the corresponding position to each of the shifted unsorted n elements, thereby generating n transformed elements; comparing each of the n transformed elements with the other transformed elements to generate a sequence of the n elements in relative order; and permuting the generated sequence of the n elements to output the permuted sequence of the n sorted elements to registers 140a through 140n. Information (not shown) including instructions, data, and the like may be stored in the memory 130.

The merge module 120 can execute instructions corresponding to the application 150. The instructions can include program code for causing merge module 120 to combine two sorted sequences into one merged sequence and sort the merged sequence to generate a sorted merged sequence.
Each of the two groups of the sorted sequence includes n elements, each of which has a certain number of bits.

In one embodiment, the instructions cause the merge module 120 to perform activities such as, but not limited to: reading/retrieving the two groups of sorted sequences of n elements from their respective locations in the registers 140a-140n; dividing the two groups of the sorted sequences of the n elements into two halves, a lower half and an upper half; attaching the position as a payload to each of the n elements of the upper half and the lower half; merging each of the n elements in the upper half with each of the n elements in the lower half; using the position of each of the merged sorted n elements to generate a sorted merged sequence of n elements; and permuting the generated sorted merged sequence of n elements to output the permuted sorted merged sequence of the n sorted elements to registers 140a through 140n. Information (not shown) including instructions, data, and the like may be stored in the memory 130.

In another embodiment, the instructions cause the merge module 120 to perform activities such as, but not limited to: reading/retrieving the two groups of sorted sequences from their respective locations in the registers 140a-140n, each of the two groups including n elements; identifying a plurality of the n elements from the first sequence and a plurality of the n elements from the second sequence; comparing each of the identified elements in each of the plurality of groups from the first sequence with each of the identified elements in each of the plurality of groups from the second sequence; selecting the identified elements from each of the plurality of groups from the first sequence based on the comparing; comparing each of the identified elements from each of the plurality of groups from the second sequence with each of the identified elements from each of the plurality of groups from the first sequence; selecting the identified elements from each of the plurality of groups from the second sequence based on the comparing; merging the selected identified elements from each of the plurality of groups from the first sequence with the selected identified elements from each of the plurality of groups from the second sequence; sorting the merged selected identified elements to generate a sorted merged sequence of n elements; and permuting the generated sorted merged sequence of n elements to output the permuted sorted merged sequence of the n sorted elements to registers 140a through 140n. Information (not shown) including instructions, data, and the like may be stored in the memory 130.

Memory 130 may include random access memory (RAM), non-volatile memory, or read only memory (ROM) in a fixed or removable format. The RAM may include memory for holding information during operation of computing system 100, such as, for example, static RAM (SRAM) or dynamic RAM (DRAM). The ROM may include memory such as computing device BIOS memory for providing instructions when computing system 100 is activated, programmable memory such as electronically programmable ROM (EPROM), flash memory, and the like. In one embodiment, memory 130 is protected such that it can be accessed and modified only by sorting module 110 and merge module 120.

The registers 140a-140n may include registers and/or storage devices that are used during execution of instructions by the sorting module 110 to enable reading of the unsorted elements of the data. In one embodiment, the registers are single instruction multiple data (SIMD) registers. In one embodiment, registers 140a through 140n are vector data registers.
Registers 140a through 140n may include, but are not limited to, registers for temporary values, stack pointers, pointers to data elements, temporary storage for instructions executed in computing system 100, and the like. In one embodiment, registers 140a through 140n may be protected such that they can be accessed and modified only by sorting module 110 and merge module 120. The registers 140a to 140n may be readable by software executed outside of the sorting module 110 and the merge module 120.

FIG. 2 illustrates a processing device 205 that includes a sorting module 210 for implementing an instruction set architecture environment, in accordance with one embodiment of the present disclosure. In one embodiment, processing device 205 is the same as processing device 105 described above with respect to FIG. 1. In one embodiment, the sorting module 210 is the same as the sorting module 110 described above with respect to FIG. 1. The sorting module 210 can include logic such as crossbar logic 220, counting logic 230, and permutation logic 240. In one embodiment, the logic is a hardware component, such as hardware circuitry that performs certain operations. The logic may be a self-contained component that interacts with other components in the processing device of the computer system. More or fewer components may be included in the sorting module 210 without loss of generality.

In one embodiment, the sorting module 210 receives as input a sequence of n unsorted data elements (elements) from registers such as registers 140a through 140n described with respect to FIG. 1. In one embodiment, the data elements are data units that are defined for processing. Data elements can be qualified by size and type. In one embodiment, registers 140a through 140n are source registers. Each of the n elements includes a specific number of bits. In one embodiment, each of the registers 140a through 140n is a 512-bit register comprising 16 elements each having 32 bits.
In one embodiment, each of the registers 140a through 140n is a 512-bit register comprising 8 elements each having 64 bits. The crossbar logic 220 places the n unsorted elements in the crossbar in the corresponding positions of the n elements to assist in sorting the elements. In one embodiment, the crossbar logic 220 is a crossbar switch that includes a set of switches arranged in a matrix configuration.

As an example, the n unsorted elements are 4 decimal elements, 3 3 1 2, each of which includes 32 bits of a 128-bit register. FIG. 3 is a block diagram 300 depicting a conceptual sorting of the input sequence by a 4x4 crossbar switch, in accordance with an example embodiment. As shown in FIG. 3, the decimal numbers 3 3 1 2 are the input placed in the first row 302 in separate columns 322a, 322b, 322c, and 322d. The second row 304 shows the position vector values 0 1 2 3 in the separate columns. Therefore, the numbers 3 3 1 2 have the position vector values 0 1 2 3, respectively. The binary value of 3 is 0011, the binary value of 1 is 0001, and the binary value of 2 is 0010.

In one embodiment, when the value of at least one of the n unsorted elements is the same as the value of another of the n unsorted elements, the crossbar logic 220 shifts each of the n unsorted elements left by log(n) bits. In one embodiment, log(n) is the base-2 logarithm of n (e.g., 2 bits for n = 4). In one embodiment, the left shift is performed to handle repetition among the values of the input unsorted n elements. In other embodiments, if the values of the input unsorted n elements are not repeated and the hardware circuitry is aware that the values are not repeated, no left shift is performed.
In one embodiment, the hardware circuitry is aware that the input values of the n unsorted elements are not repeated based on information from the programmer or user of the hardware circuitry.

Returning to the example in FIG. 3, each of the 4 unsorted elements is shifted 2 bits to the left. The left shift of the binary value 0011 produces 1100, which is the decimal value 12. The left shift of the binary value 0001 produces 0100, which is the decimal value 4. The left shift of the binary value 0010 produces 1000, which is the decimal value 8. Thus, the left-shifted values of the decimal numbers 3 3 1 2 are 12 12 4 8, respectively. Each of 12 12 4 8 is inserted in the third row 306 in the separate column corresponding to its unsorted number 3 3 1 2. In one embodiment, the crossbar logic 220 adds the position vector value of the respective position of each of the n unsorted elements to each of the n unsorted elements. In one embodiment, the result of the addition is n transformed elements that maintain the relative ordering of the input sequence of the n unsorted elements without any duplicate values. Returning to the example in FIG. 3, the position vector values 0 1 2 3 are added to the respective left-shifted values 12 12 4 8 to produce the decimal values 12 13 6 11 in the fourth row 308, respectively.
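The shift-and-add transform of this example can be reproduced in a few lines of Python (an arithmetic sketch only; the variable names are ours, and the hardware performs these steps inside the crossbar):

```python
n = 4
elems = [3, 3, 1, 2]       # unsorted decimal input
shift = 2                  # log2(n) bits for n = 4
# shift each element left and add its position vector value
transformed = [(v << shift) + pos for pos, v in enumerate(elems)]
print(transformed)         # [12, 13, 6, 11] -- the duplicates 3, 3 become distinct 12, 13
```

The relative order of the inputs is preserved because the shift frees the low log2(n) bits to hold a unique position value, so equal inputs are broken into distinct transformed values without reordering.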
In one embodiment, the crossbar logic 220 compares each of the n transformed elements with itself and with the other transformed elements. In one embodiment, the crossbar logic 220 assigns one of an enable indicator or a disable indicator when comparing each of the n transformed elements with itself and with the other transformed elements. In one embodiment, the enable indicator is a value of 1 and the disable indicator is a value of 0. In one embodiment, the crossbar logic 220 places the assigned value of 1 or 0 for each of the n transformed elements in its respective location. In one embodiment, the n unsorted elements are to be sorted in ascending order. Thus, when comparing each of the n transformed elements, the crossbar logic 220 performs a greater-than operation. In one embodiment, the crossbar logic 220 assigns a value of 1 when a transformed element is greater than another of the n transformed elements, and assigns a value of 0 when the transformed element is not greater than itself or another of the n transformed elements. In one embodiment, the n unsorted elements are to be sorted in descending order. Thus, when comparing each of the n transformed elements, the crossbar logic 220 performs a less-than operation. In one embodiment, the crossbar logic 220 assigns a value of 1 when a transformed element is less than another of the n transformed elements, and assigns a value of 0 when the transformed element is not less than itself or another of the n transformed elements.

Returning to the example in FIG. 3, each of the generated values 12, 13, 6, and 11 is compared with itself and the other generated values. In this example, the comparison operation is the greater-than operation. For example, 12 is compared with 12, 13, 6, and 11 to determine whether 12 is greater than each of them. Since 12 is not greater than 12, the assigned value is 0; since 12 is not greater than 13, the assigned value is 0; since 12 is greater than 6, the assigned value is 1; and since 12 is greater than 11, the assigned value is 1. Therefore, the values in column 322a, which has position vector value 0, are 0 0 1 1.
A similar comparison using the greater-than operation is performed for the values 13, 6, and 11, which yields the values 1 0 1 1 in column 322b with position vector value 1, the values 0 0 0 0 in column 322c with position vector value 2, and the values 0 0 1 0 in column 322d with position vector value 3.

In one embodiment, the counting logic 230 counts the total number of 1s assigned to each of the elements in the respective locations of the n transformed elements to generate the n sorted elements in relative order. Returning to the example in FIG. 3, the total number of 1s assigned to the generated values 12, 13, 6, and 11 at the respective position vector values 0 1 2 and 3 is 2 3 0 and 1, respectively, as reflected in the last row 310. Therefore, 2 3 0 and 1 are the relative order of the unsorted input numbers 3 3 1 2, respectively. In one embodiment, permutation logic 240 permutes (or shuffles) the n sorted elements and outputs the permuted n sorted elements to one of registers 140a through 140n. In one embodiment, one of the registers 140a through 140n is a destination register. In one embodiment, the destination register is the same as the source register. In one embodiment, the destination register is different from the source register. In one embodiment, the permutation logic 240 assigns to each of the n sorted elements a position in the destination register that corresponds to the location of each of the n unsorted elements in the source register, and pushes the n sorted elements to their respective locations in the destination register.

Although FIG. 3 shows a 4x4 crossbar switch as an nxn example depicting a conceptual sorting of an input sequence, another example is a 16x16 crossbar switch with 16 position vector values and 16 unsorted decimal elements, each of which includes 32 bits of a 512-bit register, resulting in 16 sorted elements (256 bits) in relative order.

FIG. 4 is a flow diagram of a method 400 for sorting data in ascending order in an instruction set architecture environment of a processing device, in accordance with an embodiment of the present disclosure. Method 400 can be performed by processing logic, which can include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device, general purpose computer system, or dedicated machine), firmware, or a combination thereof. In one embodiment, method 400 may be performed in part by the sorting modules 110 and 210 described above with respect to FIGS. 1 and 2.

To simplify the explanation, method 400 is depicted and described as a series of acts. However, acts in accordance with the present disclosure can occur in various orders and/or concurrently, and together with other acts not presented and described herein. Moreover, not all illustrated acts may be required to implement method 400 in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that method 400 could alternatively be represented as a series of interrelated states via a state diagram or events.

At block 402, processing logic retrieves from a source register an input sequence of data elements from the respective locations of the n unsorted data elements (elements). At block 404, the n unsorted elements are shifted left by a certain number of bits.
At block 406, the value of the corresponding location is added to each of the left-shifted unsorted n elements, resulting in transformed n elements. In one embodiment, the transformed n elements maintain the relative order of the input sequence of the unsorted n elements. In one embodiment, the transformation eliminates any duplicate values among the unsorted n elements. At block 408, each of the transformed n elements is compared to itself and to the other transformed n elements to generate the relative order of the n elements. At block 410, a value of 0 is assigned when comparing each of the transformed n elements to itself, since an element is not greater than itself. At block 412, it is determined whether each of the transformed n elements is greater than each of the other transformed n elements. Subsequently, at block 414, a value of 0 is assigned when it is determined at block 412 that a transformed element is not greater than another transformed element. At block 416, a value of 1 is assigned when it is determined at block 412 that a transformed element is greater than another transformed element. At block 418, the total number of 1s assigned to each of the transformed n elements is counted, which produces the relative order of the n elements. At block 420, the ordered n elements are permuted. In one embodiment, the permuting includes assigning each of the sorted n elements a position in a destination register that corresponds to the location of the corresponding unsorted element in the source register. At block 422, the permuted ordered n elements are output to the destination register. In one embodiment, the destination register is the same as the source register. In one embodiment, the destination register is different from the source register.
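The blocks of method 400 can be sketched in software as follows; this is an illustrative model, not the ISA implementation, and the function name and the scatter-style interpretation of the permutation are assumptions:

```python
# Hedged software sketch of method 400 (blocks 402-422). The patent
# implements these steps in hardware; this model only mirrors the flow.
def sort_rank(elements):
    n = len(elements)
    shift = (n - 1).bit_length()
    # Blocks 404-406: shift left and add the position index.
    t = [(v << shift) + i for i, v in enumerate(elements)]
    # Blocks 408-418: compare every transformed element with every other;
    # assign 1 for "greater than", 0 otherwise, and count the 1s.
    ranks = [sum(1 for other in t if x > other) for x in t]
    # Blocks 420-422: permute; here element i of the source is pushed to
    # position ranks[i] of the destination (an assumed interpretation).
    dest = [0] * n
    for i, r in enumerate(ranks):
        dest[r] = elements[i]
    return ranks, dest

ranks, dest = sort_rank([3, 3, 1, 2])
print(ranks)  # [2, 3, 0, 1]
print(dest)   # [1, 2, 3, 3]
```

The ranks reproduce the 2 3 0 1 relative order of the FIG. 3 example, and the scatter yields the ascending sequence.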
In one embodiment, to sort in descending order, a less-than operation is performed at block 412, such that it is determined whether each of the transformed n elements is less than each of the other transformed n elements. FIG. 5 is a block diagram showing a processing device 505 including a merge module 520 for implementing an instruction set architecture environment, in accordance with one embodiment of the present disclosure. In one embodiment, processing device 505 is the same as processing device 105 described above with respect to FIG. 1. In one embodiment, the merge module 520 is the same as the merge module 120 described above with respect to FIG. 1. The merge module 520 can include logic such as partitioning logic 530, location payload logic 540, bitonic logic 550, sorting payload logic 560, and permutation logic 570. In one embodiment, the logic is a hardware component, such as hardware circuitry that performs certain operations. The logic may be a self-contained component that interacts with other components in the processing device of the computer system. More or fewer components may be included in the merge module 520 without loss of generality. In one embodiment, merge module 520 receives as input two ordered sequences of n elements from at least one register, such as registers 140a through 140n described with respect to FIG. 1. In one embodiment, one of the registers 140a through 140n is a source register. Each of the n elements includes a specific number of bits. In one embodiment, one of the registers 140a through 140n is a 512-bit register comprising 16 elements each having 32 bits. In one embodiment, one of the registers 140a through 140n is a 512-bit register comprising 8 elements each having 64 bits. In one embodiment, one of the registers 140a through 140n is a 128-bit register comprising 4 elements each having 32 bits.
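For illustration, the register layouts just listed can be modeled in software; treating lane 0 as the least significant bits of the register is an assumption, since the patent does not fix a lane order here:

```python
# Illustrative sketch: viewing a 512-bit register value as 16 unsigned
# 32-bit elements (lanes). Lane 0 = least significant bits (assumed).
def to_lanes(reg512, count=16, width=32):
    mask = (1 << width) - 1
    return [(reg512 >> (i * width)) & mask for i in range(count)]

def from_lanes(lanes, width=32):
    reg = 0
    for i, v in enumerate(lanes):
        reg |= (v & ((1 << width) - 1)) << (i * width)
    return reg

reg = from_lanes([3, 3, 1, 2] + [0] * 12)
print(to_lanes(reg)[:4])  # [3, 3, 1, 2]
```

The 8x64-bit and 4x32-bit layouts follow by changing `count` and `width`.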
In one embodiment, the partitioning logic 530 divides the two input ordered sequences of n elements into two halves, a first half and a second half, such that one of the input ordered sequences is in the first half and the other input ordered sequence is in the second half. Each of the n elements in the first half and the second half carries its respective location to assist in merging the elements. As an example, each of the two sets includes 4 elements, where each element is a decimal number. In the example, the decimal numbers are 1 4 7 8 and 3 7 10 15, each of these numbers being 32 bits. FIG. 6 is a block diagram 600 depicting a conceptual illustration of an exemplary merge operation using merge module 520. As shown in FIG. 6, the exemplary two sets of 4 ordered elements are divided into a first half (e.g., lower half) 610 and a second half (e.g., upper half) 612 of table 602. The first of the two ordered sequences of decimal numbers, 1 4 7 8, is placed in the lower half 610 in the first four columns of the second row 620b of table 602, and the second of the two ordered sequences, 3 7 10 15, is placed in the upper half 612 in the last four columns of the second row of table 602. The first row 620a shows the position vector values 0 through 7 in separate columns. Therefore, the position vector values of the lower-half numbers 1 4 7 8 are 0 1 2 3, respectively, and the position vector values of the upper-half numbers 3 7 10 15 are 4 5 6 7, respectively. In one embodiment, the location payload logic 540 appends the corresponding location to each of the n elements in the lower half 610 and the upper half 612. Returning to the example in FIG.
6, the corresponding position vector values 0 1 2 3 are appended to the decimal numbers 1 4 7 8 in the lower half 610, and the corresponding position vector values 4 5 6 7 are appended to the decimal numbers 3 7 10 15 in the upper half 612, as shown in brackets in the third row. As such, the lower half 610 of the third row 620c includes 1 [position 0], 4 [position 1], 7 [position 2], 8 [position 3], and the upper half 612 of the third row 620c includes 3 [position 4], 7 [position 5], 10 [position 6], 15 [position 7]. In one embodiment, the bitonic logic 550 merges each of the n elements in the lower half 610 with each of the n elements in the upper half to produce a merged sequence of n elements, with the corresponding position appended to each of the merged n elements in the lower half 610 and the upper half 612, as shown in the fourth row 620d of table 602. In one embodiment, merging includes comparing the value of each of the n elements in the lower half with each of the n elements in the upper half, and comparing the value of each of the n elements in the upper half with each of the n elements in the lower half. As discussed above, the values of the n elements in the lower half come from one of the two sorted input sequences, and the values of the n elements in the upper half come from the other of the two sorted input sequences (i.e., the merge combines two sorted inputs to produce a globally ordered output). When the bitonic merge process is used, the number of comparisons is n*log2(n). These comparisons produce a sorted sequence of the n elements, with the lowest-valued elements at the beginning in the lower half and the highest-valued elements at the end in the upper half.
In one embodiment, the number of comparisons is the square of the value n (e.g., when n = 8, the number of comparisons is 64, and when n = 16, the number of comparisons is 256). In one embodiment, merging includes comparing the values of the input n elements, wherein the first half (n/2 elements) of the input elements are from one sorted input sequence and the remaining half (n/2 elements) are from the other sorted input sequence (i.e., the merge combines two sorted inputs to produce a globally ordered output). In the example of FIG. 6, the first half may be the lower half and the second half the upper half, or vice versa. When the bitonic merge process is used, the number of comparisons is n*log2(n). Returning to the example in FIG. 6, each of the decimal numbers 1 4 7 8 in the lower half is merged with each of the decimal numbers 3 7 10 15 in the upper half, resulting in a merged sequence of 8 elements. The merged sequence of the 8 elements includes 1 3 4 7 in the lower half and 7 8 10 15 in the upper half, together with the corresponding position vector values in brackets, generating the fourth row 620d. The merged sequence includes, in the lower half, 1 [position 0], 3 [position 4], 4 [position 1], 7 [position 2], and, in the upper half, 7 [position 5], 8 [position 3], 10 [position 6], 15 [position 7]. In one embodiment, the sorting payload logic 560 retrieves the respective location of each of the merged n elements in the lower and upper halves as the resulting sorted merged sequence of n elements. Referring to the example in FIG. 6, each of the respective position vector values 0 4 1 2 in the lower half of the fourth row 620d of table 602 is retrieved and placed in the lower half of the fifth row, and each of the respective position vector values 5 3 6 7 in the upper half of the fourth row is retrieved and placed in the upper half of the fifth row 620e.
Thus, the resulting merged ordered sequence (of position vector values) is 0 4 1 2 5 3 6 7, as shown in the fifth row 620e. In one embodiment, permutation logic 570 permutes (or scrambles) the ordered merged sequence of n elements and outputs the permuted merged ordered n elements to one of registers 140a through 140n. In one embodiment, one of the registers 140a through 140n is a destination register. In one embodiment, the destination register is the same as the source register. In one embodiment, the destination register is different from the source register. In one embodiment, permutation logic 570 assigns each of the sorted merged n elements a position in the destination register that corresponds to the location of the corresponding element in the source register, and pushes the sorted merged n elements to their respective locations in the destination register. FIG. 7 is a flow diagram of a method 700 for merging two sets of ordered sequences and sorting the merged sequence in an instruction set architecture environment of a processing device, in accordance with an embodiment of the present disclosure. Method 700 can be performed by processing logic, which can include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device, general purpose computer system, or special purpose machine), firmware, or a combination thereof. In one embodiment, method 700 may be performed in part by merge modules 120 and 520 described above with respect to FIGS. 1 and 5. To simplify the explanation, method 700 is depicted and described as a series of acts. However, acts in accordance with the present disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Moreover, not all of the acts shown may be performed to implement method 700 in accordance with the disclosed subject matter.
Additionally, those skilled in the art will understand and appreciate that method 700 may alternatively be represented as a series of interrelated states via a state diagram or events. At block 702, processing logic retrieves from the source register two input ordered sequences of n data elements (elements) from their respective locations. At block 704, the two input ordered sequences of n elements are divided into two halves, a first half and a second half, wherein each of the n elements has its corresponding position. At block 706, the value of the corresponding location is appended to each of the n elements in the first half and the second half. At block 708, each of the n elements in the first half is merged with each of the n elements in the second half to produce a merged sequence of n elements. Thus, the two ordered input sequences are combined to produce a globally ordered output. At block 710, the value of the corresponding location is appended to each of the merged n elements in the first half and the second half. Subsequently, at block 712, each of the respective position values of the merged n elements is retrieved from the first half and the second half, producing a sorted merged sequence of n elements. At block 714, the ordered merged sequence of n elements is permuted. In one embodiment, the permuting includes assigning each of the sorted merged n elements a position in a destination register that corresponds to the location of the corresponding element in the source register. At block 716, the permuted ordered merged sequence of n elements is output to the destination register. In one embodiment, the destination register is the same as the source register. In one embodiment, the destination register is different from the source register.
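A software sketch of the merge-with-position-payload flow of method 700 (blocks 702 through 716), under the assumption that Python's stable sort stands in for the bitonic compare network; names are illustrative:

```python
# Hedged sketch of method 700: each value is carried together with its
# source position, modeling the "position payload" of FIG. 6.
def merge_with_positions(lower, upper):
    # Blocks 704-706: tag every element with its position vector value.
    tagged = [(v, p) for p, v in enumerate(lower + upper)]
    # Block 708: merge the two sorted halves into one ordered sequence.
    # sorted() is stable, so equal values keep lower-half-first order.
    merged = sorted(tagged, key=lambda t: t[0])
    # Blocks 710-712: retrieve the position payloads of merged elements.
    positions = [p for _, p in merged]
    values = [v for v, _ in merged]
    return positions, values

pos, vals = merge_with_positions([1, 4, 7, 8], [3, 7, 10, 15])
print(pos)   # [0, 4, 1, 2, 5, 3, 6, 7]
print(vals)  # [1, 3, 4, 7, 7, 8, 10, 15]
```

The retrieved positions reproduce the 0 4 1 2 5 3 6 7 sequence of the fifth row 620e in the FIG. 6 example.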
FIG. 8 is a block diagram showing a processing device 805 including a merge module 820 for implementing an instruction set architecture environment, in accordance with one embodiment of the present disclosure. In one embodiment, processing device 805 is the same as processing device 105 described above with respect to FIG. 1. In one embodiment, the merge module 820 is the same as the merge module 120 described above with respect to FIG. 1. The merge module 820 can include logic such as identification logic 830, bitonic logic 840, mask logic 850, sort mask logic 860, and permutation logic 870. In one embodiment, the logic is a hardware component, such as hardware circuitry that performs certain operations. The logic may be a self-contained component that interacts with other components in the processing device of the computer system. More or fewer components may be included in the merge module 820 without loss of generality. In one embodiment, the merge module 820 receives as input two ordered sequences of n elements from at least one register, such as registers 140a through 140n described with respect to FIG. 1. In one embodiment, one of the registers 140a through 140n is a source register. Each of the n elements includes a specific number of bits. In one embodiment, one of the registers 140a through 140n is a 512-bit register comprising 16 elements each having 32 bits. In one embodiment, one of the registers 140a through 140n is a 512-bit register comprising 8 elements each having 64 bits. In one embodiment, the identification logic 830 identifies a first group of elements from the first sorted sequence and a first group of elements from the second sorted sequence. FIG. 9 is a block diagram 900 depicting a conceptual illustration of a merge operation in accordance with an embodiment of the present disclosure.
The example depicted in FIG. 9 includes a first ordered input sequence of n elements, wherein the first ordered sequence includes 8 decimal elements consisting of 1 2 7 8 9 14 17 17, each of these numbers comprising 32 bits. Additionally, a second ordered input sequence of n elements is illustrated, wherein the second ordered sequence includes 8 decimal elements consisting of 4 5 6 10 14 17 17 18, each of these numbers also comprising 32 bits. In one embodiment, the identification logic 830 identifies a first group of elements from the first sorted sequence and a first group of elements from the second sorted sequence. Returning to the example in FIG. 9, the first group from the first sequence 902 includes the first four elements 1 2 7 8, and the first group from the second sequence 904 includes the first four elements 4 5 6 10. In one embodiment, the bitonic logic 840 merges the identified first group of elements from the first sorted sequence with the identified first group of elements from the second sorted sequence. The bitonic logic 840 can compare each element of the identified first group of the first sorted sequence with each element of the identified first group of the second sorted sequence and assign a value based on the comparison. Similarly, the bitonic logic 840 can compare each element of the first group of the second sorted sequence with each element of the first group of the first sorted sequence and assign a value based on the comparison. In one embodiment, the n elements are to be sorted in ascending order. Thus, when comparing each element of the identified group of the first sequence to the identified group of the second sequence, the bitonic logic 840 performs a less-than operation, and vice versa. In one embodiment, when an element of the identified group of the first sequence is less than or equal to at least one element of the identified group of the second sequence, the bitonic logic 840 assigns a value of 1, and vice versa. In one embodiment, when an element of the identified group of the first sequence is not less than or equal to any element of the identified group of the second sequence, the bitonic logic 840 assigns a value of 0, and vice versa. In another embodiment, the n elements are to be sorted in descending order. Thus, when comparing each element of the identified group of the first sequence to the identified group of the second sequence, the bitonic logic 840 performs a greater-than operation, and vice versa. In one embodiment, when an element of the identified group of the first sequence is greater than or equal to at least one element of the identified group of the second sequence, the bitonic logic 840 assigns a value of 1, and vice versa. In one embodiment, when an element of the identified group of the first sequence is not greater than or equal to any element of the identified group of the second sequence, the bitonic logic 840 assigns a value of 0, and vice versa. Returning to the example in FIG. 9, each of the elements 1 2 7 8 in the first sequence 902 is compared to each of the elements 4 5 6 10 in the second sequence 904. In this example, the comparison is the less-than operation, such that the decimal value 1 in the first sequence is compared to determine whether it is less than or equal to at least one of the values 4 5 6 10 in the second sequence; similarly, 2 in the first sequence is compared to determine whether it is less than or equal to at least one of 4 5 6 10 in the second sequence, and so on. Therefore, a mask value 906 of 1 1 1 1 is assigned to all elements 1 2 7 8 of the first sequence.
Furthermore, each of the elements 4 5 6 10 in the second sequence is compared to each of the elements 1 2 7 8 in the first sequence. Therefore, a mask value 906 of 1 1 1 0 is assigned to the elements 4 5 6 10 of the second sequence. In one embodiment, mask logic 850 selects those elements of the first group of the first sequence that are assigned a value of 1 and selects those elements of the first group of the second sequence that are assigned a value of 1. The selected elements are combined by mask logic 850 to form a merged sequence. Returning to the example in FIG. 9, each of the elements 1 2 7 8 in the first sequence is selected, the elements 4 5 6 in the second sequence are selected, and the elements are combined to produce the merged sequence 908 of 1 2 7 8 4 5 6. In one embodiment, the sort mask logic 860 sorts the merged sequence to produce a sorted merged sequence of elements. In one embodiment, the sort mask logic 860 uses the sorting modules 110 and 210 of FIGS. 1 and 2, as described above, to sort the merged sequence. Returning to the example in FIG. 9, the sorted merged sequence 910 is 1 2 4 5 6 7 8. In one embodiment, the above process is repeated by identification logic 830, bitonic logic 840, mask logic 850, and sort mask logic 860 for a second group of elements from the first sequence and a second group of elements from the second sequence. The second group of elements of the first sorted input sequence may include at least one unmerged element from the first group of the first sequence. The second group of elements of the second sorted input sequence may include at least one unmerged element from the first group of the second sequence. Referring back to the example in FIG. 9, the second group of elements identified in the first sequence is 9 14 17 17, and the second group of elements identified in the second sequence is 10 14 17 17.
It should be noted that element 10 of the first group of the second sequence was not merged and is therefore identified and included in the second group of the second sequence. The result produced by the bitonic logic 840 for the identified second group of elements from the first sequence (as identified by the identification logic 830) has a mask value 916 of 1 1 1 1, and the result produced by the bitonic logic 840 for the identified second group of elements from the second sequence (as identified by the identification logic 830) also has a mask value 916 of 1 1 1 1. As such, mask logic 850 selects all elements 9 14 17 17 from the first sequence and all elements 10 14 17 17 from the second sequence and combines these elements into a merged sequence 918 of 9 14 17 17 10 14 17 17. The sort mask logic 860 sorts the merged sequence to produce a sorted merged sequence 920 of 9 10 14 14 17 17 17 17. In one embodiment, the identification logic 830, the bitonic logic 840, the mask logic 850, and the sort mask logic 860 repeat the above process for the remaining groups of elements from the first sorted input sequence and from the second sorted input sequence until all elements in the first sequence and the second sequence are merged, sorted, and output by the sort mask logic 860 as the final ordered merged sequence of n elements. Returning to the example in FIG. 9, the final ordered merged sequence 922 is 1 2 4 5 6 7 8 9 10 14 14 17 17 17 17 18. Thus, the two ordered input sequences are combined to produce a globally ordered output. In one embodiment, permutation logic 870 permutes (or scrambles) the final ordered merged sequence of n elements and outputs the permuted merged ordered n elements to one of registers 140a through 140n. In one embodiment, one of the registers 140a through 140n is a destination register. In one embodiment, the destination register is the same as the source register.
In one embodiment, the destination register is different from the source register. In one embodiment, permutation logic 870 assigns each of the sorted merged n elements a position in the destination register that corresponds to the location of the corresponding element in the source register, and pushes the sorted merged n elements to their respective locations in the destination register. FIG. 10 is a flow diagram of a method 1000 for merging data in an instruction set architecture environment of a processing device, in accordance with an embodiment of the present disclosure. Method 1000 can be performed by processing logic, which can include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device, general purpose computer system, or special purpose machine), firmware, or a combination thereof. In one embodiment, method 1000 can be performed in part by the merge modules 120 and 820 described above with respect to FIGS. 1 and 8. To simplify the explanation, method 1000 is depicted and described as a series of acts. However, acts in accordance with the present disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Moreover, not all of the acts shown may be performed to implement method 1000 in accordance with the disclosed subject matter. Additionally, those skilled in the art will understand and appreciate that method 1000 can alternatively be represented as a series of interrelated states via a state diagram or events. At block 1002, processing logic retrieves from the source register two input ordered sequences of n data elements (elements) from their respective locations. At block 1004, a first group of elements is identified from the first ordered input sequence and a first group of elements is identified from the second ordered input sequence.
At block 1006, each of the elements in the first group of the first sequence is compared to each of the elements in the first group of the second sequence. At block 1008, a value of 1 is assigned to elements in the first group of the first sequence based on the comparison made at block 1006. In one embodiment, the sequences are merged in ascending order, such that when it is determined that an element in the first group of the first sequence is less than or equal to at least one of the elements in the first group of the second sequence, that element of the first group of the first sequence is assigned a value of 1. In another embodiment, the sequences are merged in descending order, such that when it is determined that an element in the first group of the first sequence is greater than or equal to at least one of the elements in the first group of the second sequence, that element of the first group of the first sequence is assigned a value of 1. At block 1010, based on the comparison at block 1006, elements in the first group of the first sequence are assigned a value of 0. In one embodiment, the sequences are merged in ascending order, such that when it is determined that an element in the first group of the first sequence is not less than or equal to any of the elements in the first group of the second sequence, that element of the first group of the first sequence is assigned a value of 0. In another embodiment, the sequences are merged in descending order, such that when it is determined that an element in the first group of the first sequence is not greater than or equal to any of the elements in the first group of the second sequence, that element of the first group of the first sequence is assigned a value of 0. Then, at block 1012, each of the elements in the first group of the second sequence is compared to each of the elements in the first group of the first sequence.
Then, at block 1014, based on the comparison made at block 1012, elements in the first group of the second sequence are assigned a value of 1. In one embodiment, the sequences are merged in ascending order, such that when it is determined that an element in the first group of the second sequence is less than or equal to at least one of the elements in the first group of the first sequence, that element of the first group of the second sequence is assigned a value of 1. In one embodiment, the sequences are merged in descending order, such that when it is determined that an element in the first group of the second sequence is greater than or equal to at least one of the elements in the first group of the first sequence, that element of the first group of the second sequence is assigned a value of 1. Then, at block 1016, based on the comparison made at block 1012, elements in the first group of the second sequence are assigned a value of 0. In one embodiment, the sequences are merged in ascending order, such that when it is determined that an element in the first group of the second sequence is not less than or equal to any of the elements in the first group of the first sequence, that element of the first group of the second sequence is assigned a value of 0. In one embodiment, the sequences are merged in descending order, such that when it is determined that an element in the first group of the second sequence is not greater than or equal to any of the elements in the first group of the first sequence, that element of the first group of the second sequence is assigned a value of 0. At block 1018, each of the elements of the first group of the first sequence assigned a value of 1 at block 1008 is merged with each of the elements of the first group of the second sequence assigned a value of 1 at block 1014, producing a merged sequence.
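The mask-based selection of blocks 1006 through 1018, together with the sort of block 1020, can be sketched in software for the ascending case; the function name and the explicit leftover tracking are illustrative assumptions, since the patent describes this as bitonic compare hardware:

```python
# Hedged sketch of one pass of method 1000 over a pair of groups,
# merging in ascending order (the <= mask described above).
def mask_merge_step(group_a, group_b):
    # Blocks 1006-1010: element of A gets 1 if it is <= at least one
    # element of B, else 0.
    mask_a = [1 if any(a <= b for b in group_b) else 0 for a in group_a]
    # Blocks 1012-1016: symmetric mask for B against A.
    mask_b = [1 if any(b <= a for a in group_a) else 0 for b in group_b]
    # Block 1018: keep only the elements whose mask value is 1; unmerged
    # elements carry over to the next group (as element 10 does in FIG. 9).
    merged = [a for a, m in zip(group_a, mask_a) if m]
    merged += [b for b, m in zip(group_b, mask_b) if m]
    leftover = [a for a, m in zip(group_a, mask_a) if not m]
    leftover += [b for b, m in zip(group_b, mask_b) if not m]
    # Block 1020: sort the merged selection.
    return sorted(merged), leftover

merged, leftover = mask_merge_step([1, 2, 7, 8], [4, 5, 6, 10])
print(merged)    # [1, 2, 4, 5, 6, 7, 8]
print(leftover)  # [10]
```

This reproduces the first-group result of the FIG. 9 example: sorted merged sequence 1 2 4 5 6 7 8, with element 10 left for the second group.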
At block 1020, the merged sequence is sorted to produce a sorted merged sequence of elements. Thus, the two ordered input sequences are combined to produce a globally ordered output. Method 1000 is repeated starting at block 1004 for all groups of elements of the first sequence and all groups of elements of the second sequence, including any unmerged elements from blocks 1010 and 1016, until the final ordered merged sequence of the n elements is output at block 1020. At block 1022, the final ordered sequence of n elements is permuted. In one embodiment, the permuting includes assigning each of the sorted merged n elements a position in a destination register that corresponds to the location of the corresponding element in the source register. At block 1024, the permuted ordered merged sequence of n elements is output to the destination register. In one embodiment, the destination register is the same as the source register. In one embodiment, the destination register is different from the source register. FIG. 11A is a block diagram showing an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline of a processor that monitors performance of a processing device to manage inaccurate events, in accordance with at least one embodiment of the present invention. FIG. 11B is a block diagram showing in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor, in accordance with at least one embodiment of the present invention. The solid lined boxes in FIG. 11A show the in-order pipeline, while the dashed boxes show the register renaming, out-of-order issue/execution pipeline. Similarly, the solid lined boxes in FIG. 11B show the in-order architecture logic, while the dashed boxes show the register renaming logic and out-of-order issue/execution logic. In FIG.
11A, processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124. In some embodiments, the stages are provided in a different order, and different stages may be considered in-order or out-of-order. In FIG. 11B, arrows denote a coupling between two or more units, and the direction of the arrows indicates the direction of data flow between those units. FIG. 11B shows a processor core 1190 including a front end unit 1130 coupled to an execution engine unit 1150, both of which are coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1190 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a graphics core, or the like. The front end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, which is coupled to an instruction translation lookaside buffer (TLB) 1136, which is coupled to an instruction fetch unit 1138, which is coupled to a decode unit 1140. The decode unit or decoder may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions.
The decoder may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), and the like. The instruction cache unit 1134 is further coupled to a level 2 (L2) cache unit 1176 in the memory unit 1170. The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The retirement unit 1154 may include a merge and order module 1103 to sort data in an instruction set architecture and merge the sorted data in accordance with an embodiment of the present invention. The scheduler unit(s) 1156 represent any number of different schedulers, including reservation stations, a central instruction window, and the like. The scheduler unit(s) 1156 are coupled to the physical register file unit(s) 1158. Each of the physical register file unit(s) 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), and so forth. The physical register file unit(s) 1158 are overlapped by the retirement unit 1154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.).

Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective.
The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, and the like. The retirement unit 1154 and the physical register file unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 include a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and operate on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1156, physical register file unit(s) 1158, and execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164).
It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution pipelines and the rest in-order pipelines.

The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174, which is coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The L2 cache unit 1176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch unit 1138 performs the fetch and length decode stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and renaming stage 1110; 4) the scheduler unit(s) 1156 perform the schedule stage 1112; 5) the physical register file unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114, and the execution cluster 1160 performs the execute stage 1116; 6) the memory unit 1170 and the physical register file unit(s) 1158 perform the write-back/memory write stage 1118; 7) various units may be involved in the exception handling stage 1122; and 8) the retirement unit 1154 and the physical register file unit(s) 1158 perform the commit stage 1124.

The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions), the MIPS instruction set of MIPS Technologies of Sunnyvale, California, and the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions such as NEON)).

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and the multithreading may be
accomplished in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding, and simultaneous multithreading thereafter, such as in hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1134/1174 and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the caches may be external to the core and/or the processor.

FIG. 12 is a block diagram showing a micro-architecture for a processor 1200 that includes logic for executing instructions in accordance with one embodiment of the present invention. In one embodiment, the processor 1200 monitors performance of a processing device to manage inaccurate events. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as single and double precision integer and floating point data types. In one embodiment, the in-order front end 1201 is the part of the processor 1200 that fetches the instructions to be executed and prepares them to be used later in the processor pipeline. The front end 1201 may include several units.
In one embodiment, the instruction prefetcher 1226 fetches instructions from memory and feeds them to an instruction decoder 1228, which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "microinstructions" or "micro-operations" (also referred to as micro-ops or uops) that the machine can execute.

In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 1230 takes decoded uops and assembles them into program-ordered sequences, or traces, in the uop queue 1234 for execution. When the trace cache 1230 encounters a complex instruction, the microcode ROM 1232 provides the uops needed to complete the operation.

Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 1228 accesses the microcode ROM 1232 to do the instruction. For one embodiment, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 1228. In another embodiment, an instruction can be stored within the microcode ROM 1232 should a number of micro-ops be needed to accomplish the operation. The trace cache 1230 refers to an entry point programmable logic array (PLA) to determine a correct microinstruction pointer for reading the microcode sequences from the microcode ROM 1232 to complete one or more instructions in accordance with one embodiment. After the microcode ROM 1232 finishes sequencing micro-ops for an instruction, the front end 1201 of the machine resumes fetching micro-ops from the trace cache 1230.

The out-of-order execution engine 1203 is where the instructions are prepared for execution.
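The split between the decoder and the microcode ROM described above can be sketched in software (the instruction names, micro-op splits, and the hypothetical `fetch_uops` helper are invented for illustration; only the "more than four micro-ops go to the microcode ROM" rule comes from the text):

```python
# Toy model of the decode path: instructions that decode into at most four
# micro-ops are handled by the decoder; longer flows are sequenced from the
# microcode ROM. Both tables are invented for illustration.
DECODER = {
    "mov":     ["uop_mov"],
    "add_mem": ["agen", "load", "alu_add", "store"],
}
MICROCODE_ROM = {
    "rep_movs": ["agen", "load", "store", "inc", "dec", "branch"],
}

def fetch_uops(instr):
    uops = DECODER.get(instr)
    if uops is not None and len(uops) <= 4:
        return ("decoder", uops)
    # more than four micro-ops (or not directly decodable): microcode ROM
    return ("microcode_rom", MICROCODE_ROM[instr])

print(fetch_uops("add_mem"))   # four micro-ops: handled by the decoder
print(fetch_uops("rep_movs"))  # long flow: sequenced from the microcode ROM
```

In the hardware described above, the front end resumes fetching from the trace cache once the microcode ROM has finished sequencing; the sketch only illustrates the routing decision itself.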
The out-of-order execution logic has a number of buffers to smooth out and reorder the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logical registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: the memory scheduler, fast scheduler 1202, slow/general floating point scheduler 1204, and simple floating point scheduler 1206. The uop schedulers 1202, 1204, 1206 determine when a uop is ready to execute based on the readiness of its dependent input register operand sources and the availability of the execution resources the uop needs to complete its operation. The fast scheduler 1202 of one embodiment can schedule on each half of the main clock cycle, while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

Register files 1208, 1210 sit between the schedulers 1202, 1204, 1206 and the execution units 1212, 1214, 1216, 1218, 1220, 1222, 1224 in the execution block 1211. There are separate register files for integer and floating point operations, respectively. Each register file 1208, 1210 of one embodiment also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. The integer register file 1208 and the floating point register file 1210 are also capable of communicating data with each other. For one embodiment, the integer register file 1208 is split into two separate register files: one register file for the low-order 32 bits of data and a second register file for the high-order 32 bits of data.
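The register renaming step described above can be illustrated with a minimal software sketch (the register names, physical register file size, and the `Renamer` class are invented for illustration; real renaming logic is hardware):

```python
# Toy register renaming: logical (architectural) registers are mapped onto
# entries of a larger physical register file drawn from a free list.
class Renamer:
    def __init__(self, initial_map, num_physical=8):
        self.map = dict(initial_map)        # logical name -> physical entry
        used = set(self.map.values())
        self.free = [p for p in range(num_physical) if p not in used]

    def rename(self, dest, sources):
        """Rename one uop: sources read the current mapping, while the
        destination receives a fresh physical register, eliminating
        write-after-write and write-after-read hazards."""
        src_phys = [self.map[s] for s in sources]
        dst_phys = self.free.pop(0)
        self.map[dest] = dst_phys
        return dst_phys, src_phys

r = Renamer({"r1": 0, "r2": 1})
print(r.rename("r1", ["r1", "r2"]))  # r1's write gets a new physical entry
```

Because each destination write gets a fresh physical entry, only true (read-after-write) dependences remain, which is what allows the out-of-order logic to reorder uops safely.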
The floating point register file 1210 of one embodiment has 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

The execution block 1211 contains the execution units 1212, 1214, 1216, 1218, 1220, 1222, 1224, where the instructions are actually executed. This section includes the register files 1208, 1210 that store the integer and floating point data operand values that the microinstructions need to execute. The processor 1200 of one embodiment is comprised of a number of execution units: address generation unit (AGU) 1212, AGU 1214, fast ALU 1216, fast ALU 1218, slow ALU 1220, floating point ALU 1222, and floating point move unit 1224. For one embodiment, the floating point execution blocks 1222, 1224 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 1222 of one embodiment includes a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present invention, instructions involving a floating point value may be handled with the floating point hardware.

In one embodiment, the ALU operations go to the high-speed ALU execution units 1216, 1218. The fast ALUs 1216, 1218 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 1220, as the slow ALU 1220 includes integer execution hardware for long-latency type operations, such as multiplies, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 1212, 1214. For one embodiment, the integer ALUs 1216, 1218, 1220 are described in the context of performing integer operations on 64-bit data operands. In alternative embodiments, the ALUs 1216, 1218, 1220 can be implemented to support a variety of data bits including 16, 32, 128, 256, etc.
Similarly, the floating point units 1222, 1224 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 1222, 1224 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.

In one embodiment, the uop schedulers 1202, 1204, 1206 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in the processor 1200, the processor 1200 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed, and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of the processor are also designed to catch instruction sequences for text string comparison operations.

The processor 1200 can include a retirement unit 1254 coupled to the execution block 1211. The retirement unit 1254 can include a merge and order module 1205 to sort data in the instruction set architecture of the processing device and merge the sorted data.

The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, the registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data and performing the functions described herein.
The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data.

A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as "mm" registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data are either contained in the same register file or in different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or in the same registers.

Referring now to FIG. 13, shown is a block diagram of a system 1300 in accordance with one embodiment of the present invention. The system 1300 may include one or more processors 1310, 1315, which are coupled to a graphics memory controller hub (GMCH) 1320. The optional nature of the additional processor 1315 is denoted in FIG. 13 with dashed lines.
In one embodiment, the processors 1310, 1315 monitor performance of a processing device to manage inaccurate events.

Each processor 1310, 1315 may be some version of the circuit, integrated circuit, processor, and/or silicon integrated circuit as described above. However, it should be noted that it is unlikely that integrated graphics logic and integrated memory control units would exist in the processors 1310, 1315. FIG. 13 illustrates that the GMCH 1320 may be coupled to a memory 1340 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache.

The GMCH 1320 may be a chipset, or a portion of a chipset. The GMCH 1320 may communicate with the processors 1310, 1315 and control interaction between the processor(s) 1310, 1315 and the memory 1340. The GMCH 1320 may also act as an accelerated bus interface between the processor(s) 1310, 1315 and other elements of the system 1300. For at least one embodiment, the GMCH 1320 communicates with the processor(s) 1310, 1315 via a multi-drop bus, such as a frontside bus (FSB) 1395.

Furthermore, the GMCH 1320 is coupled to a display 1345 (such as a flat panel display or a touchscreen display). The GMCH 1320 may include an integrated graphics accelerator. The GMCH 1320 is further coupled to an input/output (I/O) controller hub (ICH) 1350, which may be used to couple various peripheral devices to the system 1300. Shown for example in the embodiment of FIG. 13 is an external graphics device 1360, which may be a discrete graphics device, coupled to the ICH 1350, along with another peripheral device 1370.

Alternatively, additional or different processors may also be present in the system 1300.
For example, the additional processor(s) 1315 may include additional processor(s) that are the same as the processor 1310, additional processor(s) that are heterogeneous or asymmetric to the processor 1310, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There can be a variety of differences between the processors 1310, 1315 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processors 1310, 1315. For at least one embodiment, the various processors 1310, 1315 may reside in the same die package.

Embodiments may be implemented in many different system types. FIG. 14 is a block diagram of a SoC 1400 in accordance with an embodiment of the present disclosure. Dashed boxes are optional features on more advanced SoCs. In FIG. 14, the interconnect unit(s) 1412 are coupled to: an application processor 1420 which includes a set of one or more cores 1402A through 1402N and shared cache unit(s) 1406; a system agent unit 1410; bus controller unit(s) 1416; integrated memory controller unit(s) 1414; a set of one or more media processors 1418 which may include integrated graphics logic 1408, an image processor 1424 for providing still and/or video camera functionality, an audio processor 1426 for providing hardware audio acceleration, and a video processor 1428 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays. In one embodiment, a memory module may be included in the integrated memory controller unit(s) 1414. In another embodiment, the memory module may be included in one or more other components of the SoC 1400 that may be used to access and/or control a memory.
The application processor 1420 may include conditional branches, indirect branches, and event execution logic as described in embodiments herein.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1406, and external memory (not shown) coupled to the set of integrated memory controller units 1414. The set of shared cache units 1406 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.

In some embodiments, one or more of the cores 1402A through 1402N are capable of multi-threading.

The system agent 1410 includes those components coordinating and operating the cores 1402A through 1402N. The system agent unit 1410 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1402A through 1402N and the integrated graphics logic 1408. The display unit is for driving one or more externally connected displays.

The cores 1402A through 1402N may be homogeneous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 1402A through 1402N may be in-order while others are out-of-order. As another example, two or more of the cores 1402A through 1402N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

The application processor 1420 may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, Atom™, XScale™, or StrongARM™ processor, which are available from Intel Corporation of Santa Clara, California. Alternatively, the application processor 1420 may be from another company, such as ARM Holdings, MIPS, etc.
The application processor 1420 may be a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a co-processor, an embedded processor, or the like. The application processor 1420 may be implemented on one or more chips. The application processor 1420 may be a part of one or more substrates and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

FIG. 15 is a block diagram of an embodiment of a system-on-a-chip (SoC) design in accordance with the present disclosure. As a specific illustrative example, the SoC 1500 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. Often, a UE connects to a base station or node, which can essentially correspond to a mobile station (MS) in a GSM network.

Here, the SoC 1500 includes 2 cores (1506 and 1507). The cores 1506 and 1507 may conform to an instruction set architecture, such as an Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. The cores 1506 and 1507 are coupled to a cache control 1508 that is associated with a bus interface unit 1508 and an L2 cache 1510 to communicate with other parts of the system 1500. The interconnect 1510 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which may implement one or more aspects of the present disclosure as described.
In one embodiment, conditional branches, indirect branches, and event execution logic may be included in the cores 1506, 1507.

The interconnect 1510 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1530 to interface with a SIM card, a boot ROM 1535 to hold boot code for execution by the cores 1506 and 1507 to initialize and boot the SoC 1500, an SDRAM controller 1540 to interface with external memory (e.g., DRAM 1560), a flash controller 1545 to interface with non-volatile memory (e.g., flash 1565), a peripheral control 1550 (e.g., a serial peripheral interface) to interface with peripherals, a video codec 1520 and video interface 1525 to display and receive input (e.g., touch-enabled input), a GPU 1515 to perform graphics-related computations, etc. Any of these interfaces may incorporate aspects of the disclosure described herein. In addition, the system 1500 illustrates peripherals for communication, such as a Bluetooth module 1570, a 3G modem 1575, a GPS 1580, and Wi-Fi 1585.

Referring now to FIG. 16, shown is a block diagram of a system 1600 in accordance with an embodiment of the present invention. As shown in FIG. 16, the multiprocessor system 1600 is a point-to-point interconnect system and includes a first processor 1670 and a second processor 1680 coupled via a point-to-point interconnect 1650. Each of the processors 1670 and 1680 may be some version of the processors of the computing systems as described herein. In one embodiment, the processors 1670, 1680 monitor performance of a processing device to manage inaccurate events.

While shown with two processors 1670, 1680, it is to be understood that the scope of the present disclosure is not so limited.
In other embodiments, one or more additional processors may be present in a given processor.

The processors 1670 and 1680 are shown including integrated memory controller units 1672 and 1682, respectively. The processor 1670 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1676 and 1678; similarly, the second processor 1680 includes P-P interfaces 1686 and 1688. The processors 1670, 1680 may exchange information via a point-to-point (P-P) interface 1650 using P-P interface circuits 1678, 1688. As shown in FIG. 16, the IMCs 1672 and 1682 couple the processors to respective memories, namely a memory 1632 and a memory 1634, which may be portions of main memory locally attached to the respective processors.

The processors 1670 and 1680 may each exchange information with a chipset 1690 via individual P-P interfaces 1652, 1654 using point-to-point interface circuits 1676, 1694, 1686, 1698. The chipset 1690 may also exchange information with a high-performance graphics circuit 1638 via a high-performance graphics interface 1639.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that if a processor is placed into a low power mode, local cache information of either or both processors may be stored in the shared cache.

The chipset 1690 may be coupled to a first bus 1616 via an interface 1696. In one embodiment, the first bus 1616 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 16, various I/O devices 1614 may be coupled to the first bus 1616, along with a bus bridge 1618 which couples the first bus 1616 to a second bus 1620. In one embodiment, the second bus 1620 may be a low pin count (LPC) bus.
In one embodiment, various devices may be coupled to the second bus 1620 including, for example, a keyboard and/or mouse 1622, communication devices 1627, and a storage unit 1628 (such as a disk drive or other mass storage device) which may include instructions/code and data 1630. Further, an audio I/O 1624 may be coupled to the second bus 1620. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 16, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 17, shown is a block diagram of a system 1700 in accordance with an embodiment of the present invention. FIG. 17 illustrates processors 1770, 1780. In one embodiment, the processors 1770, 1780 monitor performance of a processing device to manage inaccurate events. Furthermore, the processors 1770, 1780 may include integrated memory and I/O control logic ("CL") 1772 and 1782, respectively, and intercommunicate with each other via a point-to-point interconnect 1750 between point-to-point (P-P) interfaces 1778 and 1788, respectively. The processors 1770, 1780 each communicate with the chipset 1790 via point-to-point interconnects 1752 and 1754 through the respective P-P interfaces 1776 to 1794 and 1786 to 1798 as shown. For at least one embodiment, the CL 1772, 1782 may include integrated memory controller units. The CL 1772, 1782 may include I/O control logic. As depicted, memories 1732, 1734 are coupled to the CL 1772, 1782, and I/O devices 1714 are also coupled to the control logic 1772, 1782. Legacy I/O devices 1715 are coupled to the chipset 1790 via an interface 1796.

FIG. 18 illustrates a block diagram 1800 of an embodiment of a tablet computing device, a smartphone, or other mobile device in which touchscreen interface connectors may be used. The processor 1810 may monitor performance of a processing device to manage inaccurate events. In addition, the processor 1810 performs the primary processing operations.
The audio subsystem 1820 represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. In one embodiment, a user interacts with the tablet computing device or smartphone by providing audio commands that are received and processed by the processor 1810.

The display subsystem 1830 represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the tablet computing device or smartphone. The display subsystem 1830 includes a display interface 1832, which includes the particular screen or hardware device used to provide a display to a user. In one embodiment, the display subsystem 1830 includes a touchscreen device that provides both output and input to a user.

The I/O controller 1840 represents hardware devices and software components related to interaction with a user. The I/O controller 1840 can operate to manage hardware that is part of the audio subsystem 1820 and/or the display subsystem 1830. Additionally, the I/O controller 1840 illustrates a connection point for additional devices that connect to the tablet computing device or smartphone, through which a user might interact. In one embodiment, the I/O controller 1840 manages devices such as accelerometers, cameras, light sensors, or other environmental sensors, or other hardware that can be included in the tablet computing device or smartphone. The input can be part of direct user interaction, as well as providing environmental input to the tablet computing device or smartphone.

In one embodiment, the tablet computing device or smartphone includes power management 1850 that manages battery power usage, charging of the battery, and features related to power saving operation. The memory subsystem 1860 includes memory devices for storing information in the tablet computing device or smartphone.
Connectivity 1870 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and software components (e.g., drivers, protocol stacks) to enable the tablet computing device or smart phone to communicate with external devices. Cellular connectivity 1872 may include, for example, wireless carriers such as GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), TDM (Time Division Multiplexing), or other cellular service standards. Wireless connectivity 1874 may include, for example, activity that is not cellular, such as personal area networks (e.g., Bluetooth), local area networks (e.g., Wi-Fi), and/or wide area networks (e.g., WiMAX), or other wireless communication.

Peripheral connections 1880 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks), both for making peripheral connections as a peripheral device ("to" 1882) to other computing devices and for having peripheral devices ("from" 1884), including, for example, "docking" connectors, connected to the tablet computing device or smart phone. The peripheral connections 1880 include common or standards-based connectors, such as Universal Serial Bus (USB) connectors, DisplayPort including MiniDisplayPort (MDP), High-Definition Multimedia Interface (HDMI), Firewire, etc.

Figure 19 illustrates a diagrammatic representation of a machine in the example form of a computing system 1900 within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The computing system 1900 includes a processing device 1902, a main memory 1904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1906 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1918, which communicate with each other via a bus 1930.

The processing device 1902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 1902 may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. In one embodiment, the processing device 1902 may include one or more processing cores. The processing device 1902 is configured to execute processing logic 1926 for performing the operations discussed herein.
In one embodiment, the processing device 1902 is the same as the computer system 100 that implements the sorting module 103 and the merging module 105 described with respect to Figure 1. Alternatively, the computing system 1900 may include other components as described herein.

The computing system 1900 may further include a network interface device 1908 communicably coupled to a network 1920. The computing system 1900 also may include a video display unit 1910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1912 (e.g., a keyboard), a cursor control device 1914 (e.g., a mouse), a signal generation device 1916 (e.g., a speaker), or other peripheral devices. Furthermore, the computing system 1900 may include a graphics processing unit 1922, a video processing unit 1928, and an audio processing unit 1932. In another embodiment, the computing system 1900 may include a chipset (not illustrated), which refers to a group of integrated circuits, or chips, that are designed to work with the processing device 1902 and that control communications between the processing device 1902 and external devices. For example, the chipset may be a set of chips on a motherboard that links the processing device 1902 to very high-speed devices, such as the main memory 1904 and a graphics controller, and that links the processing device 1902 to a lower-speed peripheral bus of peripherals, such as a USB, PCI, or ISA bus.

The data storage device 1918 may include a computer-readable storage medium 1924 on which is stored software 1926 embodying any one or more of the methodologies of functions described herein.
The software 1926 may also reside, completely or at least partially, within the main memory 1904 as instructions 1926 and/or within the processing device 1902 as processing logic 1926 during execution thereof by the computing system 1900; the main memory 1904 and the processing device 1902 also constitute computer-readable storage media.

The computer-readable storage medium 1924 may also be used to store instructions 1926 utilizing the sorting module 103 and the merging module 105, described above with respect to Figure 1, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1924 is shown in an example embodiment to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the embodiments. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.

While the invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom.
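The rank-sort and all-pairs-merge behavior attributed to the sorting module 103 and the merging module 105 can be illustrated in scalar Python. This is a behavioral sketch only, not the claimed implementation: the examples below recite a SIMD operation over packed register sets, and the function names `rank_sort` and `merge_sorted` are hypothetical.

```python
# Behavioral sketch (hypothetical) of the rank-sort scheme: append a
# position value to each element, compare every transformed element
# against every other, and count "enable" (less-than) indicators to
# obtain each element's rank in the sorted sequence.

def rank_sort(elements):
    """Sort by counting, for each element, how many elements precede it."""
    n = len(elements)
    # Transform: pair each element with its register position so that
    # equal values become distinct (stable tie-breaking).
    transformed = [(value, position) for position, value in enumerate(elements)]
    result = [None] * n
    for item in transformed:
        # Each successful less-than comparison corresponds to an
        # assigned enable indicator; the count is the element's rank.
        rank = sum(1 for other in transformed if other < item)
        result[rank] = item[0]
    return result

def merge_sorted(first_half, second_half):
    """All-pairs merge of two sorted sets into one merged sorted sequence."""
    # Comparing every element of each half against every element of the
    # combined set yields each element's position in the merged output.
    combined = [(v, i) for i, v in enumerate(first_half + second_half)]
    merged = [None] * len(combined)
    for item in combined:
        rank = sum(1 for other in combined if other < item)
        merged[rank] = item[0]
    return merged
```

Because the appended position value makes all transformed elements unique, the enable-indicator counts form a permutation of 0..n-1 and can be used directly as destination indices, which is what makes the counting step data-parallel.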
It is intended that the appended claims be interpreted as covering all such modifications and variations as fall within the true spirit and scope of the present disclosure.

The following examples pertain to further embodiments.

Example 1 is a processing device comprising a sorting module to: add, to each of a plurality of elements, a position value of a corresponding position in a register set, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements comprises a plurality of bits; compare each of the plurality of transformed elements with itself and with each other; assign, based on the comparing, one of an enable indicator or a disable indicator to each of the plurality of transformed elements; and count the number of enable indicators assigned to each of the plurality of transformed elements to generate a sorted sequence of the plurality of elements.

In Example 2, the subject matter of Example 1 can optionally include the sorting module to shift each of the plurality of elements in each of the respective positions to the left by a set of bits when a value of at least one of the plurality of elements is the same as a value of another of the plurality of elements.

In Example 3, the subject matter of Examples 1-2 can optionally include wherein the sorted sequence comprises the counts of the numbers of enable indicators, and the sorting module is to generate the sorted sequence of the plurality of elements in one of ascending or descending order.

In Example 4, the subject matter of Examples 1-3 can optionally include wherein the sorting module to compare each of the plurality of transformed elements further comprises the sorting module to perform a less-than operation.

In Example 5, the subject matter of Examples 1-4 can optionally include wherein the sorting module to compare each of the plurality of transformed elements further comprises the sorting module
to perform a greater-than operation.

In Example 6, the subject matter of Examples 1-5 can optionally include wherein the sorting module is to generate at least a first set of the sorted sequence of a plurality of sorted elements and a second set of the sorted sequence of the plurality of sorted elements.

In Example 7, the subject matter of Examples 1-6 can optionally include a merging module coupled to the sorting module, wherein the merging module is to: partition the first set of the sorted sequence into a first half and the second set of the sorted sequence into a second half, wherein the first half comprises the plurality of sorted elements of the first set of the sorted sequence, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence; compare each of the plurality of sorted elements in the first half with each of the plurality of sorted elements in the second half, and each of the plurality of sorted elements in the second half with each of the plurality of sorted elements in the first half, to generate a third set of the sorted sequence of the plurality of sorted elements in an order; and generate, as a merged sorted sequence, the position values of each of the plurality of elements of the third set of the sorted sequence at the respective positions.

In Example 8, the subject matter of Examples 1-7 can optionally include a merging module coupled to the sorting module, wherein the merging module is to: identify multiple sets among the plurality of sorted elements from the first set of the sorted sequence and identify additional multiple sets among the plurality of sorted elements from the second set of the sorted sequence; compare each of the plurality of
sorted elements in each of the identified multiple sets from the first set of the sorted sequence with each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence; select, based on the comparing, a sorted element from each of the identified multiple sets from the first set of the sorted sequence; compare each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence with each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence; and select, based on the comparing, a sorted element from each of the identified additional multiple sets from the second set of the sorted sequence.

In Example 9, the subject matter of Examples 1-8 can optionally include wherein the merging module is to: combine the selected sorted element from each of the identified multiple sets from the first set of the sorted sequence with the selected sorted element from each of the identified additional multiple sets from the second set of the sorted sequence to generate a merged sequence comprising the combined selected sorted elements; and place the combined selected sorted elements in an order to generate a merged sorted sequence.

In Example 10, the subject matter of Examples 1-9 can optionally include wherein the merging module is to generate the merged sorted sequence in one of ascending or descending order.

Example 11 is a system on a chip (SoC) comprising: a memory; and a processing device communicably coupled to the memory, the processing device comprising a sorting module to: add, to each of a plurality of elements, a position value of a corresponding position in a register set, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements comprises a plurality of bits; compare each of the plurality of transformed elements with itself and with each other, wherein each of the
plurality of transformed elements is assigned, based on the comparing, one of an enable indicator or a disable indicator; and the number of enable indicators assigned to each of the plurality of transformed elements is counted to generate a sorted sequence of the plurality of elements.

In Example 12, the subject matter of Example 11 can optionally include the sorting module to shift each of the plurality of elements in each of the respective positions to the left by a set of bits when a value of at least one of the plurality of elements is the same as a value of another of the plurality of elements.

In Example 13, the subject matter of Examples 11-12 can optionally include wherein the sorted sequence comprises the counts of the numbers of enable indicators, and wherein the sorting module is to generate at least a first set of the sorted sequence of a plurality of sorted elements and a second set of the sorted sequence of the plurality of sorted elements.

In Example 14, the subject matter of Examples 11-13 can optionally include wherein the processing device further comprises a merging module coupled to the sorting module, wherein the merging module is to: partition the first set of the sorted sequence into a first half and the second set of the sorted sequence into a second half, wherein the first half comprises the plurality of sorted elements of the first set of the sorted sequence, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence; compare each of the plurality of sorted elements in the first half with each of the plurality of sorted elements in the second half, and each of the plurality of sorted elements in the second half with each of the plurality of sorted elements in the first half, to generate a third set of the sorted sequence of the plurality of sorted elements in an order; and generate, as a merged sorted sequence, the
position values of each of the plurality of elements of the third set of the sorted sequence at the respective positions.

In Example 15, the subject matter of Examples 11-14 can optionally include wherein the processing device further comprises a merging module coupled to the sorting module, wherein the merging module is to: identify multiple sets among the plurality of sorted elements from the first set of the sorted sequence and identify additional multiple sets among the plurality of sorted elements from the second set of the sorted sequence; compare each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence with each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence; select, based on the comparing, a sorted element from each of the identified multiple sets from the first set of the sorted sequence; compare each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence with each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence; select, based on the comparing, a sorted element from each of the identified additional multiple sets from the second set of the sorted sequence; combine the selected sorted element from each of the identified multiple sets from the first set of the sorted sequence with the selected sorted element from each of the identified additional multiple sets from the second set of the sorted sequence to generate a merged sequence comprising the combined selected sorted elements; and place the combined selected sorted elements in an order to generate a merged sorted sequence.

Example 16 is a
method comprising: adding, to each of a plurality of elements, a position value of a corresponding position in a register set, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements comprises a plurality of bits; comparing each of the plurality of transformed elements with itself and with each other; assigning, based on the comparing, one of an enable indicator or a disable indicator to each of the plurality of transformed elements; and counting the number of enable indicators assigned to each of the plurality of transformed elements to generate a sorted sequence of the plurality of elements.

In Example 17, the subject matter of Example 16 can optionally include the sorted sequence comprising the counts of the numbers of enable indicators.

In Example 18, the subject matter of Examples 16-17 can optionally include: shifting each of the plurality of elements in each of the respective positions to the left by a set of bits when a value of at least one of the plurality of elements is the same as a value of another of the plurality of elements; and generating at least a first set of the sorted sequence of a plurality of sorted elements and a second set of the sorted sequence of the plurality of sorted elements.

In Example 19, the subject matter of Examples 16-18 can optionally include: partitioning the first set of the sorted sequence into a first half and the second set of the sorted sequence into a second half, wherein the first half comprises the plurality of sorted elements of the first set of the sorted sequence, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence; comparing each of the plurality of sorted elements in the first half with each of the plurality of sorted elements in the second half, and each of the plurality of sorted elements in the second half with each of the plurality of
sorted elements in the first half, to generate a third set of the sorted sequence of the plurality of sorted elements in an order; and generating, as a merged sorted sequence, the position values of each of the plurality of elements of the third set of the sorted sequence at the respective positions.

In Example 20, the subject matter of Examples 16-19 can optionally include: identifying multiple sets among the plurality of sorted elements from the first set of the sorted sequence and identifying additional multiple sets among the plurality of sorted elements from the second set of the sorted sequence; comparing each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence with each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence; selecting, based on the comparing, a sorted element from each of the identified multiple sets from the first set of the sorted sequence; comparing each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence with each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence; selecting, based on the comparing, a sorted element from each of the identified additional multiple sets from the second set of the sorted sequence; combining the selected sorted element from each of the identified multiple sets from the first set of the sorted sequence with the selected sorted element from each of the identified additional multiple sets from the second set of the sorted sequence to generate a merged sequence comprising the combined selected sorted elements; and placing the combined selected sorted elements in an order to generate a merged sorted sequence.

Example 21 is a non-transitory machine-readable storage medium
comprising instructions that, when accessed by a processing device, cause the processing device to perform operations comprising: adding, to each of a plurality of elements, a position value of a corresponding position in a register set, thereby generating a plurality of transformed elements in the corresponding positions, wherein each of the plurality of elements comprises a plurality of bits; comparing each of the plurality of transformed elements with itself and with each other; assigning, based on the comparing, one of an enable indicator or a disable indicator to each of the plurality of transformed elements; and counting the number of enable indicators assigned to each of the plurality of transformed elements to generate a sorted sequence of the plurality of elements.

In Example 22, the subject matter of Example 21 can optionally include the sorted sequence comprising the counts of the numbers of enable indicators.

In Example 23, the subject matter of Examples 21-22 can optionally include wherein the operations further comprise: shifting each of the plurality of elements in each of the respective positions to the left by a set of bits when a value of at least one of the plurality of elements is the same as a value of another of the plurality of elements; and generating at least a first set of the sorted sequence of a plurality of sorted elements and a second set of the sorted sequence of the plurality of sorted elements.

In Example 24, the subject matter of Examples 21-23 can optionally include wherein the operations further comprise: partitioning the first set of the sorted sequence into a first half and the second set of the sorted sequence into a second half, wherein the first half comprises the plurality of sorted elements of the first set of the sorted sequence, and the second half comprises the plurality of sorted elements of the second set of the sorted sequence; comparing each of the plurality of sorted
elements in the first half with each of the plurality of sorted elements in the second half, and each of the plurality of sorted elements in the second half with each of the plurality of sorted elements in the first half, to generate a third set of the sorted sequence of the plurality of sorted elements in an order; and generating, as a merged sorted sequence, the position values of each of the plurality of elements of the third set of the sorted sequence at the respective positions.

In Example 25, the subject matter of Examples 21-24 can optionally include wherein the operations further comprise: identifying multiple sets among the plurality of sorted elements from the first set of the sorted sequence and identifying additional multiple sets among the plurality of sorted elements from the second set of the sorted sequence; comparing each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence with each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence; selecting, based on the comparing, a sorted element from each of the identified multiple sets from the first set of the sorted sequence; comparing each of the plurality of sorted elements in each of the identified additional multiple sets from the second set of the sorted sequence with each of the plurality of sorted elements in each of the identified multiple sets from the first set of the sorted sequence; selecting, based on the comparing, a sorted element from each of the identified additional multiple sets from the second set of the sorted sequence; and combining the selected sorted element from each of the identified multiple sets from the first set of the sorted sequence with the selected sorted element from each of the identified additional multiple sets from the second set of the sorted sequence to
generate a merged sequence comprising the combined selected sorted elements; and placing the combined selected sorted elements in an order to generate a merged sorted sequence.

While the present disclosure has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that all such modifications and variations fall within the true spirit and scope of this disclosure.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage device, such as a disc, may be the machine-readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made.
Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term "module" (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term "logic" includes hardware, such as transistors and registers, or other hardware, such as programmable logic devices.

Use of the phrase "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation.
But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or a 0. Instead, the logic gate is one coupled in some manner that, during operation, the 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task while the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "to," "capable of/to," and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note, as above, that use of "to," "capable to," or "operable to," in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as the binary value 1010 and the hexadecimal letter A.
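The equivalence of these representations of the same value can be checked directly; a minimal illustration in Python (added for clarity, not part of the claimed subject matter):

```python
# Decimal ten, binary 1010, and hexadecimal A all denote the same value.
ten = 10
assert format(ten, 'b') == '1010'   # binary representation
assert format(ten, 'X') == 'A'      # hexadecimal representation
assert 0b1010 == 0xA == 10          # literal forms compare equal
```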
Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), etc., which are to be distinguished from the non-transitory media that may receive information therefrom.

Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media.
Accordingly, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random-access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the spirit and scope of the invention.
Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. In addition, the foregoing uses of the term embodiment and other exemplary language do not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
Methods, systems, and devices for edgeless memory clusters are described. Systems, devices, and techniques are described for eliminating gaps between clusters by creating groups (e.g., domains) of clusters that are active at a given time, and using drivers within inactive clusters to perform array termination functions for abutting active clusters. Tiles on the edges of a cluster may have drivers that operate both for that cluster and for a neighboring cluster, with circuits (e.g., multiplexers) on the drivers to enable operations for both clusters.
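The shared-driver behavior described above can be sketched as a simplified behavioral model. This is an informal illustration only, not circuitry from the disclosure; the names `EdgeDriver` and `select` are hypothetical:

```python
# Behavioral sketch of an edge-tile driver shared between two abutting
# clusters: a multiplexer-like select chooses which cluster's access the
# driver serves, so only one of the two clusters is driven at any time.
class EdgeDriver:
    def __init__(self, own_cluster, neighbor_cluster):
        self.own = own_cluster            # cluster this tile belongs to
        self.neighbor = neighbor_cluster  # abutting cluster it can terminate

    def select(self, active_cluster):
        """Return which cluster this driver serves, mimicking the mux."""
        if active_cluster == self.own:
            return self.own       # normal operation for its own cluster
        if active_cluster == self.neighbor:
            return self.neighbor  # array-termination duty for the neighbor
        return None               # neither cluster active: driver inhibited

driver = EdgeDriver(own_cluster="A", neighbor_cluster="B")
assert driver.select("A") == "A"   # cluster A active: serves its own array
assert driver.select("B") == "B"   # cluster B active: terminates B's array
assert driver.select("C") is None  # unrelated cluster: driver stays inhibited
```

Because abutting clusters operate in mutually exclusive active periods, the two cases of `select` never conflict.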
CLAIMS

What is claimed is:

1. An apparatus, comprising: a set of memory clusters, each memory cluster of the set of memory clusters comprising: a plurality of tiles having drivers; a memory array positioned above the plurality of tiles and having a plurality of memory cells; and a plurality of electrodes coupled with the plurality of memory cells for addressing each memory cell of the plurality of memory cells, wherein a driver of a first tile of a first memory cluster of the set of memory clusters is coupled with an electrode of the plurality of electrodes of a second tile of a second memory cluster of the set of memory clusters, and wherein the first memory cluster and the second memory cluster are configured to operate in an active mode during mutually exclusive periods of time.

2. The apparatus of claim 1, wherein the set of memory clusters comprises: a first subset of memory clusters that do not abut each other; and a second subset of memory clusters that do not abut each other, the second subset not including the memory clusters of the first subset, wherein the apparatus is configured such that the memory clusters of the second subset operate in an inactive mode when the memory clusters of the first subset operate in the active mode.

3. The apparatus of claim 2, wherein the first subset of memory clusters includes the first memory cluster and the second subset of memory clusters includes the second memory cluster.

4. The apparatus of claim 2, wherein the set of memory clusters further comprises: a third subset of memory clusters that do not abut each other, the third subset not including the memory clusters of the first and second subsets.

5. The apparatus of claim 1, wherein the first memory cluster abuts the second memory cluster.

6. The apparatus of claim 1, wherein each memory cluster of the set abuts another memory cluster of the set.

7. The apparatus of claim 1, wherein the apparatus is configured such that memory clusters of the set that abut each other do not operate in the active mode at the same time.

8. The apparatus of claim 1, wherein for each memory cluster of the set, tiles of the plurality of tiles have a common configuration of drivers and are positioned in a repeating pattern.

9. The apparatus of claim 1, wherein for each memory cluster of the set, the electrodes of the plurality of electrodes are coupled with the drivers via respective socket connections.

10. The apparatus of claim 1, wherein one or more electrodes of the plurality of electrodes of the first memory cluster are also in the plurality of electrodes of the second memory cluster.

11. The apparatus of claim 1, wherein one or more electrodes are each included in the plurality of electrodes of more than one memory cluster of the set.

12. A memory cluster, comprising: a plurality of tiles having drivers; a memory array positioned above the plurality of tiles and having a plurality of memory cells; and a plurality of electrodes coupled with the drivers via respective socket connections, the plurality of electrodes coupled with the plurality of memory cells for addressing each memory cell of the plurality of memory cells, wherein a tile of the plurality of tiles comprises a first portion of the drivers and a second portion of the drivers, the second portion of the drivers configured to be used when the memory cluster is inactive for accessing memory cells of an abutting memory cluster.

13. The memory cluster of claim 12, wherein each driver of the second portion of the drivers comprises: a first driver circuit configured to enable the driver to access one or more memory cells of the plurality of memory cells via an electrode of the plurality of electrodes; and a second driver circuit configured to enable the driver to access one or more memory cells of the abutting memory cluster via the electrode of the plurality of electrodes.

14. The memory cluster of claim 12, wherein the first portion of the drivers is configured to be used for accessing one or more memory cells of the memory cluster when the abutting memory cluster is inactive.

15. The memory cluster of claim 12, wherein tiles of the plurality of tiles have a common configuration of drivers and are positioned in a repeating pattern.

16. An apparatus, comprising: a first memory cluster, comprising: a first memory array having a first plurality of memory cells; a first driver coupled with a subset of the first plurality of memory cells; and a first driver circuit configured to enable the first driver when the first memory cluster is active; and a second memory cluster, comprising: a second memory array having a second plurality of memory cells, wherein the first driver is coupled with at least one memory cell of the second plurality of memory cells; and a second driver coupled with a subset of the second plurality of memory cells, wherein the first driver circuit is further configured to enable the first driver when the second memory cluster is active and the first memory cluster is inactive.

17. The apparatus of claim 16, wherein the first memory cluster abuts the second memory cluster.

18. The apparatus of claim 16, further comprising: an electrode that couples the first driver with the subset of the first plurality of memory cells and the second driver with the subset of the second plurality of memory cells.

19. A method, comprising: determining to access a first memory array of a first memory cluster, the first memory cluster including a first plurality of tiles that include a first plurality of drivers coupled with the first memory array; inhibiting, based at least in part on determining to access the first memory array, a first portion of a tile of a second plurality of tiles of a second memory cluster, the second memory cluster including a second memory array, the second plurality of tiles including a second plurality of drivers coupled with the second memory array, a driver of the second plurality of drivers coupled with the first memory array and positioned on a second portion of the tile; and enabling, based at least in part on determining to access the first memory array, the driver on the second portion of the tile of the second memory cluster to access the first memory array of the first memory cluster.

20. The method of claim 19, wherein the second portion of the tile acts as a termination tile for the first memory cluster when the first portion of the tile is inhibited.

21. The method of claim 19, further comprising: deactivating the second memory cluster before enabling the driver on the second portion of the tile of the second memory cluster.

22. The method of claim 19, wherein the driver is coupled with one or more memory cells of the first memory array and one or more memory cells of the second memory array via a same electrode.

23. The method of claim 22, wherein enabling the driver on the second portion of the tile of the second memory cluster comprises: activating the driver to access, via the electrode, the one or more memory cells of the first memory array without accessing the one or more memory cells of the second memory array.

24. The method of claim 22, wherein inhibiting the first portion of the tile of the second plurality of tiles comprises: disabling one or more drivers of the first portion of the tile.

25. The method of claim 19, further comprising: determining to access the second memory array of the second memory cluster; inhibiting a first portion of a second tile of the first plurality of tiles of the first memory cluster in response to determining to access the second memory array of the second memory cluster, a second driver of the first plurality of drivers further coupled with the second memory array and positioned on a second portion of the second tile; and enabling the second driver on the second portion of the second tile of the first memory cluster to access the second memory array of the second memory cluster.
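The access flow of claim 19 above can be summarized as a short control-flow sketch. This is an informal illustration only; the function name `access_cluster` and the data layout are hypothetical and do not appear in the claims:

```python
# Informal sketch of the claim-19 flow: when one cluster's array is
# accessed, the first portion of an edge tile in each abutting cluster is
# inhibited, while the driver on that tile's second portion is enabled so
# it can terminate the active cluster's array.
def access_cluster(target, clusters):
    actions = []
    for name, cluster in clusters.items():
        if name == target:
            actions.append(f"access array of {name}")
        elif target in cluster["abuts"]:
            actions.append(f"inhibit first portion of edge tile in {name}")
            actions.append(f"enable edge driver of {name} for {target}")
    return actions

clusters = {"C1": {"abuts": ["C2"]}, "C2": {"abuts": ["C1"]}}
for step in access_cluster("C1", clusters):
    print(step)
```

Note that the abutting cluster stays inactive for its own array while its edge driver serves the active neighbor, matching the mutually exclusive active periods recited in claim 1.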
EDGELESS MEMORY CLUSTERS

CROSS REFERENCES

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/385,682 by Castro et al., entitled "EDGELESS MEMORY CLUSTERS," filed July 26, 2021; which is assigned to the assignee hereof and expressly incorporated by reference herein.

BACKGROUND

[0002] Memory devices are widely used to store information in various electronic devices such as computers, user devices, wireless communication devices, cameras, digital displays, and the like. Information is stored by programing memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often denoted by a logic 1 or a logic 0. In some examples, a single memory cell may support more than two states, any one of which may be stored. To access the stored information, a component may read, or sense, at least one stored state in the memory device. To store information, a component may write, or program, the state in the memory device.

[0003] Various types of memory devices and memory cells exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), self-selecting memory, chalcogenide memory technologies, and others. Memory cells may be volatile or non-volatile.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] FIG. 1 illustrates an example of a system that supports edgeless memory clusters in accordance with examples as disclosed herein.

[0005] FIG. 2 illustrates an example of a memory die that supports edgeless memory clusters in accordance with examples as disclosed herein.

[0006] FIG. 3 illustrates an example of an array of memory cells that support edgeless memory clusters in accordance with examples as disclosed herein.

[0007] FIGs. 4A and 4B illustrate examples of memory modules that support edgeless memory clusters in accordance with examples as disclosed herein.

[0008] FIG. 5 illustrates a simplified plan view of a memory die that supports edgeless memory clusters in accordance with examples as disclosed herein.

[0009] FIGs. 6A and 6B illustrate simplified plan views of memory dies that support edgeless memory clusters in accordance with examples as disclosed herein.

[0010] FIG. 7 shows a block diagram of a memory device that supports edgeless memory clusters in accordance with examples as disclosed herein.

[0011] FIG. 8 shows a flowchart illustrating a method that supports edgeless memory clusters in accordance with examples as disclosed herein.

DETAILED DESCRIPTION

[0012] A quilt architecture has been used for some memories where the drivers are arranged in tiles underneath the memory cells. A group or array of memory that is decoded as a single group of memory cells may make up a cluster or partition. The cluster may include the tiles underneath the memory cells. In a cluster, the array of memory cells has breaks to allow for connections of electrodes to the drivers underneath, called socket connections. For this architecture, some of the electrodes contacting the memory cells associated with a tile may extend beyond the footprint of the tile to contact memory cells associated with a neighboring tile. Those memory cells may be attached to a driver on one of the neighboring tiles. In some cases, neighboring clusters have gaps between them to allow placement of array termination tiles, which include extra drivers for electrodes that have their socket connections outside the footprint of the cluster.

[0013] But the array termination tiles do not have associated memory cells, wasting that valuable space between neighboring clusters. Removing the array termination tiles (and thus the gap) between neighboring clusters would allow the neighboring clusters to be positioned closer together.
However, that would remove the extra drivers for the electrodes that have their socket connections outside the cluster's footprint.

[0014] Systems, devices, and techniques are presented herein for edgeless memory clusters. In particular, systems, devices, and techniques are described for eliminating gaps between clusters by creating groups (e.g., domains) of clusters that are active at a given time, and using drivers within inactive clusters to perform array termination functions for abutting active clusters. Tiles on the edges of a cluster may have drivers that operate both for the cluster and for a neighboring cluster, with circuits (e.g., multiplexers) on the drivers to enable operations for both clusters.

[0015] Eliminating the gaps between clusters may achieve a substantial reduction in die size. It may also reduce stress on the electrodes by reducing changes in the density of the electrode layers.

[0016] Features of the disclosure are initially described in the context of memory systems and dies as described with reference to FIGs. 1 and 2. Features of the disclosure are further described in the context of arrays and systems as described with reference to FIGs. 3-5. These and other features of the disclosure are further illustrated by and described with reference to block diagrams, an apparatus diagram, and a flowchart that relate to edgeless memory clusters as described with reference to FIGs. 4A-8.

[0017] FIG. 1 illustrates an example of a system 100 that supports edgeless memory clusters in accordance with examples as disclosed herein. The system 100 may include a host device 105, a memory device 110, and a plurality of channels 115 coupling the host device 105 with the memory device 110.
The system 100 may include one or more memory devices, but aspects of the one or more memory devices 110 may be described in the context of a single memory device (e.g., memory device 110).

[0018] The system 100 may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a vehicle, or other systems. For example, the system 100 may illustrate aspects of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, or the like. The memory device 110 may be a component of the system operable to store data for one or more other components of the system 100.

[0019] At least portions of the system 100 may be examples of the host device 105. The host device 105 may be an example of a processor or other circuitry within a device that uses memory to execute processes, such as within a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, a system on a chip (SoC), or some other stationary or portable electronic device, among other examples. In some examples, the host device 105 may refer to the hardware, firmware, software, or a combination thereof that implements the functions of an external memory controller 120. In some examples, the external memory controller 120 may be referred to as a host or a host device 105.

[0020] A memory device 110 may be an independent device or a component that is operable to provide physical memory addresses/space that may be used or referenced by the system 100. In some examples, a memory device 110 may be configurable to work with one or more different types of host devices 105.
Signaling between the host device 105 and the memory device 110 may be operable to support one or more of: modulation schemes to modulate the signals, various pin configurations for communicating the signals, various form factors for physical packaging of the host device 105 and the memory device 110, clock signaling and synchronization between the host device 105 and the memory device 110, timing conventions, or other factors.

[0021] The memory device 110 may be operable to store data for the components of the host device 105. In some examples, the memory device 110 may act as a secondary-type or dependent-type device to the host device 105 (e.g., responding to and executing commands provided by the host device 105 through the external memory controller 120). Such commands may include one or more of a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands.

[0022] The host device 105 may include one or more of an external memory controller 120, a processor 125, a basic input/output system (BIOS) component 130, or other components such as one or more peripheral components or one or more input/output controllers. The components of the host device 105 may be coupled with one another using a bus 135.

[0023] The processor 125 may be operable to provide control or other functionality for at least portions of the system 100 or at least portions of the host device 105. The processor 125 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these components. In such examples, the processor 125 may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or an SoC, among other examples.
In some examples, the external memory controller 120 may be implemented by or be a part of the processor 125.

[0024] The BIOS component 130 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100 or the host device 105. The BIOS component 130 may also manage data flow between the processor 125 and the various components of the system 100 or the host device 105. The BIOS component 130 may include a program or software stored in one or more of read-only memory (ROM), flash memory, or other non-volatile memory.

[0025] The memory device 110 may include a device memory controller 155 and one or more memory dies 160 (e.g., memory chips) to support a desired capacity or a specified capacity for data storage. Each memory die 160 (e.g., memory die 160-a, memory die 160-b, memory die 160-N) may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, memory array 170-N). A memory array 170 may be a collection (e.g., one or more grids, one or more banks, one or more tiles, one or more sections) of memory cells, with each memory cell being operable to store at least one bit of data. A memory device 110 including two or more memory dies 160 may be referred to as a multi-die memory or a multi-die package or a multi-chip memory or a multi-chip package.

[0026] The memory die 160 may be an example of a two-dimensional (2D) array of memory cells or may be an example of a three-dimensional (3D) array of memory cells. A 2D memory die 160 may include a single memory array 170. A 3D memory die 160 may include two or more memory arrays 170, which may be stacked on top of one another or positioned next to one another (e.g., relative to a substrate).
In some examples, memory arrays 170 in a 3D memory die 160 may be referred to as decks, levels, layers, or dies. A 3D memory die 160 may include any quantity of stacked memory arrays 170 (e.g., two high, three high, four high, five high, six high, seven high, eight high). In some 3D memory dies 160, different decks may share at least one common access line such that some decks may share one or more of a row line or column line. In some 3D memory dies 160, common access lines may be shared by abutting clusters of memory arrays.

[0027] The device memory controller 155 may include circuits, logic, or components operable to control operation of the memory device 110. The device memory controller 155 may include the hardware, the firmware, or the instructions that enable the memory device 110 to perform various operations and may be operable to receive, transmit, or execute commands, data, or control information related to the components of the memory device 110. The device memory controller 155 may be operable to communicate with one or more of the external memory controller 120, the one or more memory dies 160, or the processor 125. In some examples, the device memory controller 155 may control operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160.

[0028] A local memory controller 165 (e.g., local to a memory die 160) may include circuits, logic, or components operable to control operation of the memory die 160. In some examples, a local memory controller 165 may be operable to communicate (e.g., receive or transmit data or commands or both) with the device memory controller 155. In some examples, a memory device 110 may not include a device memory controller 155, and a local memory controller 165 or the external memory controller 120 may perform various functions described herein.
As such, a local memory controller 165 may be operable to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 120, or the processor 125, or a combination thereof. Examples of components that may be included in the device memory controller 155 or the local memory controllers 165 or both may include receivers for receiving signals (e.g., from the external memory controller 120), transmitters for transmitting signals (e.g., to the external memory controller 120), decoders for decoding or demodulating received signals, encoders for encoding or modulating signals to be transmitted, or various other circuits or controllers operable for supporting described operations of the device memory controller 155 or local memory controller 165 or both.

[0029] The external memory controller 120 may be operable to enable communication of one or more of information, data, or commands between components of the system 100 or the host device 105 (e.g., the processor 125) and the memory device 110. The external memory controller 120 may convert or translate communications exchanged between the components of the host device 105 and the memory device 110. In some examples, the external memory controller 120 or other component of the system 100 or the host device 105, or its functions described herein, may be implemented by the processor 125. For example, the external memory controller 120 may be hardware, firmware, or software, or some combination thereof implemented by the processor 125 or other component of the system 100 or the host device 105.
Although the external memory controller 120 is depicted as being external to the memory device 110, in some examples, the external memory controller 120, or its functions described herein, may be implemented by one or more components of a memory device 110 (e.g., a device memory controller 155, a local memory controller 165) or vice versa.

[0030] The components of the host device 105 may exchange information with the memory device 110 using one or more channels 115. The channels 115 may be operable to support communications between the external memory controller 120 and the memory device 110. Each channel 115 may be an example of a transmission medium that carries information between the host device 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission media (e.g., conductors) between terminals associated with the components of the system 100. A signal path may be an example of a conductive path operable to carry a signal. For example, a channel 115 may include a first terminal including one or more pins or pads at the host device 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be operable to act as part of a channel.

[0031] Channels 115 (and associated signal paths and terminals) may be dedicated to communicating one or more types of information. For example, the channels 115 may include one or more command and address (CA) channels 186, one or more clock signal (CK) channels 188, one or more data (DQ) channels 190, one or more other channels 192, or a combination thereof. In some examples, signaling may be communicated over the channels 115 using single data rate (SDR) signaling or double data rate (DDR) signaling. In SDR signaling, one modulation symbol (e.g., signal level) of a signal may be registered for each clock cycle (e.g., on a rising or falling edge of a clock signal).
In DDR signaling, two modulation symbols (e.g., signal levels) of a signal may be registered for each clock cycle (e.g., on both a rising edge and a falling edge of a clock signal).

[0032] FIG. 2 illustrates an example of a memory die 200 that supports edgeless memory clusters in accordance with examples as disclosed herein. The memory die 200 may be an example of the memory dies 160 described with reference to FIG. 1. In some examples, the memory die 200 may be referred to as a memory chip, a memory device, or an electronic memory apparatus. The memory die 200 may include one or more memory cells 205 that may each be programmable to store different logic states (e.g., a programmed one of a set of two or more possible states). For example, a memory cell 205 may be operable to store one bit of information at a time (e.g., a logic 0 or a logic 1). In some examples, a memory cell 205 (e.g., a multi-level memory cell 205) may be operable to store more than one bit of information at a time (e.g., a logic 00, logic 01, logic 10, a logic 11). In some examples, the memory cells 205 may be arranged in an array, such as a memory array 170 described with reference to FIG. 1.

[0033] A memory cell 205 may store a logic state using a configurable material, which may be referred to as a memory element, a memory storage element, a material element, a material memory element, a material portion, or a polarity-written material portion, among others. A memory cell 205 may include a capacitor or other memory storage component to store a charge representative of the programmable states. For example, a charged and uncharged capacitor may represent two logic states, respectively, or a chalcogenide material may represent different states depending on its crystalline structure or other properties. A configurable material of a memory cell 205 may refer to a chalcogenide-based storage component.
For example, a chalcogenide storage element may be used in a phase change memory (PCM) cell, a thresholding memory cell, or a self-selecting memory cell.

[0034] The memory die 200 may include the access lines (e.g., row lines 210 and the column lines 215) arranged in a pattern, such as a grid-like pattern. Access lines may be formed of one or more conductive materials. In some examples, row lines 210 may be referred to as word lines. In some examples, column lines 215 may be referred to as digit lines or bit lines. References to access lines, row lines, column lines, word lines, digit lines, or bit lines, or their analogues, are interchangeable without loss of understanding or operation. Memory cells 205 may be positioned at intersections of the row lines 210 and the column lines 215.

[0035] The memory die 200 may be arranged using a quilt architecture. In a quilt architecture, tiles with similar configurations of components may be arranged in an array. Memory devices built in such a manner may be expanded or contracted by adding or reducing tiles. The tiles may be building blocks for the memory die 200. Supporting circuitry (not shown) for the memory die may be positioned beneath the arrays of memory cells in tiles. As used herein, a quilt architecture may refer to a memory array comprising a plurality of memory modules. For example, a memory die having a quilt architecture may comprise a repeating pattern of memory modules. In some examples, a memory module may include a tile and the circuitry and memory cells positioned on and above the tile.

[0036] In some examples of quilt architecture, some memory cells positioned above a tile may be addressed and accessed using support circuitry (not shown) positioned on a neighboring tile. Consequently, at the borders of the arrays of memory cells, some memory cells may not be addressable or accessible.
To address these inaccessibility issues, boundary tiles may be positioned beyond the border of the array of memory cells to ensure the memory cells of the tiles are accessible.

[0037] Operations such as reading and writing, which may be referred to as access operations, may be performed on the memory cells 205 by activating or selecting access lines such as one or more of a row line 210 or a column line 215. By biasing a row line 210 and a column line 215 (e.g., applying a voltage to the row line 210 or the column line 215), a single memory cell 205 may be accessed at their intersection. The intersection of a row line 210 and a column line 215 in either a two-dimensional or three-dimensional configuration may be referred to as an address of a memory cell 205. An access line may be a conductive line coupled with a memory cell 205 and may be used to perform access operations on the memory cell 205.

[0038] Accessing the memory cells 205 may be controlled through a row decoder 220 or a column decoder 225. For example, a row decoder 220 may receive a row address from a local memory controller 245 and activate a row line 210 based on the received row address. A column decoder 225 may receive a column address from the local memory controller 245 and may activate a column line 215 based on the received column address. In a quilt architecture, the row decoder 220 and the column decoder 225 may be positioned on the tiles below the memory array. However, the row decoder 220 or the column decoder 225 or both may or may not be positioned on the tile positioned directly below the memory cell being accessed.

[0039] A sense component 230 may be operable to detect a state (e.g., a material state, a resistance, a threshold state) of a memory cell 205 and determine a logic state of the memory cell 205 based on the stored state. The sense component 230 may include one or more sense amplifiers to amplify or otherwise convert a signal resulting from accessing the memory cell 205.
The sense component 230 may compare a signal detected from the memory cell 205 to a reference 235 (e.g., a reference voltage). The detected logic state of the memory cell 205 may be provided as an output of the sense component 230 (e.g., to an input/output 240), and may indicate the detected logic state to another component of a memory device that includes the memory die 200. In a quilt architecture, the sense component 230 may be positioned on the tiles below the memory array. However, the sense component 230 may or may not be positioned on the tile positioned directly below the memory cell being addressed.[0040] The local memory controller 245 may control the accessing of memory cells 205 through the various components (e.g., row decoder 220, column decoder 225, sense component 230). The local memory controller 245 may be an example of the local memory controller 165 described with reference to FIG. 1. In some examples, one or more of the row decoder 220, column decoder 225, and sense component 230 may be co-located with the local memory controller 245. The local memory controller 245 may be operable to receive one or more of commands or data from one or more different memory controllers (e.g., an external memory controller 120 associated with a host device 105, another controller associated with the memory die 200), translate the commands or the data (or both) into information that can be used by the memory die 200, perform one or more operations on the memory die 200, and communicate data from the memory die 200 to a host device 105 based on performing the one or more operations. The local memory controller 245 may generate row signals and column address signals to activate the target row line 210 and the target column line 215. The local memory controller 245 may also generate and control various voltages or currents used during the operation of the memory die 200. 
In general, the amplitude, the shape, or the duration of an applied voltage or current discussed herein may be varied and may be different for the various operations discussed in operating the memory die 200.[0041] The local memory controller 245 may be operable to perform one or more access operations on one or more memory cells 205 of the memory die 200. Examples of access operations may include a write operation, a read operation, a refresh operation, a precharge operation, or an activate operation, among others. In some examples, access operations may be performed by or otherwise coordinated by the local memory controller 245 in response to various access commands (e.g., from a host device 105). The local memory controller 245 may be operable to perform other access operations not listed here or other operations related to the operating of the memory die 200 that are not directly related to accessing the memory cells 205.[0042] FIG. 3 illustrates an example of a memory array 300 in accordance with examples as disclosed herein. Memory array 300 may be an example of portions of the memory arrays or memory dies described with reference to FIGs. 1 and 2. The memory array 300 may include a first deck 305 of memory cells that is positioned above a substrate layer 315 and a second deck 310 of memory cells on top of the first array or deck 305. Though the example of memory array 300 includes two decks 305, 310, the memory array 300 may include any quantity of decks (e.g., one or more than two) positioned above the substrate layer 315. The memory array 300 may be included as part of a quilt architecture such that memory cells of portions of the memory array are positioned above substrate layer 315, which may include support components for accessing the memory cells, such as, e.g., decoders and amplifiers.[0043] The memory cells of memory array 300 may include storage elements, electrodes, and/or selection elements. 
In some examples, a single component may act as both a storage element and a selection element. In the example shown in FIG. 3, one or more memory cells of the first deck 305 may include one or more of an electrode 325-a, a storage element 320-a, or an electrode 325-b. One or more memory cells of the second deck 310 may include an electrode 325-c, a storage element 320-b, and an electrode 325-d. In some cases, the storage elements 320 may be examples of a chalcogenide material, such as a phase change storage element, a thresholding storage element, or a self-selecting storage element. Although some elements included in FIG. 3 are labeled with a numeric indicator, other corresponding elements are not labeled, although they are the same or would be understood to be similar, in an effort to increase visibility and clarity of the depicted features.[0044] Memory array 300 may also include row lines 210 (e.g., row lines 210-a, 210-b, 210-c, and 210-d), and column lines 215 (e.g., column lines 215-a and 215-b), which may be examples of row lines 210 and column lines 215, as described with reference to FIG. 2. One or more memory cells of the first deck 305 and the second deck 310 may include one or more chalcogenide materials in a pillar between access lines. For example, a single stack between access lines may include one or more of a first electrode, a first chalcogenide material (e.g., selector component), a second electrode, a second chalcogenide material (e.g., storage element), or a third electrode.[0045] The memory cells of the first deck 305 and second deck 310 may, in some examples, have common conductive lines such that corresponding memory cells of one or more decks 305 and one or more decks 310 may share column lines 215 or row lines 210. For example, electrode 325-c of the second deck 310 and electrode 325-b of the first deck 305 may be coupled with column line 215-a such that the column line 215-a may be shared by vertically adjacent memory cells. 
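The deck-and-shared-line arrangement described above for memory array 300 can be illustrated with a small sketch. In the sketch below (Python), the plane-indexing convention and the function names are illustrative assumptions, not taken from the figures; the point is only that vertically adjacent decks share the access-line plane between them, as column line 215-a is shared by electrodes 325-b and 325-c:

```python
def access_planes(deck):
    """Return the (lower, upper) access-line plane indices for a deck in a
    stacked cross-point array. Vertically adjacent decks share the plane
    between them (hypothetical indexing convention)."""
    return deck, deck + 1

def plane_kind(plane):
    # In this sketch, even planes carry row lines and odd planes carry
    # column lines; an actual layout may alternate differently.
    return "row" if plane % 2 == 0 else "column"
```

Under this convention, deck 0's upper plane is deck 1's lower plane, so a single column-line plane serves memory cells both above and below it.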
[0046] In some examples, the common conductive lines may couple with the support components for accessing the memory cells. For example, in a quilt architecture, the electrodes of a deck may correspond to row lines 210 and column lines 215 that extend horizontally. The electrodes may couple with corresponding drivers and decoders on tiles of the substrate layer 315 via vertical connectors (not shown) that extend downward through the decks.[0047] In some examples, the material of the storage element 320 may include a chalcogenide material or other alloy including selenium (Se), tellurium (Te), arsenic (As), antimony (Sb), carbon (C), germanium (Ge), silicon (Si), or indium (In), or various combinations thereof. In some examples, a chalcogenide material having primarily selenium (Se), arsenic (As), and germanium (Ge) may be referred to as a SAG-alloy. In some examples, a SAG-alloy may also include silicon (Si) and such chalcogenide material may be referred to as SiSAG-alloy. In some examples, SAG-alloy may include silicon (Si) or indium (In) or a combination thereof and such chalcogenide materials may be referred to as SiSAG-alloy or InSAG-alloy, respectively, or a combination thereof. In some examples, the chalcogenide glass may include additional elements such as hydrogen (H), oxygen (O), nitrogen (N), chlorine (Cl), or fluorine (F), each in atomic or molecular forms.[0048] In some examples, the storage element 320 may be an example of a phase change memory cell. In such examples, the material used in the storage element 320 may be based on an alloy (such as the alloys listed above) and may be operated so as to undergo a phase change or change to a different physical state during normal operation of the memory cell. 
For example, a phase change memory cell may have an amorphous state (e.g., a relatively disordered atomic configuration) and a crystalline state (e.g., a relatively ordered atomic configuration) that may be used to indicate a logic state of the memory cell.[0049] The architecture of memory array 300 may be referred to as a cross-point architecture, in some examples, in which a memory cell is formed at a topological cross-point between a row line 210 and a column line 215. Such a cross-point architecture may offer relatively high-density data storage with lower production costs compared to other memory architectures. For example, the cross-point architecture may have memory cells with a reduced area and, resultantly, an increased memory cell density compared to other architectures. For example, the architecture may have a 4F2 memory cell area, where F is the smallest feature size, compared to other architectures with a 6F2 memory cell area, such as those with a three-terminal selector element. For example, DRAM may use a transistor, which is a three-terminal device, as the selector element for each memory cell and may have a larger memory cell area compared to the cross-point architecture.[0050] While the example of FIG. 3 shows two memory decks, other configurations are possible. In some examples, a single memory deck of memory cells may be constructed above a substrate, which may be referred to as a two-dimensional memory. In some examples, additional memory decks may be constructed above the two memory decks to form 3D vertical structures, with similarly alternating row lines and column lines. In some examples, two or more decks of memory cells may be configured in a similar manner in a three-dimensional cross-point architecture. Further, in some cases, elements shown in or described with reference to FIG. 
3 may be electrically coupled with one another as shown or described but rearranged physically (e.g., a storage element 320 and possibly a selection element or electrode 325 may be electrically in series between a row line 210 and a column line 215 but need not be in a pillar or stack configuration). In some examples, the layers or decks may be arranged vertically. That is, the memory decks may extend vertically and may be horizontally separated from each other.[0051] A memory die may include a substrate layer and memory cells positioned above the substrate layer. The memory cells may be partitioned into memory clusters each having an array of memory cells that may be decoded as a single group of memory cells. Each memory cluster may include supporting circuitry for the array of memory cells, such as, e.g., drivers, decoders, and sense amplifiers. Each memory cluster may include tiles formed on a portion of the substrate layer and a portion of the array of memory cells formed above each tile. The supporting circuitry may be positioned within the tiles.[0052] FIGs. 4A and 4B illustrate examples of memory modules 400-a and 400-b that support edgeless memory clusters in accordance with examples as disclosed herein. FIGs. 4A and 4B are examples of an architecture in which electrode drivers are distributed across a footprint of an active memory module. The memory modules may implement aspects of the system as described with reference to FIGs. 1-3. For example, memory modules 400-a and 400-b may be portions of memory arrays 170.[0053] A memory array may include an array of memory cells positioned above a group of tiles. Each tile and the portion of the array of memory cells positioned above the tile (which may extend past the boundaries of the tile) may be considered to be a memory module. (In FIGs. 4A and 4B, the memory cells of the memory modules have been removed for the sake of clarity). The memory modules may be used as part of a quilt architecture. 
In a quilt architecture, a plurality of tiles that have a common configuration of components (e.g., drivers) may be arranged in an array. As discussed in more detail with reference to FIG. 5, the tiles may be arranged in a repeating pattern.[0054] The memory module 400-a illustrated in FIG. 4A may include a tile 405 formed on a substrate, according to one example. The tile 405 may include supporting components, such as, e.g., drivers, for accessing the memory cells of the memory array. The tile may be partitioned into multiple sub-arrays that may be referred to as “patches.” Together the patches may define the larger repeating unit of a tile. In the example of FIG. 4A, four patches 410 (e.g., patches 410-a, 410-b, 410-c, and 410-d) correspond to tile 405. In other examples, a tile may include other quantities of patches, such as, e.g., 2, 4, 8, 12, 16 or 32 patches.[0055] One or more drivers may be located substantially within a footprint of each patch, under the memory cells and near or at the periphery of the patch. For example, one or more word line drivers 415 and/or one or more bit line drivers 420 may be positioned on each patch 410. It will be understood that each shaded area may comprise a driver region that may include multiple driver circuits and so can represent a group of drivers. The row drivers, represented by word line drivers 415 in FIG. 4A, may be elongated in the column or y-direction, while the column drivers, represented by bit line drivers 420 in FIG. 4A, may be elongated in the row or x-direction. A signal path traversing the array in the x- or y-direction may alternately pass over row and column driver regions.[0056] Access lines (e.g., conductors or electrodes) may be included on each level for accessing the memory cells. For example, word line electrodes 425 and bit line electrodes 430 may be coupled with the memory cells positioned above the patches 410. 
The word line electrodes 425 may extend in one direction (e.g., the x-direction) and the bit line electrodes 430 may extend in a different direction (e.g., the y-direction).[0057] The drivers may be electrically coupled with the access lines (e.g., access line electrodes). For example, word line drivers 415 and bit line drivers 420 may be electrically coupled with the word line electrodes 425 and the bit line electrodes 430, respectively. Because the drivers may be positioned along the periphery of the patches, the drivers may be coupled to the word line electrodes and bit line electrodes through interconnect regions 435, which may extend upward from the boundaries of the patches. The interconnect regions 435 may be referred to as socket regions. The array of memory cells directly above the interconnect regions 435 may have breaks to allow for vertical connectors through the interconnect regions 435 between the drivers and the electrodes.[0058] A connection between a driver and an electrode may be known as a socket connection. The connection point, also known as a socket, between each access line electrode 425, 430 and its driver 415, 420 may be indicated by a dot along the electrode. The connection point (socket) may be positioned anywhere along the respective electrode. In some examples, the connection point (socket) may be positioned at an end of the electrode. In some examples, the connection point (socket) may be between either end of the electrode (e.g., a central location of the electrode). The word line electrodes and bit line electrodes may cross boundaries between adjacent patches and may also cross boundaries of other driver regions. In some examples, the word line electrodes 425 and bit line electrodes 430 may extend laterally beyond the outer boundaries (e.g., footprint) of the tile 405.[0059] In some examples, the access line electrodes may be staggered or shifted. 
For example, adjacent word line electrodes 425 may be shifted with respect to one another along their axis of elongation (x-axis) and adjacent bit line electrodes 430 may be shifted with respect to one another along their axis of elongation (y-axis). By breaking the word and bit line driver groups and interconnect regions into smaller pieces and staggering the access line electrodes, or groups of access line electrodes, in alternate rows, as illustrated in FIG. 4A, the word line electrodes 425 and bit line electrodes 430 may extend through the memory array and through the interconnect regions 435. Accordingly, neither the interconnect regions nor the driver locations are restricted to the edges of the memory array.[0060] The memory module 400-b illustrated in FIG. 4B may include a tile 450 formed on a substrate, according to one example. As with tile 405, tile 450 may be partitioned into multiple patches 410 having word line drivers 415 and bit line drivers 420. In the example of FIG. 4B, the tile 450 includes 16 patches 410 in a 4x4 arrangement. The patches 410 may be arranged in repeating patterns.[0061] To access the memory cells positioned above the tile 450, word line electrodes 425 (e.g., word line electrodes 425-a through 425-g) and bit line electrodes 430 (e.g., bit line electrodes 430-a through 430-g, represented in FIG. 4B with dashed lines to differentiate from the word line electrodes 425) may be used. The word line electrodes 425-a through 425-g and bit line electrodes 430-a through 430-g may be considered to be coupled with the memory cells of the memory module 400-b for addressing each memory cell of tile 450. Similar to FIG. 
4A, word line drivers 415 and bit line drivers 420 may be electrically coupled with word line electrodes 425 and bit line electrodes 430, respectively, via socket connections that pass through interconnect regions to connection points (sockets) represented by dots in the figure.[0062] Further, many of the electrodes are configured to address and access memory cells corresponding to one of the tiles 455 along with tile 450. For example, word line electrodes 425-b, 425-d, and 425-g are illustrated as extending above tiles 450 and 455-a, for accessing memory cells of the two tiles. As such, the word line electrodes 425-b, 425-d, and 425-g may be considered to be coupled with one or more memory cells of a different memory tile than the memory tile including the drivers for addressing each memory cell of the tiles 450 and 455.[0063] The tile 450 may be configured to couple with neighboring tiles 455 (e.g., tiles 455-a, 455-b, 455-c, and 455-d) to address and access memory cells of the memory array. Note that only a portion of each tile 455 is shown. In some examples, circuitry (e.g., decoders and amplifiers) positioned on neighboring tiles 455 may be configured to address and access memory cells positioned above the tile 450. For example, to address and access memory cells positioned above tile 450, word line electrodes 425-d and 425-f may be coupled with word line drivers 415 on neighboring tiles 455-a and 455-c, respectively; and bit line electrodes 430-b and 430-d may be coupled with bit line drivers 420 on neighboring tiles 455-b and 455-d, respectively. In this manner, a tile 450 may not be configured to be fully operational as a stand-alone unit. Rather, a tile 450 may rely on the circuitry of neighboring tiles 455 to provide full functionality to the tile 450. 
If any of the neighboring tiles 455 is removed, one or more of the memory cells above tile 450 may become inaccessible.[0064] If a tile of a cluster is positioned on the edge of the cluster, there may be no neighboring tile beyond that edge to provide an array termination function (e.g., access the memory cells using a driver thereon). As a result, one or more memory cells positioned above the edge tile may be inaccessible by the cluster. For example, if tile 450 is positioned on a right edge of a first cluster, the cluster may not include abutting tile 455-a. As a result, memory cells positioned above tile 450 and normally accessed by word line electrode 425-d of tile 455-a may be inaccessible by the first cluster. As another example, if tile 450 is positioned on a bottom edge of the first cluster, the cluster may not include abutting tile 455-b. As a result, memory cells positioned above tile 450 and normally accessed by bit line electrode 430-b of tile 455-b may be inaccessible by the first cluster. In some examples, however, if the abutting tiles 455-a or 455-b are edge tiles of a second cluster that abuts the first cluster, the tiles may be used to address and access the memory cells from outside the first cluster (e.g., by the second cluster).[0065] FIG. 5 illustrates a simplified plan view of a memory die 500 that supports edgeless memory clusters in accordance with examples as disclosed herein. FIG. 5 illustrates one example of accessing memory cells of a cluster using tiles of one or more neighboring clusters. The memory die 500 may implement aspects of the system as described with reference to FIGs. 1-4. For example, memory die 500 may be an example of memory die 160 or 200 discussed with reference to FIGs. 1 and 2.[0066] Memory die 500 may include memory partitioned into memory arrays corresponding to a plurality of clusters 505 (e.g., clusters 505-a, 505-b, 505-c, and 505-d). Each cluster 505 may include a plurality of tiles 510 in a repeating pattern. 
For example, cluster 505-a may include tiles 510-a, 510-b, and 510-c; cluster 505-b may include tiles 510-d and 510-e; cluster 505-c may include tiles 510-f and 510-g; and cluster 505-d may include tile 510-h. The tiles 510 may include electrodes, such as, e.g., word line electrodes 525 and bit line electrodes 530. (Only a few of the electrodes are shown in FIG. 5 for the sake of clarity). The tiles 510 may be examples of the tile 450 discussed with reference to FIG. 4B. The word line electrodes 525 and bit line electrodes 530 may be examples of the word line electrodes 425 and bit line electrodes 430 discussed with reference to FIG. 4B.[0067] As discussed with reference to FIG. 4B, one or more memory cells positioned above an edge tile of a cluster may be inaccessible by the cluster due to a lack of an abutting tile that would typically provide an array termination function (e.g., access the memory cells using a driver thereon). In some examples, an array termination function for these memory cells may be provided by another cluster. This may allow the memory cells to be accessed from outside the cluster (e.g., by the other cluster). In some examples, the memory cells of a first cluster may be accessed using a driver on an edge tile of a second cluster that abuts the first cluster. For example, edge tiles 510-d and 510-e of cluster 505-b may include word line drivers 515 (highlighted in FIG. 5) that provide access to memory cells positioned above edge tiles 510-a and 510-b of abutting cluster 505-a via word line electrodes 525, and edge tiles 510-f and 510-g of cluster 505-c may include bit line drivers 520 (highlighted in FIG. 5) that provide access to memory cells positioned above edge tiles 510-a and 510-c of abutting cluster 505-a via bit line electrodes 530. As such, clusters 505-b and 505-c may provide an array termination function for cluster 505-a. Each overlapping electrode may be considered to be coupled with memory cells of more than one memory cluster. 
That is, each overlapping electrode may be considered to be an electrode of both memory clusters it overlaps.[0068] Although not shown, it is appreciated that the array termination function may be provided both ways. That is, abutting clusters may provide an array termination function for each other. For example, edge tiles 510-a, 510-b, and 510-c of cluster 505-a may provide access to memory cells of tiles 510-d, 510-e, 510-f, and 510-g of clusters 505-b and 505-c (e.g., in the opposite direction) in a similar manner.[0069] In this manner, memory cells of a cluster that may be inaccessible to the cluster may be accessible using tiles of a neighboring, abutting cluster. As a result, the gap between clusters may be eliminated without losing functionality of the memory. This may achieve substantial reduction in die size.[0070] In some examples, edge tiles of a cluster may access memory cells of an abutting cluster in a manner similar to that in which the tiles access memory cells positioned above abutting tiles in their own cluster (e.g., the tiles may use similar drivers connected to similar electrodes via similar socket connections in similar manners). For example, the tiles may use the same drivers (e.g., word line drivers 515 and bit line drivers 520) and electrodes (e.g., word line electrodes 525 and bit line electrodes 530) to access the neighboring cluster that they use in their own cluster. This may reduce stress on the electrodes by reducing changes in density of the electrode layers.[0071] In some examples, a driver may operate both for its own cluster and for a neighboring cluster using a same electrode. That is, a driver may provide access to memory cells of its cluster and provide an array termination function for an abutting cluster via a same electrode. 
In some examples, the driver may operate to access memory cells within its own cluster when the cluster is active and operate to provide an array termination for (e.g., access to memory cells within) a neighboring cluster when its own cluster is inactive. A driver may have different parameters when used to provide an array termination function than it does during normal operations. In some examples, a circuit (e.g., a multiplexer) may be coupled to the driver to control the driver (e.g., select which input signal to use) dependent on which cluster the driver is being used for at any given time. [0072] In some examples, a first driver circuit may be configured to enable the driver to access one or more memory cells of its own cluster via an electrode, e.g., when the memory cluster is active. A second driver circuit may be configured to enable the driver to access one or more memory cells of the abutting memory cluster via the same electrode, e.g., when the abutting memory cluster is active.[0073] A tile may include a plurality of drivers, some of which may be used to provide an array termination function for an abutting cluster. The plurality of drivers may be partitioned into a first portion, which may include the drivers that are not used to provide the array termination function, and a second portion, which may include the drivers that are used to provide the array termination function. In some examples, the first and second portions of the tile may be active (e.g., usable) to access memory cells of the cluster when the cluster is active. The second portion of the tile may be active (e.g., usable) when the cluster is inactive to provide an array termination for (e.g., access to memory cells within) a neighboring cluster.[0074] In some examples, a tile of a first cluster may be partitioned into first and second portions. 
The first portion of the tile may include, e.g., the first portion of drivers and the second portion of the tile may include, e.g., the second portion of drivers. Upon deciding to access memory cells of an abutting, second cluster, the first portion of the tile may be inhibited (e.g., the first portion of drivers may be disabled) and the second portion of the tile may be enabled to access the memory cells of the second cluster. When the first portion is inhibited, the tile may act as a termination tile for the second cluster and the second portion of drivers may be enabled. In some cases, the second portion of drivers may each be coupled with memory cells of both the first and second cluster via a same respective electrode. In those cases, the drivers may be activated to access, via the respective electrodes, the memory cells of the second cluster without accessing the memory cells of the first cluster.[0075] In some examples, the abutting, second cluster may include similar tiles to the first cluster and may provide an array termination function for the first cluster in a similar manner. That way, each cluster may provide an array termination function (e.g., access memory cells using a driver thereon) for the other cluster.[0076] In some examples, adjoining (abutting) clusters may be activated at mutually exclusive periods of time. That is, when a cluster is in active mode, the clusters that abut that cluster may be inactive. For example, when cluster 505-a is active, clusters 505-b and 505-c may be inactive. Cluster 505-d may be active or inactive because it does not abut active cluster 505-a. When cluster 505-b or 505-c (or both) are active, cluster 505-a would be inactive. 
Cluster 505-d would be inactive because it abuts clusters 505-b and 505-c, at least one of which is active.[0077] Activating adjoining clusters at mutually exclusive periods of time may isolate the active clusters from each other so that tiles within the inactive clusters may provide array termination functions for the active clusters. For example, edge tiles of the inactive clusters may be configured to behave as termination tiles for the edge tiles of the active clusters.[0078] In some examples, the clusters may be assigned to different groups (domains) whose clusters do not abut one another and the domains may be active at different, mutually exclusive times.[0079] FIGs. 6A and 6B illustrate simplified plan views of memory dies 600-a and 600-b that support edgeless memory clusters in accordance with examples as disclosed herein. The dies may implement aspects of the system as described with reference to FIGs. 1-5. For example, dies 600-a and 600-b may be examples of dies 160 or 200 discussed with reference to FIGs. 1 and 2. FIGs. 6A and 6B each illustrate a plurality of abutting clusters 605 of a die. The clusters 605 may be assigned to different groups (domains) based on their positions with respect to one another.[0080] FIG. 6A illustrates a two-domain system in which each cluster 605 may be assigned to a first domain (denoted “A”) or a second domain (denoted “B”). The assignment may be made so that clusters that abut each other are in different domains. That is, the clusters of domain A may not abut other clusters of domain A and the clusters of domain B may not abut other clusters of domain B. By assigning the clusters this way, clusters of one of the domains (e.g., domain A) may be in active mode at the same time (e.g., the domain may be active) and clusters of the other domain (e.g., domain B) may be inactive (e.g., the domain may be inactive), but may provide array termination functions for the active clusters. 
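One way to realize domain assignments of this kind is a checkerboard-style mapping from cluster grid coordinates to domains. The sketch below (Python) is an illustrative assumption rather than the layout of the figures: the coordinate scheme and function names are hypothetical, but the mappings guarantee that edge-sharing (abutting) clusters never land in the same domain.

```python
def two_domain(x, y):
    """Two-domain assignment: a checkerboard over cluster grid
    coordinates, so abutting clusters always differ in domain."""
    return "AB"[(x + y) % 2]

def four_domain(x, y):
    """Four-domain assignment using a 2x2 repeating pattern in which
    A/C and B/D only ever meet diagonally, so either pair of domains
    could be active concurrently."""
    return ["AB", "DC"][y % 2][x % 2]

def abut(p, q):
    # Clusters abut when they share an edge, not merely a corner.
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1
```

A brute-force check over a grid confirms the property: no two abutting clusters share a domain under either mapping, and in the four-domain mapping no A-cluster abuts a C-cluster (nor B a D).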
As long as the domains are active at mutually exclusive times (e.g., not active at the same time), active clusters may be isolated from each other regardless of which domain is active.[0081] FIG. 6B illustrates a four-domain system in which each cluster 605 may be assigned to one of four domains (denoted “A”, “B”, “C”, and “D”). As with FIG. 6A, the assignment may be made so that clusters that abut each other are in different domains. As such, clusters of one of the domains (e.g., domain A) may be in active mode at the same time and clusters of the other domains (e.g., domains B, C, and D) may be inactive, but may provide array termination functions for the active clusters. As long as the domains are active at mutually exclusive times, active clusters may be isolated from each other regardless of which domain is active. In some examples, the assignments may be made so that active clusters may be isolated from each other when more than one domain is active. For example, domains A and C of FIG. 6B may be concurrently active since none of the clusters of those domains abut one another. Similarly, the domains B and D may be concurrently active since none of the clusters of those domains abut one another.[0082] FIG. 7 shows a block diagram 700 of a memory device 720 that supports edgeless memory clusters in accordance with examples as disclosed herein. The memory device 720 may be an example of aspects of a memory device as described with reference to FIGs. 1 through 6. The memory device 720, or various components thereof, may be an example of means for performing various aspects of edgeless memory clusters as described herein. For example, the memory device 720 may include a determining component 725, an inhibition manager 730, a driver manager 735, an activation manager 740, or any combination thereof. 
Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).[0083] The determining component 725 may be configured as or otherwise support a means for determining to access a first memory array of a first memory cluster, the first memory cluster including a first plurality of tiles that include a first plurality of drivers coupled with the first memory array. The inhibition manager 730 may be configured as or otherwise support a means for inhibiting, based at least in part on determining to access the first memory array, a first portion of a tile of a second plurality of tiles of a second memory cluster in response to determining to access the first memory array of the first memory cluster, the second memory cluster including a second memory array, the second plurality of tiles including a second plurality of drivers coupled with the second memory array, a driver of the second plurality of drivers coupled with the first memory array and positioned on a second portion of the tile. The driver manager 735 may be configured as or otherwise support a means for enabling, based at least in part on determining to access the first memory array, the driver on the second portion of the tile of the second memory cluster to access the first memory array of the first memory cluster. 
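The inhibit/enable behavior just described can be modeled with a minimal sketch (Python). The class and attribute names below are illustrative assumptions: the "first portion" holds drivers serving only the tile's own cluster, while the "second portion" holds border drivers that also couple, via shared electrodes, with the abutting cluster's array.

```python
class Tile:
    """A tile whose drivers are split into a first (own-cluster-only)
    portion and a second (border) portion; names are hypothetical."""
    def __init__(self):
        self.first_portion_enabled = True
        self.second_portion_enabled = True

def inhibit_for_termination(tile):
    """Inhibit the first portion and keep the second portion enabled,
    so the tile acts as a termination tile whose border drivers can
    access the abutting cluster's memory array."""
    tile.first_portion_enabled = False
    tile.second_portion_enabled = True
    return tile
```

In this model, a fully active tile has both portions enabled; after `inhibit_for_termination`, only the border drivers remain usable, mirroring the termination-tile role.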
[0084] In some examples, the second portion of the tile may act as a termination tile for the first memory cluster when the first portion of the tile is inhibited.[0085] In some examples, the activation manager 740 may be configured as or otherwise support a means for deactivating the second memory cluster before enabling the driver on the second portion of the tile of the second memory cluster.[0086] In some examples, the driver may be coupled with one or more memory cells of the first memory array and one or more memory cells of the second memory array via a same electrode.[0087] In some examples, to support enabling the driver on the second portion of the tile of the second memory cluster, the activation manager 740 may be configured as or otherwise support a means for activating the driver to access, via the electrode, the one or more memory cells of the first memory array without accessing the one or more memory cells of the second memory array.[0088] In some examples, to support inhibiting the first portion of the tile of the second plurality of tiles, the driver manager 735 may be configured as or otherwise support a means for disabling one or more drivers of the first portion of the tile.[0089] In some examples, the determining component 725 may be configured as or otherwise support a means for determining to access the second memory array of the second memory cluster. In some examples, the inhibition manager 730 may be configured as or otherwise support a means for inhibiting a first portion of a second tile of the first plurality of tiles of the first memory cluster in response to determining to access the second memory array of the second memory cluster, a second driver of the first plurality of drivers further coupled with the second memory array and positioned on a second portion of the second tile. 
In some examples, the driver manager 735 may be configured as or otherwise support a means for enabling the second driver on the second portion of the second tile of the first memory cluster to access the second memory array of the second memory cluster.[0090] FIG. 8 shows a flowchart illustrating a method 800 that supports edgeless memory clusters in accordance with examples as disclosed herein. The operations of method 800 may be implemented by a memory device or its components as described herein. For example, the operations of method 800 may be performed by a memory device as described with reference to FIGs. 1 through 7. In some examples, a memory device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory device may perform aspects of the described functions using special-purpose hardware.[0091] At 805, the method may include determining to access a first memory array of a first memory cluster, the first memory cluster including a first plurality of tiles that include a first plurality of drivers coupled with the first memory array. The operations of 805 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 805 may be performed by a determining component 725 as described with reference to FIG. 7.[0092] At 810, the method may include inhibiting, based at least in part on determining to access the first memory array, a first portion of a tile of a second plurality of tiles of a second memory cluster in response to determining to access the first memory array of the first memory cluster, the second memory cluster including a second memory array, the second plurality of tiles including a second plurality of drivers coupled with the second memory array, a driver of the second plurality of drivers coupled with the first memory array and positioned on a second portion of the tile. 
The operations of 810 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 810 may be performed by an inhibition manager 730 as described with reference to FIG. 7.[0093] At 815, the method may include enabling, based at least in part on determining to access the first memory array, the driver on the second portion of the tile of the second memory cluster to access the first memory array of the first memory cluster. The operations of 815 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 815 may be performed by a driver manager 735 as described with reference to FIG. 7.[0094] In some examples, an apparatus as described herein may perform a method or methods, such as the method 800. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for determining to access a first memory array of a first memory cluster, the first memory cluster including a first plurality of tiles that include a first plurality of drivers coupled with the first memory array, inhibiting, based at least in part on determining to access the first memory array, a first portion of a tile of a second plurality of tiles of a second memory cluster in response to determining to access the first memory array of the first memory cluster, the second memory cluster including a second memory array, the second plurality of tiles including a second plurality of drivers coupled with the second memory array, a driver of the second plurality of drivers coupled with the first memory array and positioned on a second portion of the tile, and enabling, based at least in part on determining to access the first memory array, the driver on the second portion of the tile of the second memory cluster to access the first memory array of the first memory cluster.[0095] In some examples
of the method 800 and the apparatus described herein, the second portion of the tile may act as a termination tile for the first memory cluster when the first portion of the tile may be inhibited.[0096] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for deactivating the second memory cluster before enabling the driver on the second portion of the tile of the second memory cluster.[0097] In some examples of the method 800 and the apparatus described herein, the driver may be coupled with one or more memory cells of the first memory array and one or more memory cells of the second memory array via a same electrode.[0098] In some examples of the method 800 and the apparatus described herein, enabling the driver on the second portion of the tile of the second memory cluster may include operations, features, circuitry, logic, means, or instructions for activating the driver to access, via the electrode, the one or more memory cells of the first memory array without accessing the one or more memory cells of the second memory array.[0099] In some examples of the method 800 and the apparatus described herein, inhibiting the first portion of the tile of the second plurality of tiles may include operations, features, circuitry, logic, means, or instructions for disabling one or more drivers of the first portion of the tile.[0100] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for determining to access the second memory array of the second memory cluster, inhibiting a first portion of a second tile of the first plurality of tiles of the first memory cluster in response to determining to access the second memory array of the second memory cluster, a second driver of the first plurality of drivers further coupled with the second memory array and positioned on a second portion 
of the second tile, and enabling the second driver on the second portion of the second tile of the first memory cluster to access the second memory array of the second memory cluster.[0101] It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, portions from two or more of the methods may be combined.[0102] An apparatus is described. The apparatus may include a set of memory clusters, each memory cluster of the set of memory clusters including a plurality of tiles having drivers, a memory array positioned above the plurality of tiles and having a plurality of memory cells, a plurality of electrodes coupled with the plurality of memory cells for addressing each memory cell of the plurality of memory cells, where a driver of a first tile of a first memory cluster of the set of memory clusters is coupled with an electrode of the plurality of electrodes of a second tile of a second memory cluster of the set of memory clusters, and where the first memory cluster and the second memory cluster are configured to operate in an active mode during mutually exclusive periods of time.[0103] In some examples of the apparatus, the set of memory clusters may include a first subset of memory clusters that do not abut each other and a second subset of memory clusters that do not abut each other, the second subset not including the memory clusters of the first subset, where the apparatus may be configured such that the memory clusters of the second subset operate in an inactive mode when the memory clusters of the first subset operate in the active mode.[0104] In some examples of the apparatus, the first subset of memory clusters includes the first memory cluster and the second subset of memory clusters includes the second memory cluster.[0105] In some examples of the apparatus, the set of memory clusters further 
includes a third subset of memory clusters that do not abut each other, the third subset not including the memory clusters of the first and second subsets.[0106] In some examples of the apparatus, the first memory cluster abuts the second memory cluster. [0107] In some examples of the apparatus, each memory cluster of the set abuts another memory cluster of the set.[0108] In some examples of the apparatus, the apparatus may be configured such that memory clusters of the set that abut each other do not operate in the active mode at the same time.[0109] In some examples of the apparatus, for each memory cluster of the set, tiles of the plurality of tiles may have a common configuration of drivers and may be positioned in a repeating pattern.[0110] In some examples of the apparatus, for each memory cluster of the set, the electrodes of the plurality of electrodes may be coupled with the drivers via respective socket connections.[0111] In some examples of the apparatus, one or more electrodes of the plurality of electrodes of the first memory cluster may also be in the plurality of electrodes of the second memory cluster.[0112] In some examples of the apparatus, one or more electrodes may each be included in the plurality of electrodes of more than one memory cluster of the set.[0113] Another apparatus is described. The apparatus may be a memory cluster.
The memory cluster may include a plurality of tiles having drivers, a memory array positioned above the plurality of tiles and having a plurality of memory cells, a plurality of electrodes coupled with the drivers via respective socket connections, the plurality of electrodes coupled with the plurality of memory cells for addressing each memory cell of the plurality of memory cells, and where a tile of the plurality of tiles includes a first portion of the drivers and a second portion of the drivers, the second portion of the drivers configured to be used, when the memory cluster is inactive, for accessing memory cells of an abutting memory cluster.[0114] In some examples of the apparatus, each driver of the second portion of the drivers includes a first driver circuit configured to enable the driver to access one or more memory cells of the plurality of memory cells via an electrode of the plurality of electrodes and a second driver circuit configured to enable the driver to access one or more memory cells of the abutting memory cluster via the electrode of the plurality of electrodes. [0115] In some examples of the apparatus, the first portion of the drivers may be configured to be used for accessing one or more memory cells of the memory cluster when the abutting memory cluster may be inactive.[0116] In some examples of the apparatus, tiles of the plurality of tiles may have a common configuration of drivers and may be positioned in a repeating pattern.[0117] Another apparatus is described.
The apparatus may include a first memory cluster, including a first memory array having a first plurality of memory cells, a first driver coupled with a subset of the first plurality of memory cells, and a first driver circuit configured to enable the first driver when the first memory cluster is active and a second memory cluster, including a second memory array having a second plurality of memory cells, where the first driver is coupled with at least one memory cell of the second plurality of memory cells and a second driver coupled with a subset of the second plurality of memory cells, where the first driver circuit is further configured to enable the first driver when the second memory cluster is active and the first memory cluster is inactive.[0118] In some examples of the apparatus, the first memory cluster abuts the second memory cluster.[0119] In some examples, the apparatus may include an electrode that couples the first driver with the subset of the first plurality of memory cells and the second driver with the subset of the second plurality of memory cells.[0120] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0121] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. 
Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.[0122] The term “coupling” refers to the condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals are capable of being communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0123] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them.
For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0124] The term “layer” or “level” used herein refers to a stratum or sheet of a geometrical structure (e.g., relative to a substrate). Each layer or level may have three dimensions (e.g., height, width, and depth) and may cover at least a portion of a surface. For example, a layer or level may be a three-dimensional structure where two dimensions are greater than a third, e.g., a thin-film. Layers or levels may include different elements, components, and/or materials. In some examples, one layer or level may be composed of two or more sublayers or sublevels. [0125] As used herein, the term “substantially” means that the modified characteristic (e.g., a verb or adjective modified by the term substantially) need not be absolute but is close enough to achieve the advantages of the characteristic.[0126] As used herein, the term “electrode” may refer to an electrical conductor, and in some examples, may be employed as an electrical contact to a memory cell or other component of a memory array. An electrode may include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of a memory array.[0127] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer. In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate.
The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0128] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor’s threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor’s threshold voltage is applied to the transistor gate. [0129] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims.
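The gate-voltage rule of paragraph [0128] can be illustrated with a toy predicate. The 0.7 V threshold below is an arbitrary example value, and the function is purely illustrative, not part of the disclosure:

```python
# Toy model of the FET on/off rule: an n-type channel conducts when the gate
# voltage meets or exceeds the threshold; a p-type channel conducts when the
# gate voltage is at or below the negative threshold.  Threshold is arbitrary.
def channel_conductive(fet_type, gate_voltage, threshold=0.7):
    if fet_type == "n":
        return gate_voltage >= threshold
    if fet_type == "p":
        return gate_voltage <= -threshold
    raise ValueError("fet_type must be 'n' or 'p'")
```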
The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0130] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0131] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these.
Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.[0132] For example, the various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). [0133] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0134] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of these are also included within the scope of computer-readable media.[0135] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure.
Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
In one embodiment, the present invention includes a method for determining a power budget for a multi-domain processor for a current time interval, determining a portion of the power budget to be allocated to first and second domains of the processor, and controlling a frequency of the domains based on the allocated portions. Such determinations and allocations can be dynamically performed during runtime of the processor. Other embodiments are described and claimed.
What is claimed is: 1. An apparatus comprising: a multi-domain processor including a first domain and a second domain, each of the first and second domains to operate at an independent voltage and frequency, the multi-domain processor further including first logic to dynamically allocate a power budget for the multi-domain processor between the first and second domains at run time. 2. The apparatus of claim 1, wherein the first logic is to dynamically allocate the power budget according to a first sharing policy value for the first domain and a second sharing policy value for the second domain, the first and second sharing policy values controllable by user-level software. 3. The apparatus of claim 2, wherein the first logic is to determine a portion of the power budget to allocate to the first domain based on the first sharing policy value. 4. The apparatus of claim 2, wherein the first logic is to dynamically allocate the power budget further according to a first minimum reservation value for the first domain and a second minimum reservation value for the second domain, the first and second minimum reservation values controllable by the user-level software. 5. The apparatus of claim 4, wherein the first logic is to provide at least a first portion of the power budget to the first domain, the first portion corresponding to the first minimum reservation value. 6. The apparatus of claim 1, wherein the first logic is to determine the power budget for a current time interval based at least in part on a power budget carried forward from a previous time interval, a power consumed in the previous time interval, and a power budget decay value. 7.
The apparatus of claim 6, wherein the first logic is to determine the power budget according to En = En-1 * alpha + (1 - alpha)*(Power_Limit*deltaT - Energy), where: En is the power budget for the current time interval; En-1 is the power budget carried forward from the previous time interval; Power_Limit is a threshold power level; deltaT is a length of the time interval; Energy is the power consumed during the previous time interval; and alpha is the power budget decay value. 8. The apparatus of claim 1, wherein the first logic is to dynamically allocate substantially all of the power budget to the first domain for a first workload, and to dynamically allocate substantially all of the power budget to the second domain for a second workload executed after the first workload. 9. A method comprising: determining, in a power controller of a multi-domain processor, a power budget for the multi-domain processor for a current time interval, the multi-domain processor including at least a first domain and a second domain; determining, in the power controller, a portion of the power budget to be allocated to the first and second domains; and controlling a frequency of the first domain and a frequency of the second domain based on the allocated portions. 10. The method of claim 9, wherein determining the portion of the power budget to be allocated to the first and second domains includes allocating the power budget to the second domain and not to the first domain if the power budget is less than a minimum reservation value for the second domain. 11. The method of claim 10, further comprising obtaining the minimum reservation value for the second domain from a configuration register written by user-level software. 12.
The method of claim 10, further comprising allocating the minimum reservation value for the second domain to the second domain, and allocating a remaining portion of the power budget to the first domain when the power budget is greater than the minimum reservation value for the second domain but less than a sum of the minimum reservation value for the second domain and a minimum reservation value for the first domain. 13. The method of claim 9, further comprising allocating a minimum reservation value to the first domain and a minimum reservation value to the second domain, and sharing a remaining portion of the power budget according to a first sharing policy value for the first domain and a second sharing policy value for the second domain. 14. The method of claim 13, wherein the first sharing policy value is controllable by software executing on the first domain, the first sharing policy value to be incremented when a request for a higher frequency for the first domain is not granted, and the second sharing policy value is controllable by software executing on the second domain, the second sharing policy value to be incremented when a request for a higher frequency for the second domain is not granted. 15. A system comprising: a multicore processor having a first domain including a plurality of cores, a second domain including a graphics engine, and a third domain including system agent circuitry, the third domain to operate at a fixed power budget and to dynamically allocate a variable power budget between the first and second domains; and a dynamic random access memory (DRAM) coupled to the multicore processor. 16.
The system of claim 15, wherein the system agent circuitry includes a power sharing logic to determine the variable power budget for a current time interval and to allocate a first portion of the variable power budget to the first domain according to a first power sharing value for the first domain, and to allocate a second portion of the variable power budget to the second domain according to a second power sharing value for the second domain. 17. The system of claim 16, wherein the power sharing logic is to dynamically allocate substantially all of the variable power budget to the first domain for a first workload, and to dynamically allocate substantially all of the variable power budget to the second domain for a second workload executed after the first workload. 18. The system of claim 16, wherein the power sharing logic is to increment the first power sharing value when a request for a higher frequency for the first domain is not granted, and to increment the second power sharing value when a request for a higher frequency for the second domain is not granted. 19. The system of claim 16, wherein the power sharing logic is to further allocate the first portion of the variable power budget according to a first minimum reservation value for the first domain and allocate the second portion of the variable power budget according to a second minimum reservation value for the second domain. 20. The system of claim 16, wherein the power sharing logic is to further allocate the variable power budget according to a preference value, the preference value to favor the second domain over the first domain.
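The power-budget recurrence recited in claim 7 can be restated as a short function. This is an illustrative sketch with hypothetical numeric values, not the claimed first logic itself; the parameter names follow the claim's symbols:

```python
# Sketch of the recurrence of claim 7:
#   En = En-1 * alpha + (1 - alpha) * (Power_Limit * deltaT - Energy)
def next_power_budget(en_prev, power_limit, delta_t, energy, alpha):
    # alpha is the power budget decay value; the unconsumed headroom of the
    # previous interval (power_limit * delta_t - energy) is blended with the
    # budget carried forward (en_prev) to form the current interval's budget.
    return en_prev * alpha + (1 - alpha) * (power_limit * delta_t - energy)
```

Iterating this over successive intervals lets budget accumulate when consumption stays under the limit and decay toward the headroom when it does not.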
Dynamically Allocating A Power Budget Over Multiple Domains Of A Processor Background [0001] As technology advances in the semiconductor field, devices such as processors incorporate ever-increasing amounts of circuitry. Over time, processor designs have evolved from a collection of independent integrated circuits (ICs), to a single integrated circuit, to multicore processors that include multiple processor cores within a single IC package. As time goes on, ever greater numbers of cores and related circuitry are being incorporated into processors and other semiconductors. [0002] Multicore processors are being extended to include additional functionality by incorporation of other functional units within the processor. One issue that arises is that the different circuitry can consume differing amounts of power based on their workloads. However, suitable mechanisms to ensure that these different units have sufficient power do not presently exist. Brief Description of the Drawings [0003] FIG. 1 is a flow diagram of a high level method of performing power budget allocations between multiple domains in accordance with an embodiment of the present invention. [0004] FIG. 2 is a flow diagram of a method describing further details of allocating a package power budget between multiple domains in accordance with an embodiment of the present invention. [0005] FIG. 3 is a graphical illustration of allocation of a power budget to multiple domains in accordance with one embodiment of the present invention. [0006] FIG. 4 is a graphical illustration of power consumption for a variety of workloads in accordance with one embodiment of the present invention. [0007] FIG. 5 is another graphical illustration of power consumption for a variety of workloads in accordance with an embodiment of the present invention. [0008] FIG. 6 is a block diagram of a processor in accordance with an embodiment of the present invention. [0009] FIG.
7 is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. [0010] FIG. 8 is a block diagram of a system in accordance with an embodiment of the present invention. Detailed Description [0011] In various embodiments, a power budget of a processor including multiple domains can be dynamically apportioned at run time. As used herein the term "domain" is used to mean a collection of hardware and/or logic that operates at the same voltage and frequency point. As an example, a multicore processor can further include other non-core processing engines such as fixed function units, graphics engines, and so forth. Such a processor can include at least two independent domains, one associated with the cores (referred to herein as a core domain) and one associated with a graphics engine (referred to herein as a graphics domain). Although many implementations of a multi-domain processor can be formed on a single semiconductor die, other implementations can be realized by a multi-chip package in which different domains can be present on different semiconductor die of a single package. [0012] In a multi-domain processor, the multiple domains collectively share a single power budget. Accordingly, the higher the frequency at which, e.g., the core domain is operating, the higher the power consumed by the core domain. In turn, the higher the power consumed by the core domain, the less power is left for the graphics domain to consume, and vice versa. On workloads that utilize both one or more cores of a core domain and a graphics engine of a graphics domain, embodiments may at run time dynamically repartition how a package power budget is shared between these domains. Thus embodiments provide a power balancing mechanism that can be implemented between the different domains of a multicore processor.
For ease of discussion, embodiments described herein are with regard to a multi-domain processor including a core domain and a graphics domain that can share a power budget. However, understand the scope of the present invention is not limited in this regard and additional domains can be present. As another example, each core can be allocated to a different domain and each of the domains can be provided with a dynamically re-partitionable amount of a power budget. Furthermore, in addition to core domains and graphics domains, understand that additional domains can be present. For example, another domain can be formed of other processing units such as fixed function units, accelerators or so forth. And a still further domain can be provided for certain management agents of a processor, which can receive a fixed portion of a total power budget. [0013] In various embodiments a power budget management (PBM) algorithm may be executed by logic such as logic of a power control unit (PCU) of a processor to control the power of an entire processor or an individual domain to a configured power limit. Such algorithm may be based in part on various processor parameters. One such parameter is a guaranteed frequency (P1), which is a frequency that a domain is guaranteed to operate at and not exceed power or thermal specifications of the product. A processor can be tested, e.g., during fabrication to determine this guaranteed frequency, which can be stored in a nonvolatile storage or other mechanism of the processor. In various embodiments such guaranteed frequency can be set on a per domain basis. This guaranteed frequency can be fixed upon manufacture and not changed, or in certain embodiments this frequency can be dynamically updated, e.g., as a processor ages due to various degradation mechanisms of the semiconductor product.
In various embodiments, all power domains of a processor can be run at their respective guaranteed frequency simultaneously and the processor should not exceed power or thermal specifications. [0014] Note that this guaranteed frequency can correspond to a performance state of a processor, namely a P1 processor state. According to an operating system (OS)-based mechanism, namely the Advanced Configuration and Power Interface (ACPI) standard (e.g., Rev. 3.0b, published October 10, 2006), a processor can operate at various performance states or levels, namely from P0 to PN. In general, the P1 performance state may correspond to the highest guaranteed performance state that can be requested by an OS. In addition to this P1 state, the OS can further request a higher performance state, namely a P0 state. This P0 state may thus be an opportunistic state in which, when power and/or thermal budget is available, processor hardware can configure the processor or at least portions thereof to operate at a higher than guaranteed frequency. In many implementations a processor can include multiple so-called bin frequencies above this P1 frequency. [0015] Another parameter to be used in a PBM algorithm is a maximum turbo frequency (P0), which is a highest frequency at which a domain can operate. This maximum turbo frequency thus is the highest end of multiple bin frequencies greater than the P1 frequency and corresponds to a maximum non-guaranteed highest performance level that can be achieved. Note that at this frequency there are no guarantees on whether the domain exceeds the power or thermal specifications of the processor. In many situations, device characterization during fabrication of a processor can be used to set a maximum turbo frequency, which can be set on a per domain basis. Bin frequencies up to the maximum turbo frequency can be stored in a non-volatile storage or other mechanism of a processor.
Note that it is not guaranteed that a processor with more than one domain is able to simultaneously run all domains at their respective maximum turbo frequencies. It is also not guaranteed that a given domain can run at its maximum turbo frequency while other domains are running at their respective guaranteed frequencies. [0016] Embodiments may dynamically calculate a package power budget which is a metric that measures the power headroom available to the processor for a given time interval. Depending on this power budget, one or more domains may be controlled to enter into a turbo mode in which a frequency can be raised above the guaranteed frequency. [0017] In one embodiment, the power budget can be calculated using the following equation: En = En-1 * alpha + (1 - alpha) * (Power_Limit * deltaT - Energy) [1] where En = energy budget for the current (Nth) evaluation instant (which can be measured in Joules); En-1 = energy budget carried forward from the previous evaluation instant (which can be measured in Joules); Power_Limit = threshold power level that the processor is configured to maintain, and which may correspond to a thermal design power (TDP). This thermal design power may thus be a measure of an average power at which the processor can operate. In many implementations, this TDP can be measured in units of power, namely Watts (W). For example, a processor can be rated at a TDP of 40W. This means that on average, the processor can withstand a power consumption level of 40W. But at any instant, its instantaneous power consumption level may be higher or lower than this TDP level. deltaT = evaluation interval at which a power budget is computed, which in one embodiment may be approximately 1 millisecond (ms); Energy = energy consumed during the previous evaluation interval, which can be measured in Joules. In one embodiment, energy can be estimated based on counters that trace various micro-architectural activity.
For example, an energy value can be associated with each micro-operation retiring, or each cache access. Then based on these events occurring over the time interval, energy consumed can be determined. In another embodiment, energy can be obtained from reading external current and voltage monitoring sensors such as current monitoring circuitry implemented in a voltage regulator; and alpha = rate of power budget decay, which can be a function of the thermal resistance of a heatsink and cooling solution of the platform. In general, an alpha value can vary inversely with the selected deltaT. Where the deltaT is relatively small, e.g., 1 ms, the alpha value may be higher, and vice-versa. [0018] In various embodiments, a user may be provided with control, e.g., by user-level software to enable the software to determine how package power budget is shared between different domains. In one embodiment, this control can be exposed by way of configuration information that can be set by such software, e.g., as entries in one or more configuration registers. In one particular embodiment of a processor having a first domain and a second domain (also referred to as "planes"), two such configuration registers may be provided as follows in Table 1: TABLE 1: POLICY_FIRST = sharing policy value for the first (core) domain; and POLICY_SECOND = sharing policy value for the second (graphics) domain. [0019] These two values, referred to as policy values (and in the example of Table 1 can be 5-bit values), can be used to determine how a package power budget is to be shared between these two domains. For purposes of discussion herein, assume that these two domains are a core domain and a graphics domain.
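As a minimal sketch (not the embodiments' actual PCU firmware), the budget recursion of Equation 1 can be written in a few lines; the TDP, interval, and alpha values below are illustrative assumptions, not values specified by the text.

```python
# Sketch of the energy-budget update of Equation 1:
#   En = En-1 * alpha + (1 - alpha) * (Power_Limit * deltaT - Energy)
# Parameter names follow the text; the numeric values are assumptions.

def update_energy_budget(prev_budget, power_limit, delta_t, energy_consumed, alpha):
    """Return the energy budget (Joules) for the current evaluation instant."""
    return prev_budget * alpha + (1.0 - alpha) * (power_limit * delta_t - energy_consumed)

# Example: a 40 W power limit evaluated every 1 ms. When less than 40 mJ is
# consumed in an interval, the budget (turbo headroom) grows; when more is
# consumed, it shrinks.
budget = 0.0
for energy_joules in [0.030, 0.050, 0.045]:
    budget = update_energy_budget(budget, power_limit=40.0, delta_t=0.001,
                                  energy_consumed=energy_joules, alpha=0.9)
```

A positive budget here represents headroom that can be granted as above-guaranteed frequency, while a negative budget indicates the domains should be pulled back toward the configured power limit.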
Furthermore, the core domain is referred to also herein as an "IA" domain, referring to processor cores in accordance with an Intel Architecture™ instruction set architecture (ISA) (although embodiments can be used in connection with processors of other manufacturers and ISAs), while the graphics domain can be referred to as a "GT" domain, which refers to an embedded graphics engine that can be implemented on the same die as such IA cores. In one embodiment, the following equations may govern how the package power budget is shared between core and graphics domains: IA_percentage_of_package_budget = 0.5 + (POLICY_FIRST - POLICY_SECOND)/62 [2] GT_percentage_of_package_budget = 0.5 + (POLICY_SECOND - POLICY_FIRST)/62 [3] [0020] More generally, for the case of N domains over which an allocation of a package power budget is based on priority over these N domains, the following equations can be used: BIASx = Policy(x) / (Policy(1) + ... + Policy(N)) [4] Edomain(x) = En * BIASx [5] Here a BIAS can be calculated for each domain x based on the policy value (x) of that domain and the sum of total policy values of all the domains, and again En is the energy budget at time instant n. [0021] In some embodiments, these configuration registers that store policy values can be controlled generally as incrementing counters. That is, in some embodiments both an OS or other scheduler for the core domain and a graphics driver, which is software and/or firmware that controls various parameters of the graphics engine such as its frequency of operation and so forth, can make requests for a given frequency of operation to the PCU. If the requested frequency is not allowed (e.g., due to a power or thermal limit), the corresponding entity can increment the associated policy value. Thus over time, as a given entity's requests for a higher frequency are not granted, the policy values can be raised. These policy values similarly can be decremented.
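The two-domain split of Equations 2 and 3 and the N-domain bias of Equations 4 and 5 can be sketched as follows; the sample policy values are assumptions for illustration only.

```python
# Sketch of Equations 2-5. POLICY_FIRST / POLICY_SECOND play the role of the
# 5-bit policy register values of Table 1; sample values are assumptions.

def two_domain_split(policy_first, policy_second):
    """Equations 2 and 3: fraction of the package budget for IA and GT."""
    ia = 0.5 + (policy_first - policy_second) / 62.0
    gt = 0.5 + (policy_second - policy_first) / 62.0
    return ia, gt

def domain_biases(policies):
    """Equation 4: BIAS_x = Policy(x) / sum of all policy values.
    Equation 5 then gives E_domain(x) = E_n * BIAS_x."""
    total = sum(policies.values())
    return {name: value / total for name, value in policies.items()}

ia_pct, gt_pct = two_domain_split(policy_first=0, policy_second=16)
# The two fractions always sum to 1; here the graphics domain receives
# roughly three quarters of the budget (0.5 + 16/62).
```

Note how the formulation makes the split self-correcting: each time a domain's frequency request is denied, its policy value (and therefore its share) grows for the next evaluation interval.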
For example, policy values can be decremented on reset and when a domain gets the frequency it requested, or if a workload profile on the domain changes. For example if the workload utilization on that domain decreases (e.g., C0 residency decreases), the OS or driver software can choose to reduce the policy value for that domain. Then based on these policy values and the above Equations 2 and 3, a percentage of the package budget, e.g., as determined in accordance with Equation 1, can be allocated to each of the domains by controlling their frequency and/or voltage accordingly. In various embodiments, the above mechanism of splitting budget between the domains is done on a continuous basis every deltaT time interval. [0022] Referring now to FIG. 1, shown is a flow diagram of a high level method of performing power budget allocations between multiple domains in accordance with an embodiment of the present invention. As shown in FIG. 1, method 100 can be implemented by logic within a processor, such as power sharing logic of a PCU or other power controller. As seen, method 100 may begin by determining a package power budget for a current interval (block 110). In various embodiments, this determination can be made in accordance with Equation 1 above, although other manners of determining a package power budget for a given interval can occur. [0023] Next, at block 120 a portion of this package power budget to be allocated amongst multiple domains can be determined. For purposes of discussion herein, assume a multi-domain processor including a core domain and a graphics domain. Different manners of allocating or sharing a power budget between these domains can occur in different embodiments.
In general however, information regarding the manner in which sharing is to be performed, e.g., as indicated within configuration registers that can be set by system level and/or user-level software in addition to any floors or minimum values to be allocated to the different domains, can be considered in this determination. Accordingly, block 120 determines the allocation of the package power budget to be provided to each domain. Thus control next passes to block 130 where these domains can be controlled in accordance with this allocation. More specifically at block 130 a frequency and/or voltage of these domains can be updated based on the allocated portion of the power budget. In this way, for the given interval each of the domains can execute operations in accordance with this budget. While shown at this high level in the embodiment of FIG. 1, understand the scope of the present invention is not limited in this regard. [0024] To handle cases where a certain amount of budget may be desired to be reserved for a domain, embodiments may support additional tunable parameters. These parameters, which may also be exposed by way of configuration registers or in another manner, can reserve a predetermined amount of budget for a given domain. In one embodiment, these parameters may be referred to as reservation values and can be used to identify a minimum power budget (e.g., in terms of Watts) to be allocated to a given domain. In the described multi-domain processor, these parameters may be as follows: Min_reserved_for_IA = amount of budget to be reserved for the core domain; and Min_reserved_for_GT = amount of budget to be reserved for the graphics domain. [0025] Referring now to FIG. 2, shown is a flow diagram of a method describing further details of allocating a package power budget between multiple domains in accordance with an embodiment of the present invention. As shown in FIG.
2, method 200 may similarly be performed by power sharing logic of a PCU or other power controller of a processor. Method 200 can begin by obtaining minimum reservation values for the domains (block 210). As in the embodiment of FIG. 1 assume a multi-domain processor including at least a core domain and a graphics domain. These minimum reservation values can be obtained, e.g., from configuration registers set by user-level software to indicate a floor level corresponding to a minimum amount of power budget to be allocated to the given domain. Of course, these values instead can be set by other entities such as an OS and/or graphics driver, respectively for the core and graphics domains. [0026] At diamond 220 it can be determined whether the package power budget is greater than the minimum reservation value for the second domain. The package power budget can be calculated in different manners, but assume for purposes of discussion that it is calculated in accordance with Equation 1 above. If the budget is not greater than this minimum reservation value, control passes to block 230 where all of the package power budget can be allocated to the second domain. Thus the embodiment shown in FIG. 2 favors providing power to the second domain (which may correspond to a graphics domain) over the first domain (which may correspond to a core domain). Although shown with this preference in FIG. 2, understand the scope of the present invention is not limited in this regard, and in other implementations the preference can be in the other direction. And note that this preference can be dynamically changed based on a workload of a processor (e.g., a graphics intensive workload versus a computing intensive workload). As an example, a preference for the graphics domain can be hard mapped to first allocate to the graphics domain and then to the core domain to share the rest between core domain and graphics domain. 
This decision can be made based on the POLICY_FIRST and POLICY_SECOND values. For example, if POLICY_FIRST is greater than POLICY_SECOND, the preference may be to first allocate to the core domain and then allocate the remaining budget to the graphics domain and so on. [0027] If instead at diamond 220 it is determined that the package power budget is greater than the minimum reservation value for the second domain, control passes to diamond 240, where it can be determined whether the package power budget is greater than this minimum reservation value but less than the sum of the minimum reservation values for the two domains. If this is the case, control passes to block 250 where the minimum reservation value can be allocated to the second domain, and any remaining package power budget can be allocated to the first domain. Note again with regard to this allocation that a preference is made in favor of the second domain over the first domain. But in another implementation (or different workload), the preferences can be in the other direction. [0028] Finally, if the package power budget is greater than the sum of the minimum reservation values, control passes to block 260 where the minimum reservation values can be allocated to the domains, and then any remaining package power budget can be shared according to sharing policy values. These sharing policy values may also be obtained, e.g., from the configuration registers of Table 1. As one example, these sharing values can be set at equal values such that the remaining power budget can be allocated equally amongst the two domains. However in other examples, one of the domains may have a higher policy sharing value and thus may obtain more of the available power budget. Although shown with this particular implementation in the embodiment of FIG. 2, understand the scope of the present invention is not limited in this regard.
For example, in products having greater than two domains, the analysis may proceed similarly but the available power budget is shared amongst the m>2 variable power domains based on policy values, reservation values, and/or preferences for each of the domains. [0029] Thus, based on the scenarios in FIG. 2 using the programmable reservation parameters, there can be four potential cases with regard to the package budget (and assuming that the graphics domain has a higher preference value than the core domain): 1. Package budget < Min_reserved_for_GT. In this case the graphics domain receives the entire package budget, and the core domain receives no part of the package budget. 2. Min_reserved_for_IA + Min_reserved_for_GT > Package budget > Min_reserved_for_GT. In this case the graphics domain receives its minimum reservation value and the core domain receives the remaining budget (i.e., Package budget - Min_reserved_for_GT). 3. Package budget > Min_reserved_for_IA + Min_reserved_for_GT. In this case the graphics domain receives its minimum reservation value and a portion of the budget that exceeds the sum of the minimum reservation values. Likewise, the core domain receives its minimum reservation value and a portion of the budget that exceeds the sum of the minimum reservation values. The portions allocated to the two domains can be a function of POLICY_FIRST and POLICY_SECOND (of Table 1) as follows: Pkg_budget_for_IA = Min_Guaranteed_IA + PKG_BUDGET * IA_percentage_of_package_budget [6] Similarly, Pkg_budget_GT = Min_Guaranteed_GT + PKG_BUDGET * GT_percentage_of_package_budget [7] 4. Package budget > Min_reserved_for_IA + Min_reserved_for_GT, but with a non-uniform split. In this case the graphics domain receives its minimum reservation value and a portion of the budget that exceeds the sum of the minimum reservation values. The portion of budget given to the graphics domain is governed by Equation 7 listed above (and that allocated to the core domain in Equation 6).
Based on the POLICY_FIRST and POLICY_SECOND values, the excess power budget can be split asymmetrically between the domains. For example if POLICY_FIRST is 0 and POLICY_SECOND is 16, the graphics domain receives 75% of the package budget and the core domain receives the remaining 25%. FIG. 3 is a graphical illustration of the different allocations for these four cases, which are listed as cases 1-4 in the illustration. [0030] To further illustrate how different domains can share a power budget, and furthermore how this power budget can shift between the domains based on a type of workload being executed, reference can be made to FIGS. 4 and 5, which are graphical illustrations of various allocations of a power budget between multiple domains in different workload environments. [0031] Referring first to FIG. 4, shown is a power consumption diagram in which a graphics domain power consumption is on the X-axis and a core domain power consumption is on the Y-axis. As seen, each domain may have an independent specification power level, which may correspond to a P1 or thermal design power level, which is the maximum guaranteed frequency at which the domain can execute. In addition, the domains can execute at higher power levels in a turbo mode (namely, a higher than guaranteed operating frequency, corresponding to a P0 performance state). As seen, a line 10 joining the axes corresponds to a total package power budget. When a core-intensive workload is executed, the portion of the total power budget allocated to the core domain can be higher, and in turn when a graphics-intensive workload is being executed, the portion of the total package power budget being allocated to the graphics domain can be higher. [0032] As further seen, the sum of the power budgets when both domains are executing at their highest guaranteed frequency corresponds to a sum of the maximum power budgets at a point 20.
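The four reservation cases of FIG. 2 can be sketched as a single allocation function. This is an illustration only; it assumes, as in the discussion above, that the graphics domain is preferred when the budget cannot cover both minimum reservations, and all wattage values are invented for the example.

```python
# Sketch of the four reservation cases (FIG. 2 / FIG. 3). min_ia and min_gt
# mirror Min_reserved_for_IA and Min_reserved_for_GT; the share arguments play
# the role of the POLICY-derived percentages. All values are assumptions.

def allocate(package_budget, min_ia, min_gt, ia_share=0.5, gt_share=0.5):
    """Return (core_watts, graphics_watts), preferring the graphics domain."""
    if package_budget <= min_gt:                  # case 1: all to graphics
        return 0.0, package_budget
    if package_budget <= min_ia + min_gt:         # case 2: GT minimum first
        return package_budget - min_gt, min_gt
    excess = package_budget - min_ia - min_gt     # cases 3/4: split the excess
    return min_ia + excess * ia_share, min_gt + excess * gt_share

# 40 W budget, 10 W reserved per domain, excess split 25/75 toward graphics:
ia_w, gt_w = allocate(40.0, 10.0, 10.0, ia_share=0.25, gt_share=0.75)
# ia_w = 15.0 W, gt_w = 25.0 W
```

Swapping the order of the two guard clauses (or the roles of min_ia and min_gt) would realize the opposite preference mentioned in the text, where the core domain is favored instead.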
This sum can exceed the total package power budget and thus, the realistic current maximum power consumption level may fall on a range between points 30 and 40. Operating at points 30 or 40 depends on how the power budget is split between the core and graphics domains. Prioritizing towards the graphics domain will result in operating at point 40 and prioritizing towards the core domain will result in operating at point 30. [0033] However as seen in FIG. 5, it is possible that both domains can execute at their guaranteed maximum operating frequency and not violate the total package power budget in a turbo mode, as the total package power budget can be set to a higher level 15, when the turbo mode is available. Thus for at least short time periods, a turbo mode may be available in which both domains can at least meet their maximum guaranteed operating frequency. [0034] Referring now to FIG. 6, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 6, processor 300 may be a multicore processor including a plurality of cores 310a - 310n. In one embodiment, each such core may be of an independent power domain and can be configured to operate at an independent voltage and/or frequency, and to enter turbo mode when available headroom exists. The various cores may be coupled via an interconnect 315 to a system agent or uncore 320 that includes various components. As seen, the uncore 320 may include a shared cache 330 which may be a last level cache. In addition, the uncore may include an integrated memory controller 340, various interfaces 350 and a power control unit 355. [0035] In various embodiments, power control unit 355 may include a power sharing logic 359, which may be a logic to perform dynamic control and re-allocation of an available power budget between multiple independent domains of the processor. In the embodiment of FIG.
6, assuming that each core is of an independent power domain, logic 359 can calculate an available power budget for a given time interval and dynamically allocate portions of this available power budget to the different cores. Such allocations can be on equal footing, or weighted in favor of one or more of the domains. These allocations can thus be based on policy values for the different domains, minimum reservation values for the different domains, and preference values. In one embodiment, these preference values may be of a ranked order in which each domain is ranked according to its preference. For example, in a two domain system the two domains can be ranked as a higher and lower preference such that an algorithm as discussed in FIG. 2 can allocate a minimum reservation value to the higher ranked domain when there is insufficient power budget for both domains' minimum reservation values. And of course such rankings can be extended to additional domains. As further seen in FIG. 6, to provide for storage of various policy values, minimum reservation values and preference values, a power control storage 357 may further be present within PCU 355 to store these various values. Although shown at this location in the embodiment of FIG. 6, understand that the scope of the present invention is not limited in this regard and the storage of this information can be in other locations, such as configuration registers, nonvolatile storage or the like. [0036] With further reference to FIG. 6, processor 300 may communicate with a system memory 360, e.g., via a memory bus. In addition, by interfaces 350, connection can be made to various off-chip components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment of FIG. 6, the scope of the present invention is not limited in this regard. [0037] Referring now to FIG.
7, shown is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. As shown in the embodiment of FIG. 7, processor 400 includes multiple domains. Specifically, a core domain 410 can include a plurality of cores 410a-410n, a graphics domain 420 can include one or more graphics engines, and a system agent domain 450 may further be present. In various embodiments, system agent domain 450 may execute at a fixed frequency and may remain powered on at all times to handle power control events and power management such that domains 410 and 420 can be controlled to dynamically enter into and exit low power states. In addition, these domains can dynamically share a package power budget between them in accordance with an embodiment of the present invention. Each of domains 410 and 420 may operate at different voltage and/or power. [0038] Note that while only shown with three domains, understand the scope of the present invention is not limited in this regard and additional domains can be present in other embodiments. For example, multiple core domains may be present each including at least one core. In this way, finer grained control of the amount of processor cores that can be executing at a given frequency can be realized. [0039] In general, each core 410 may further include low level caches in addition to various execution units and additional processing elements. In turn, the various cores may be coupled to each other and to a shared cache memory formed of a plurality of units of a last level cache (LLC) 440a - 440n. In various embodiments, LLC 440 may be shared amongst the cores and the graphics engine, as well as various media processing circuitry. As seen, a ring interconnect 430 thus couples the cores together, and provides interconnection between the cores, graphics domain 420 and system agent circuitry 450. [0040] In the embodiment of FIG.
7, system agent domain 450 may include display controller 452 which may provide control of and an interface to an associated display. As further seen, system agent domain 450 may include a power control unit 455 which can include a power sharing logic 459 in accordance with an embodiment of the present invention. In various embodiments, this logic may execute algorithms such as shown in FIGS. 1 and 2 to thus dynamically share an available package power budget between the core domain and the graphics domain. [0041] As further seen in FIG. 7, processor 400 can further include an integrated memory controller (IMC) 470 that can provide for an interface to a system memory, such as a dynamic random access memory (DRAM). Multiple interfaces 480a - 480n may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment at least one direct media interface (DMI) interface may be provided as well as one or more Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) interfaces. Still further, to provide for communications between other agents such as additional processors or other circuitry, one or more interfaces in accordance with an Intel® Quick Path Interconnect (QPI) protocol may also be provided. Although shown at this high level in the embodiment of FIG. 7, understand the scope of the present invention is not limited in this regard. [0042] Thus in various embodiments, a technique is provided to enable selection of how much of a common power envelope can be allocated to each of multiple independent power domains of a semiconductor device. Note that this power sharing approach is different than conventional power management control of processing engines, which simply acts to select one or more engines to be placed into a low power state, but does not provide for the dynamic power sharing of a power budget between domains as described herein.
That is, embodiments provide a mechanism to dynamically share the power budget between different compute components in the same die. As a result, a power budget or power headroom can be reallocated between cores and graphics engine when they are both integrated on the same die. Although embodiments described herein are with regard to a multi-domain processor having at least one core domain and a graphics domain, the scope of the present invention is not so limited, and can be extended to any integrated semiconductor device where common power resources are dynamically allocated between multiple compute entities. [0043] Embodiments thus may dynamically redistribute power between core domain and graphics domain, enabling flexibility to handle various different workload requirements. [0044] Embodiments may be implemented in many different system types. Referring now to FIG. 8, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 8, multiprocessor system 500 is a point-to-point interconnect system, and includes a first processor 570 and a second processor 580 coupled via a point-to-point interconnect 550. As shown in FIG. 8, each of processors 570 and 580 may be multicore processors, including first and second processor cores (i.e., processor cores 574a and 574b and processor cores 584a and 584b), although potentially many more cores may be present in the processors. Each of the processors can include a PCU or other logic to perform dynamic allocation of a package power budget between multiple domains of the processor, as described herein.[0045] Still referring to FIG. 8, first processor 570 further includes a memory controller hub (MCH) 572 and point-to-point (P-P) interfaces 576 and 578. Similarly, second processor 580 includes a MCH 582 and P-P interfaces 586 and 588. As shown in FIG. 
8, MCH's 572 and 582 couple the processors to respective memories, namely a memory 532 and a memory 534, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor 570 and second processor 580 may be coupled to a chipset 590 via P-P interconnects 552 and 554, respectively. As shown in FIG. 8, chipset 590 includes P-P interfaces 594 and 598. [0046] Furthermore, chipset 590 includes an interface 592 to couple chipset 590 with a high performance graphics engine 538, by a P-P interconnect 539. In turn, chipset 590 may be coupled to a first bus 516 via an interface 596. As shown in FIG. 8, various input/output (I/O) devices 514 may be coupled to first bus 516, along with a bus bridge 518 which couples first bus 516 to a second bus 520. Various devices may be coupled to second bus 520 including, for example, a keyboard/mouse 522, communication devices 526 and a data storage unit 528 such as a disk drive or other mass storage device which may include code 530, in one embodiment. Further, an audio I/O 524 may be coupled to second bus 520. Embodiments can be incorporated into other types of systems including mobile devices such as a smart cellular telephone, tablet computer, netbook, or so forth. [0047] Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. 
The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. [0048] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
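The dynamic sharing of a package power budget between a core domain and a graphics domain described in paragraphs [0042]-[0043] can be sketched as follows. This is a minimal illustrative sketch, not the patented algorithm: the function name, the per-domain power floors, and the demand-proportional allocation policy are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of dynamic package power budget sharing between two
# domains (cores and graphics) on the same die. Each domain is guaranteed a
# floor; the remaining headroom is divided in proportion to demand, so an
# idle domain's unused budget flows to the busy domain instead of being lost.

def share_power_budget(package_budget_w, core_demand_w, gfx_demand_w,
                       core_floor_w=5.0, gfx_floor_w=5.0):
    """Return (core_allocation_w, gfx_allocation_w) for one control interval."""
    headroom = package_budget_w - core_floor_w - gfx_floor_w
    if headroom < 0:
        raise ValueError("package budget below combined domain floors")
    total_demand = core_demand_w + gfx_demand_w
    if total_demand == 0:
        # No demand from either domain: leave both at their floors.
        return core_floor_w, gfx_floor_w
    core_share = headroom * core_demand_w / total_demand
    gfx_share = headroom * gfx_demand_w / total_demand
    return core_floor_w + core_share, gfx_floor_w + gfx_share
```

For example, with a 45 W package budget and no graphics demand, nearly the entire budget is reallocated to the core domain, illustrating the headroom redistribution the disclosure contrasts with conventional per-engine low-power-state selection.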
Apparatuses and methods for user-directed motion gesture control are disclosed. According to aspects of the present disclosure, direct user inputs can be used to predictably manipulate power control behavior. In some embodiments, a wearable mobile device may be configured to accept user commands, and be configured to sense a multitude of use, use environment, and use contexts. The wearable mobile device may include a memory configured to store a set of reference power control motion gesture sequences, one or more sensors configured to sense a motion gesture sequence, and a controller configured to provide interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
I claim:
1. A device, comprising:
a memory configured to store a set of reference power control motion gesture sequences;
one or more sensors configured to sense a motion gesture sequence; and
a controller configured to provide interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
2. The device of claim 1, wherein the device is a wrist-worn device.
3. The device of claim 1, wherein the motion gesture sequence comprises one or more motion gestures generated by a user using one hand.
4. The device of claim 1, wherein the motion gesture sequence comprises one or more motion gestures generated by a user without visual assistance.
5. The device of claim 1, wherein the controller configured to provide interactive power control of the device comprises:
logic configured to compare the motion gesture sequence with the set of reference power control motion gesture sequences; and
logic configured to adjust a power control timer of the device in response to a match being found in the set of reference power control motion gesture sequences.
6. The device of claim 5, wherein the logic configured to adjust the power control timer of the device comprises:
logic configured to delay the device from entering a power saving mode based at least in part on the motion gesture sequence.
7. The device of claim 5, wherein the logic configured to adjust the power control timer of the device comprises:
logic configured to set the device on a power saving mode based at least in part on the motion gesture sequence.
8. The device of claim 1, wherein the controller further comprises:
logic configured to determine whether to set a power control timer;
logic configured to determine whether the device is ready to enter a power saving mode; and
logic configured to set the power control timer in response to a determination that the power control timer is to be set and that the device is ready to enter the power saving mode.
9.
The device of claim 1, wherein the controller further comprises:
logic configured to determine whether a power control timer has expired; and
logic configured to set a wake up time in response to a determination that the power control timer has expired.
10. A method of providing power control of a device, comprising:
storing a set of reference power control motion gesture sequences in a memory;
sensing a motion gesture sequence using one or more sensors; and
providing interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
11. The method of claim 10, wherein the device is a wrist-worn device.
12. The method of claim 10, wherein the motion gesture sequence comprises one or more motion gestures generated by a user using one hand.
13. The method of claim 10, wherein the motion gesture sequence comprises one or more motion gestures generated by a user without visual assistance.
14. The method of claim 10, wherein the providing interactive power control of the device comprises:
comparing the motion gesture sequence with the set of reference power control motion gesture sequences; and
adjusting a power control timer of the device in response to a match being found in the set of reference power control motion gesture sequences.
15. The method of claim 14, wherein the adjusting the power control timer of the device comprises:
delaying the device from entering a power saving mode based at least in part on the motion gesture sequence.
16. The method of claim 14, wherein the adjusting the power control timer of the device comprises:
setting the device on a power saving mode based at least in part on the motion gesture sequence.
17.
The method of claim 10, wherein the providing interactive power control of the device further comprises:
determining whether to set a power control timer;
determining whether the device is ready to enter a power saving mode; and
setting the power control timer in response to a determination that the power control timer is to be set and that the device is ready to enter the power saving mode.
18. The method of claim 10, wherein the providing interactive power control of the device further comprises:
determining whether a power control timer has expired; and
setting a wake up time in response to a determination that the power control timer has expired.
19. A computer program product comprising a non-transitory medium storing instructions for execution by one or more computer systems, wherein the instructions comprise:
instructions for sensing a motion gesture sequence using one or more sensors; and
instructions for providing interactive power control of a device using the motion gesture sequence and a set of reference power control motion gesture sequences stored in a memory of the device.
20. The computer program product of claim 19, wherein the instructions for providing interactive power control of the device comprises:
instructions for comparing the motion gesture sequence with the set of reference power control motion gesture sequences; and
instructions for adjusting a power control timer of the device in response to a match being found in the set of reference power control motion gesture sequences.
21. The computer program product of claim 20, wherein the instructions for adjusting the power control timer of the device comprises:
instructions for delaying the device from entering a power saving mode based at least in part on the motion gesture sequence.
22.
The computer program product of claim 20, wherein the instructions for adjusting the power control timer of the device comprises:
instructions for setting the device on a power saving mode based at least in part on the motion gesture sequence.
23. The computer program product of claim 19, wherein the instructions for providing interactive power control of the device further comprises:
instructions for determining whether to set a power control timer;
instructions for determining whether the device is ready to enter a power saving mode; and
instructions for setting the power control timer in response to a determination that the power control timer is to be set and that the device is ready to enter the power saving mode.
24. The computer program product of claim 19, wherein the instructions for providing interactive power control of the device further comprises:
instructions for determining whether a power control timer has expired; and
instructions for setting a wake up time in response to a determination that the power control timer has expired.
25. A device, comprising:
means for storing a set of reference power control motion gesture sequences;
means for sensing a motion gesture sequence; and
means for providing interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
26. The device of claim 25, wherein the means for providing interactive power control of the device comprises:
means for comparing the motion gesture sequence with the set of reference power control motion gesture sequences; and
means for adjusting a power control timer of the device in response to a match being found in the set of reference power control motion gesture sequences.
27. The device of claim 26, wherein the means for adjusting the power control timer of the device comprises:
means for delaying the device from entering a power saving mode based at least in part on the motion gesture sequence.
28.
The device of claim 26, wherein the means for adjusting the power control timer of the device comprises:
means for setting the device on a power saving mode based at least in part on the motion gesture sequence.
29. The device of claim 25, wherein the means for providing interactive power control of the device further comprises:
means for determining whether to set a power control timer;
means for determining whether the device is ready to enter a power saving mode; and
means for setting the power control timer in response to a determination that the power control timer is to be set and that the device is ready to enter the power saving mode.
30. The device of claim 25, wherein the means for providing interactive power control of the device further comprises:
means for determining whether a power control timer has expired; and
means for setting a wake up time in response to a determination that the power control timer has expired.
User-Directed Motion Gesture Control
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. application number 14/180,116, entitled "User-Directed Motion Gesture Control," filed February 13, 2014, assigned to the assignee hereof. The aforementioned United States application is hereby incorporated by reference in its entirety.
FIELD
[0002] The present disclosure relates to the field of wireless communications. In particular, the present disclosure relates to apparatuses and methods of user-directed motion gesture control.
BACKGROUND
[0003] The power control of conventional mobile devices has typically been designed to function in a conservative manner. As a result, timeouts may often be used to lower resource usage. However, such timeouts may often be lengthy and may sacrifice power to favor performance for such conventional mobile devices. For example, power control in a conventional mobile phone may typically involve constantly examining the sufficiency of specific performance metrics to measure whether it may be warranted for resources to be reduced, or conversely for resources to be increased when performance can be anticipated to become deficient. The generic performance metrics that may be commonly used to measure workload processing performance may include attributes such as idleness of the processors, bus arbitration exception rate, or bus bandwidth. After measuring such performance metrics, conventional power control methods may speculate about future resource levels without knowing how platform workload or concurrency may change, often risking resources becoming too scarce or too plentiful.
Such call-back schemes that require constantly measuring platform resource sufficiency can be inherently costly and unaffordable for always-on wearable devices with small battery capacity, such as wrist-worn watches or head-mounted displays.
[0004] In addition, the battery used in a typical conventional mobile device may have a capacity of approximately 2000 mA-hour. On the other hand, the battery used in a wearable mobile device may have a capacity that cannot exceed approximately 50 mA-hour, with a similar type, volume, and weight density of battery as used in the conventional mobile device. Moreover, since a wearable mobile device can be expected to be constantly worn and in touch with a user, it may be desirable for the wearable mobile device to be always-on to enable many essential applications, such as health, security, and surveillance applications. In such situations, it can be even more critical to enable power control and access control for such wearable mobile devices. Therefore, it would be beneficial to control power and performance more efficiently for such wearable mobile devices.
SUMMARY
[0005] The present disclosure relates to apparatuses and methods for providing power control of a device.
In one embodiment, a method for providing power control of a device may include storing a set of reference power control motion gesture sequences in a memory, sensing a motion gesture sequence using one or more sensors, and providing interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
[0006] In another embodiment, a device may comprise a memory configured to store a set of reference power control motion gesture sequences, one or more sensors configured to sense a motion gesture sequence, and a controller configured to provide interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
[0007] In yet another embodiment, a computer program product may comprise a non-transitory medium storing instructions for execution by one or more computer systems. The instructions may comprise instructions for storing a set of reference power control motion gesture sequences in a memory, instructions for sensing a motion gesture sequence using one or more sensors, and instructions for providing interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
[0008] In yet another embodiment, an apparatus may comprise means for storing a set of reference power control motion gesture sequences in a memory, means for sensing a motion gesture sequence using one or more sensors, and means for providing interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences.
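The summarized method (store reference power control motion gesture sequences, sense a sequence, then provide interactive power control by matching against the stored set) can be sketched as follows. The gesture names, the table of reference sequences, and the specific timer adjustments are hypothetical illustrations, not part of the disclosure.

```python
# Minimal sketch of the summarized power control method. Gestures are encoded
# here as simple string labels; real sequences would come from motion sensors.
# The reference table and timer deltas below are illustrative assumptions.

REFERENCE_SEQUENCES = {
    ("flick_up", "flick_up"): ("delay_standby", 60),   # postpone power saving
    ("twist", "flick_down"): ("enter_standby", 0),     # sleep immediately
}

def interactive_power_control(sensed_sequence, timer_s):
    """Compare a sensed sequence against the stored references and
    return (action, new power control timer value in seconds)."""
    match = REFERENCE_SEQUENCES.get(tuple(sensed_sequence))
    if match is None:
        return "no_match", timer_s          # no match: leave the timer alone
    action, delta = match
    if action == "enter_standby":
        return action, 0                    # expire the timer at once
    return action, timer_s + delta          # delay entering power saving mode
```

As a usage example, sensing two upward flicks with 30 seconds left on the timer would extend the timer to 90 seconds, while an unrecognized gesture leaves it unchanged.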
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The aforementioned features and advantages of the disclosure, as well as additional features and advantages thereof, will be more clearly understandable after reading detailed descriptions of embodiments of the disclosure in conjunction with the non-limiting and non-exhaustive aspects of the following drawings. Like numbers are used throughout the figures.
[0010] FIG. 1 illustrates an exemplary block diagram of a wearable mobile device according to aspects of the present disclosure.
[0011] FIG. 2 illustrates an exemplary implementation of the sensor subsystem of the wearable mobile device of FIG. 1 according to aspects of the present disclosure.
[0012] FIG. 3 illustrates an exemplary implementation of a controller configured to provide power control of the wearable mobile device of FIG. 1 according to aspects of the present disclosure.
[0013] FIG. 4 illustrates an exemplary implementation of a controller configured to provide access control of the wearable mobile device of FIG. 1 according to aspects of the present disclosure.
[0014] FIG. 5 illustrates an exemplary implementation of the power control module of FIG. 3 according to aspects of the present disclosure.
[0015] FIG. 6 illustrates an exemplary implementation of the access control module of FIG. 4 according to aspects of the present disclosure.
[0016] FIG. 7A illustrates an exemplary coordinate system for tracking motion gestures of a wearable mobile device according to aspects of the present disclosure.
[0017] FIG. 7B illustrates exemplary user-directed motion gestures according to aspects of the present disclosure.
[0018] FIG. 8A illustrates an exemplary method for providing power control according to aspects of the present disclosure.
[0019] FIG. 8B illustrates an exemplary implementation of providing interactive power control of a device using a motion gesture sequence according to aspects of the present disclosure.
[0020] FIG.
8C illustrates another exemplary implementation of providing interactive power control of a device using a motion gesture sequence according to aspects of the present disclosure.
[0021] FIG. 8D illustrates yet another exemplary implementation of providing interactive power control of a device using a motion gesture sequence according to aspects of the present disclosure.
[0022] FIG. 9A illustrates an exemplary method for providing access control according to aspects of the present disclosure.
[0023] FIG. 9B illustrates an exemplary implementation of determining a valid motion gesture sequence for access control according to aspects of the present disclosure.
[0024] FIG. 9C illustrates another exemplary implementation of determining a valid motion gesture sequence for access control according to aspects of the present disclosure.
DESCRIPTION OF EMBODIMENTS
[0025] Embodiments of providing power control and providing access control of a wearable mobile device are disclosed. The following descriptions are presented to enable a person skilled in the art to make and use the disclosure. Descriptions of specific embodiments and applications are provided only as examples. Various modifications and combinations of the examples described herein may be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other examples and applications without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples described and shown, but is to be accorded the scope consistent with the principles and features disclosed herein. The word "exemplary" or "example" is used herein to mean "serving as an example, instance, or illustration."
Any aspect or embodiment described herein as "exemplary" or as an "example" is not necessarily to be construed as preferred or advantageous over other aspects or embodiments.
[0026] According to aspects of the present disclosure, direct user inputs can be used to predictably manipulate power control behavior. In some embodiments, a wearable mobile device may be configured to accept user commands (via direct, voice/aural, and/or visual inputs), and be configured to sense a multitude of use, use environment, and use contexts, including but not limited to, biometric signs/signals, nearness/presence, pressure, stability/vibration, location/position, orientation, heading, kinetics, etc.
[0027] According to aspects of the present disclosure, a wearable mobile device can be configured to receive user-directed inputs. Using a motion gesture sequence, a user may directly manipulate operation of the underlying platform, including performing power control. For example, one motion gesture sequence may be used to alter a sleep timer for power control; another motion gesture sequence may be used to defer the wearable mobile device from entering into a power saving mode, etc.
[0028] Similarly, a motion gesture sequence can be used for performing access control, similar to how characters/numbers and graphical gestures (both can be detected by touch) may be used for access control in conventional mobile devices. A motion gesture sequence may also be used to emulate a security key to control the use of the wearable device, according to a predefined/predetermined authorization motion gesture sequence.
[0029] FIG. 1 illustrates an exemplary block diagram of a wearable mobile device according to aspects of the present disclosure. In the example shown in FIG. 1, a wearable mobile device 100 may include wireless connection module 102, controller 104, sensor subsystem 106, memory 110, and applications module 108.
The wearable mobile device 100 may optionally include multimedia subsystem 112, speaker(s) and microphone(s) 114, and display 116. In some implementations, the wireless connection module 102 may be configured to support WiFi and/or Bluetooth in a wireless local area network (LAN) or wireless personal area network (PAN). The controller 104 may include one or more processors, software, hardware, and firmware to implement various functions described herein. For example, the controller 104 may be configured to implement functions of the wearable mobile device 100 as described in FIG. 3 to FIG. 9A-9C. The sensor subsystem 106 may be configured to sense and process various sensor input data and produce sensor output data to the controller 104. The applications module may include a battery charging circuit and power manager, oscillators, phase lock loops, clock generators, and timers.
[0030] In certain embodiments, wearable mobile device 100 may comprise a wireless transceiver which is capable of transmitting and receiving wireless signals via a wireless antenna over a wireless communication network. Some embodiments may include multiple wireless transceivers and wireless antennas to enable transmitting and/or receiving signals according to corresponding multiple wireless communication standards such as, for example, versions of IEEE Std. 802.11, CDMA, WCDMA, LTE, UMTS, GSM, AMPS, Zigbee and Bluetooth, etc.
[0031] Wireless connection module 102 may comprise an SPS receiver capable of receiving and acquiring SPS signals via an SPS antenna. The SPS receiver may also process, in whole or in part, acquired SPS signals for estimating a location of wearable mobile device 100. In some embodiments, controller 104 and memory 110 may also be utilized to process acquired SPS signals, in whole or in part, and/or calculate an estimated location of wearable mobile device 100, in conjunction with the SPS receiver.
SPS or other signals for use in performing positioning operations may be stored in memory 110 or registers (not shown).
[0032] In various embodiments, controller 104 may be configured to execute one or more machine-readable instructions stored in memory 110 such as on a computer-readable storage medium, such as RAM, ROM, FLASH, or a disc drive, just to name a few examples. The one or more instructions may be executable by processor(s), specialized processors, or DSP(s). Memory 110 may comprise a non-transitory processor-readable memory and/or a computer-readable memory that stores software code (programming code, instructions, etc.) that is executable by processor(s) and/or DSP(s) to perform functions described herein. Controller 104 may execute instructions to perform one or more aspects of processes/methods discussed below in connection with FIG. 2 to FIG. 9A-9C.
[0033] In some implementations, a user interface may comprise any one of several devices such as, for example, multimedia subsystem 112, speakers and microphones 114, and display 116, etc. In a particular implementation, the user interface may enable a user to interact with one or more applications hosted on wearable mobile device 100. For example, devices may store analog or digital signals on memory 110 to be further processed by controller 104 in response to action from a user. Similarly, applications hosted on wearable mobile device 100 may store analog or digital signals on memory 110 to present an output signal to a user.
[0034] Wearable mobile device 100 may also comprise a camera for capturing still or moving imagery. The camera may comprise, for example, an imaging sensor (e.g., charge coupled device or CMOS imager), lens, analog to digital circuitry, frame buffers, etc. In one implementation, additional processing, conditioning, encoding or compression of signals representing captured images may be performed by controller 104.
Alternatively, a video processor may perform conditioning, encoding, compression or manipulation of signals representing captured images. Additionally, the video processor may decode/decompress stored image data for presentation on display 116 of wearable mobile device 100.
[0035] FIG. 2 illustrates an exemplary implementation of the sensor subsystem of the wearable mobile device of FIG. 1 according to aspects of the present disclosure. Sensor subsystem 106 may generate analog or digital signals that may be stored in memory 110 and processed by controller 104 in support of one or more applications such as, for example, applications related to user-directed motion gesture control sequences for power control and/or access control.
[0036] As shown in FIG. 2, the sensor subsystem 106 may include one or more sensor input devices 202, sensor processing module 204, one or more sensor output devices 206, and one or more optional active sensor output devices 208. The one or more sensor input devices 202 may include, but are not limited to, one or more of keys and buttons, temperature and moisture sensors, microphones, ultrasound microphone arrays, photo detectors, image sensors, touch sensors, pressure sensors, chemical sensors, gyroscope, accelerometer, magnetometer, GPS, and compass. The sensor processing module 204 may be configured to perform one or more of the following functions, including but not limited to: input sensor selection and control, synchronization and timing control, signal processing, sensor platform performance estimation, sensor optimization, sensor fusion, and output sensor/device selection and control. The one or more sensor output devices 206 may produce one or more voice, visual, biometric, nearness, presence, pressure, stability, vibration, location, orientation, heading, kinetics, and chemical signals.
The one or more optional active sensor output devices 208 may include one or more light emitting diodes, ultrasound speakers, and radio frequency signal generators. The sensor subsystem 106 may be configured to implement functions of motion gesture detection and analysis as described in FIG. 3 to FIG. 9A-9C.
[0037] The sensor processing module 204 can be configured to process sensor input data from the one or more sensor input devices 202, and produce output commands or signals to the one or more sensor output devices 206, and/or to the one or more optional active sensor output devices 208. According to aspects of the present disclosure, direct user inputs can be used to predictably manipulate power control behavior. In some embodiments, a wearable device may be configured to accept user commands (via direct, voice/aural, and/or visual inputs), and be configured to sense a multitude of use, use environment, and use contexts.
[0038] FIG. 3 illustrates an exemplary implementation of a controller configured to provide power control of the wearable mobile device of FIG. 1 according to aspects of the present disclosure. Using one or more motion gesture sequences, a user may be enabled to control operations of the underlying platform, such as performing power control of the wearable mobile device 100. In the particular embodiment shown in FIG. 3, controller 104 may include power control module 300, user interface module 302, and motion gesture analysis module 304. Controller 104 may further include one or more device managers for controlling sensor input devices 202 and sensor output devices 206 and/or active sensor output devices 208 as described in FIG. 2. Power control module 300 is further described below in association with the descriptions of FIG. 5.
[0039] FIG. 4 illustrates another exemplary implementation of a controller configured to provide access control of the wearable mobile device of FIG. 1 according to aspects of the present disclosure.
Similar to the example shown in FIG. 3, using one or more motion gesture sequences, a user may be enabled to control operations of the underlying platform, such as performing access control of the wearable mobile device 100. One or more motion gesture sequences may be used for access control, similar to sequences of characters and numbers as detected by touch sensors in mobile phones and tablet computers. According to aspects of the present disclosure, the one or more motion gesture sequences can be used to emulate an equivalent security key that controls the use of the wearable mobile device based on a predetermined authorization policy. In the particular embodiment shown in FIG. 4, controller 104 may include access control module 400, user interface module 302, and motion gesture analysis module 304. Access control module 400 is further described below in association with the descriptions of FIG. 6.
[0040] FIG. 5 illustrates an exemplary implementation of the power control module of FIG. 3 according to aspects of the present disclosure. In the exemplary method shown in FIG. 5, in block 502, the method starts in a state where the wearable mobile device 100 is active. In block 504, the method determines whether a power control timer notification has been received. If a power control timer notification has been received (504_Yes), the method moves to block 510. Alternatively, if a power control timer notification has not been received (504_No), the method moves to block 506. In block 506, the method determines whether the wearable mobile device 100 is ready to enter a power saving state (also referred to as power saving mode or standby mode). If the wearable mobile device 100 is ready to enter a power saving state (506_Yes), the method moves to block 508. Alternatively, if the wearable mobile device 100 is not ready to enter a power saving state (506_No), the method returns to block 502.
In block 508, the method sets a power control timer and may optionally send a notification to the user. Then, the method returns to block 502.

[0041] In block 510, the method determines whether a valid power control motion gesture sequence has been received. If a valid power control motion gesture sequence has been received (510_Yes), the method moves to block 512. Otherwise, if a valid power control motion gesture sequence has not been received (510_No), the method moves to block 514. In block 514, the method determines whether the power control timer has expired. If the power control timer has expired (514_Yes), the method moves to block 516. Alternatively, if the power control timer has not expired (514_No), the method returns to block 502.

[0042] In block 512, the method adjusts the power control timer based on the power control motion gesture sequence received. In one embodiment, the method may increase the power control timer by a first predetermined increment based on a first motion gesture sequence received. For example, in the situation where the user may want to delay the decision and may prefer to be notified again, the first motion gesture sequence may be used. In another embodiment, the method may increase the power control timer by a second predetermined increment based on a second motion gesture sequence received. For example, in the situation where the user may not want to be notified again anytime soon, the second motion gesture sequence may be used. In yet another embodiment, the method may decrease the power control timer by a first predetermined decrement based on a third motion gesture sequence received. For example, in the situation where the user may need a short period of time to finish up a current task, the third motion gesture sequence may be used. In yet another embodiment, the method may put the wearable mobile device 100 into a standby mode immediately based on a fourth motion gesture sequence received.
For example, in the situation where the user would no longer use the wearable mobile device 100 anytime soon, the fourth motion gesture sequence may be used.

[0043] In block 516, the method sets a wake up time prior to entering into a power saving state. In one particular embodiment, the method may use a default wake up time, which may be predefined by the user. In another embodiment, the method may enable the user to set a new wake up time. After block 516, the method moves to block 518, where the wearable mobile device 100 stays in a power saving state.

[0044] FIG. 6 illustrates an exemplary implementation of the access control module of FIG. 4 according to aspects of the present disclosure. In the exemplary method shown in FIG. 6, in block 602, the method starts in a state where the wearable mobile device 100 is active. In block 604, the method determines whether the access to the wearable mobile device 100 has been locked. If the access to the wearable mobile device 100 has been locked (604_Yes), the method moves to block 606. Alternatively, if the access to the wearable mobile device 100 has not been locked (604_No), the method moves back to block 602.

[0045] In block 606, the method determines whether a gesture detection flag has been set. If the gesture detection flag has been set (606_Yes), the method moves to block 608. Alternatively, if the gesture detection flag has not been set (606_No), the method moves to block 612.

[0046] In block 608, the method determines whether a valid access control motion gesture sequence has been detected. According to aspects of the present disclosure, in one embodiment, the method may determine a valid access control motion gesture sequence by comparing the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory, and identifying the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences.
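The comparison-based determination of block 608 just described can be sketched as a simple set-membership test. This is a minimal illustration only, assuming (as the disclosure does not specify) that a sensed motion gesture sequence is reduced to a tuple of symbolic gesture labels; the labels themselves are hypothetical.

```python
# Minimal sketch of block 608 (comparison-based embodiment): match the
# sensed sequence against the stored set of reference access control
# motion gesture sequences. Gesture labels are illustrative assumptions.

REFERENCE_SEQUENCES = {
    ("circle_cw", "wrist_rotate"),
    ("pivot_up", "pivot_down", "wave_up"),
}

def is_valid_access_sequence(sensed, references=REFERENCE_SEQUENCES):
    """Return True when the sensed sequence matches a stored reference."""
    return tuple(sensed) in references

assert is_valid_access_sequence(["circle_cw", "wrist_rotate"])       # match
assert not is_valid_access_sequence(["circle_cw", "wave_up"])        # no match
```

A production implementation would of course match noisy classifier output rather than exact tuples, but the control flow is the same.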
In another embodiment, the method may determine a valid access control motion gesture sequence by generating a prompt to a user requesting a particular access control motion gesture sequence, receiving a response motion gesture sequence from the user, and determining whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence. If a valid access control motion gesture sequence has been detected (608_Yes), the method moves to block 610. Alternatively, if a valid access control motion gesture sequence has not been detected (608_No), the method moves to block 616.

[0047] In block 610, the method may unlock access to the wearable mobile device 100. In some implementations, the method may optionally send a user notification indicating the wearable mobile device 100 has been unlocked.

[0048] In block 612, the method determines whether a set access control gesture has been detected. If the set access control gesture has been detected (612_Yes), the method moves to block 614. Alternatively, if the set access control gesture has not been detected (612_No), the method moves back to block 602. For enhanced security, in some implementations, the method may be configured to set the access control gesture detection flag from a predetermined starting position. One example of the set access control motion gesture sequence and the predetermined starting position may be completing a circle in a clockwise manner and from the bottom of the circle, respectively. Another example of the set access control motion gesture sequence and the predetermined starting position may be completing a triangle in a counterclockwise manner and from the top of the triangle, respectively.

[0049] In block 614, the method sets the access control gesture detection flag. Then, the method returns to block 602.
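The prompt-based (challenge-response) variant of block 608 described above could be sketched as follows. The haptic prompt is abstracted to a placeholder, and all function names and gesture labels are illustrative assumptions rather than the disclosed implementation.

```python
# Hedged sketch of the challenge-response variant of block 608 (see
# also blocks 912-916 of FIG. 9C): pick one reference sequence, prompt
# the user for it, and verify the response. Names are assumptions.

import secrets

REFERENCE_SEQUENCES = [
    ("circle_cw", "wrist_rotate"),
    ("triangle_ccw", "wave_down"),
]

def issue_challenge(references=REFERENCE_SEQUENCES):
    """Select a particular reference sequence to request from the user."""
    # secrets.choice gives an unpredictable selection among references.
    return secrets.choice(references)

def verify_response(challenge, response):
    """True only when the response reproduces the requested sequence."""
    return tuple(response) == tuple(challenge)

challenge = issue_challenge()
# ... a haptic message would confidentially steer the user toward
# performing `challenge` here ...
assert verify_response(challenge, list(challenge))
assert not verify_response(challenge, ["unrelated_gesture"])
```

Because the requested sequence changes per attempt, an observer who sees one response learns little about the others, which is the "nearly zero-knowledge" property noted in paragraph [0051].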
In block 616, the method clears the invalid access control motion gesture sequence and returns to block 602.

[0050] According to aspects of the present disclosure, since motion gestures can be reliably detected, depending on security requirements (e.g., FIPS), access failures may also be used to alter the access control profile, including access from a host device. Unlike a conventional password or security pattern, which may commonly require two-handed operation, the disclosed motion gesture sequences for access control can be performed with a single hand without visual assistance. In addition, unlike the conventional password or security pattern, which may require touch and/or graphics processing, the disclosed methods for access control can be efficiently processed and/or decoded by a sensor subsystem with substantially lower overhead on the controller of the mobile device or on a host device.

[0051] According to aspects of the present disclosure, a combination of a motion gesture sequence along with haptic interactions with a user can offer highly secure interactive challenge-response access control to implement nearly zero-knowledge access control. For example, haptic messages can be used to confidentially steer the user to enter a requested motion gesture sequence for access control and authentication.

[0052] FIG. 7A illustrates an exemplary coordinate system for tracking motion gestures of a wearable mobile device according to aspects of the present disclosure. In the example shown in FIG. 7A, the wearable mobile device 100 may be worn on a user's hand 700.
The position and orientation of the wearable mobile device 100 may be identified or referenced in a 3-dimensional Cartesian coordinate system with origin 702, X axis 704, Y axis 706, and Z axis 708.

[0053] In other implementations, the position and orientation of the wearable mobile device 100 may be identified or referenced in a spherical coordinate system (not shown), where the position and orientation of the wearable mobile device 100 may be identified or referenced by the radial distance of the wearable mobile device 100 from a fixed origin, its polar angle measured from a fixed zenith direction, and the azimuth angle of its orthogonal projection on a reference plane that passes through the origin and is orthogonal to the zenith, measured from a fixed reference direction on that plane.

[0054] As discussed in association with FIG. 2, various types of sensors, including but not limited to accelerometers, gyroscopes, and magnetometers, may be used to detect motion gesture sequences. The accelerometer may perform better in detecting linear movements, the gyroscope may perform better in detecting rotations, and the magnetometer may perform better in detecting orientations of the wearable mobile device 100. A combination of two or more such sensors may be used to detect motion gesture sequences of the wearable mobile device 100 according to aspects of the present disclosure.

[0055] According to embodiments of the present disclosure, an accelerometer can be a device that measures the acceleration of the wearable mobile device 100. It can be configured to measure the acceleration associated with the weight experienced by a test mass that resides in the frame of reference of the accelerometer. For example, an accelerometer measures a value even when it is stationary, because masses have weight even though there is no change of velocity. The accelerometer measures weight per unit of mass, a quantity also known as gravitational force or g-force.
In other words, by measuring weight, an accelerometer measures the acceleration of the free-fall reference frame (inertial reference frame) relative to itself. In one approach, a multi-axis accelerometer can be used to detect magnitude and direction of the proper acceleration (or g-force), as a vector quantity. In addition, the multi-axis accelerometer can be used to sense orientation as the direction of weight changes, coordinate acceleration as it produces g-force or a change in g-force, vibration, and shock. In another approach, a micro-machined accelerometer can be used to detect position, movement, and orientation of the wearable mobile device 100.

[0056] According to embodiments of the present disclosure, a gyroscope can be used to measure rotation and orientation of the wearable mobile device 100, based on the principle of conservation of angular momentum. The accelerometer or magnetometer can be used to establish an initial reference for the gyroscope. After the initial reference is established, the gyroscope can be more accurate than the accelerometer or magnetometer in detecting rotation of the wearable mobile device 100 because it may be less impacted by vibrations, or by the electromagnetic fields generated by electrical appliances around the wearable mobile device 100. A mechanical gyroscope can be a spinning wheel or disk whose axle can be free to take any orientation. This orientation may change much less in response to a given external torque than it would without the large angular momentum associated with the gyroscope's high rate of spin. Since external torque may be minimized by mounting the device in gimbals, its orientation may remain nearly fixed, regardless of any motion of the platform on which it may be mounted.
In other approaches, gyroscopes based on other operating principles may also be used, such as electronic, microchip-packaged micro-electromechanical systems (MEMS) gyroscope devices, solid state ring lasers, fiber optic gyroscopes, and quantum gyroscopes.

[0057] According to embodiments of the present disclosure, a magnetometer can be used to measure orientations by detecting the strength or direction of magnetic fields around the wearable mobile device 100. Various types of magnetometers may be used. For example, a scalar magnetometer measures the total strength of the magnetic field it is subjected to, and a vector magnetometer measures the component of the magnetic field in a particular direction, relative to the spatial orientation of the wearable mobile device 100. In another approach, a solid-state Hall-effect magnetometer can be used. The Hall-effect magnetometer produces a voltage proportional to the applied magnetic field, and it can be configured to sense polarity.

[0058] According to aspects of the present disclosure, a user motion gesture sequence may be detected by a combination of one or more of an accelerometer, a magnetometer, and a gyroscope.

[0059] FIG. 7B illustrates exemplary user-directed motion gestures according to aspects of the present disclosure. As shown in FIG. 7B, examples of user-directed motion gestures may include, but are not limited to: 1) waving hand 700 up and down with different ranges of motion such as 710 and 712; 2) pivoting hand 700 at an elbow with different ranges of motion such as 714 and 716; 3) rotating the wrist of hand 700 with different ranges of motion such as 718 and 720; and 4) other user-defined motions. According to aspects of the present disclosure, a motion gesture sequence may include any combination of one or more of the above user-directed motion gestures.
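One hypothetical way to represent the FIG. 7B gesture families in software is to encode each gesture as a motion type plus a range-of-motion qualifier, and a sequence as an ordered tuple of such gestures. The encoding below is an assumption for illustration; the disclosure does not prescribe a representation.

```python
# Illustrative (assumed) encoding of the FIG. 7B gesture families:
# each gesture pairs a motion type with a range-of-motion qualifier,
# and a motion gesture sequence is an ordered tuple of gestures.

from typing import NamedTuple, Tuple

class Gesture(NamedTuple):
    motion: str           # "wave", "pivot", "rotate", or user-defined
    range_of_motion: str  # e.g. "small" (710/714/718) or "large" (712/716/720)

def make_sequence(*gestures: Gesture) -> Tuple[Gesture, ...]:
    """Compose gestures into an ordered motion gesture sequence."""
    return tuple(gestures)

seq = make_sequence(
    Gesture("wave", "small"),    # cf. range 710
    Gesture("pivot", "large"),   # cf. range 716
    Gesture("rotate", "small"),  # cf. range 718
)
assert len(seq) == 3 and seq[0].motion == "wave"
```

Because tuples are hashable, sequences encoded this way can be stored directly in the reference sets used for power control and access control matching.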
According to aspects of the present disclosure, the exemplary motion gesture sequences may be performed without visual aid to the user, and may be performed with one hand.

[0060] FIG. 8A illustrates an exemplary method for providing power control according to aspects of the present disclosure. In the example shown in FIG. 8A, in block 802, the method stores a set of reference power control motion gesture sequences in a memory. In block 804, the method senses a motion gesture sequence using one or more sensors, such as using one or more of the sensor input devices 202 as described in FIG. 2. In block 806, the method provides interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences. According to aspects of the present disclosure, the wearable mobile device 100 may be a wrist-worn device. The motion gesture sequence may include one or more motion gestures generated by a user using one hand and without visual assistance.

[0061] FIG. 8B illustrates an exemplary implementation of providing interactive power control of a device using a motion gesture sequence according to aspects of the present disclosure. As shown in FIG. 8B, in block 808, the method compares the motion gesture sequence with the set of reference power control motion gesture sequences. In block 810, the method adjusts a power control timer of the wearable mobile device 100 in response to a match being found in the set of reference power control motion gesture sequences. According to aspects of the present disclosure, methods performed in block 810 may further comprise methods performed in block 812 and block 814. In block 812, the method delays the wearable mobile device 100 from entering a power saving mode based at least in part on the motion gesture sequence. In block 814, the method sets the wearable mobile device 100 to a power saving mode based at least in part on the motion gesture sequence.

[0062] FIG.
8C illustrates another exemplary implementation of providing interactive power control of a device using a motion gesture sequence according to aspects of the present disclosure. As shown in FIG. 8C, in block 820, the method determines whether to set a power control timer. In block 822, the method determines whether the wearable mobile device 100 may be ready to enter a power saving mode. In block 824, the method sets the power control timer in response to a determination that the power control timer is to be set and the wearable mobile device 100 may be ready to enter the power saving mode.

[0063] FIG. 8D illustrates yet another exemplary implementation of providing interactive power control of a device using a motion gesture sequence according to aspects of the present disclosure. As shown in FIG. 8D, in block 826, the method determines whether a power control timer has expired. In block 828, the method sets a wake up time in response to a determination that the power control timer has expired.

[0064] FIG. 9A illustrates an exemplary method for providing access control according to aspects of the present disclosure. In the example shown in FIG. 9A, in block 902, the method stores a set of reference access control motion gesture sequences in a memory. In block 904, the method senses a motion gesture sequence using one or more sensors, such as using one or more of the sensor input devices 202 as described in FIG. 2. In block 906, the method determines a valid access control motion gesture sequence using the motion gesture sequence and the set of reference access control motion gesture sequences. According to aspects of the present disclosure, the wearable mobile device 100 may be a wrist-worn device. The motion gesture sequence may include one or more motion gestures generated by a user using one hand and without visual assistance.

[0065] FIG.
9B illustrates an exemplary implementation of determining a valid motion gesture sequence for access control according to aspects of the present disclosure. As shown in FIG. 9B, in block 908, the method compares the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory. In block 910, the method identifies the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences. Upon identifying the valid access control motion gesture sequence, the method may grant access control to the mobile device and a user may start using the mobile device.

[0066] FIG. 9C illustrates another exemplary implementation of determining a valid motion gesture sequence for access control according to aspects of the present disclosure. In the exemplary implementation shown in FIG. 9C, in block 912, the method generates a prompt to a user requesting a particular access control motion gesture sequence. In block 914, the method receives a response motion gesture sequence from the user. In block 916, the method determines whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence. According to aspects of the present disclosure, the prompt comprises a haptic message received by the user. The particular access control motion gesture sequence may be selected from a plurality of access control motion gesture sequences in the set of reference access control motion gesture sequences.

[0067] According to aspects of the present disclosure, some of the benefits of the disclosed embodiments are described as follows. First, motion gestures can be subtle and un-intrusive (organic) as compared to other sensor modalities. They can enable user interactions with the wearable mobile device with substantially lower power overhead.
Second, the complexity of a motion gesture sequence can be scaled according to the significance of the corresponding operation that may be controlled. For example, a motion gesture sequence for power control can be less stringent than a motion gesture sequence for access control.

[0068] In one embodiment, a device providing access control may include a memory configured to store a set of reference access control motion gesture sequences, one or more sensors configured to sense a motion gesture sequence, and a controller configured to determine a valid access control motion gesture sequence using the motion gesture sequence and the set of reference access control motion gesture sequences. In some implementations, the device can be a wrist-worn device. The motion gesture sequence comprises one or more motion gestures generated by a user using one hand, and without visual assistance. The controller configured to determine the valid access control motion gesture sequence may comprise logic configured to compare the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory, and logic configured to identify the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences. The controller configured to determine the valid access control motion gesture sequence may further comprise logic configured to generate a prompt to a user requesting a particular access control motion gesture sequence, logic configured to receive a response motion gesture sequence from the user, and logic configured to determine whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence. In some implementations, the prompt comprises a haptic message received by the user.
The particular access control motion gesture sequence may be selected from a plurality of access control motion gesture sequences in the set of reference access control motion gesture sequences.

[0069] In another embodiment, a method of providing access control of a device may comprise storing a set of reference access control motion gesture sequences in a memory, sensing a motion gesture sequence using one or more sensors, and determining a valid access control motion gesture sequence using the motion gesture sequence and the set of reference access control motion gesture sequences. The method of determining the valid access control motion gesture sequence may comprise comparing the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory, and identifying the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences. The method of determining the valid access control motion gesture sequence may further comprise generating a prompt to a user requesting a particular access control motion gesture sequence, receiving a response motion gesture sequence from the user, and determining whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence.

[0070] In yet another embodiment, a computer program product may comprise a non-transitory medium storing instructions for execution by one or more computer systems. The instructions may comprise instructions for storing a set of reference access control motion gesture sequences in a memory of a device, instructions for sensing a motion gesture sequence using one or more sensors, and instructions for determining a valid access control motion gesture sequence using the motion gesture sequence and the set of reference access control motion gesture sequences.
The instructions for determining the valid access control motion gesture sequence may comprise instructions for comparing the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory, and instructions for identifying the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences. The instructions for determining the valid access control motion gesture sequence may further comprise instructions for generating a prompt to a user requesting a particular access control motion gesture sequence, instructions for receiving a response motion gesture sequence from the user, and instructions for determining whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence.

[0071] In yet another embodiment, a device may comprise means for storing a set of reference access control motion gesture sequences, means for sensing a motion gesture sequence, and means for determining a valid access control motion gesture sequence using the motion gesture sequence and the set of reference access control motion gesture sequences. The means for determining the valid access control motion gesture sequence may comprise means for comparing the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory, and means for identifying the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences.
The means for determining the valid access control motion gesture sequence may further comprise means for generating a prompt to a user requesting a particular access control motion gesture sequence, means for receiving a response motion gesture sequence from the user, and means for determining whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence.

[0072] Note that the subsequent paragraphs, FIG. 1, FIG. 2, FIG. 3, FIG. 5, FIG. 8A-8D and their corresponding descriptions provide means for storing a set of reference power control motion gesture sequences; means for sensing a motion gesture sequence; means for providing interactive power control of the device using the motion gesture sequence and the set of reference power control motion gesture sequences; means for comparing the motion gesture sequence with the set of reference power control motion gesture sequences; means for adjusting a power control timer of the device in response to a match being found in the set of reference power control motion gesture sequences; means for delaying the device from entering a power saving mode based at least in part on the motion gesture sequence; means for setting the device to a power saving mode based at least in part on the motion gesture sequence; means for determining whether to set a power control timer; means for determining whether the device is ready to enter a power saving mode; means for setting the power control timer in response to a determination that the power control timer is to be set and that the device is ready to enter the power saving mode; means for determining whether a power control timer has expired; and means for setting a wake up time in response to a determination that the power control timer has expired.

[0073] Note that the subsequent paragraphs, FIG. 1, FIG. 2, FIG. 4, FIG.
6, FIG. 9A-9C and their corresponding descriptions provide means for storing a set of reference access control motion gesture sequences; means for sensing a motion gesture sequence; means for determining a valid access control motion gesture sequence using the motion gesture sequence and the set of reference access control motion gesture sequences; means for comparing the motion gesture sequence with the set of reference access control motion gesture sequences stored in the memory; means for identifying the valid access control motion gesture sequence in response to a match being found in the set of reference access control motion gesture sequences; means for generating a prompt to a user requesting a particular access control motion gesture sequence; means for receiving a response motion gesture sequence from the user; and means for determining whether there is a match between the response motion gesture sequence and the particular access control motion gesture sequence.

[0074] The methodologies described herein may be implemented by various means depending upon applications according to particular examples. For example, such methodologies may be implemented in hardware, firmware, software, or combinations thereof. In a hardware implementation, for example, a processing unit may be implemented within one or more application specific integrated circuits ("ASICs"), digital signal processors ("DSPs"), digital signal processing devices ("DSPDs"), programmable logic devices ("PLDs"), field programmable gate arrays ("FPGAs"), processors, controllers, micro-controllers, microprocessors, electronic devices, other device units designed to perform the functions described herein, or combinations thereof.

[0075] Some portions of the detailed description included herein are presented in terms of algorithms or symbolic representations of operations on binary digital signals stored within a memory of a specific apparatus or special purpose computing device or platform.
In the context of this particular specification, the term specific apparatus or the like includes a general purpose computer once it is programmed to perform particular operations pursuant to instructions from program software. Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations or similar signal processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals, or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels. Unless specifically stated otherwise, as apparent from the discussion herein, it is appreciated that throughout this specification discussions utilizing terms such as "processing," "computing," "calculating," "determining" or the like refer to actions or processes of a specific apparatus, such as a special purpose computer, special purpose computing apparatus or a similar special purpose electronic computing device.
In the context of this specification, therefore, a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device.

[0076] Wireless communication techniques described herein may be in connection with various wireless communications networks such as a wireless wide area network ("WWAN"), a wireless local area network ("WLAN"), a wireless personal area network ("WPAN"), and so on. The terms "network" and "system" may be used interchangeably herein. A WWAN may be a Code Division Multiple Access ("CDMA") network, a Time Division Multiple Access ("TDMA") network, a Frequency Division Multiple Access ("FDMA") network, an Orthogonal Frequency Division Multiple Access ("OFDMA") network, a Single-Carrier Frequency Division Multiple Access ("SC-FDMA") network, or any combination of the above networks, and so on. A CDMA network may implement one or more radio access technologies ("RATs") such as cdma2000 and Wideband-CDMA ("W-CDMA"), to name just a few radio technologies. Here, cdma2000 may include technologies implemented according to IS-95, IS-2000, and IS-856 standards. A TDMA network may implement Global System for Mobile Communications ("GSM"), Digital Advanced Mobile Phone System ("D-AMPS"), or some other RAT. GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" ("3GPP"). Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" ("3GPP2"). 3GPP and 3GPP2 documents are publicly available. 4G Long Term Evolution ("LTE") communications networks may also be implemented in accordance with claimed subject matter, in an aspect.
A WLAN may comprise an IEEE 802.11x network, and a WPAN may comprise a Bluetooth network or an IEEE 802.15x network, for example. Wireless communication implementations described herein may also be used in connection with any combination of WWAN, WLAN or WPAN.

[0077] In another aspect, as previously mentioned, a wireless transmitter or access point may comprise a femtocell, utilized to extend cellular telephone service into a business or home. In such an implementation, one or more mobile devices may communicate with a femtocell via a code division multiple access ("CDMA") cellular communication protocol, for example, and the femtocell may provide the mobile device access to a larger cellular telecommunication network by way of another broadband network such as the Internet.

[0078] Techniques described herein may be used with an SPS that includes any one of several GNSS and/or combinations of GNSS. Furthermore, such techniques may be used with positioning systems that utilize terrestrial transmitters acting as "pseudolites", or a combination of SVs and such terrestrial transmitters. Terrestrial transmitters may, for example, include ground-based transmitters that broadcast a PN code or other ranging code (e.g., similar to a GPS or CDMA cellular signal). Such a transmitter may be assigned a unique PN code so as to permit identification by a remote receiver. Terrestrial transmitters may be useful, for example, to augment an SPS in situations where SPS signals from an orbiting SV might be unavailable, such as in tunnels, mines, buildings, urban canyons or other enclosed areas. Another implementation of pseudolites is known as radio-beacons. The term "SV", as used herein, is intended to include terrestrial transmitters acting as pseudolites, equivalents of pseudolites, and possibly others.
The terms "SPS signals" and/or "SV signals", as used herein, are intended to include SPS-like signals from terrestrial transmitters, including terrestrial transmitters acting as pseudolites or equivalents of pseudolites.

[0079] The terms "and" and "or" as used herein may include a variety of meanings that will depend at least in part upon the context in which they are used. Typically, "or" if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. Reference throughout this specification to "one example" or "an example" means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example of claimed subject matter. Thus, the appearances of the phrase "in one example" or "an example" in various places throughout this specification are not necessarily all referring to the same example. Furthermore, the particular features, structures, or characteristics may be combined in one or more examples. Examples described herein may include machines, devices, engines, or apparatuses that operate using digital signals. Such signals may comprise electronic signals, optical signals, electromagnetic signals, or any form of energy that provides information between locations.

[0080] While there has been illustrated and described what are presently considered to be example features, it will be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from claimed subject matter. Additionally, many modifications may be made to adapt a particular situation to the teachings of claimed subject matter without departing from the central concept described herein.
Therefore, it is intended that claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter may also include all aspects falling within the scope of the appended claims, and equivalents thereof.
Embodiments include computing devices, systems, and methods for identifying enhanced synchronization operation outcomes. A computing device may receive, from a first computing element of the computing device, a first resource access request for a first resource of the computing device including a first requester identifier. The computing device may also receive, from a second computing element of the computing device, a second resource access request for the first resource including a second requester identifier. The computing device may grant the first computing element access to the first resource based on the first resource access request, and return a response to the second computing element including the first requester identifier as a winner computing element identifier.
CLAIMS

What is claimed is:

1. A method of identifying enhanced synchronization operation outcomes in a computing device, comprising:
receiving a plurality of resource access requests for a first resource of the computing device from a plurality of computing elements of the computing device including a first resource access request having a first requester identifier from a first computing element of the plurality of computing elements and a second resource access request having a second requester identifier from a second computing element of the plurality of computing elements;
granting the first computing element access to the first resource based on the first resource access request; and
returning a response to the second computing element including the first requester identifier as a winner computing element identifier.

2. The method of claim 1, further comprising:
comparing the second requester identifier to the winner computing element identifier; and
determining whether the second computing element is a winner computing element by the second requester identifier matching the winner computing element identifier.

3. The method of claim 2, further comprising:
identifying the winner computing element from the winner computing element identifier;
determining whether a criterion is met for adjusting a second resource of the computing device in response to determining that the second computing element is not the winner computing element; and
adjusting the second resource by the second computing element in response to determining that the criterion is met for adjusting the second resource.

4.
The method of claim 3, wherein determining whether a criterion is met for adjusting a second resource of the computing device comprises determining, by the second computing element, a likelihood of sharing the second resource by the first computing element and the second computing element based on one or more of a shared operating system, shared dynamic voltage and frequency scaling, and a shared topology.

5. The method of claim 1, further comprising:
receiving a third resource access request for the first resource including a third requester identifier from a third computing element of the plurality of computing elements; and
returning the response to the third computing element including the first requester identifier as the winner computing element identifier.

6. The method of claim 1, further comprising:
determining whether the second computing element has a task to execute; and
sending a signal to steal a task from the first computing element in response to determining that the second computing element does not have a task to execute, wherein the signal includes the second requester identifier.

7. The method of claim 6, further comprising:
receiving a response to the signal to steal a task including a task winner computing element identifier;
comparing the second requester identifier to the task winner computing element identifier;
determining whether the second computing element is a task winner computing element by the second requester identifier matching the task winner computing element identifier; and
adjusting a task stealing list of the second computing element in response to determining that the second computing element is not the task winner computing element.

8. The method of claim 7, wherein adjusting the task stealing list of the second computing element comprises re-arranging items in the stealing list based at least in part on whether a computing element is executing a recursive task or a non-recursive task.

9.
A computing device configured for identifying enhanced synchronization operation outcomes, comprising:
a plurality of computing elements, including a first computing element and a second computing element;
a first resource; and
a resource manager communicatively connected to the plurality of computing elements and the resource, wherein the resource manager is configured with executable instructions to perform operations comprising:
receiving a plurality of resource access requests for the first resource including a first resource access request having a first requester identifier from the first computing element and a second resource access request having a second requester identifier from the second computing element;
granting the first computing element access to the first resource based on the first resource access request; and
returning a response to the second computing element including the first requester identifier as a winner computing element identifier.

10. The computing device of claim 9, wherein the second computing element is configured with executable instructions to perform operations comprising:
comparing the second requester identifier to the winner computing element identifier; and
determining whether the second computing element is a winner computing element by the second requester identifier matching the winner computing element identifier.

11.
The computing device of claim 10, further comprising a second resource communicatively connected to the second computing element, wherein the second computing element is configured with executable instructions to perform operations further comprising:
identifying the winner computing element from the winner computing element identifier;
determining whether a criterion is met for adjusting the second resource in response to determining that the second computing element is not the winner computing element; and
adjusting the second resource in response to determining that the criterion is met for adjusting the second resource.

12. The computing device of claim 11, wherein the second computing element is configured with executable instructions to perform operations such that determining whether a criterion is met for adjusting the second resource comprises determining a likelihood of sharing the second resource by the first computing element and the second computing element based on one or more of a shared operating system, shared dynamic voltage and frequency scaling, and a shared topology.

13. The computing device of claim 9, wherein the plurality of computing elements further comprises a third computing element, and wherein the resource manager is configured with executable instructions to perform operations further comprising:
receiving a third resource access request for the first resource including a third requester identifier from the third computing element; and
returning the response to the third computing element including the first requester identifier as the winner computing element identifier.

14.
The computing device of claim 9, wherein the second computing element is configured with executable instructions to perform operations comprising:
determining whether the second computing element has a task to execute; and
sending a signal to steal a task from the first computing element in response to determining that the second computing element does not have a task to execute, wherein the signal includes the second requester identifier.

15. The computing device of claim 14, wherein the second computing element is configured with executable instructions to perform operations further comprising:
receiving a response to the signal to steal a task including a task winner computing element identifier;
comparing the second requester identifier to the task winner computing element identifier;
determining whether the second computing element is a task winner computing element by the second requester identifier matching the task winner computing element identifier; and
adjusting a task stealing list of the second computing element in response to determining that the second computing element is not the task winner computing element.

16. The computing device of claim 15, wherein the second computing element is configured with computing element-executable instructions to perform operations such that adjusting the task stealing list of the second computing element comprises re-arranging items in the stealing list based at least in part on whether a computing element is executing a recursive task or a non-recursive task.

17.
A computing device configured for identifying enhanced synchronization operation outcomes, comprising:
means for receiving a plurality of resource access requests for a first resource of the computing device from a plurality of computing elements of the computing device including a first resource access request having a first requester identifier from a first computing element of the plurality of computing elements and a second resource access request having a second requester identifier from a second computing element of the plurality of computing elements;
means for granting the first computing element access to the first resource based on the first resource access request; and
means for returning a response to the second computing element including the first requester identifier as a winner computing element identifier.

18. The computing device of claim 17, further comprising:
means for comparing the second requester identifier to the winner computing element identifier; and
means for determining whether the second computing element is a winner computing element by the second requester identifier matching the winner computing element identifier.

19. The computing device of claim 18, further comprising:
means for identifying the winner computing element from the winner computing element identifier;
means for determining whether a criterion is met for adjusting a second resource of the computing device in response to determining that the second computing element is not the winner computing element; and
means for adjusting the second resource in response to determining that the criterion is met for adjusting the second resource.

20.
The computing device of claim 19, wherein means for determining whether a criterion is met for adjusting a second resource of the computing device comprises means for determining a likelihood of sharing the second resource by the first computing element and the second computing element based on one or more of a shared operating system, shared dynamic voltage and frequency scaling, and a shared topology.

21. The computing device of claim 17, further comprising:
means for receiving a third resource access request for the first resource including a third requester identifier from a third computing element of the plurality of computing elements; and
means for returning the response to the third computing element including the first requester identifier as the winner computing element identifier.

22. The computing device of claim 17, further comprising:
means for determining whether the second computing element has a task to execute; and
means for sending a signal to steal a task from the first computing element in response to determining that the second computing element does not have a task to execute, wherein the signal includes the second requester identifier.

23. The computing device of claim 22, further comprising:
means for receiving a response to the signal to steal a task including a task winner computing element identifier;
means for comparing the second requester identifier to the task winner computing element identifier;
means for determining whether the second computing element is a task winner computing element by the second requester identifier matching the task winner computing element identifier; and
means for adjusting a task stealing list of the second computing element in response to determining that the second computing element is not the task winner computing element.

24.
The computing device of claim 23, wherein means for adjusting the task stealing list of the second computing element comprises means for re-arranging items in the stealing list based at least in part on whether a computing element is executing a recursive task or a non-recursive task.

25. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations comprising:
receiving a plurality of resource access requests for a first resource of the computing device from a plurality of computing elements of the computing device including a first resource access request having a first requester identifier from a first computing element of the plurality of computing elements and a second resource access request having a second requester identifier from a second computing element of the plurality of computing elements;
granting the first computing element access to the first resource based on the first resource access request; and
returning a response to the second computing element including the first requester identifier as a winner computing element identifier.

26. The non-transitory processor-readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:
comparing the second requester identifier to the winner computing element identifier; and
determining whether the second computing element is a winner computing element by the second requester identifier matching the winner computing element identifier.

27.
The non-transitory processor-readable storage medium of claim 26, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:
identifying the winner computing element from the winner computing element identifier;
determining whether a criterion is met for adjusting a second resource of the computing device in response to determining that the second computing element is not the winner computing element; and
adjusting the second resource in response to determining that the criterion is met for adjusting the second resource.

28. The non-transitory processor-readable storage medium of claim 27, wherein the stored processor-executable instructions are configured to cause the processor to perform operations such that determining whether a criterion is met for adjusting a second resource of the computing device comprises determining a likelihood of sharing the second resource by the first computing element and the second computing element based on one or more of a shared operating system, shared dynamic voltage and frequency scaling, and a shared topology.

29. The non-transitory processor-readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:
receiving a third resource access request for the first resource including a third requester identifier from a third computing element of the plurality of computing elements; and
returning the response to the third computing element including the first requester identifier as the winner computing element identifier.

30.
The non-transitory processor-readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:
determining whether the second computing element has a task to execute; and
sending a signal to steal a task from the first computing element in response to determining that the second computing element does not have a task to execute, wherein the signal includes the second requester identifier.

31. The non-transitory processor-readable storage medium of claim 30, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:
receiving a response to the signal to steal a task including a task winner computing element identifier;
comparing the second requester identifier to the task winner computing element identifier;
determining whether the second computing element is a task winner computing element by the second requester identifier matching the task winner computing element identifier; and
adjusting a task stealing list of the second computing element in response to determining that the second computing element is not the task winner computing element.

32. The non-transitory processor-readable storage medium of claim 31, wherein the stored processor-executable instructions are configured to cause the processor to perform operations such that adjusting the task stealing list of the second computing element comprises re-arranging items in the stealing list based at least in part on whether a computing element is executing a recursive task or a non-recursive task.
TITLE

Identifying Enhanced Synchronization Operation Outcomes to Improve Runtime Operations

BACKGROUND

[0001] Guaranteeing correctness in parallel application execution requires hardware atomic synchronization instructions. Such instructions ensure that if multiple processor cores try to concurrently update the same variable, only one processor core will succeed. Some examples of atomic synchronization instructions supported by current hardware include load-link/store-conditional, compare-and-swap, fetch-and-increment, etc.

[0002] Synchronization instructions only return binary notifications of success (win) or failure (loss) to the processor cores, causing an information gap between the hardware and the software. Therefore, a processor core only receives a notification of whether its update was successful. However, arbiters or other resource synchronization and management components on an interconnection network between the processor cores and resources do not share other information related to a successful/failed update. Therefore, information is lost between the atomicity hardware and the instruction set architecture.

[0003] Exclusive access to a resource by two or more processor cores that are executing concurrently may be obtained by executing atomic synchronization instructions in order to gain access to said resource. The processor core that executes the synchronization instruction successfully will have obtained exclusive access to the resource.

[0004] Exclusive access to a contended resource may also be granted to a processor core issuing a resource access request for the contended resource on a first-come, first-served basis.
A resource manager can determine whether to grant or deny access to a resource access request issued by any of the processor cores, i.e., a requester processor core, based on availability of the contended resource.

SUMMARY

[0005] Various embodiments provide apparatuses and methods for identifying enhanced synchronization operation outcomes in a computing device. Various embodiments may include receiving a plurality of resource access requests for a first resource of the computing device from a plurality of computing elements of the computing device, granting the first computing element access to the first resource based on the first resource access request, and returning a response to the second computing element. The plurality of resource access requests may include a first resource access request from a first computing element of the plurality of computing elements and a second resource access request from a second computing element of the plurality of computing elements. The first resource access request may include a first requester identifier from the first computing element. The second resource access request may include a second requester identifier from the second computing element. The response may include the first requester identifier as a winner computing element identifier.
The computing elements may include physical processors and cores, or logical threads as defined herein.

[0006] Some embodiments may further include comparing the second requester identifier to the winner computing element identifier, and determining whether the second computing element is a winner computing element by determining whether the second requester identifier matches the winner computing element identifier.

[0007] Some embodiments may further include identifying the winner computing element from the winner computing element identifier and determining whether a criterion is met for adjusting a second resource of the computing device in response to determining that the second computing element is not the winner computing element. Such embodiments may further include adjusting the second resource by the second computing element in response to determining that the criterion is met for adjusting the second resource.

[0008] In some embodiments, determining whether a criterion is met for adjusting a second resource of the computing device may include determining, by the second computing element, a likelihood of sharing the second resource by the first computing element and the second computing element based on one or more criteria.
The criteria may include the first computing element and the second computing element having a shared operating system, shared dynamic voltage and frequency scaling, and a shared topology.

[0009] Some embodiments may further include receiving a third resource access request for the first resource, the third resource access request including a third requester identifier from a third computing element of the plurality of computing elements, and returning the response to the third computing element including the first requester identifier as the winner computing element identifier.

[0010] Some embodiments may further include determining whether the second computing element has a task to execute, and sending a signal to steal a task from the first computing element in response to determining that the second computing element does not have a task to execute, in which the signal includes the second requester identifier.

[0011] Some embodiments may further include receiving a response to the signal to steal a task, the response including a task winner computing element identifier. Such embodiments may further include comparing the second requester identifier to the task winner computing element identifier and determining whether the second computing element is a task winner computing element by determining whether the second requester identifier matches the task winner computing element identifier. Such embodiments may further include adjusting a task stealing list of the second computing element in response to determining that the second computing element is not the task winner computing element.

[0012] In some embodiments, adjusting the task stealing list of the second computing element may include re-arranging items in the stealing list based at least in part on whether a computing element is executing a recursive task or a non-recursive task.

[0013] Various embodiments may include a computing device configured for identifying enhanced synchronization operation outcomes.
The computing device may include a plurality of computing elements, including a first computing element and a second computing element, a first resource, and a resource manager communicatively connected to the plurality of computing elements and the resource, and configured with resource manager-executable instructions to perform operations of one or more of the embodiment methods summarized above.

[0014] Various embodiments may include a computing device configured for identifying enhanced synchronization operation outcomes having means for performing functions of one or more of the embodiment methods summarized above.

[0015] Various embodiments may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations of one or more of the embodiment methods summarized above.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate examples of various embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the claims.

[0017] FIG. 1 is a component block diagram illustrating a computing device suitable for implementing an embodiment.

[0018] FIG. 2 is a component block diagram illustrating an example multi-core processor suitable for implementing an embodiment.

[0019] FIG. 3 is a process and signaling diagram illustrating hardware support for identifying enhanced synchronization operation outcomes according to an embodiment.

[0020] FIG. 4 is a representational diagram illustrating an identifier register according to an embodiment.

[0021] FIG. 5 is a process flow diagram illustrating an embodiment method for adjusting resources at a physical level based on a winner.

[0022] FIG.
6 is a process flow diagram illustrating an embodiment method for adjusting task stealing heuristics at a logical level based on a winner.

[0023] FIG. 7 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments.

[0024] FIG. 8 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments.

DETAILED DESCRIPTION

[0025] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.

[0026] The terms "computing device" and "mobile computing device" are used interchangeably herein to refer to any of a variety of electronic devices that include a memory and a programmable processor with any number of processor cores. A processor with more than one processor core may be referred to as a multi-core processor. Examples of computing devices include cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, convertible laptops/tablets (2-in-1 computers), smartbooks, ultrabooks, netbooks, palm-top computers, wireless electronic mail receivers, multimedia Internet-enabled cellular telephones, mobile gaming consoles, and wireless gaming controllers. The term "computing device" may further refer to stationary computing devices including personal computers, desktop computers, all-in-one computers, workstations, supercomputers, mainframe computers, embedded computers, servers, home theater computers, and game consoles. The various embodiments may be particularly useful for mobile computing devices with limited memory and battery resources.
However, the embodiments may be generally useful in any electronic device that implements a plurality of memory devices and a limited power budget, in which reducing the power consumption of the processors can extend the battery-operating time of the electronic device.

[0027] Embodiments may include methods, systems, and devices for sharing more information during atomic synchronization operations. The methods/apparatus may include identifying enhanced synchronization operation outcomes to a losing contender, such as by storing the information in a generally accessible register. Embodiments may include sending an identifier of each issuer of resource access requests (a requester identifier) with a synchronization instruction of the request, and returning to other contenders (or storing in a register) the identifier of the issuer that is granted access (the winner requester identifier) to the contended resource. Embodiments may also include methods for using the winner requester identifier (or winner identifier) to adjust resource configurations at a hardware level, and/or to adjust workload balancing heuristics (e.g., work stealing heuristics) at a software level.

[0028] Computing elements within a computing device, such as physical processors and cores, or logical threads, may issue resource access requests that include a synchronization instruction and a requester identifier of the computing element issuing an access request for a contended resource (a requester computing element). A resource manager, such as an arbiter, barrier, or controller, receiving an access request for a contended resource returns a winner identifier to the requester computing elements. The winner identifier identifies the computing element that won the contention and thus has sole ownership of the contended resource.
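The request/response exchange described above can be illustrated with a minimal software sketch. This is not the patent's hardware implementation; the class and method names are hypothetical, and the point is only that every requester, winner or loser, receives the winner's identifier rather than a bare pass/fail bit:

```python
# Illustrative sketch (not the actual hardware design): a resource manager
# that grants a contended resource to the first requester and answers every
# request with the winner's identifier along with the grant/deny outcome.
class ResourceManager:
    def __init__(self):
        self._owners = {}  # resource name -> winner requester identifier

    def request(self, resource, requester_id):
        """Handle one access request carrying a requester identifier.

        Returns (granted, winner_id). The winner identifier is returned to
        winners and losers alike, closing the information gap described above.
        """
        winner = self._owners.setdefault(resource, requester_id)
        return (winner == requester_id, winner)

    def release(self, resource, requester_id):
        # Only the current owner may release the resource.
        if self._owners.get(resource) == requester_id:
            del self._owners[resource]
```

In this sketch, a losing element such as `core1` learns that `core0` owns the resource and can act on that identity, which is the behavior the subsequent paragraphs build on.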
The computing element having ownership of the contended resource may be referred to herein as the owner computing element, winner computing element, owner device, winner device, owner, or winner. The requester computing element that lost the contention for the contended resource may be referred to herein as a non-owner computing element, loser computing element, non-owner device, loser device, non-owner, or loser. The requester identifier may include an identifier for any hardware component or software element requesting access to the contended resource.

[0029] The requester computing element receiving the winner identifier of the winner computing element may determine that access to the contended resource is denied, and may adjust hardware resource configurations and/or software resources based on a relationship to the winner computing element and on shared and/or topologically close resources. Adjusting resources may benefit overall performance, as the loser computing element may be holding software resources that are needed for the winner computing element to make progress. Thus, a loser computing element is informed of the identity of the winner computing element, and with this information, actions may be taken to transfer ownership of the resources held by the loser computing element. For example, the loser computing element may determine a likelihood of sharing hardware resources based on whether the winner device is managed by the same operating system, managed within the same dynamic voltage and frequency scaling domain, and/or within physical proximity of the loser computing element.
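The likelihood determination just described can be sketched as a simple scoring function. The field names, equal weighting, and threshold below are assumptions introduced for illustration only; the source specifies the criteria (shared OS, shared DVFS domain, physical proximity) but not how they are combined:

```python
# Illustrative-only sketch of the sharing-likelihood criterion: a loser
# scores how likely it is to share hardware resources with the winner.
# The dataclass fields, weights, and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class ElementInfo:
    os_id: int        # which operating system instance manages the element
    dvfs_domain: int  # dynamic voltage and frequency scaling domain
    cluster: int      # topological position (proxy for physical proximity)

def sharing_likelihood(loser: ElementInfo, winner: ElementInfo) -> float:
    """Return a score in [0, 1]; higher means sharing is more likely."""
    score = 0.0
    if loser.os_id == winner.os_id:
        score += 1 / 3  # managed by the same operating system
    if loser.dvfs_domain == winner.dvfs_domain:
        score += 1 / 3  # same voltage/frequency scaling domain
    if loser.cluster == winner.cluster:
        score += 1 / 3  # physically close in the topology
    return score

def should_adjust(loser: ElementInfo, winner: ElementInfo,
                  threshold: float = 0.5) -> bool:
    # The criterion is met when enough indicators of sharing are present.
    return sharing_likelihood(loser, winner) >= threshold
```

A loser meeting the threshold might then, for instance, lower its own frequency so the winner's DVFS domain budget favors the winner, as discussed next.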
Based on this information, resource configurations may be adjusted, including processing frequency, voltage scaling, number of memory operations, activity states, bandwidth use, etc.[0030] How and which resource configurations are adjusted may depend on a level of resource sharing between the winner computing element and the loser computing element, and management policies of a computing device. An implementation may include reducing the frequency of the loser computing element and increasing the winner computing element's frequency to allow the winner computing element to execute faster. Control of the respective processing frequencies of the loser computing element and the winner computing element may be implemented by an operating system (OS) as preprogrammed or at runtime in response to notification by signals of the atomic operation outcome.[0031] The requester computing element receiving the identifier of the winner computing element may also adjust workload balancing heuristics, such as work stealing heuristics, based on the winner of the contended resource. For example, a work stealing list may include logical elements used to execute an application.[0032] Adjusting the work balancing heuristics may involve adjusting work stealing heuristics to take into account the behavior of a task. An example of a work stealing heuristic may be one in which a logical element can steal unfinished work items from others upon completion of its original assignment. For a recursive task, which is a task launched by another task, in response to identifying a winner logical element of the contended resource, the winner logical element may be identified as potentially spawning further tasks to execute the iterations of the recursive task. The stealing list can be reordered to indicate that the winner logical element is a first one of the logical elements to check for tasks to steal.
For a non-recursive task, in response to identifying a winner logical element of the contended resource, the winner logical element may be identified as having finished all of its assigned tasks for executing the application and stealing tasks from other logical elements, signifying that the winner logical element does not have any tasks for stealing. The stealing list can be modified to remove the winner logical element from the stealing list or reordered to indicate that the winner logical element is the last one of the logical elements to check for tasks to be stolen.[0033] The identifier of the requester computing element may be stored in a register and provided to the resource manager with the synchronization instruction. State information relating to the hardware component or software element associated with the identifier may also be stored in the register and provided to the resource manager. The resource manager may return the information it received for the winner of the contended resource. The resource manager may also track contention information, including a number of failed resource access requests since a successful resource access request for the contended resource and a number of non-owners of the contended resource. The resource manager may also return the contention information.[0034] FIG. 1 illustrates a system including a computing device 10 in communication with a remote computing device 50 suitable for use with the various embodiments. The computing device 10 may include a system-on-chip (SoC) 12 with a processor 14, a memory 16, a communication interface 18, and a storage memory interface 20. The computing device 10 may further include a communication component 22 such as a wired or wireless modem, a storage memory 24, an antenna 26 for establishing a wireless connection 32 to a wireless network 30, and/or a network interface 28 for connecting via a wired connection 44 to the Internet 40.
The processor 14 may include any of a variety of hardware cores, for example a number of processor cores.[0035] The term "system-on-chip" (SoC) is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including a hardware core, a memory, and a communication interface. A hardware core may include a variety of different types of processors, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), an auxiliary processor, a single-core processor, and a multi-core processor. A hardware core may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon. In various embodiments, various combinations of the components of the computing device 10 may be separate components not included on the SoC 12.[0036] The SoC 12 may include one or more processors 14. The computing device 10 may include more than one SoC 12, thereby increasing the number of processors 14 and processor cores. The computing device 10 may also include processors 14 that are not associated with an SoC 12. Individual processors 14 may be multi-core processors as described below with reference to FIG. 2. The processors 14 may each be configured for specific purposes that may be the same as or different from other processors 14 of the computing device 10. One or more of the processors 14 and processor cores of the same or different configurations may be grouped together.
A group of processors 14 or processor cores may be referred to as a multi-processor cluster.[0037] The memory 16 of the SoC 12 may be a volatile or non-volatile memory configured for storing data and processor-executable code for access by the processor 14. The computing device 10 and/or SoC 12 may include one or more memories 16 configured for various purposes. In an embodiment, one or more memories 16 may include volatile memories such as random access memory (RAM) or main memory, or cache memory. These memories 16 may be configured to temporarily hold a limited amount of data received from a data sensor or subsystem and data and/or processor-executable code instructions that are requested from non-volatile memory. These memories 16 may also be configured to temporarily hold data and/or processor-executable code instructions loaded to the memories 16 from non-volatile memory in anticipation of future access based on a variety of factors. The memories 16 may also be configured to temporarily hold intermediary processing data and/or processor-executable code instructions produced by the processor 14 and temporarily stored for future quick access. [0038] The memory 16 may be configured to store data and processor-executable code, at least temporarily, that is loaded to the memory 16 from another memory device, such as another memory 16 or storage memory 24, for access by one or more of the processors 14. The data or processor-executable code loaded to the memory 16 may be loaded in response to execution of a function by the processor 14. Loading the data or processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to the memory 16 that is unsuccessful, or a miss, because the requested data or processor-executable code is not located in the memory 16.
In response to a miss, a memory access request to another memory 16 or storage memory 24 may be made to load the requested data or processor-executable code from the other memory 16 or storage memory 24 to the memory device 16. Loading the data or processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to another memory 16 or storage memory 24, and the data or processor-executable code may be loaded to the memory 16 for later access.[0039] The communication interface 18, communication component 22, antenna 26, and/or network interface 28, may work in unison to enable the computing device 10 to communicate over a wireless network 30 via a wireless connection 32, and/or a wired network 44 with the remote computing device 50. The wireless network 30 may be implemented using a variety of wireless communication technologies, including, for example, radio frequency spectrum used for wireless communications, to provide the computing device 10 with a connection to the Internet 40 by which it may exchange data with the remote computing device 50.[0040] The storage memory interface 20 and the storage memory 24 may work in unison to allow the computing device 10 to store data and processor-executable code on a non-volatile storage medium. The storage memory 24 may be configured much like an embodiment of the memory 16 in which the storage memory 24 may store the data or processor-executable code for access by one or more of the processors 14. The storage memory 24, being non-volatile, may retain the information even after the power of the computing device 10 has been removed. When the power is reestablished and the computing device 10 reboots, the information stored on the storage memory 24 may be available to the computing device 10. 
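The miss handling described in paragraph [0038] above, in which a request that misses in one memory is satisfied by loading from another memory or from the storage memory, can be sketched as a simple fallback lookup. This is a simplified illustration under stated assumptions; the class and variable names are hypothetical and not part of the embodiments:

```python
# Simplified sketch of the miss handling described above: a request that
# misses in the volatile memory 16 falls back to the storage memory 24,
# and the value is loaded into memory 16 for later access.

class MemoryHierarchy:
    def __init__(self, storage):
        self.memory = {}        # volatile memory 16 (fast, limited)
        self.storage = storage  # non-volatile storage memory 24

    def read(self, address):
        if address in self.memory:          # hit in memory 16
            return self.memory[address]
        value = self.storage[address]       # miss: request goes to storage 24
        self.memory[address] = value        # load into memory 16 for reuse
        return value

hierarchy = MemoryHierarchy(storage={0x10: "code", 0x20: "data"})
first = hierarchy.read(0x10)    # miss, loaded from storage memory
second = hierarchy.read(0x10)   # hit in memory 16
assert first == second == "code"
```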
The storage memory interface 20 may control access to the storage memory 24 and allow the processor 14 to read data from and write data to the storage memory 24.[0041] Some or all of the components of the computing device 10 may be differently arranged and/or combined while still serving the necessary functions. Moreover, the computing device 10 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the computing device 10.[0042] FIG. 2 illustrates a multi-core processor 14 suitable for implementing an embodiment. The multi-core processor 14 may have a plurality of homogeneous or heterogeneous processor cores 200, 201, 202, 203. The processor cores 200, 201, 202, 203 may be homogeneous in implementations in which the cores of a single processor 14 are configured for the same purpose and have the same or similar performance characteristics. For example, the processor 14 may be a general purpose processor, and the processor cores 200, 201, 202, 203 may be homogeneous general purpose processor cores. Alternatively, the processor 14 may be a graphics processing unit or a digital signal processor, and the processor cores 200, 201, 202, 203 may be homogeneous graphics processor cores or digital signal processor cores, respectively. For ease of reference, the terms "processor" and "processor core" may be used interchangeably herein.[0043] The processor cores 200, 201, 202, 203 may be heterogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for different purposes and/or have different performance characteristics. The heterogeneity of such heterogeneous processor cores may include different instruction set architecture, pipelines, operating frequencies, etc. An example of such heterogeneous processor cores may include what are known as "big.
LITTLE" architectures in which slower, low-power processor cores may be coupled with more powerful and power-hungry processor cores. In similar embodiments, the SoC 12 may include a number of homogeneous or heterogeneous processors 14.[0044] In the example illustrated in FIG. 2, the multi-core processor 14 includes four processor cores 200, 201, 202, 203 (i.e., processor core 0, processor core 1, processor core 2, and processor core 3). For ease of explanation, the examples herein may refer to the four processor cores 200, 201, 202, 203 illustrated in FIG. 2. However, the four processor cores 200, 201, 202, 203 illustrated in FIG. 2 and described herein are merely provided as an example and in no way are meant to limit the various embodiments to a four-core processor system. The computing device 10, the SoC 12, or the multi-core processor 14 may individually or in combination include fewer or more than the four processor cores 200, 201, 202, 203 illustrated and described herein.[0045] FIG. 3 illustrates a process and signaling for identifying enhanced synchronization operation outcomes according to an embodiment. The example illustrated in FIG. 3 is non-limiting, particularly with respect to the number and types of components implementing the process and signaling, and the number and order of the signals illustrated. This example includes computing elements (computing element 1 300 and computing element 2 302), a resource manager 304, and a resource 306. In various implementations, the computing elements 300, 302 may include any of, or a combination of, hardware implementations, such as processor 14, processor cores 200, 201, 202, 203, and other hardware cores, for example as described herein, and logical implementations, such as threads and processes. The resource manager 304 may also include hardware implementations, such as an arbiter, a barrier, a controller, a management unit, and an interface device.
The resources 306 may include any hardware component or software elements accessible and usable by the computing elements 300, 302 to execute a task, such as a memory or storage location, an input/output port of various components, or a communication channel. [0046] In the example illustrated in FIG. 3, both the computing element 1 300 and the computing element 2 302 may issue resource access requests to the same resource 306 or multiple resources. As a non-limiting example and for ease of explanation, the descriptions herein may refer to the computing element 1 300 and the computing element 2 302 each issuing a single resource access request 308, 310 for a single resource 306. The computing element 1 300 may issue a resource access request 308, and the computing element 2 302 may issue a resource access request 310. Each resource access request 308, 310 may include a targeted resource 306, designated for example by a virtual or physical address, an operation, and a requester identifier. A synchronization operation may be an operation that requires the requester computing element 300, 302 to have exclusive access to the resource 306 or at least exclusive access to modify the resource 306. Without exclusive access to or exclusive access to modify the resource 306, the operation may encounter errors when an unexpected value is retrieved from the resource 306 after modification of the resource 306 by the other of the computing elements 300, 302. The requester identifier may include a value uniquely identifying the computing element 300, 302 issuing the request to access the resource 306.
The requester identifier may be stored in a component, such as a register, cache, or buffer associated with the computing element 300, 302, and may be retrieved from the component for inclusion in the resource access request 308, 310, respectively.[0047] The resource manager 304 may receive the resource access requests 308, 310 and determine whether to allow access to the resource 306 by either or both of the computing elements 300, 302. In some implementations, the resource 306 may become contested and the resource manager 304 may deny access to the resource 306 by one of the computing elements 300, 302. The resource 306 may become a contested resource because multiple computing elements 300, 302 are concurrently accessing or attempting to access the resource 306. [0048] The contention for the resource 306 may stem from the resource manager 304 denying access to one of the computing element 1 300 and the computing element 2 302 while the other accesses the resource 306. Not all concurrent attempts to access the resource 306 may be contentious. However, contention may occur when multiple computing elements 300, 302 attempt to access the resource 306 to modify the resource 306. Contention may also occur when one computing element's access of the resource 306 relies on a consistent state of the resource 306 while the other computing element 300, 302 modifies the state of the resource 306. Contention may also occur when access to the resource 306 by one of the computing elements 300, 302 is dependent on previous access to the resource 306 by the other of the computing elements 300, 302.[0049] Regardless of the reason for contention of the resource 306, the resource manager 304 may allow access to the resource 306 by one of the computing elements 300, 302 and deny access to the resource 306 by the other computing elements 300, 302.
Thus, the resource manager 304 may allow implementation of one of the resource access requests 308, 310 and prohibit the implementation of the other of the resource access requests 308, 310.[0050] For the allowed one of the resource access requests 308, 310, the resource manager 304 may permit implementation of an operation 312 on the resource 306. As discussed above, the operation 312 may include an operation that may modify the resource 306 or may rely on a consistent state of the resource 306 during the operation 312.[0051] The resource manager 304 may store the requester identifier from the permitted resource access requests 308, 310. The resource manager 304 may store the requester identifier as a winner identifier, as differentiated from a loser identifier corresponding with the requester identifier of the prohibited resource access requests 308, 310. In some implementations, the winner identifier may be stored in a location accessible to the computing elements 300, 302, such as a register, so that the computing elements may check the winner identifier for use in adjusting resources as discussed further herein.[0052] The winner identifier may be correlated with the resource 306 to allow for tracking the ownership of the resource 306, so that the resource manager 304 and other computing elements requesting access to the resource 306 may be informed that the resource 306 is owned and by which computing element. For example, the resource manager 304 may use the stored winner identifier and its correlation to the resource 306 to make further determination of whether to allow or prohibit access to other concurrent resource access requests.[0053] In the example illustrated in FIG. 3, the resource manager 304 permits the resource access request 308 issued by the computing element 1 300.
As a result, the computing element 1 300 is the winner of the contention for the resource 306, and the requester identifier of the computing element 1 300 and the resource access request 308 may be designated as the winner identifier.[0054] For the prohibited one of the resource access requests 308, 310, the resource manager 304 may return a response 314 to the computing element 300, 302 having issued the prohibited one of the resource access requests 308, 310. The response 314 may indicate to the receiving computing element 300, 302 that its respective resource access request 308, 310 is denied. The response 314 may indicate denial of the resource access request 308, 310 by including a signal, such as a designated bit, that may indicate the denial by having a designated value. The response 314 may also or alternatively include the winner identifier.[0055] The winner identifier may be used as the signal indicating the denial of the prohibited resource access request 308, 310. In this regard, the receiving computing element 300, 302 may compare the winner identifier to its own requester identifier and determine that the resource access request 308, 310 is denied in response to the winner identifier differing from its own requester identifier. [0056] The resource manager 304 may include the signal indicating the denial of the prohibited resource access request 308, 310 and/or the winner identifier in the response 314. In the example illustrated in FIG. 3, the resource manager 304 prohibits the resource access request 310 issued by the computing element 2 302. As a result, the computing element 2 302 is the loser of the contention for the resource 306. The resource manager 304 sends the response 314 including the winner identifier (i.e., the requester identifier of the computing element 1 300) to the computing element 2 302.
The computing element 2 302 may determine that it is the loser of the contention for the resource 306 and may wait for the resource 306 to become available, continue executing tasks that can be executed without access to the resource 306, and/or adjust physical and/or logical resources as described further herein.[0057] In response to the permitted resource access requests 308, 310, a response 316 may be generated either to notify the requester computing element 300, 302 of the permitted resource access requests 308, 310 of completion of the requested access to the resource 306 or to provide data from the resource 306. The resource manager 304 may receive the response 316, note whether the requested access to the resource 306 is complete, and may direct the response 316 to the requester computing element 300, 302 of the permitted resource access requests 308, 310 as a response 318. In some implementations, the resource manager 304 may relinquish the resource 306 upon completion of the requested access to the resource 306. In doing so, the resource manager 304 may remove or invalidate the stored winner identifier and its correlation to the resource 306.[0058] In some implementations, the resource access requests 308, 310 may further include state information for the respective requester computing elements 300, 302. The state information may include processing frequency, voltage scaling, number of memory operations, activity states, bandwidth use, temperature, current leakage, etc.[0059] The resource manager 304 may store and correlate the state information of the requester computing elements 300, 302 from the permitted resource access requests 308, 310 with the winner identifier. The resource manager 304 may include the state information of the requester computing elements 300, 302 from the permitted resource access requests 308, 310 as part of the response 314. 
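The bookkeeping of paragraphs [0058] and [0059], in which the resource manager stores the winner's state information and tracks contention statistics for the contended resource, might be sketched as follows. The field and class names are illustrative assumptions; the embodiments leave the actual data layout open:

```python
# Sketch of the response 314 payload described above: winner identifier,
# the winner's state information, and contention statistics tracked by
# the resource manager for the contended resource.

from dataclasses import dataclass, field

@dataclass
class ContentionRecord:
    winner_id: int
    winner_state: dict                 # e.g. frequency, voltage, bandwidth use
    failed_requests: int = 0           # failures since the last successful grant
    losers: set = field(default_factory=set)

    def record_loss(self, loser_id):
        """Track a denied request and build the response 314 payload."""
        self.failed_requests += 1
        self.losers.add(loser_id)
        return {"winner": self.winner_id,
                "winner_state": self.winner_state,
                "failed_requests": self.failed_requests,
                "num_losers": len(self.losers)}

record = ContentionRecord(winner_id=1, winner_state={"freq_mhz": 1800})
response = record.record_loss(loser_id=2)
assert response["winner"] == 1 and response["num_losers"] == 1
```

A loser receiving such a payload has, in one message, the winner's identity, the winner's operating state, and a measure of how contended the resource is.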
The loser computing elements 300, 302 may use the state information of the winning computing elements 300, 302 in adjusting physical and/or logical resources as described further herein.[0060] The resource manager 304 may also track contention information for the contended resource 306, including a number of failed resource access requests since a successful resource access request for the contended resource 306 and a number of loser computing elements 300, 302 of the contended resource 306. The resource manager 304 may store and correlate the contention information for the contended resource 306 with the winner identifier. The resource manager 304 may include the contention information for the contended resource 306 as part of the response 314. The loser computing elements 300, 302 may use the contention information for the contended resource 306 in adjusting physical and/or logical resources as described further herein.[0061] FIG. 4 illustrates an identifier register 400 according to an embodiment. Each computing element 300, 302 may include as a component or be associated with an identifier register 400. The identifier register 400 may include a location for storing the computing element identifier (ID) 402, which may be accessed for retrieving the computing element identifier for use as the requester identifier in a resource access request.[0062] The identifier register 400 may also include locations associated with shared resources 404-412. The shared resources may be any resource shared by the computing element 300, 302 associated with the identifier register 400 and other computing elements 300, 302 for executing tasks, as opposed to being exclusively accessed by the computing element 300, 302 associated with the identifier register 400.[0063] The locations associated with shared resources 404-412 may each be dedicated to a shared resource and store computing element identifiers for the computing elements 300, 302 that share the shared resource.
For example, the identifier register 400 may include a location 404 for storing computing element identifiers that share shared resource 1, a location 406 for storing computing element identifiers that share shared resource 2, a location 408 for storing computing element identifiers that share shared resource N-1, and a location 412 for storing computing element identifiers that share shared resource N. The identifier register 400 may include any number of locations 404-412 for storing computing element identifiers for at least up to "N" number of shared resources.[0064] The identifier register 400 associated with a loser computing element 300, 302 and a winner computing element 300, 302 may be accessed by the loser computing element 300, 302 to identify resources 306 that are shared between the winner and loser computing elements 300, 302. The resources 306 shared between the winner and loser computing elements 300, 302 may be adjusted to improve the execution of a critical portion of a process by the winner computing element 300, 302, as described further herein.[0065] FIG. 5 illustrates a method 500 for adjusting resources at a physical level based on a winner according to various embodiments. The method 500 may be executed in a computing device using software executing on general purpose hardware, such as the processor, and/or dedicated hardware implementing the computing elements and/or the resource manager.[0066] In block 502, the computing device may issue a resource access request including a requester identifier. As discussed herein, the requester identifier may be the computing element identifier of the computing element issuing the resource access request. Further, the resource access request may include a synchronization operation, and/or a physical or virtual address of the target resource of the resource access request.
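Block 502, assembling a resource access request from the identifier register of FIG. 4, can be sketched as below. The register layout only loosely follows FIG. 4, and all field and function names are hypothetical assumptions for the illustration:

```python
# Sketch of an identifier register (FIG. 4) and of block 502: the requester
# identifier is read from the register and sent with the synchronization
# request, optionally together with state information.

identifier_register = {
    "element_id": 2,                      # computing element identifier 402
    "shared": {                           # locations 404-412: per-resource sets
        "resource_1": {1, 2},             # identifiers of sharing elements
        "resource_2": {2, 3},
    },
}

def issue_request(register, resource, state):
    """Block 502: assemble a resource access request carrying the requester
    identifier retrieved from the identifier register."""
    return {"target": resource,
            "operation": "atomic_update",
            "requester_id": register["element_id"],
            "state": state}

request = issue_request(identifier_register, "resource_1",
                        state={"freq_mhz": 1200, "bandwidth": "low"})
assert request["requester_id"] == 2
```

The per-resource sets in the register are what later lets a loser check, by winner identifier, which resources it shares with the winner.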
In some implementations, the resource access request may also include state information of the requester computing element, such as a processing frequency, a voltage scaling, a number of memory operations, activity states, a bandwidth use, a temperature, a current leakage, etc. [0067] In block 504, the computing device may receive a result of the resource access request in a response including a winner identifier indicating the computing element granted access to the target resource by the resource manager in a resource contention. In some implementations, the response may include some or all of the state information of the winner computing element provided in the winner resource access request.[0068] In determination block 506, the computing device may determine whether the requester computing element is the winner computing element. The computing device may retrieve the computing element identifier for the requester computing element from its associated identifier register, and compare the computing element identifier to the winner identifier. A comparison resulting in a match between the computing element identifier and the winner identifier may indicate that the requester computing element is the winner computing element. A comparison resulting in a mismatch between the computing element identifier and the winner identifier may indicate that the requester computing element is the loser computing element.[0069] In response to determining that the requester computing element is the winner computing element (i.e., determination block 506 = "Yes"), the computing device may continue to execute a process being executed by the winner computing element in block 516. The winner computing element may be provided access to resources, such as the contested resource, necessary for executing a process.
The winner computing element may leverage the access to the contested resource in order to complete a portion of an operation requiring use of the resource.[0070] The winner computing element may continue to maintain ownership of the contested resource until the contested resource is no longer needed by the winner computing element to execute the operation, upon which the winner computing element may relinquish ownership of the contested resource. In some implementations, the winner computing element may be forced to relinquish ownership of the contested resource based on various factors, including time, number of loser computing elements for the contested resource, number of denied access requests for the contested resource, use of the contested resource, etc., to avoid degradation of the performance of the computing device. In relinquishing ownership of the contested resource, the winner computing element may send a notification signal to the resource manager, the loser computing element, other computing elements, and/or a register accessible by multiple computing elements that the contested resource is available. In some implementations, the resource manager may send the notification signal to the loser computing element, other computing elements, and/or a register accessible by multiple computing elements that the contested resource is available.[0071] In response to determining that the requester computing element is not the winner computing element or is the loser computing element (i.e., determination block 506 = "No"), the computing device may identify the winner computing element in block 508. The computing device may identify the winner computing element as the computing element associated with the winner identifier included with the response to the resource access request received in block 504.
The computing device may use the winner identifier to determine a relationship between the winner computing element and the loser computing element for adjusting physical and/or logical resources as described herein.[0072] The computing device may determine whether a criterion is met for adjusting physical and/or logical resources of the computing device in determination block 510. In computing devices with multiple operating systems, dynamic voltage and frequency scaling domains, and topologies, the loser computing element may tune local and/or shared resources. The resources for tuning may be shared between the loser computing element and the winner computing element. In making this determination, the computing device may determine whether a criterion is met for adjusting physical and/or logical resources of the computing device when any of the conditions described herein are met and adjusting the resources is likely to improve the performance of the computing device. In various implementations, the criterion may depend, at least in part, on a relationship between the loser computing element and the winner computing element. The loser computing element may use the information of the winner computing element identified in block 508 to make the determination in determination block 510.[0073] In response to determining that a criterion is met for adjusting physical and/or logical resources of the computing device (i.e., determination block 510 = "Yes"), the computing device may adjust physical and/or logical resources of the computing device in block 512. Adjusting the physical and/or logical resources of the computing device may be implemented as described herein and in any manner that may benefit the performance of the computing device.
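The decision flow of blocks 506 through 514, comparing identifiers and then tuning shared hardware when the relationship to the winner warrants it, might look like the following sketch. The helper name, topology fields, and the halved frequency are placeholders for the criteria and policies described above, not a definitive implementation:

```python
# Sketch of determination blocks 506 and 510 and adjustment block 512:
# the loser compares identifiers, checks its relationship to the winner,
# and lowers its own frequency so the winner can run faster.

def handle_response(my_id, winner_id, topology, freq):
    if winner_id == my_id:                       # block 506 = "Yes"
        return "continue_process", freq          # block 516
    shares = (topology["same_os"]                # block 510: criterion based on
              or topology["same_dvfs_domain"]    # the relationship to the winner
              or topology["physically_close"])
    if shares:                                   # block 510 = "Yes"
        freq = freq // 2                         # block 512: reduce loser frequency
    return "wait_for_release", freq              # block 514

action, new_freq = handle_response(
    my_id=2, winner_id=1,
    topology={"same_os": True, "same_dvfs_domain": True,
              "physically_close": False},
    freq=2000)
assert action == "wait_for_release" and new_freq == 1000
```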
For example, the loser computing element may adjust hardware resource configurations and/or software resources based on a relationship to the winner computing element, shared resources, and/or topologically close resources.[0074] Adjusting resources in block 512 may benefit overall performance of the computing device, as the loser computing element may be holding resources that are needed for the winner computing element to make progress in executing the process. For example, the loser computing element may determine a likelihood of sharing hardware resources based on whether the winner computing element is managed by the same operating system, managed within the same dynamic voltage and frequency scaling domain, and is in close physical proximity of the loser computing element. Based on this information, resource configurations may be adjusted, including processing frequency, voltage scaling, number of memory operations, activity states, bandwidth use, in-flight misses, etc.[0075] How and which resource configurations are adjusted in block 512 may depend on a level of resource sharing between the winner computing element and the loser computing element, and management policies of a computing device. Some implementations may include the loser computing element reducing its processing frequency and increasing the winner computing element's frequency to reduce the time needed to execute a critical section of an application including the atomic operations of the winner computing element using the contended resource.
In another example, the loser computing element may adjust its cache bandwidth use and/or in-flight misses to reduce the number of outstanding miss requests for a period of time, thereby also reducing the number of slower storage lookups necessary, and allowing greater resources to be used by the winner computing element.[0076] Following or in parallel with adjusting resources, or in response to determining that no criterion is met for adjusting physical and/or logical resources of the computing device (i.e., determination block 510 = "No"), the computing device may wait for the winner computing element to release ownership of the contested resource in block 514. As discussed herein, the loser computing element may be notified of the release of ownership of the contested resource by the winner computing element in multiple ways. In some implementations, the loser computing element may receive a signal from the winner computing element or the resource manager indicating the release of ownership of the contested resource by the winner computing element. In some implementations, the loser computing element may check an accessible register for an indication of the release of ownership of the contested resource by the winner computing element. Upon being notified of the release of ownership of the contested resource by the winner computing element, the computing device may issue a resource access request including a requester identifier in block 502.[0077] FIG. 6 illustrates an embodiment method 600 for adjusting task stealing heuristics at a logical level based on a winner according to various embodiments. The method 600 may be executed in a computing device using software executing on general purpose hardware, such as the processor, and/or dedicated hardware implementing the computing elements and/or the resource manager.[0078] Blocks 502-506 and 516 may be implemented in a manner similar to that of like numbered blocks in method 500 as described with reference to FIG. 5.
In some implementations of the method 600, blocks 502-506 and 516 may be optionally implemented.[0079] In response to determining that the requester computing element is not the winner computing element or is the loser computing element (i.e., determination block 506 = "No"), the computing device may determine whether the loser computing element has a task that is executable without access to the contested resource in determination block 610.[0080] In response to determining that the loser computing element does not have a task that is executable without access to the contested resource (i.e., determination block 610 = "No"), the computing device may attempt to steal or request tasks in block 612. Like the resource access request, a signal to steal or request tasks may include the computing element identifier (requester identifier) of the computing element sending the signal to steal or request work, i.e., the loser computing element. The loser computing element may send a general signal to a resource manager implementing a scheduler, or check computing elements in its stealing list for computing elements likely to have available tasks and send a signal specifying those computing elements. In some implementations, the stealing list may contain computing elements executing the same application as the loser computing element. Like the resource access request, the resource manager may determine a winner and a loser from multiple computing elements attempting to steal or request tasks, and return to or make available to the computing elements the winner identifier.[0081] In block 614, the computing device may receive a response to the signal to steal or request tasks. The response may include the winner identifier, and for winner computing elements, a task assignment to execute.[0082] In determination block 616, the computing device may determine whether the loser computing element is a winner computing element for the task stealing or request.
Like determining whether the computing element is a winner computing element in block 506, the computing device may compare the winner identifier with the computing element identifier of the issuer of the signal to steal or request tasks in determination block 616. A comparison resulting in a match between the computing element identifier and the winner identifier may indicate that the requester computing element is the task winner computing element. A comparison resulting in a mismatch between the computing element identifier and the winner identifier may indicate that the requester computing element is the task loser computing element.[0083] In response to determining that the loser computing element is the task winner computing element (i.e., determination block 616 = "Yes"), the computing device may execute the stolen or received tasks.[0084] In response to determining that the loser computing element is not the task winner computing element or is the task loser computing element (i.e., determination block 616 = "No"), the computing device may update or adjust the task stealing list of the task loser computing element in block 618.[0085] Adjusting the stealing list in block 618 may take into account behavior of the application. For a task winner computing element executing a non-recursive task, the task winner computing element may be identified as having finished all of its assigned tasks for executing the application and stealing tasks from other computing elements, signifying that the task winner computing element does not have any tasks for stealing. For example, in an application with non-recursive tasks, once a computing element finishes initially assigned tasks, the computing element will commence to look for other tasks by stealing or requesting other tasks. Thus, if a computing element is contending for tasks, then it has finished its initially assigned tasks and does not have any tasks to steal or give.
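The stealing-list adjustment of block 618 can be sketched as follows. This is a hypothetical illustration: representing the list as element identifiers ordered by check priority, and the choice of "check last" for non-recursive tasks versus "check first" for recursive tasks, are assumptions consistent with the application behavior discussed here, not the disclosed implementation.

```python
# Hedged sketch of block 618: adjust a task loser's stealing list based on
# whether the application's tasks are recursive. List order encodes which
# computing elements to check first when stealing.
def adjust_stealing_list(stealing_list: list[int], task_winner: int,
                         recursive: bool) -> list[int]:
    if task_winner not in stealing_list:
        return stealing_list[:]
    remaining = [e for e in stealing_list if e != task_winner]
    if recursive:
        # Recursive tasks may spawn further work: check the winner first.
        return [task_winner] + remaining
    # Non-recursive: the winner has exhausted its tasks; check it last
    # (an alternative policy would remove it from the list entirely).
    return remaining + [task_winner]
```

For the non-recursive case, removing the task winner outright rather than demoting it is the other policy the disclosure contemplates; the demotion variant is shown because it keeps the list length stable.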
The stealing list can be modified to remove the task winner computing element from the stealing list or reordered to indicate that the task winner computing element is the last one of the computing elements to check for tasks.[0086] For a task winner computing element executing a recursive task, the task winner computing element may be identified as potentially spawning further tasks to execute the iterations of the recursive task. For example, in an application with recursive tasks, once a computing element finishes initially assigned tasks, the computing element will commence to look for other tasks by stealing or requesting other tasks. Thus, if a computing element is contending for tasks, then it has finished its initially assigned tasks, but may generate more tasks to steal or give if assigned further recursive tasks. The stealing list can be reordered to indicate that the task winner computing element is a first one of the computing elements to check for tasks.[0087] The various embodiments (including, but not limited to, embodiments discussed above with reference to FIGs. 1-6) may be implemented in a wide variety of computing systems, which may include an example mobile computing device suitable for use with the various embodiments illustrated in FIG. 7. The mobile computing device 700 may include a processor 702 coupled to a touchscreen controller 704 and an internal memory 706. The processor 702 may be one or more multi-core integrated circuits designated for general or specific processing tasks. The internal memory 706 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types that can be leveraged include but are not limited to DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded dynamic random access memory (eDRAM).
The touchscreen controller 704 and the processor 702 may also be coupled to a touchscreen panel 712, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared sensing touchscreen, etc. Additionally, the display of the computing device 700 need not have touch screen capability.[0088] The mobile computing device 700 may have one or more radio signal transceivers 708 (e.g., Peanut, Bluetooth, ZigBee, Wi-Fi, RF radio) and antennae 710, for sending and receiving communications, coupled to each other and/or to the processor 702. The transceivers 708 and antennae 710 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 700 may include a cellular network wireless modem chip 716 that enables communication via a cellular network and is coupled to the processor.[0089] The mobile computing device 700 may include a peripheral device connection interface 718 coupled to the processor 702. The peripheral device connection interface 718 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 718 may also be coupled to a similarly configured peripheral device connection port (not shown).[0090] The mobile computing device 700 may also include speakers 714 for providing audio outputs. The mobile computing device 700 may also include a housing 720, constructed of a plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile computing device 700 may include a power source 722 coupled to the processor 702, such as a disposable or rechargeable battery.
The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 700. The mobile computing device 700 may also include a physical button 724 for receiving user inputs. The mobile computing device 700 may also include a power button 726 for turning the mobile computing device 700 on and off.[0091] The various embodiments (including, but not limited to, embodiments discussed above with reference to FIGs. 1-6) may be implemented in a wide variety of computing systems, which may include a variety of mobile computing devices, such as a laptop computer 800 illustrated in FIG. 8. Many laptop computers include a touchpad touch surface 817 that serves as the computer's pointing device, and thus may receive drag, scroll, and flick gestures similar to those implemented on computing devices equipped with a touch screen display and described above. A laptop computer 800 will typically include a processor 811 coupled to volatile memory 812 and a large capacity nonvolatile memory, such as a disk drive 813 or Flash memory. Additionally, the computer 800 may have one or more antennas 808 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 816 coupled to the processor 811. The computer 800 may also include a floppy disc drive 814 and a compact disc (CD) drive 815 coupled to the processor 811. In a notebook configuration, the computer housing includes the touchpad 817, the keyboard 818, and the display 819 all coupled to the processor 811.
Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a universal serial bus (USB) input) as are well known, which may also be used in conjunction with the various embodiments.[0092] Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various embodiments may be written in a high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.[0093] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular. [0094] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various embodiments may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality.
Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.[0095] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.[0096] In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer-readable or processor-readable storage medium.
Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.[0097] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
Composite dielectric structures for semiconductor die assemblies, and associated systems and methods are disclosed. In some embodiments, the composite dielectric structure includes a flexible dielectric layer configured to conform to irregularities (e.g., particles, defects) at a bonding interface of directly bonded semiconductor dies (or wafers). The flexible dielectric layer may include a polymeric material configured to deform during a bonding process step in response to localized pressure generated by the irregularities. The composite dielectric structure includes additional dielectric layers sandwiching the flexible dielectric layer such that the composite dielectric structure can provide robust bond strength to other dielectric layers through the additional dielectric layers. In some embodiments, a chemical vapor deposition process may be used to form composite dielectric structures that utilize siloxane derivatives as precursors.
1. A semiconductor die comprising: a substrate comprising an integrated circuit system; and a dielectric structure over the substrate, the dielectric structure comprising: a first dielectric layer on a first side of the dielectric structure facing the substrate; a second dielectric layer on a second side of the dielectric structure opposite the first side; and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer configured to conform to one or more irregularities at the second side.
2. The semiconductor die of claim 1, wherein the third dielectric layer comprises a polymeric material that is flexible to deform in response to localized pressure generated by the one or more irregularities.
3. The semiconductor die of claim 2, wherein the localized pressure is applied to the third dielectric layer through the second dielectric layer.
4. The semiconductor die of claim 2, wherein the third dielectric layer comprises only the polymer material.
5. The semiconductor die of claim 2, wherein the third dielectric layer consists essentially of the polymer material.
6. The semiconductor die of claim 1, wherein the second dielectric layer is configured to: conform to the one or more irregularities at the second side; and bond directly to a fourth dielectric layer in contact with the second dielectric layer, wherein the fourth dielectric layer is a portion of a second semiconductor die attached to the semiconductor die.
7. The semiconductor die of claim 1, wherein the first and third dielectric layers comprise at least one of silicon oxide, silicon nitride, silicon carbonitride, or silicon carbonate.
8. The semiconductor die of claim 1, wherein: the second dielectric layer has a thickness of at least 50 nm; and the third dielectric layer has a thickness of at least twice the thickness of the second dielectric layer.
9.
The semiconductor die of claim 1, wherein a thickness of the third dielectric layer is determined by a size of the one or more irregularities.
10. The semiconductor die of claim 1, further comprising: one or more conductive pads formed in the dielectric structure, each conductive pad extending through the dielectric structure and configured to couple to at least one through-substrate via (TSV) coupled to the integrated circuit system.
11. A method comprising: providing a semiconductor die comprising a substrate having an integrated circuit system; and forming a dielectric structure over the substrate, the dielectric structure comprising: a first dielectric layer on a first side of the dielectric structure facing the substrate; a second dielectric layer on a second side of the dielectric structure opposite the first side; and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer configured to conform to one or more irregularities at the second side.
12. The method of claim 11, wherein forming the dielectric structure over the substrate comprises: depositing the first dielectric layer over the substrate in a chemical vapor deposition (CVD) chamber; depositing the third dielectric layer on the first dielectric layer in the CVD chamber without breaking the vacuum of the CVD chamber; and depositing the second dielectric layer on the third dielectric layer in the CVD chamber without breaking the vacuum of the CVD chamber.
13. The method of claim 11, wherein the third dielectric layer comprises a polymeric material that is flexible to deform in response to localized pressure generated by the one or more irregularities.
14.
The method of claim 13, wherein forming the dielectric structure over the substrate comprises: placing the semiconductor die in a chemical vapor deposition (CVD) chamber configured to contain a first gas having oxygen and a second gas having a precursor of the polymer material; providing the first and second gases to the CVD chamber at a first ratio between the oxygen and the precursor, the first ratio configured to deposit a first silicon oxide material on the semiconductor die, the first silicon oxide material corresponding to the first dielectric layer; modifying the amount of the first gas provided to the CVD chamber to establish a second ratio between the oxygen and the precursor, the second ratio configured to deposit the polymer material on the first silicon oxide material, the polymer material corresponding to the third dielectric layer; and restoring the amount of the first gas provided to the CVD chamber to establish the first ratio, thereby depositing a second silicon oxide material on the polymer material, the second silicon oxide material corresponding to the second dielectric layer.
15. The method of claim 11, further comprising: forming one or more conductive pads in the dielectric structure, each conductive pad extending through the dielectric structure and configured to couple with at least one through-substrate via (TSV) coupled to the integrated circuit system.
16.
A semiconductor die assembly comprising: a package substrate; and a die attached to the package substrate, the die comprising: a semiconductor substrate having an integrated circuit system; and a dielectric structure over the semiconductor substrate, the dielectric structure comprising: a first dielectric layer on a first side of the dielectric structure facing the semiconductor substrate; a second dielectric layer on a second side of the dielectric structure opposite the first side; and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer configured to conform to one or more irregularities at the second side.
17. The semiconductor die assembly of claim 16, wherein the third dielectric layer comprises a polymeric material that is flexible to deform in response to localized pressure generated by the one or more irregularities.
18. The semiconductor die assembly of claim 17, further comprising: one or more conductive pads formed in the dielectric structure, each conductive pad extending through the dielectric structure and configured to couple to at least one through-substrate via (TSV) coupled to the integrated circuit system.
19. The semiconductor die assembly of claim 18, wherein the die is a first die, and the semiconductor die assembly further comprises: a second die directly bonded to the first die on the second side, wherein the second die includes a fourth dielectric layer directly bonded to the second dielectric layer, and the second die does not include the polymer material.
20.
The semiconductor die assembly of claim 18, wherein the die is a first die and the dielectric structure is a first dielectric structure, and the semiconductor die assembly further comprises: a second die bonded directly to the first die, wherein the second die includes a second dielectric structure having: a fourth dielectric layer directly bonded to the second dielectric layer at the second side; a fifth dielectric layer immediately adjacent to the fourth dielectric layer, the fifth dielectric layer configured to conform to the one or more irregularities at the second side; and a sixth dielectric layer adjacent to the fifth dielectric layer, the sixth dielectric layer facing the second semiconductor substrate of the second die.
Composite dielectric structure for semiconductor die assembly and associated systems and methods

Technical Field
The present disclosure relates generally to semiconductor device assemblies, and more particularly, to composite dielectric structures for semiconductor die assemblies and associated systems and methods.

Background
Semiconductor packages typically include one or more semiconductor die (e.g., memory chips, microprocessor chips, imager chips) mounted on a packaging substrate and encased in a protective covering. A semiconductor die may include functional features, such as memory cells, processor circuits, or imager devices, and bond pads electrically connected to the functional features. The bond pads can be electrically connected to corresponding conductive structures of the package substrate, which can be coupled to terminals outside the protective covering so that the semiconductor die can be connected to higher level circuitry.
In some semiconductor packages, two or more semiconductor die are stacked on top of each other to reduce the footprint of the semiconductor package. The semiconductor die in a stack may be arranged in a stair-like pattern (which may be referred to as a "shingled stack") such that a portion of each semiconductor die is freely accessible, for example, to attach bonding wires to one or more bond pads. In some cases, semiconductor die may be stacked in a "zigzag" pattern to increase the space above the bond pads relative to the semiconductor die overlying them in order to facilitate the formation of bond wires. However, such arrangements tend to increase the overall height of the semiconductor package.
Additionally, bond wires can add height and/or introduce signal propagation delays.

Summary
Embodiments of the present disclosure provide a semiconductor die comprising: a substrate comprising an integrated circuit system; and a dielectric structure over the substrate, the dielectric structure comprising: a first dielectric layer on a first side of the dielectric structure facing the substrate; a second dielectric layer on a second side of the dielectric structure opposite the first side; and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer being configured to conform to one or more irregularities at the second side.
Another embodiment of the present disclosure provides a method comprising: providing a semiconductor die comprising a substrate having an integrated circuit system; and forming a dielectric structure over the substrate, the dielectric structure comprising: a first dielectric layer on a first side of the dielectric structure facing the substrate; a second dielectric layer on a second side of the dielectric structure opposite the first side; and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer configured to conform to one or more irregularities at the second side.
Yet another embodiment of the present disclosure provides a semiconductor die assembly comprising: a packaging substrate; and a die attached to the packaging substrate, the die comprising: a semiconductor substrate having an integrated circuit system; and a dielectric structure above the semiconductor substrate, the dielectric structure comprising: a first dielectric layer positioned on a first side of the dielectric structure facing the semiconductor substrate; a second dielectric layer on a second side of the dielectric structure opposite the first side; and a third dielectric layer between the first
dielectric layer and the second dielectric layer, the third dielectric layer configured to conform to one or more irregularities at the second side.

Brief Description of the Drawings
Many aspects of the present technology can be better understood with reference to the following figures. Components in the drawings are not necessarily drawn to scale. Rather, the emphasis is on clearly illustrating the general characteristics and principles of the inventive technology.
FIG. 1 is a schematic diagram of an example of a semiconductor die assembly.
FIG. 2 is a schematic diagram of a composite dielectric structure in accordance with an embodiment of the present technology.
FIG. 3 is a schematic diagram of a semiconductor die including a composite dielectric structure in accordance with an embodiment of the present technology.
FIG. 4 is a schematic diagram of a semiconductor die assembly configured in accordance with an embodiment of the present technology.
FIG. 5 is a block diagram schematically illustrating a system including a semiconductor die assembly configured in accordance with an embodiment of the present technology.
FIG. 6 is a flowchart of a method of fabricating a composite dielectric structure in accordance with an embodiment of the present technology.

Detailed Description
Specific details of several embodiments of composite dielectric structures for semiconductor die assemblies and associated systems and methods are described below. The term "semiconductor device or die" generally refers to a solid state device that includes one or more semiconductor materials. Examples of semiconductor devices (or dies) include logic devices or dies, memory devices or dies, controllers or microprocessors (e.g., central processing units (CPUs), graphics processing units (GPUs)), and so on. Such semiconductor devices may include integrated circuits or components, data storage elements, processing components, and/or other features fabricated on a semiconductor substrate.
Additionally, the term "semiconductor device or die" may refer to a finished device or to components or other structures at various stages of processing prior to becoming a finished functional device. Depending on the context in which it is used, the term "substrate" may include semiconductor wafers, packaging substrates, semiconductor devices or dies, and the like. Suitable steps of the methods described herein may be performed by processing steps associated with manufacturing semiconductor devices (wafer level and/or die level) and/or manufacturing semiconductor packages. Various computing systems or environments, such as high performance computing (HPC) systems, require high bandwidth and low power consumption. Certain approaches to forming interconnects between semiconductor dies (e.g., direct bonding approaches) may help meet these requirements as well as provide a form factor suitable for scaling the physical dimensions (e.g., height) of semiconductor die assemblies of HPC systems. Direct bonding schemes align and directly bond individual conductive features (e.g., copper pads, conductive pads) of a first semiconductor die (or a first wafer comprising the first semiconductor die) with corresponding conductive features of a second semiconductor die (or a second wafer). Furthermore, the dielectric material surrounding each of the conductive features of the first semiconductor die may be directly bonded to another dielectric material surrounding each of the conductive features of the second semiconductor die. In other words, the bonding interface includes two or more different materials, with corresponding materials of the first and second semiconductor die (e.g., dielectric to dielectric, conductive to conductive) bonded directly to form the interconnects and the surrounding dielectric layers.
Thus, the direct bonding scheme may also be referred to as a combined bonding scheme, a hybrid bonding scheme, or the like.

In some embodiments, the conductive material includes copper (or another suitable conductive material or metal, such as tungsten (W)) as a major component, and the dielectric material includes silicon oxide (e.g., SiO2), silicon nitride (e.g., Si3N4), silicon carbonitride (e.g., SiCN), silicon oxycarbide (e.g., SiCO), or the like. During the direct bonding process, the dielectric materials of the first and second semiconductor dies (or of first and second wafers containing the first and second semiconductor dies) are brought together such that the dielectric materials adhere to each other. Subsequently, the semiconductor dies are annealed at an elevated temperature such that the conductive materials of the first and second semiconductor dies bond to form a permanent bond, such as a metallurgical bond. In addition, the dielectric materials may gain bond strength during the annealing process. If there are any irregularities (e.g., defects, particles) at the bonding interface (which may also be referred to as the mating interface or bond line), such irregularities weaken the bond strength between the semiconductor dies (or wafers), for example by forming voids around the irregularities due at least to the rigidity and/or brittleness of the dielectric material.

In some cases, even if a direct bond is formed that holds the two semiconductor dies (or wafers) together, voids present at the bonding interface may interfere with forming a robust interconnect between the conductive features. If portions of the conductive features are left unbonded (e.g., unfused) due to voids, an interconnect comprising partially bonded conductive features may have a higher-than-desired resistance value. If the conductive features fail to form a continuous conductive path, an electrical open circuit may occur in the interconnect.
In some cases, a void may contain conductive material originating from a conductive feature connected to the void, for example through various mechanisms that cause the conductive material to migrate (e.g., expansion, extrusion, diffusion). If a void is large enough to touch multiple conductive features, the void may act as a conduit for migration of conductive material (e.g., Cu), such that undesirable leakage paths and/or electrical shorts may occur between the conductive features. Therefore, the environment of the direct bonding process must be ultra-clean to avoid particles at the bonding surfaces, which in turn tends to increase manufacturing costs.

The present technology mitigates the risks associated with forming a damaged bonding interface (e.g., voids weakening bond strength, interconnects with partially bonded conductive features, lateral leakage paths, and/or electrical shorts between interconnects). The composite dielectric structure includes a dielectric surface layer (e.g., silicon oxide, silicon nitride, silicon oxycarbide) suitable for direct bonding schemes, such that the bonding strength provided by the dielectric surface layer can be maintained. In addition, the composite dielectric structure includes a layer having elastic properties (e.g., conforming to the shape of the irregularities) that is tolerant of irregularities (e.g., particles, defects) at the bonding interface. In some embodiments, the layer having elastic properties may comprise a polymer material that is flexible enough to deform in response to localized pressure created by the irregularities during the bonding process.
For example, chemical vapor deposition (CVD) processes can be used to deposit flexible dielectric layers to reduce and/or eliminate voids caused by irregularities at the bonding interface, such as by using a siloxane derivative (e.g., hexamethyldisiloxane (HMDSO)) as a precursor.

In this manner, the composite dielectric structure may avoid formation of voids and/or substantially reduce the size of voids despite possible irregularities at the bonding interface. Accordingly, the bonding interface may be improved to have enhanced bond strength due at least to increased bonded area, a reduced number of interconnects having high resistance, a reduced probability of leakage paths forming between interconnects, and the like. Additionally or alternatively, direct bonding processes employing composite dielectric structures can be performed in an environment with relatively relaxed particle requirements, which in turn can reduce manufacturing costs.

As used herein, the terms "front," "rear," "vertical," "lateral," "lower," "upper," "above," and "below" may refer to the relative direction or position of features in a semiconductor device assembly, given the orientation shown in the figures. For example, "top" or "topmost" may designate a feature positioned closer to the top of the page than another feature. However, these terms should be interpreted broadly to encompass semiconductor devices having other orientations. Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Accordingly, these terms are not necessarily intended to indicate a temporal or other prioritization of such elements.

FIG. 1 is an example schematic diagram 100 of a semiconductor die assembly. Diagram 100 illustrates a bonding interface 105 between semiconductor dies 101 (also individually identified as 101a/b) bonded directly to each other.
Semiconductor die 101 is depicted as including substrate 110 (also individually identified as 110a/b) and dielectric layer 120 (also individually identified as 120a/b). Furthermore, semiconductor die 101 includes through-substrate vias (TSVs) 115 (also individually identified as 115a/b) coupled to the integrated circuit system (not shown) of semiconductor die 101. The TSVs 115 are also connected to corresponding conductive features 125 (also individually identified as 125a/b) of the semiconductor die 101. Thus, the conductive features 125 are operatively coupled to the integrated circuit system.

At the bonding interface 105, the dielectric layers 120a/b are directly bonded (e.g., adhered, fused) to form a dielectric-to-dielectric bond 130. Additionally, the conductive features 125a/b are directly bonded (e.g., joined, fused) to form a metal-to-metal bond 135 at the bonding interface 105. Accordingly, bonding interface 105 includes dielectric-to-dielectric bonds 130 and metal-to-metal bonds 135, and may be referred to as a combined bonding interface or a hybrid bonding interface. Diagram 100 illustrates interconnects 140 (also individually identified as 140a-c), each comprising bonded conductive features 125.

In some cases, bonding interface 105 may contain irregularities (e.g., defects, particles). For example, diagram 100 illustrates an irregularity 145 at bonding interface 105. Dielectric layer 120 (e.g., comprising SiO2, SiCN) tends to be rigid and/or brittle, such that dielectric layer 120 may not locally conform to irregularity 145 during the direct bonding process. As a result, voids (e.g., void 150 depicted in diagram 100) may form around irregularity 145 at bonding interface 105. Thus, while an overall direct bond may be established between dielectric layers 120 to bond semiconductor dies 101 together, bonding interface 105 may contain such voids associated with irregularities.
The presence of voids reduces the overall area of the dielectric-to-dielectric bond 130 and thus reduces the bond strength of the bonding interface 105.

In some cases, certain voids formed at bonding interface 105 may be large enough to interfere with (e.g., prevent or impede) forming metal-to-metal bond 135. For example, void 150 may encroach into interconnect 140c such that the metal-to-metal bond of interconnect 140c is compromised. Accordingly, interconnect 140c may have a higher resistance than other interconnects 140 (e.g., interconnect 140a). Such variation in the electrical characteristics of the interconnects 140 may degrade the performance of the semiconductor die assembly. If the size of the void 150 is large enough to prevent an interconnect 140 from forming a continuous current path (e.g., creating an electrical open circuit), such an interconnect 140 may render the semiconductor die assembly inoperable.

In some embodiments, the metal-to-metal bond 135 may be formed by thermally expanding the conductive material (e.g., copper) of conductive features 125 (e.g., by volumetric expansion in response to thermal energy applied during the direct bonding process) after semiconductor dies 101 are brought into contact with each other. Thus, if connected to a conductive feature 125, a void may act as a conduit through which conductive material can migrate. As illustrated in diagram 100, if the void 150 is large enough to bridge (or otherwise connect) two or more interconnects 140, the void 150 containing traces of conductive material may create an undesirable leakage path and/or electrical short between the interconnects (e.g., between interconnect 140b and interconnect 140c).

FIG. 2 is a schematic diagram of a composite dielectric structure 260 in accordance with an embodiment of the present technology.
Composite dielectric structure 260 may help tolerate irregularities during the direct bonding process so that the quality of the bonding interface may be improved, e.g., by mitigating adverse effects from the irregularities. The composite dielectric structure 260 includes a first dielectric layer 265, a second dielectric layer 270, and a third dielectric layer 275 between the first dielectric layer 265 and the second dielectric layer 270. The first dielectric layer 265 and the second dielectric layer 270 may include at least one of silicon oxide (e.g., SiO2), silicon nitride (e.g., Si3N4), silicon carbonitride (e.g., SiCN), silicon oxycarbide (e.g., SiCO), or the like. The first dielectric layer 265 and the second dielectric layer 270 are configured to form robust bonds at interfaces with other dielectric layers that contact the composite dielectric structure 260 (e.g., a dielectric layer directly bonded to the second dielectric layer 270, as described in more detail with reference to FIG. 4).

The third dielectric layer 275 may be configured to conform to one or more irregularities that may exist at the bonding interface (e.g., at the surface of the second dielectric layer 270). In some embodiments, the third dielectric layer 275 comprises a polymer material that is flexible enough to deform in response to localized pressure generated by the one or more irregularities. In other words, the third dielectric layer 275 may have elastic properties that tolerate particles and/or defects present at the bonding interface. Accordingly, the third dielectric layer 275 may be regarded as a resilient or "spring-like" layer, as depicted in FIG. 2. In this way, composite dielectric structure 260 may provide both the bonding strength of the dielectric material of second dielectric layer 270 and the flexibility of third dielectric layer 275 to withstand irregularities at the bonding interface.
In this regard, the local pressure originating from the irregularities may be applied (transmitted) through the second dielectric layer 270 to the third dielectric layer 275 at the bonding interface (e.g., the bonding interface 405 described with reference to FIG. 4).

In some embodiments, a CVD process may be used to deposit the first, second, and third dielectric layers depicted in FIG. 2. For example, a semiconductor die (e.g., semiconductor die 301 described with reference to FIG. 3, semiconductor die 401 described with reference to FIG. 4, or a wafer comprising such die) is placed in a CVD chamber configured to contain a first gas having oxygen (O2) and a second gas having a precursor (e.g., hexamethyldisiloxane (HMDSO)). Subsequently, a first dielectric layer 265 (e.g., comprising SiO2) may be formed by providing the first and second gases to the CVD chamber at a first ratio between oxygen and precursor. The first ratio may be configured to deposit SiO2 on the semiconductor die (or semiconductor wafer) to form the first dielectric layer 265. After achieving the desired thickness of the first dielectric layer 265, the third dielectric layer 275 (e.g., comprising polydimethylsiloxane (PDMS)) can be formed by modifying (e.g., reducing) the amount of the first gas provided to the CVD chamber to establish a second ratio between oxygen and precursor. The second ratio may be configured to deposit a polymer material (e.g., PDMS) on the first dielectric layer 265 to form the third dielectric layer 275.
After achieving the desired thickness of the third dielectric layer 275, the second dielectric layer 270 can be formed by restoring the amount of the first gas supplied to the CVD chamber to establish the first ratio, thereby depositing SiO2 on the third dielectric layer 275 to form the second dielectric layer 270.

As described herein, the process conditions of the CVD process can be modified, by modifying the ratio between O2 and the precursor (e.g., HMDSO or another suitable siloxane derivative), to change the relative amounts of polymer material and SiO2. In some embodiments, the third dielectric layer 275 includes only polymer material. In other embodiments, the third dielectric layer 275 mainly includes polymer material; for example, the third dielectric layer 275 may also partially include SiO2. While the foregoing example CVD process utilizes a precursor (HMDSO) configured to deposit SiO2 or PDMS (a polymer material) in the CVD chamber based on the ratio between O2 and the precursor, in other embodiments a different precursor (and/or one or more gases other than O2) may be supplied to the CVD chamber for the deposition of polymer materials as well as silicon oxide (e.g., SiO2), silicon nitride (e.g., Si3N4), silicon carbonitride (e.g., SiCN), silicon oxycarbide (e.g., SiCO), and the like.

FIG. 3 is a schematic diagram of a semiconductor die 301 including a composite dielectric structure (e.g., composite dielectric structure 260 described with reference to FIG. 2) in accordance with an embodiment of the present technology. Semiconductor die 301 may include aspects of semiconductor die 101 described with reference to FIG. 1. For example, semiconductor die 301 includes substrate 110, which includes an integrated circuit system (not shown). Semiconductor die 301 also includes TSVs 115 (one of which is depicted in FIG.
3) coupled to the integrated circuit system. Furthermore, the semiconductor die 301 includes a dielectric layer 320 having the composite dielectric structure 260 over the substrate 110. In some embodiments, dielectric layer 320 includes an additional dielectric layer 380 formed on substrate 110, on which the composite dielectric structure 260 is formed. In some embodiments, dielectric layer 380 may be formed using process steps utilizing tetraethyl orthosilicate (TEOS) or other suitable techniques to deposit a dielectric material (e.g., high-density plasma (HDP) oxide).

Semiconductor die 301 also includes conductive features 125 (one of which is depicted in FIG. 3) coupled to TSVs 115. Conductive features 125 may also be referred to as conductive pads and are configured to have physical dimensions (e.g., surface area, thickness) that provide a sufficient volume of conductive material (e.g., copper) to form a robust interconnect (e.g., interconnect 440 depicted in FIG. 4) during the direct bonding process. In some embodiments, conductive pads 125 may be formed in dielectric layer 320, e.g., after composite dielectric structure 260 (and dielectric layer 380) is formed over substrate 110. Thus, each conductive pad 125 extends through and is surrounded by composite dielectric structure 260 (e.g., by the first, second, and third dielectric layers of composite dielectric structure 260).

The composite dielectric structure 260 includes the first dielectric layer 265 at a first side 261 of the composite dielectric structure 260 facing the substrate 110, the second dielectric layer 270 at a second side 262 of the composite dielectric structure 260 opposite the first side 261, and the third dielectric layer 275 between the first dielectric layer 265 and the second dielectric layer 270. In other words, the third dielectric layer 275 is sandwiched between the first dielectric layer 265 and the second dielectric layer 270.
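The gas-ratio sequencing described above (oxide, then a polymer-rich compliant layer, then oxide again, all in one chamber) can be sketched as a small recipe builder. This is a minimal illustration only: the ratio values, step names, and the `build_recipe` helper are assumptions for clarity and do not model a real CVD tool or the actual process parameters.

```python
# Hypothetical sketch of the two-ratio CVD sequencing for the composite
# dielectric stack. Ratio values are illustrative assumptions, not taken
# from the source: a high O2:HMDSO ratio is assumed to favor SiO2, a low
# ratio to favor a polymer (PDMS-like) film.

RATIO_OXIDE = 4.0    # assumed O2:precursor ratio for SiO2-like deposition
RATIO_POLYMER = 0.5  # assumed O2:precursor ratio for polymer-rich deposition

def build_recipe(t1_nm: float, t2_nm: float, t3_nm: float) -> list:
    """Return the ordered deposition steps for the composite stack:
    first layer (265), compliant third layer (275), second layer (270),
    deposited in one chamber without breaking vacuum."""
    return [
        ("first_dielectric_265", RATIO_OXIDE, t1_nm),
        ("third_dielectric_275", RATIO_POLYMER, t2_nm),
        ("second_dielectric_270", RATIO_OXIDE, t3_nm),
    ]

recipe = build_recipe(t1_nm=300, t2_nm=400, t3_nm=100)
for name, ratio, thickness_nm in recipe:
    print(f"{name}: O2/precursor ratio {ratio}, target {thickness_nm} nm")
```

The point of the sketch is only the ordering: the same chamber alternates between two gas ratios, so the compliant layer is naturally sandwiched between two bondable oxide layers.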
If the semiconductor die 301 is bonded directly to another semiconductor die (e.g., semiconductor die 101, semiconductor die 401) as depicted in FIG. 4, the second side 262 forms a bonding interface (e.g., bonding interface 405 described with reference to FIG. 4). The third dielectric layer 275 may be configured to conform to one or more irregularities at the second side 262 at the bonding interface, e.g., during a direct bonding process. In some embodiments, the third dielectric layer 275 includes a polymer material (e.g., PDMS) that is flexible (e.g., has elastic material properties) so as to deform in response to local pressure generated by one or more irregularities (e.g., defects, particles). Accordingly, localized pressure may be applied to the third dielectric layer 275 through the second dielectric layer 270.

As described with reference to FIG. 2, the first, second, and third dielectric layers may be deposited during the CVD process by modifying the gas flows (e.g., the gas flows that provide O2 and the precursor). In some embodiments, the third dielectric layer 275 includes only polymer material. In other embodiments, the third dielectric layer 275 consists primarily of a polymer material, e.g., with a partial SiO2 content. Additionally, the second dielectric layer 270 may be configured to conform to one or more irregularities at the second side 262. Accordingly, in some embodiments, the second dielectric layer 270 may be configured to include a partial polymer material content. In other embodiments, the second dielectric layer 270 may not include any polymer material content. In such embodiments, the second dielectric layer 270 may be formed thin enough to deform in response to local pressure created by the one or more irregularities.
Furthermore, the second dielectric layer 270 may be configured to be directly bonded, during a direct bonding process, to another dielectric layer that contacts the second dielectric layer 270, for example where a second semiconductor die is directly bonded to the semiconductor die 301 as depicted in FIG. 4.

In some embodiments, the thickness of the second dielectric layer 270 (denoted t3 in FIG. 3) is at least 50 nm, and the thickness of the third dielectric layer 275 (denoted t2 in FIG. 3) is at least twice the thickness of the second dielectric layer 270. In other embodiments, the thickness of the second dielectric layer 270 is at least 100 nm, and the thickness of the third dielectric layer 275 may be in the range of 200 to 500 nm. In some embodiments, the thickness of the third dielectric layer 275 is determined by the size of the one or more irregularities, e.g., based on the clean-room environment in which the direct bonding process is performed.

In some embodiments, the first dielectric layer 265 may be configured to provide a sufficient transition and/or adhesion between dielectric layer 380 and composite dielectric structure 260, for example, a transition between the TEOS process used to deposit dielectric layer 380 and the CVD process used to deposit the composite dielectric structure 260, and adhesion between the SiO2 layer formed by the TEOS process and the first dielectric layer 265. In some embodiments, the overall thickness of composite dielectric structure 260 (denoted T in FIG. 3) may range between 1 and 2 micrometers (μm). In some embodiments, the thickness of dielectric layer 380 (denoted t0 in FIG. 3) may be determined to provide a sufficient thickness of conductive pad 125 so that conductive pad 125 can form a robust interconnect (e.g., interconnect 440) during the direct bonding process.

FIG. 4 is a schematic diagram 400 of a semiconductor die assembly configured in accordance with an embodiment of the present technology.
Diagram 400 illustrates a bonding interface 405 between semiconductor dies 401a/b bonded directly to each other. The semiconductor dies 401a/b may be examples of the semiconductor die 301 described with reference to FIG. 3, i.e., the semiconductor dies 401a/b include the composite dielectric structure 260 described with reference to FIGS. 2 and 3. In this regard, the orientation of dielectric layer 420a corresponds to the orientation of dielectric layer 320 depicted in FIG. 3, while the orientation of dielectric layer 420b is inverted (e.g., flipped) relative to dielectric layer 320 depicted in FIG. 3.

Similar to bonding interface 105 described with reference to FIG. 1, bonding interface 405 may contain irregularities (e.g., defects, particles). For example, diagram 400 illustrates irregularity 145 at bonding interface 405. As described herein, dielectric layer 420a (and dielectric layer 420b) includes composite dielectric structure 260 configured to conform to one or more irregularities at bonding interface 405. Thus, voids associated with irregularity 145 may be absent (or substantially reduced in size (not shown)) at bonding interface 405. In this way, bonding interface 405 may be improved over bonding interface 105. For example, as compared to bonding interface 105, bonding interface 405 may have enhanced bonding strength attributable at least to an increased bonding area, a reduced number of interconnects 440 having high resistance, a reduced probability of leakage paths forming between interconnects 440, and so on. Additionally or alternatively, the direct bonding process may be practiced in an environment with relatively relaxed requirements on particles (e.g., particle size and/or distribution), which in turn may reduce manufacturing costs of semiconductor die assemblies.

Although the foregoing example embodiment of FIG.
4 included two semiconductor dies (e.g., semiconductor dies 401) having composite dielectric structure 260, the present technology is not so limited. For example, in some embodiments, semiconductor die 401b may be replaced by semiconductor die 101, i.e., a semiconductor die that does not include composite dielectric structure 260 as part of its dielectric layer 120. In such embodiments, the composite dielectric structure 260 of the semiconductor die 401a may be modified (e.g., by increasing the thickness t2 of the third dielectric layer 275) so that the adverse effects attributed to irregularities 145 are mitigated.

In some embodiments, a semiconductor die assembly includes a packaging substrate and a die (e.g., semiconductor die 401a) attached to the packaging substrate. The die includes a semiconductor substrate with an integrated circuit system, and a dielectric structure (e.g., composite dielectric structure 260) over the semiconductor substrate. Furthermore, the dielectric structure comprises a first dielectric layer at a first side of the dielectric structure facing the semiconductor substrate, a second dielectric layer at a second side of the dielectric structure opposite the first side, and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer being configured to conform to one or more irregularities at the second side.

In some embodiments, the third dielectric layer includes a polymer material that is flexible enough to deform in response to localized pressure generated by the one or more irregularities. In some embodiments, the semiconductor die assembly further includes one or more conductive pads formed in the dielectric structure, each conductive pad extending through the dielectric structure and configured to couple with at least one through-substrate via (TSV) coupled to the integrated circuit system.
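The thickness relationships stated earlier (second dielectric layer at least 50 nm, third layer at least twice that, overall composite structure roughly 1 to 2 μm) can be captured as a small design-rule check. This is a hedged sketch: the `check_stack` function, its message strings, and the sample values are illustrative assumptions, not a design tool described in the source.

```python
# Hypothetical design-rule check for the composite stack thicknesses
# described earlier: t3 (layer 270) >= 50 nm, t2 (layer 275) >= 2 * t3,
# and overall composite thickness T in the 1-2 um range.

def check_stack(t2_nm: float, t3_nm: float, total_um: float) -> list:
    """Return a list of violated constraints (empty if the stack passes)."""
    violations = []
    if t3_nm < 50:
        violations.append("second dielectric layer (t3) thinner than 50 nm")
    if t2_nm < 2 * t3_nm:
        violations.append("third dielectric layer (t2) less than twice t3")
    if not (1.0 <= total_um <= 2.0):
        violations.append("overall composite thickness outside 1-2 um")
    return violations

# A stack meeting all three stated constraints produces no violations.
print(check_stack(t2_nm=400, t3_nm=100, total_um=1.5))  # -> []
# A compliant layer that is too thin relative to t3 is flagged.
print(check_stack(t2_nm=150, t3_nm=100, total_um=1.5))
```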
In some embodiments, the die is a first die, and the semiconductor die assembly further includes a second die (e.g., semiconductor die 101) bonded directly to the first die at the second side, wherein the second die includes a fourth dielectric layer bonded directly to the second dielectric layer, the second die including no polymer material.

In some embodiments, the die is a first die and the dielectric structure is a first dielectric structure, and the semiconductor die assembly further includes a second die (e.g., semiconductor die 401b) bonded directly to the first die, wherein the second die includes a second dielectric structure (e.g., composite dielectric structure 260) having: a fourth dielectric layer bonded directly to the second dielectric layer at the second side; a fifth dielectric layer immediately adjacent to the fourth dielectric layer, the fifth dielectric layer configured to conform to the one or more irregularities at the second side; and a sixth dielectric layer immediately adjacent to the fifth dielectric layer, the sixth dielectric layer facing a second semiconductor substrate of the second die.

FIG. 5 is a block diagram schematically illustrating a system 500 including a semiconductor die assembly configured in accordance with embodiments of the present technology. System 500 may include semiconductor device assembly 570, power supply 572, driver 574, processor 576, and/or other subsystems or components 578. Semiconductor device assembly 570 may be incorporated into any of a number of larger and/or more complex systems, a representative example of which is system 500 shown schematically in FIG. 5. The semiconductor die assembly described with reference to FIG. 4 may be included in the semiconductor device assembly 570 of the system 500.

The semiconductor device assembly 570 may have substantially similar features to the semiconductor die assembly described herein with reference to FIG. 4.
For example, semiconductor device assembly 570 may include two semiconductor dies bonded directly to each other. At least one of the semiconductor dies can include a composite dielectric structure having a flexible dielectric layer that can tolerate irregularities (e.g., defects, particles) present at the bonding interface. The flexible dielectric layer may comprise a polymer material configured to deform in response to localized pressure generated by the irregularities during the bonding process step. The composite dielectric structure includes additional dielectric layers sandwiching the flexible dielectric layer, such that the composite dielectric structure can provide robust bonding strength to other dielectric layers through the additional dielectric layers. In some embodiments, a chemical vapor deposition process utilizing a siloxane derivative as a precursor may be used to form the composite dielectric structure.

The resulting system 500 may perform any of a wide variety of functions, such as memory storage, data processing, and/or other suitable functions. Accordingly, representative systems 500 may include, but are not limited to, handheld devices (e.g., mobile phones, tablets, digital readers, and digital audio players), computers, and electrical equipment. Components of system 500 may be housed in a single unit or distributed (e.g., via a communication network) over multiple interconnected units. Components of system 500 may also include remote devices and any of a wide variety of computer-readable media.

FIG. 6 is a flowchart 600 of a method of fabricating a composite dielectric structure in accordance with an embodiment of the present technology. Flowchart 600 may incorporate aspects of the methods described with reference to FIGS. 2-5.

The method includes providing a semiconductor die including a substrate having an integrated circuit system (block 610).
The method further includes forming a dielectric structure over the substrate, the dielectric structure including a first dielectric layer at a first side of the dielectric structure facing the substrate, a second dielectric layer at a second side of the dielectric structure opposite the first side, and a third dielectric layer between the first dielectric layer and the second dielectric layer, the third dielectric layer being configured to conform to one or more irregularities at the second side (block 615).

In some embodiments, forming the dielectric structure over the substrate includes: depositing the first dielectric layer over the substrate in a chemical vapor deposition (CVD) chamber; depositing the third dielectric layer on the first dielectric layer; and depositing the second dielectric layer on the third dielectric layer in the CVD chamber without breaking the vacuum of the CVD chamber. In some embodiments, the third dielectric layer includes a polymer material that is flexible enough to deform in response to localized pressure generated by the one or more irregularities.

In some embodiments, forming the dielectric structure over the substrate includes: placing the semiconductor die in a chemical vapor deposition (CVD) chamber configured to contain a first gas having oxygen and a second gas having a precursor of a polymer material; providing the first and second gases to the CVD chamber at a first ratio between the oxygen and the precursor, the first ratio being configured to deposit a first silicon oxide material on the semiconductor die, the first silicon oxide material corresponding to the first dielectric layer; modifying the amount of the first gas supplied to the CVD chamber to establish a second ratio between the oxygen and the precursor, the second ratio being configured to deposit the polymer material on the silicon oxide, the polymer material corresponding to the third dielectric layer; and restoring the amount of the
first gas supplied to the CVD chamber to establish the first ratio, thereby depositing a second silicon oxide material on the polymer material, the second silicon oxide material corresponding to the second dielectric layer. In some embodiments, the method may further include forming one or more conductive pads in the dielectric structure, each conductive pad extending through the dielectric structure and configured to couple with at least one through-substrate via (TSV) coupled to the integrated circuit system.

It should be noted that the methods described above describe possible implementations, and that operations and steps may be rearranged or otherwise modified, and other implementations are possible. Furthermore, embodiments from two or more of the methods may be combined. From the foregoing it should be appreciated that specific embodiments of the present technology have been described herein for purposes of illustration, but that various modifications may be made without departing from the disclosure. Additionally, although certain features or components have been shown in certain arrangements or configurations in the illustrated embodiments, other arrangements and configurations are possible. Furthermore, certain aspects of the present technology described in the context of particular embodiments may also be combined or eliminated in other embodiments.

Devices discussed herein, including semiconductor devices, may be formed on semiconductor substrates or dies, such as silicon, germanium, silicon-germanium alloys, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOS), or an epitaxial layer of semiconductor material on another substrate.
The conductivity of the substrate, or of sub-regions of the substrate, can be controlled by doping using various chemistries including, but not limited to, phosphorus, boron, or arsenic. Doping can be performed by ion implantation or by any other doping method during the initial formation or growth of the substrate.

As used herein, "or" indicates an inclusive list, such that a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Furthermore, as used herein, the phrase "based on" should not be understood to refer to a closed set of conditions. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" should be interpreted in the same manner as the phrase "based at least in part on."

From the foregoing it should be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without departing from the scope of the invention. In the foregoing description, numerous specific details were set forth in order to provide a thorough and enabling description of embodiments of the present technology. One skilled in the relevant art will recognize, however, that the disclosure may be practiced without one or more of the specific details. In other instances, well-known structures or operations typically associated with memory systems and devices are not shown or described in detail to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.
A method to improve transistor performance uses a wafer (100) of single-crystalline semiconductor with a first zone (102) of field effect transistors (FETs) and circuitry at the wafer (100) surface, and an infrared (IR) laser with a lens for focusing the IR light to a second depth (112) farther from the wafer (100) surface than a first depth of the first zone (102). The focused laser beam is moved parallel to the wafer (100) surface across the wafer (100) to cause local multi-photon absorption at the second depth (112) for transforming the single-crystalline semiconductor into a second zone (111) of polycrystalline semiconductor with high density of dislocations. The second zone (111) has a height and lateral extensions, and permanently stresses the single-crystalline semiconductor. The stress increases a majority carrier mobility in the channel of the FETs, improving the transistor performance.
CLAIMS

What is claimed is:

1. A method of improving transistor performance, the method comprising:
providing a wafer of a single-crystalline semiconductor, the wafer having a surface and a plurality of device chips including a first zone of field effect transistors (FETs) and circuitry extending to a first depth from the surface;
providing an infrared (IR) laser having a lens for focusing the IR light to a second depth from the wafer surface, the second depth greater than the first depth; and
moving the focused laser beam parallel to the surface across the wafer to cause local multi-photon absorption at the second depth for transforming the single-crystalline semiconductor into a second zone of polycrystalline semiconductor with high density of dislocations, the second zone having a height and lateral extensions.

2. The method of claim 1 wherein the polycrystalline semiconductor of the zone is amorphous.

3. The method of claim 1 wherein the infrared laser is a stealth laser.

4. The method of claim 1 wherein the first depth is between about 6 μm and 12 μm, dependent on the number of metallization levels employed.

5. The method of claim 1 wherein the second depth is between approximately 6 μm and 50 μm.

6. The method of claim 1 wherein the height of the zone is between about 10 μm and 30 μm.

7. The method of claim 1 wherein the zone may be subdivided into sections having lateral dimensions smaller than the length of a device chip.

8. The method of claim 1 wherein the borders of the zone of amorphous polycrystalline semiconductor are parallel to the wafer surface and approximately planar.

9. The method of claim 1 further including the process of singulating the wafer into discrete device chips, each chip having a zone of polycrystalline semiconductor embedded in the single-crystalline semiconductor.

10.
A semiconductor device comprising:
a chip of single-crystalline semiconductor having a surface and a first zone of field effect transistors (FETs) and circuitry extending to a first depth from the surface, the first zone parallel to the chip surface; and
a second zone of polycrystalline semiconductor with high density of dislocations, the second zone parallel to the chip surface and having a center plane at a second depth from the chip surface, the second depth greater than the first depth, the second zone having a height and lateral extensions.

11. The device of claim 10 wherein the first depth is between about 6 μm and 12 μm, dependent on the number of metallization levels employed.

12. The device of claim 10 wherein the second depth is between approximately 6 μm and 50 μm.

13. The device of claim 10 wherein the height of the zone is between about 10 μm and 30 μm.
METHOD FOR IMPROVING TRANSISTOR PERFORMANCE

[0001] This relates generally to semiconductor devices and processes, and more particularly to a structure and fabrication method for creating intrinsic semiconductor lattice strain to increase carrier mobility and enhance field effect transistor performance.

BACKGROUND

[0002] When a body of a semiconductor such as silicon is in contact with another solid-state material, stress in the semiconductor is usually caused by one of two situations: stress may be caused by a mismatch of the coefficients of thermal expansion (CTE) between the two materials, or stress may be caused by differences in the lattice constants of the two bodies.

[0003] Mechanical stress, when applied to a semiconductor lattice, leads to splitting of the conduction band and thus alters the effective mass of the majority carrier, leading to changes in the carrier mobility. For nMOS field effect transistors (FETs) with electrons as majority carriers, tensile stress on the channel lattice enhances the electron mobility in the channel. For pMOS FETs with holes as majority carriers, compressive stress on the channel lattice enhances the hole mobility in the channel. In both examples, the improvement in carrier mobility leads to a decrease in the on-resistance between drain and source (RDS(on)) and thus to improved FET efficiency. One frequently practiced technique to achieve such improved FET performance is the deposition of tensile silicon nitride layers on nMOS transistors and compressive silicon nitride layers on pMOS transistors.

[0004] The effect that localized stress near the channel of a field effect transistor can result in improved electrical performance has been put into practice in the last few years in the production of semiconductor devices that require through-silicon vias (TSVs). The fabrication of TSVs starts while the device chips are still in un-thinned wafer form.
Holes distributed across each chip area in the desired pattern are etched with uniform diameter and to a certain depth. The etching may be performed by chemical etching or by focused laser light. Then, a dielectric compound such as silicon nitride or silicon dioxide is deposited on the TSV sidewalls in order to create a thin insulating layer between the semiconductor material and the intended conductive layers inside the TSV. Next, a metal seed layer (such as tantalum nitride or a refractory metal) is deposited on the insulating layer, followed by the deposition of the thicker metal filling (preferably copper). Thereafter, the wafer is thinned, by grinding or etching or both, until the bottoms of the via holes are exposed and the TSVs are opened. The conductive via may be closed off by a solderable layer of nickel and palladium.

[0005] While the stress caused by the TSVs may result in large keep-out zones for three-dimensional integrated circuits (ICs), recent studies of TSV placements have resulted in stress-aware layouts that help IC transistors benefit from the enhanced carrier mobility in the stress zones. For example, a study by Yang et al., based upon the dependence of mobility on stress and on the orientation between FET channel and TSV, showed how the optimum placement of TSVs can help improve the majority charge carrier mobility and thus result in improved transistor performance (see Yang, Jae-Seok, et al., "TSV stress aware timing analysis with applications to 3D-IC layout optimization", Proceedings of the 47th Design Automation Conference, ACM, June 13-18, 2010, pp. 803-806).

SUMMARY

[0006] In described examples of a process flow, a wafer of single-crystalline semiconductor material (such as silicon) is provided. The wafer has a surface and includes chips with field-effect transistors and integrated circuitry. The circuitry extends to a first depth from the surface.
Also, an infrared (IR) laser is selected so that its wavelength can be focused to a second depth greater than the first depth and allows a high percentage of the focused energy to be absorbed by the single-crystalline semiconductor lattice without ablation. The preferred wavelength range for the operation is between 900 nm and 1000 nm, allowing an internal transmittance between about 50% and 70%. The absorbed energy can then transform the single-crystalline semiconductor lattice into an amorphous polycrystalline region with a high density of dislocations.

[0007] After focusing the IR laser to the second depth, where the optical damage by multi-photon absorption creates modification of the single-crystalline semiconductor into polycrystalline material with a high density of dislocations, the focused beam is moved parallel to the surface across the wafer. The moving local multi-photon absorption at the second depth forms a zone of the polycrystalline semiconductor. Zones of various extents may be created. The movement may be repeated until a polycrystalline zone of about 30 μm height is created. The polycrystalline zone creates an intrinsic permanent strain in the single-crystalline lattice near the FET structures, which in turn results in increased mobility of the majority device carriers and thus more efficient FET performance, such as by lowering the RDS(on) resistance.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] FIG. 1 shows a cross section of a portion of a semiconductor chip with the integrated circuit zone near the chip surface and the embedded zone of amorphous polysilicon for creating strain in the single-crystal lattice.

[0009] FIG. 2A is a schematic representation of the effect of stresses in an nMOS field effect transistor: tensile stresses enhance electron mobility in the gate channel.

[0010] FIG.
2B is a schematic representation of the effect of stresses in a pMOS field effect transistor: compressive stresses enhance hole mobility in the gate channel.

[0011] FIG. 3 illustrates the methodology of moving the focus of an infrared laser parallel to the surface of a single-crystalline semiconductor chip, creating a zone of amorphous semiconductor.

[0012] FIG. 4 shows the diagram of a process flow for using focused infrared laser light to create a zone of amorphous polysilicon embedded in single-crystal silicon.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0013] The intrinsic resistance of a field effect transistor (FET) is important for performance. Decreasing the intrinsic resistance (usually the on-resistance of the channel between source and drain of a field effect transistor) increases the transistor efficiency. Localized stresses near a transistor may result in improved electrical performance by enhancing majority carrier mobility, and numerous semiconductor products may be improved by a method (preferably a low-cost method) to create such localized stress throughout the product wherever field effect transistors operate.

[0014] Example embodiments solve the problem of creating an intrinsic strain near a FET in a semiconductor chip, and thus an increased mobility of the FET majority carrier in the gate channel, by using the focused energy of an infrared stealth laser to form optical damage by multi-photon absorption at a location determined by the position of the laser focus. The multi-photon absorption produces a region of amorphous poly-semiconductor near an FET.

[0015] By moving the focused laser parallel to the chip surface, a zone of amorphous semiconductor is created. The amorphous poly-semiconductor creates a permanent intrinsic strain in the single-crystalline lattice near the FET structure.
The strain, in turn, increases the mobility of the majority carrier in the gate channel of the FET.

[0016] Example embodiments use focused infrared laser light, moving along a direction, to create (at the depth of the focus) a precisely embedded layer of amorphous poly-semiconductor, which permanently stresses the single-crystalline bulk semiconductor and thus increases the majority carrier mobility in the channel of a field effect transistor.

[0017] FIG. 1 illustrates, as an example of crystal alteration, a portion of a chip made of a single-crystalline semiconductor, such as silicon; the chip portion is generally designated 100. Other bulk semiconductors include silicon germanium, gallium nitride, gallium arsenide, and any other compound used in the fabrication of semiconductor devices. Chip 100 has a first surface 100a, a second surface 100b, and a thickness 101. In preferred embodiments, thickness 101 is in the range from about 70 μm to 150 μm (but may be thinner or thicker).

[0018] A zone (with transistors and circuitry) is near first surface 100a. This integrated circuit zone is referred to herein as the first zone; it has a first depth 102 from first surface 100a. In the example of FIG. 1, first depth 102 is between about 6 μm and 12 μm, dependent on the number of metallization levels employed. In preferred embodiments, the first zone includes one or more field effect transistors (FETs) made according to MOS technology. The FETs may be nMOS or pMOS devices, dependent on the conductivity type and the majority carrier of the bulk semiconductor.

[0019] FIG. 1 illustrates a second zone 110 of polycrystalline semiconductor with a high density of dislocations. In the example of FIG. 1, second zone 110 has a height 111 of about 30 μm; in other embodiments, the height may be greater or smaller. Second zone 110 further has borderlines 110a and 110b, which are substantially planar and parallel to chip surfaces 100a and 100b.
The middle line of second zone 110 is spaced by a distance 112 from the chip surface 100a; in the example of FIG. 1, distance 112 is between about 30 μm and 50 μm; in other devices, it may be smaller or greater. In some devices, borderline 110a may reach near the border of the circuitry zone (first zone).

[0020] As described hereinbelow, the polycrystalline semiconductor of zone 110 is created from the single-crystalline bulk semiconductor by the optical damage caused by multi-photon absorption of the energy of a focused infrared laser, which has been directed towards, and is moving parallel to, the chip surface 100a. The amorphous poly-semiconductor creates a permanent intrinsic strain in the single-crystalline lattice near the zone of integrated circuitry with the FET structures. The strain, in turn, affects the mobility of the majority carriers in the gate channels of the FETs.

[0021] The strain of a lattice multiplied by the modulus of the material results in the stress in the lattice (mechanical stress is measured in pascals, Pa). Because lattice stress leads to splitting of the conduction band, the effective mass of a carrier can be altered; this effect results in changes in the carrier mobility. If the goal is to improve the carrier mobility, semiconductor devices of pMOS and nMOS technologies require different stress types, because the majority carriers are different: in nMOS devices, electrons are the majority carriers; in pMOS devices, holes are the majority carriers.

[0022] FIGs. 2A and 2B summarize the stress types necessary to improve majority carrier mobility in field effect transistors (FETs), schematically depicted to emphasize the channel between source and drain. As FIG. 2A shows, for an nMOS FET with electrons as majority carriers, tensile stress in the x-direction between source and drain can increase the electron mobility in the channel between source and drain. FIG.
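The strain-to-stress relation of paragraph [0021] can be sketched numerically. This is an illustrative calculation, not taken from the patent: the modulus value (~130 GPa for silicon along a <100> direction) and the example strain are assumptions.

```python
# Illustrative sketch of paragraph [0021]: stress = strain * elastic modulus.
# The modulus (~130 GPa, silicon along <100>) and the example strain are
# assumed for illustration, not taken from the patent.
def lattice_stress_pa(strain, youngs_modulus_pa=130e9):
    """Return mechanical stress in pascals for a dimensionless lattice strain."""
    return strain * youngs_modulus_pa

# A 0.1% tensile strain yields about 130 MPa of tensile stress.
stress = lattice_stress_pa(0.001)
print(f"{stress / 1e6:.0f} MPa")  # 130 MPa
```

The point of the estimate is only that sub-percent lattice strains already correspond to stresses in the tens-to-hundreds of MPa range, which is the regime in which carrier-mobility changes become significant.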
2B depicts the corresponding situation for a pMOS FET: with holes as majority carriers, compressive stress in the x-direction between source and drain can increase the hole mobility in the channel between source and drain. In either case, increased carrier mobility is proportional to increased carrier speed and thus improved FET performance.

[0023] Another embodiment is a method for creating the stresses in the semiconductor lattice to improve carrier mobility and thus the performance of transistors. The method is illustrated in FIG. 3 and summarized in FIG. 4. The method starts by providing a wafer 300 of a single-crystalline semiconductor (process 401). The wafer has a surface 300a and multiple device chips. The wafer has completed those front-end processes which result in the fabrication of field effect transistors (FETs) and circuitry in a zone of depth 302 from surface 300a. In FIG. 3, wafer 300 is shown having its final thickness 301 after the process of back-grinding; yet for practical reasons, the laser process to be described is preferably executed while the wafer still has its original thickness before back-grinding.

[0024] In process 402, an infrared (IR) laser is provided which is suitable for stealth technology, also referred to as Mahoh technology. Suitable lasers are commercially available from a number of companies in the U.S., Japan, and elsewhere; a few of these companies are Hamamatsu, Disco, and Accretech. In the so-called stealth methodology, a laser is selected for its light according to a plot of internal transmittance (in %) as a function of wavelength (in nm). At IR wavelengths shorter than about 800 nm, the laser energy is high enough to be used for ablating an object, so that the transmittance is negligible (about 0%). This wavelength regime is often referred to as laser dicing.
At wavelengths longer than about 1100 nm, the laser energy is weak enough to be transmitted through the object, so that the transmittance is about 100%. This wavelength regime is often referred to as stealth dicing.

[0025] As an example, in stealth technology, or Mahoh technology, the IR wavelength may be between 900 nm and 1000 nm, and the transmittance is in the range between 30% and 70%, and preferably about 50%. For the method illustrated in FIG. 3, the IR light is designated 350 and the focusing lens 351. An example IR laser engine may produce 1.2 W pulsed power, so that at the focal area the semiconductor bulk experiences internal modification by optical damage caused by multi-photon absorption. The focal area may be fixed to a usual size of about 15 μm diameter. The depth 312 of the focal area can be controlled, working from the bottom of the semiconductor wafer up; in FIG. 3, the wafer of thickness 301 has its bottom at the wafer surface opposite surface 300a with the circuitry and transistors in first zone 302. The transformation of the single-crystalline semiconductor material into polycrystalline semiconductor with a high density of dislocations occurs within second zone 311 and can be controlled by laser power, feed rate, and wavelength. For its effect on FETs and circuitry in first zone 302, the proximity of second zone 311 relative to first zone 302 may range from just a few micrometers to about 50 μm.

[0026] As described, the IR light 350 of the laser falls into a range of wavelengths which can readily be absorbed by the semiconductor lattice (preferably monocrystalline silicon). In process 403, a lens 351 focuses the IR light to a focal point which is spaced from surface 300a with the FETs. For the example of FIG. 1, distance 312 may be in the 30 μm to 50 μm range.
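The three wavelength regimes described above (ablation below ~800 nm, the 900-1000 nm stealth window, and near-full transmission above ~1100 nm) can be captured in a small helper. This is a minimal sketch using only the threshold values stated in the text; the regime names in the return values are shorthand labels.

```python
# Minimal sketch of the IR wavelength regimes described above; the
# thresholds (~800 nm, 900-1000 nm, ~1100 nm) come from the text.
def ir_regime(wavelength_nm):
    """Classify an IR wavelength by its behavior in silicon per the text."""
    if wavelength_nm < 800:
        return "laser dicing"      # ablation, transmittance ~0%
    if 900 <= wavelength_nm <= 1000:
        return "stealth window"    # transmittance roughly 30-70%
    if wavelength_nm > 1100:
        return "transmitted"       # transmittance ~100%
    return "transition"

for wl in (650, 950, 1300):
    print(wl, "->", ir_regime(wl))
```

The stealth window matters because partial transmittance lets enough energy reach the focal depth for multi-photon absorption while leaving the surface circuitry undamaged.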
The energy of the laser light is absorbed by the single-crystalline semiconductor in so-called multi-photon absorption, which disturbs the single crystallinity of the lattice so that the resulting optical damage and energy absorption morphs the single-crystalline semiconductor into a polycrystalline and amorphous configuration within a zone of width 311. In the example of FIG. 1, width 311 is about 30 μm. The borders of second zone 311 are parallel to the wafer surface 300a and approximately planar. Zone 311 may be subdivided into sections with lateral dimensions smaller than the length of a device chip. The high density of dislocations in the polycrystalline semiconductor exerts stress on the single-crystalline lattice of the semiconductor between zones 311 and 302. As described hereinabove, this stress enhances the mobility of majority carriers in FETs positioned in suitable orientation.

[0027] In process 404, the focused infrared laser beam is moved parallel to the wafer surface 300a across wafer 300. This movement extends zone 311 of polycrystalline semiconductor in the direction parallel to surface 300a. For some devices, the movement, and thus the polycrystalline extension, may be short. However, for other devices, the movement may extend across the whole wafer, so that the polycrystalline zone extends across the whole length of each chip. In either case, the height of the polycrystalline zone is approximately 30 μm.

[0028] The laser movement may be repeated several times to widen the area of the polycrystalline zone (second zone), until the whole area of active circuitry is paralleled by a zone of polycrystalline semiconductor with an area sized to equal the circuitry area. The borders of the second zone are approximately planar and parallel to the wafer surface 300a.

[0029] After the second zone of polycrystalline semiconductor is created, a dicing process singulates wafer 300 into discrete device chips.
Each chip includes a zone of polycrystalline semiconductor embedded in the single-crystalline bulk semiconductor.

[0030] Example embodiments are applicable to any semiconductor material, including silicon, silicon germanium, gallium arsenide, gallium nitride, or any other semiconductor or compound material used in manufacturing. As another example, example embodiments are applicable to any zone of polycrystalline semiconductor embedded in single-crystalline semiconductor, regardless of the geometries of the second zone (such as lateral dimensions, thickness, and planarity), the degree of poly-crystallinity, and the position of the second zone relative to the first zone of circuitry. As another example, the semiconductor chip may be free of an encapsulation, or it may be in an additional package.

[0031] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
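As a back-of-envelope illustration of the repeated scanning in process 404 and paragraph [0028], the number of parallel passes needed to cover a chip-wide zone can be estimated. The ~15 μm focal-spot diameter comes from the text; the example zone width and the assumption that adjacent passes are spaced one spot diameter apart are hypothetical.

```python
# Hypothetical back-of-envelope estimate: laser passes needed to cover a
# chip-wide polycrystalline zone. The ~15 um focal spot is from the text;
# the 3 mm zone width and one-spot-diameter pass spacing are assumptions.
import math

def passes_to_cover(zone_width_um, spot_diameter_um=15.0):
    """Number of parallel scan passes to tile a zone of the given width."""
    return math.ceil(zone_width_um / spot_diameter_um)

print(passes_to_cover(3000))  # 200 passes for a 3 mm wide zone
```

Under these assumptions, covering a millimeter-scale circuitry area requires on the order of a few hundred passes, which is consistent with the text's note that the movement "may be repeated several times" to widen the second zone.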
Conductive structures and methods for preparing conductive structures are provided. Conductive structures according to the present invention can be prepared by controllably deforming and shaping a metal layer by using a hydrogen gas source and thermally treating the hydrogen gas source.
What is claimed is:

1. A method for providing a conductive structure, the method comprising:
providing a substrate assembly having a surface;
providing a first metal layer on the substrate assembly surface;
providing a second metal layer on at least a portion of the first layer; and
deforming at least a portion of the second metal layer by the release of a gas from the first metal layer.

2. The method of claim 1, wherein deforming the second metal layer includes thermally treating at least a portion of the first metal layer.

3. The method of claim 2, wherein thermally treating at least a portion of the first metal layer includes deforming at least a portion of the second metal layer by diffusion of hydrogen gas out of the first metal layer.

4. The method of claim 3, wherein providing the first metal layer on the substrate assembly surface includes forming at least one high hydrogen solubility metal layer.

5. The method of claim 4, wherein forming the at least one high hydrogen solubility metal layer includes incorporating hydrogen in the high hydrogen solubility metal layer.

6. The method of claim 5, wherein forming the at least one high hydrogen solubility metal layer includes forming a metal hydride layer.

7. The method of claim 5, wherein incorporating hydrogen into the high hydrogen solubility metal layer includes incorporating hydrogen into the high hydrogen solubility metal layer by diffusion through the second metal layer.

8. The method of claim 5, wherein incorporating hydrogen into the high hydrogen solubility metal layer includes incorporating hydrogen into the high hydrogen solubility metal layer by exposing the high hydrogen solubility metal layer to a hydrogen atmosphere.

9. The method of claim 4, wherein providing the second metal layer includes forming at least one low hydrogen solubility metal layer.

10.
The method of claim 9, wherein the high hydrogen solubility metal layer has a hydrogen permeability of about 4 or more orders of magnitude greater than a hydrogen permeability of the low hydrogen solubility metal layer.

11. The method of claim 3, wherein thermally treating the first metal layer is performed at a temperature greater than the hydrogen release temperature for the first metal layer for a time period less than about 10 minutes.

12. The method of claim 1, wherein providing the first metal layer includes providing an unpatterned first metal layer.

13. The method of claim 1, wherein forming the first metal layer includes forming a patterned first metal layer.

14. A method for providing a conductive structure, the method comprising:
providing a substrate assembly having a surface;
providing at least one hydrogen containing first metal layer on the substrate assembly surface;
providing a second metal layer on at least a portion of the first layer; and
thermally treating at least a portion of the first metal layer to displace at least a portion of the second metal layer from a first position to a second position.

15. The method of claim 14, wherein thermally treating at least a portion of the first metal layer includes displacing at least a portion of the second metal layer from the first position to the second position by diffusion of hydrogen gas out of the first metal layer.

16. The method of claim 14, wherein providing the first metal layer on the substrate assembly surface includes forming at least one high hydrogen solubility metal layer.

17. The method of claim 16, wherein forming the at least one high hydrogen solubility metal layer includes incorporating hydrogen in the high hydrogen solubility metal layer.

18. The method of claim 17, wherein forming the at least one high hydrogen solubility metal layer includes forming a metal hydride layer.

19.
The method of claim 16, wherein providing the second metal layer includes forming at least one low hydrogen solubility metal layer.

20. The method of claim 19, wherein the high hydrogen solubility metal layer has a hydrogen permeability of about 4 or more orders of magnitude greater than a hydrogen permeability of the low hydrogen solubility metal layer.

21. The method of claim 14, wherein thermally treating the first metal layer is performed at a temperature greater than the hydrogen release temperature for the first metal layer for a time period less than about 10 minutes.

22. A method for forming a conductive structure comprising:
providing a substrate assembly having a surface;
providing at least one hydrogen containing first metal layer on the substrate assembly surface;
providing a second metal layer on at least a portion of the first metal layer, the second metal layer having at least a first portion thereof in a first configuration; and
causing diffusion of hydrogen from the first metal layer to deform at least the first portion of the second metal layer into a second configuration.

23. The method of claim 22, wherein the method further comprises providing a clamping structure positioned over at least a portion of the second metal layer.

24. The method of claim 23, wherein providing the clamping structure includes providing a clamping structure positioned on at least a portion of a perimeter of the second metal layer.

25. The method of claim 23, wherein providing the clamping structure includes providing a mold positioned over at least a portion of the second metal layer.

26. The method of claim 25, wherein the mold includes at least one mold surface, and further wherein the second configuration corresponds to the at least one mold surface.

27. The method of claim 25, wherein the mold positioned over at least a portion of the second metal layer includes providing a heated mold positioned over at least a portion of the second metal layer.

28.
The method of claim 23, wherein providing the clamping structure includes providing a clamping structure formed at least on a portion of the second metal layer and a portion of the substrate assembly surface.

29. The method of claim 23, wherein providing the clamping structure includes providing a clamping structure comprising a low hydrogen solubility metal.

30. The method of claim 22, wherein providing the at least one hydrogen containing first metal layer includes forming the at least one hydrogen containing first metal layer using a high hydrogen solubility metal.

31. The method of claim 30, wherein providing the second metal layer on at least a portion of the first metal layer includes forming the second metal layer using a low hydrogen solubility metal.

32. The method of claim 22, wherein causing diffusion of hydrogen from the first metal layer to deform at least the first portion of the second metal layer into a second configuration includes defining a void between at least a portion of the substrate assembly surface and a portion of the second metal layer.

33. The method of claim 22, wherein the method further comprises oxidizing at least a portion of the at least one hydrogen containing first metal layer resulting in an oxidized layer, wherein the second metal layer is formed on at least a portion of the oxidized layer.

34. The method of claim 33, wherein the method further comprises forming a carbon layer on at least a portion of the oxidized layer, wherein the second metal layer is formed on at least a portion of the oxidized layer.
This is a continuation of application Ser. No. 09/385,579, filed Aug. 31, 1999, now U.S. Pat. No. 6,121,131, which is incorporated herein by reference.

FIELD OF THE INVENTION

The present invention relates to the formation of conductive structures. More particularly, the present invention pertains to deformation of one or more metal layers to form such conductive structures.

BACKGROUND OF THE INVENTION

Various manners of fabricating a diverse and wide range of structures are available. In particular, a wide variety of techniques are available for forming conductive structures of integrated circuits. For example, various photolithography and etching techniques are known, various methods of micromachining silicon devices are known, etc. However, there is always a need for additional novel approaches for forming such structures.

Dimensions in integrated circuits are constantly being reduced. For example, the separation between conductive layers is being reduced to achieve smaller integrated circuits. With a reduction in the spacing between conductive materials in an integrated circuit, an increase in capacitive crosstalk is observed. Conventional integrated circuits typically utilize interconnect structures wherein a first metal line is separated from a second metal line by an insulative material. If the capacitive effects between the first metal line and the second metal line are high, i.e., a voltage on one affects a voltage on the other, then the capacitive effects may lead to an inoperable integrated circuit.

To reduce such capacitive coupling, or to provide isolation in integrated circuits, low dielectric constant materials have been utilized between such conductive materials or lines. However, the use of low dielectric constant materials has many associated problems. For example, equipment is not always available to properly process new low dielectric materials in various integrated circuits.
Further, for example, such dielectric materials may not properly or adequately reduce such capacitive coupling between the conductive materials.

A void region or space may also serve as a dielectric and offers the lowest possible dielectric constant, having a value equal to 1. It is noted that a void space can comprise a vacuum, but typically comprises some gases. A void space can alternatively be referred to as a free space, i.e., space that is empty of materials in a solid or liquid phase. It would be desirable to develop methods of forming void regions for use as low dielectric regions, such as for isolation in semiconductor constructions.

SUMMARY OF THE INVENTION

The present invention provides a method for forming a conductive structure, e.g., a conductive structure over a void region. The method involves controlled deformation and shaping of a metal layer employing a hydrogen gas source and thermal treatment of the source. The hydrogen gas source is preferably a hydrogen containing metal layer. Upon thermal heating, hydrogen gas evolves from the hydrogen containing metal layer and creates a pressure that exerts force sufficient to produce deformation in another metal layer. In other words, temperature, pressure, and time, along with differences in hydrogen solubility and diffusivity between different metal layers, are used to form conductive structures. For example, release of hydrogen from a hydrogen containing metal layer is used to controllably deform an overlying metal layer.

For example, using the present invention, conductive metals can be shaped and/or supported over a void region.
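The pressure mechanism described above can be made concrete with an ideal-gas estimate. This is an illustrative order-of-magnitude sketch, not from the patent: every numeric value (amount of hydrogen, void dimensions, anneal temperature) is an assumed example, meant only to show why even a tiny amount of released gas can exert substantial deforming pressure.

```python
# Illustrative order-of-magnitude sketch (not from the patent): ideal-gas
# pressure of hydrogen evolved into a small void under an overlying metal
# layer. All numeric values below are assumed example values.
R = 8.314  # gas constant, J/(mol*K)

def h2_pressure_pa(moles_h2, void_volume_m3, temperature_k):
    """Ideal-gas pressure p = nRT/V of released hydrogen in a void."""
    return moles_h2 * R * temperature_k / void_volume_m3

# e.g. 1e-12 mol of H2 in a 1 um x 10 um x 10 um void (1e-16 m^3) at 700 K
p = h2_pressure_pa(1e-12, 1e-16, 700.0)
print(f"{p / 1e6:.1f} MPa")
```

Even a picomole of hydrogen in a micrometer-scale void yields pressures in the tens of MPa under these assumptions, which is the kind of force the text invokes for controllably deforming the overlying metal layer.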
The present invention provides a method to prepare a diverse and wide range of structures that may be used for various applications, e.g., interconnections in integrated circuits, for transistor and packaging technologies, switching arrays, micromachined silicon devices, microchannels for fluid transport and bulk material deformation applications.Accordingly, the present invention provides a method for forming a conductive structure by providing a substrate assembly having a surface. At least one hydrogen containing first metal layer (e.g., patterned or unpatterned layer) is formed on the substrate assembly surface. A second metal layer is formed on at least a portion of the first metal layer and at least the first metal layer is thermally treated to deform at least a portion of the second metal layer.In one embodiment of the method, the method includes thermally treating at least the first metal layer to deform at least a portion of the second metal layer by a diffusion of hydrogen gas out of the hydrogen containing first metal layer. In another embodiment of the method, the formation of the at least one hydrogen containing first metal layer includes forming a layer of at least one high hydrogen solubility metal and incorporating hydrogen in the high hydrogen solubility metal layer. Preferably, the formation of the at least one hydrogen containing first metal layer includes forming a metal hydride layer.Further, in other embodiments of the invention, the method provides incorporation of hydrogen into the high hydrogen solubility metal layer by diffusion through the second metal layer. 
Alternatively, the incorporation of hydrogen into the high hydrogen solubility metal layer is accomplished by exposing the high hydrogen solubility metal layer to a hydrogen atmosphere.Preferably, the high hydrogen solubility metal employed in the method is at least one metal typically selected from the group of titanium, zirconium, thorium, hafnium, vanadium, niobium, tantalum, lanthanum, cerium, and palladium. Additionally, the high hydrogen solubility metal may have a hydrogen permeability of about 4 or more orders of magnitude greater than a hydrogen permeability of the low hydrogen solubility metal. Preferably, the low hydrogen solubility metal is at least one metal typically selected from the group of copper, silver, gold, tungsten, platinum, aluminum, molybdenum, iron and nickel.The present invention further provides a method for forming a conductive structure by providing a substrate assembly having a surface; forming at least one hydrogen containing first metal layer on the substrate assembly surface; forming a second metal layer on at least a portion of the first metal layer; providing a clamping structure positioned over at least a portion of the second metal layer; and thermally treating at least the first metal layer to deform at least a portion of the second metal layer. In one embodiment of the method, the method includes providing a clamping structure positioned on at least a portion of a perimeter of the second metal layer. 
In another embodiment, the method also provides a mold, e.g., a heated mold, positioned over at least a portion of the second metal layer.The invention also provides a method for forming a void region associated with a substrate assembly by providing a substrate assembly having a surface; forming at least one hydrogen containing first metal layer on the substrate assembly surface; forming a second metal layer on at least a portion of the first metal layer; and thermally treating at least the first metal layer to define a void between at least a portion of the substrate assembly surface and a portion of the second metal layer.Also provided is a method for forming a conductive structure by providing a substrate assembly having a surface; forming at least one hydrogen containing first metal layer on at least a portion of the substrate assembly surface; oxidizing at least a portion of the at least one hydrogen containing first metal layer resulting in an oxidized layer; forming a second metal layer on at least a portion of the oxidized layer; and thermally treating at least the first metal layer to deform at least a portion of the second metal layer. For example, oxidizing at least a portion of the hydrogen containing first metal layer may result in an oxidized layer having a thickness of about 1 angstrom to about 20 angstroms.In one embodiment of this method, the method further includes forming a carbon layer on at least a portion of the oxidized layer. For example, formation of the carbon layer typically results in a carbon layer having a thickness of about 1 angstrom to about 25 angstroms.In another embodiment of the method, the method includes forming a seed metal layer of a low hydrogen solubility metal on the oxidized layer. The method may further include electrodepositing the low hydrogen solubility metal on the seed metal layer. 
Additionally, formation of the second metal layer may also include forming a seed metal layer of a low hydrogen solubility metal on the oxidized layer.

The invention also provides a conductive structure, wherein the conductive structure contains a substrate assembly having a surface; a high hydrogen solubility metal on at least a portion of the substrate assembly surface; and a raised conductive region comprising a low hydrogen solubility metal. At least a portion of the high hydrogen solubility metal and low hydrogen solubility metal are separated by a void region.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-1B are diagrammatic views generally illustrating the formation of a conductive structure according to the present invention.

FIGS. 2A-2B are diagrammatic views illustrating one embodiment of the method generally illustrated in FIGS. 1A-1B for forming a conductive structure using a clamping structure according to the present invention.

FIGS. 3A-3B are diagrammatic views illustrating other embodiments of the method generally illustrated in FIGS. 1A-1B for formation of conductive structures using a heated mold according to the present invention.

FIGS. 4A-4D are diagrammatic views illustrating various other embodiments of the method generally illustrated in FIGS. 1A-1B for forming conductive structures using a technique for reducing adhesion characteristics according to the present invention.

FIGS. 5A-5B are diagrammatic views of additional embodiments of the present invention illustrating the thermal treatment employed for forming conductive structures according to the present invention.

FIGS.
6A-6B are diagrammatic views illustrating another thermal treatment method used to form conductive structures according to the present invention.

DETAILED DESCRIPTION OF THE INVENTION

Generally, methods of forming conductive structures from certain metals having a low hydrogen solubility (e.g., copper, silver, gold, tungsten, platinum, aluminum, molybdenum, iron, and nickel) using certain metals having a high hydrogen solubility (e.g., titanium or a titanium-vanadium alloy) shall be described with reference to FIGS. 1-6. Such methods generally use the large differences in hydrogen solubility and transport characteristics of such metals to form conductive structures.

In the following detailed description, reference is made to the accompanying figures which form a part hereof, and in which is shown by way of illustration the manner in which the invention may be practiced. These embodiments are described in sufficient detail to enable one skilled in the art to practice the present invention. It is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present invention as defined in the accompanying claims. Further, one skilled in the art will recognize that one or more techniques of one embodiment described herein may be used with one or more techniques of other embodiments described herein to form various combinations of the present invention.

The present invention describes a conductive structure formed relative to a substrate assembly, e.g., a surface of a wafer. It is to be understood that the term "substrate assembly," as used herein, includes any substrate or substrate supported structure, e.g., such as a semiconductor substrate or any other substrate, having one or more layers or structures formed thereon.
A semiconductor substrate is to be understood as including silicon-on-sapphire (SOS) technology, silicon-on-insulator (SOI) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor, as well as any other semiconductor based structures known to one skilled in the art. Furthermore, when reference is made to a substrate assembly in the following description, previous process steps may have been utilized to form regions/junctions/lines, such as metallized lines, in a previously formed structure. The following detailed description is, therefore, not to be taken in a limiting sense, as the scope of the present invention is defined by the appended claims. It is further understood that the present invention is not limited to conductive structures formed relative to a silicon wafer or silicon wafers having one or more materials formed thereon; rather, other types of wafers (e.g., gallium arsenide wafers) can be employed, as well as other substrates of various other materials, sizes and configurations, e.g., circuit board substrates, ceramic substrates, etc.

It will be readily apparent to one skilled in the art from the description below that any conductive structure may be formed using the methods described herein. For example, an interconnect is one such conductive structure which can be prepared employing the present invention. An interconnect can serve to connect, in a specific configuration, multiple conductive elements of devices to form a desired circuit. For example, local interconnects, multi-level interconnect structures, etc., may benefit from a void over which the conductive structure is formed. In other words, the void may provide effective low dielectric isolation of the conductive structure from other conductive elements.

Referring now to FIGS. 1A and 1B, a method of forming a conductive structure 10 is illustrated.
Conductive structure 10 is formed by providing a substrate assembly 14 having a substrate assembly surface 30. Formed on the substrate assembly surface 30 is a first metal layer 12 having a first metal layer surface 32. The first metal layer 12 is a hydrogen containing layer preferably formed of at least one high hydrogen solubility metal. The first metal layer 12 can be a patterned metal layer or an unpatterned metal layer. A "patterned" metal layer may be prepared, for example, using conventional photolithographic techniques known in the art. As shown in FIG. 1A, first metal layer 12 is a patterned metal layer defined on a specific region of the substrate assembly surface 30. Additionally, FIG. 1A shows a second metal layer 13 formed on at least a portion of the first metal layer 12. The second metal layer 13 includes at least one low hydrogen solubility metal. The second metal layer 13 may also optionally be patterned.

The relatively large differences in the hydrogen solubility and transport characteristics of the first and second metal layers 12 and 13, formed of high and low hydrogen solubility metals, respectively (as further described below), allow for the formation of conductive structures according to the present invention. As used herein, a "high hydrogen solubility metal" is a metal that is capable of hydrogen absorption at a selected temperature and pressure. Preferably, a high hydrogen solubility metal exhibits hydrogen absorption greater than about 1000.0 cubic centimeters (cc)/100 grams of metal; more preferably, in the range of about 1000.0 cc/100 grams of metal to about 50,000.0 cc/100 grams of metal. It is recognized, however, that hydrogen absorption may vary with a specific metal and will be dependent upon the temperature and hydrogen pressure that is employed.

Table 1, shown below, includes a list of certain low and high solubility metals and the respective solubility values for these metals.
For example, under a pressure of approximately 1 atmosphere and at a temperature of about 400° C., approximately 38,770 cc of hydrogen will be incorporated into about 100 grams of titanium. This is unlike copper, a low hydrogen solubility metal, wherein about 0.06 cc of hydrogen will be incorporated into about 100 grams thereof under such conditions. Preferred high hydrogen solubility metals for the first metal layer 12 include, but are not limited to, titanium, zirconium, thorium, hafnium, vanadium, niobium, tantalum, lanthanum, cerium, and palladium, or alloys thereof.

As used herein, a "low hydrogen solubility metal" is a metal that has a reduced ability to retain or absorb hydrogen. As shown in Table 1, the hydrogen absorption of low hydrogen solubility metals preferably is less than 20.0 cc/100 grams of metal; more preferably, the hydrogen absorption typically ranges from about 0.1 cc/100 grams of metal to about 20.0 cc/100 grams of metal. Low hydrogen solubility metals include, but are not limited to, copper, silver, gold, tungsten, platinum, aluminum, molybdenum, iron and nickel, or alloys thereof. Although hydrogen solubility may increase in the low hydrogen solubility metals with an increase in temperature, the solubility in the metal remains relatively low as indicated in Table 1 by the range of hydrogen absorption. As with the high hydrogen solubility metals, the solubility of hydrogen in a specific metal may vary as a function of temperature and hydrogen pressure.

                                   TABLE 1
             Low Hydrogen                      High Hydrogen
             Solubility Metals*                Solubility Metals*
   °C.     Ni      Cu      Ag      Mo       Ti        V        Zr        Ta
    20     --      --      --      --       40,700    15,000   23,600    46,000
   400     3.2     0.06    0.06    0.17     38,770     3,800   --         2,500
   500     4.1     0.16    0.11    0.18     36,600     1,900   --         1,400
   600     5.3     0.3     0.18    0.19     33,470     1,000   18,400       700
   800     7.8     0.7     0.33    0.25     14,100       440   16,500       250
  1000     9.8     1.6     --      0.50      6,610       250    7,800       140
   *Solubility values given above represent cm³ H2/100 grams of metal under a pressure of 1 atmosphere of hydrogen.

Table 2, shown below, illustrates the solubility of hydrogen in one particular high hydrogen solubility metal, titanium. As shown, the solubility of hydrogen in titanium is dependent upon hydrogen pressure and temperature. For example, at approximately 50 millimeters (mm) of mercury (Hg) at about 500° C., approximately 30,000 cc of hydrogen/100 g is soluble in the titanium metal. As further shown in Table 2, hydrogen solubility is relatively high in titanium even under relatively low pressures of hydrogen, e.g., 10 mm Hg. This is especially true at a temperature of about 500° C. However, the hydrogen solubility is significantly less at higher temperatures, such as 800° C. Although Table 2 is specific for titanium metal, similar hydrogen solubility characteristics will be observed for other high hydrogen solubility metals under similar conditions.

                     TABLE 2
   H2 Pressure (in mm of Hg)    500° C.    800° C.
             10                 20,000       ≈0
             50                 30,000     4,000
            100                 32,000     7,000
            760                 34,000    13,000

It is known that various high hydrogen solubility metals can absorb and retain certain amounts of hydrogen (Table 1).
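The contrast underlying Table 1 can be made concrete with a small calculation. The following sketch (not part of the patent; the values are simply transcribed from the 400° C. row of Table 1) shows the magnitude of the solubility difference that the method exploits:

```python
# Hydrogen solubility at 400 deg C under 1 atmosphere of H2, in
# cc of H2 per 100 grams of metal (values transcribed from Table 1).
SOLUBILITY_400C = {
    "Ni": 3.2, "Cu": 0.06, "Ag": 0.06, "Mo": 0.17,   # low solubility metals
    "Ti": 38_770, "V": 3_800, "Ta": 2_500,           # high solubility metals
}

def solubility_ratio(high_metal, low_metal):
    """How many times more hydrogen the high solubility metal holds."""
    return SOLUBILITY_400C[high_metal] / SOLUBILITY_400C[low_metal]

# Titanium holds over half a million times more hydrogen than copper,
# which is why hydrogen evolved from a hydride layer stays trapped
# beneath an overlying copper layer long enough to deform it.
print(f"Ti/Cu ratio at 400 C: {solubility_ratio('Ti', 'Cu'):,.0f}x")
```

This roughly six-order-of-magnitude disparity is the "large difference in hydrogen solubility and transport characteristics" on which the formation of conductive structures described herein relies.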
Titanium metal, for example, is capable of retaining the most hydrogen of the high hydrogen solubility metals. For example, bulk titanium metal, free from surface oxide and other impurity films, can absorb hydrogen very rapidly at temperatures as low as about 375° C. Hydrogen pressures on the order of about 10 mm (Hg) to about 100 mm (Hg) are typically sufficient to convert titanium metal entirely to TiH2, e.g., a metal hydride layer. Once the TiH2 layer is formed and cooled to about room temperature, the titanium metal hydride layer will remain stable on reheating up to about 400° C., even without supplying additional hydrogen. In other words, hydrogen does not evolve from the TiH2 layer at a significant rate until a hydrogen release temperature of about 400° C. is reached and/or exceeded. Thus, depending on the high hydrogen solubility metal selected, the time required for release of hydrogen from the high hydrogen solubility metal is typically less than about 10 minutes, preferably less than about 5 minutes.

Several metallurgical properties of high hydrogen solubility metals can influence their hydrogen absorption/desorption characteristics. These metallurgical properties include, for example, grain size, texture, secondary phases, impurity levels (such as oxygen), and oxide films formed on the surface and along the boundaries of the metal grains. Additionally, metallurgical properties of thin layers of high hydrogen solubility metals will not necessarily be constant, but will be dependent on deposition rates, substrates employed, temperature, background gas pressures, etc., as well as the tools employed to provide deposition.

With use of a layer of a high hydrogen solubility metal such as titanium, the absorbed hydrogen generally forms a hydrogen containing titanium layer having a typical composition of TiH1.75-2.
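The quoted TiH1.75-2 composition can be cross-checked against the 40,700 cc/100 g solubility value given for titanium in Table 1. A rough sketch (not from the patent), assuming the absorbed gas volume is measured at STP (22,400 cc/gram-mole) and a titanium molar mass of 47.9 g/mol:

```python
# Rough cross-check: infer the H/Ti atomic ratio implied by Table 1's
# solubility value for titanium at 20 deg C and 1 atm of hydrogen.
H2_VOLUME_CC = 40_700      # cc of H2 absorbed per 100 g Ti (Table 1)
GAS_CONSTANT = 22_400      # cc per gram-mole of gas at STP
TI_MOLAR_MASS = 47.9       # g/mol, titanium (assumed standard value)

moles_h2 = H2_VOLUME_CC / GAS_CONSTANT   # gram-moles of H2 per 100 g Ti
moles_h_atoms = 2 * moles_h2             # each H2 molecule yields two H atoms
moles_ti = 100 / TI_MOLAR_MASS           # gram-moles of Ti in 100 g

ratio = moles_h_atoms / moles_ti
# ~TiH1.74, in line with the quoted TiH1.75-2 composition range
print(f"Implied composition: TiH{ratio:.2f}")
```

The tabulated solubility and the quoted hydride stoichiometry are thus mutually consistent.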
Generally, hydrogen solubility progressively increases with an increase in pressure, but tends to decrease significantly above temperatures of about 400° C. As shown in Tables 1 and 2, increasing hydrogen pressure at a given temperature substantially increases hydrogen solubility. On the other hand, increasing the temperature at a given hydrogen pressure markedly lowers hydrogen solubility. For example, hydrogen solubility is about zero at 800° C./10 millimeters hydrogen pressure, and about 40,700 cm³/100 grams of titanium at 20° C./760 millimeters hydrogen pressure, respectively. Intermediate combinations of temperature and pressure yield smaller but still quite large effects on the hydrogen solubility. Thus, the absorption-desorption equilibria and kinetics of a high hydrogen solubility metal are process-sensitive.

Generally, the hydrogen containing first metal layer 12 is provided by forming a high hydrogen solubility metal on the substrate assembly surface 30, to a thickness, for example, of about 0.005 microns to about 0.050 microns. Optionally, as stated above, the high hydrogen solubility metal is patterned. Further, generally, hydrogen is then introduced into the high hydrogen solubility metal to form the hydrogen containing first metal layer 12, e.g., a hydrogen rich solid solution. Preferably, the first metal layer 12 is saturated with hydrogen, thereby forming a metal hydride layer. However, although a hydrogen saturated first metal layer 12 is preferred, it should be clear to one of skill in the art that the first metal layer 12 need not be saturated with hydrogen. The first metal layer 12 need only incorporate therein a concentration of hydrogen effective to provide a desired deformation in the second metal layer 13, as described below.
For example, a first metal layer 12 having any hydrogen concentration which is stable under ambient conditions, and which when heated to a hydrogen release temperature evolves hydrogen adequate for achieving a desired deformation of the second metal layer 13, may be used.

Hydrogen may be introduced (e.g., charged or loaded) into the high hydrogen solubility metal by one of several methods. In a first preferred hydrogen loading method, a high hydrogen solubility metal is formed on the substrate assembly surface 30 and patterned. Further, as shown in FIGS. 1A and 1B, the second metal layer 13 is formed over at least the high hydrogen solubility metal and regions of substrate assembly surface 30. The second metal layer 13 preferably includes one or more low hydrogen solubility metals. If desired, the second metal layer 13 may also be patterned.

In the first hydrogen loading method, after the high hydrogen solubility metal and the second metal layer 13 are formed, they are subsequently exposed to hydrogen in situ. As used herein, "in situ" refers to the formation of a hydrogen containing first metal layer 12 on the substrate assembly surface 30 by exposing the high hydrogen solubility metal and the second metal layer 13 to hydrogen. Hydrogen is introduced into the high hydrogen solubility metal by diffusion through the second metal layer 13. The absorption of hydrogen by the high hydrogen solubility metal via diffusion through the second metal layer 13 will typically be dependent upon several factors, including: the selected high and low hydrogen solubility metals, the temperature at which hydrogen is introduced, the selected hydrogen pressure, and the duration of hydrogen exposure. Generally, this first hydrogen loading process will be a relatively slow process due to the slow diffusion of hydrogen through the low hydrogen solubility metal of the second metal layer 13.
For example, the process may require a time period in the range of about 1 minute to about 300 minutes, at a temperature of about 100° C. to about 400° C. and a pressure of about 10 millimeters to about 760 millimeters, depending on the low hydrogen solubility metal or metals selected for the second metal layer 13. However, depending on the metal of layer 13 and the thickness of metal layer 13, hydrogen pressures above 760 mm can be used to shorten the time period required to load metal layer 12 with the desired amount of hydrogen.

In this first hydrogen loading method, hydrogen absorption by the high hydrogen solubility metal to form the hydrogen containing first metal layer 12 typically occurs more rapidly than diffusion through the second metal layer 13, and therefore, the absorption rate by the high hydrogen solubility metal is not rate-limiting. Although the permeation of hydrogen through the second metal layer 13 does not occur rapidly, it is still adequate for forming a hydrogen containing first metal layer 12 in situ at useful rates, for example, under hydrogen pressures of less than about 1 atmosphere. For example, a pressure differential of 1 atmosphere is sufficient to diffuse hydrogen at a rate of about 0.8 micron-liters/min/cm² to about 12 micron-liters/min/cm² through a 10,000 Å thick copper metal layer at 400° C. and into a titanium layer. Assuming the conversion process is 100% efficient, this flux would yield about 20 Å to about 340 Å of a hydrogen saturated titanium layer/minute. In some instances, the hydrogen pressure may be increased to above 760 mm Hg (approximately 1 atmosphere) to accomplish the desired transfer of hydrogen through the second metal layer 13 and into the high hydrogen solubility metal without employing undesirably high temperatures or long exposure times. Typically, temperatures greater than about 600° C. and/or exposure times greater than about 300 minutes are generally undesirable.
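The flux-to-thickness conversion behind the quoted 20 Å to 340 Å/minute figure can be sketched numerically. The sketch below is not from the patent: it assumes complete conversion to TiH2, with the TiH2 density (3.9 g/cc) and gram-molecular weight (≈49.9 g/mol) used later in this description; since the exact rate depends on the hydride stoichiometry and density assumed, it reproduces the quoted range only to within a small factor, though it scales linearly with flux exactly as the quoted figures do.

```python
# Back-of-envelope sketch: convert an H2 flux (measured at STP) through
# 1 cm2 of the overlying copper layer into a titanium-hydride growth rate.
TIH2_DENSITY = 3.9        # g/cc (assumed, quoted later in the description)
TIH2_MOLAR_MASS = 49.9    # g/mol: Ti 47.9 + 2 H (assumed)
GAS_CONSTANT = 22_400     # cc of gas per gram-mole at STP

def hydride_growth_rate_angstrom_per_min(flux_microliter_per_min_cm2):
    """Thickness of fully hydrided titanium formed per minute per cm2."""
    flux_cc = flux_microliter_per_min_cm2 * 1e-3       # uL -> cc of H2
    moles_h2 = flux_cc / GAS_CONSTANT                  # mol of H2 per minute
    # one mole of H2 converts one mole of Ti into one mole of TiH2
    thickness_cm = moles_h2 * TIH2_MOLAR_MASS / TIH2_DENSITY
    return thickness_cm * 1e8                          # cm -> Angstrom

for flux in (0.8, 12.0):
    rate = hydride_growth_rate_angstrom_per_min(flux)
    print(f"{flux:5.1f} uL/min/cm2 -> {rate:.0f} A/min")
```

The linear dependence on flux also illustrates the next point in the text: raising the hydrogen pressure differential by a factor of 10 raises the hydride formation rate proportionately.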
For example, in diffusing hydrogen through copper and into titanium as described above, increasing the hydrogen pressure differential by a factor of 10 may increase the rate of hydride formation proportionately. Alternatively, raising the temperature during formation of a hydrogen containing metal layer from about 400° C. to about 500° C. may increase the flow rate of hydrogen through copper by as much as 500%.

Therefore, according to the first hydrogen loading method, under preselected temperature, hydrogen pressure and time conditions, hydrogen will penetrate the second metal layer 13 and form a hydrogen containing first metal layer 12. Once the hydrogen containing first metal layer 12 is formed, first and second metal layers 12 and 13 may be cooled to near room temperature, preferably about 50° C. to about 200° C., under a suitable pressure of hydrogen, preferably about 10 mm to about 760 mm. The structure including first and second metal layers 12 and 13 may then be stored under normal ambient conditions of room temperature and pressure.

In a second hydrogen loading method, the high hydrogen solubility metal is formed on the substrate assembly surface 30 and optionally patterned. The high hydrogen solubility metal is then hydrogenated to form the hydrogen containing first metal layer 12 prior to depositing the second metal layer 13. Generally, this second hydrogen loading process will incorporate hydrogen into the high hydrogen solubility metal at a faster rate than the first hydrogen loading process. For example, this process will typically require a time period in the range of about 0.1 minutes to about 30 minutes, at a temperature of about 20° C. to about 200° C.
and a hydrogen pressure of about 10 mm to about 760 mm, depending on the high hydrogen solubility metal or metals selected for the first metal layer 12.

As formation of the hydrogen containing first metal layer 12 occurs prior to forming the second metal layer 13, care must be taken when forming the second metal layer 13 so as to prevent degradation of the first metal layer 12. For example, exceeding the hydrogen release temperature of the hydrogen containing first metal layer 12 during formation of the second metal layer 13 will undesirably result in hydrogen evolution from the hydrogen containing first metal layer 12.

Advantageously, regardless of the method employed for introducing hydrogen into the first metal layer 12, once formed, the hydrogen containing first metal layer 12 and the second metal layer 13 are stable under the conditions described above until heated to a temperature sufficient to release hydrogen from the first metal layer 12, i.e., the hydrogen release temperature.

The high hydrogen solubility metal used for the first metal layer 12 may be formed by several methods known in the art. These methods include, but are not limited to, physical sputtering, evaporation and chemical vapor deposition, such as metal-organic chemical vapor deposition (MOCVD). If a metal alloy is employed in the first metal layer 12, the alloy can be sputtered from an alloy source containing the prepared alloy at a temperature sufficient to produce a metal layer that has reproducible morphological properties, e.g., crystal structure, grain size and orientation of crystallites.

Second metal layer 13 is typically deposited on surface 32 of first metal layer 12 and at least a portion of the substrate assembly surface 30 by a variety of methods. As described above in reference to the first metal layer 12, the second metal layer 13 may also be formed by physical sputtering, evaporation and chemical vapor deposition, such as MOCVD.
These deposition methods should employ conditions that minimize interfacial metal mixing.

Deformation of the Second Metal Layer

After formation of the first and second metal layers 12 and 13, the hydrogen containing first metal layer 12 is thermally treated. During thermal treatment of the hydrogen containing first metal layer 12 (e.g., heating to a hydrogen release temperature for the first metal layer 12 for a relatively short period of time, such as a few seconds to as much as a few minutes), hydrogen evolves from the first metal layer 12. Due to the low solubility of the hydrogen in the second metal layer 13, hydrogen is unable to substantially diffuse through the second metal layer 13 during the relatively short time period of the thermal treatment, and is trapped, at least temporarily, by the second metal layer 13. As such, deformation of the second metal layer 13 is achieved under a pressure generated by the trapped hydrogen.

As used herein, "deformation" refers generally to any displacement of any portion or all of the patterned or unpatterned second metal layer 13. For example, such deformations may be of any configuration, e.g., formed into a molded shape, entirely displaced from one position to another, curved, etc. One example of deformation employing a hydrogen containing titanium layer and a copper metal layer is shown in Table 3 below, and is further described below with reference thereto.

As shown in FIG. 1B, thermal treatment of at least the hydrogen containing first metal layer 12 allows hydrogen to diffuse from the first metal layer 12 and form a raised conductive region 23 from the second metal layer 13 supported over a void region 18. After thermal treatment, the void region 18, positioned between the first metal layer 12 and the raised conductive region 23, typically contains hydrogen gas that has evolved from the first metal layer 12.
The evolved hydrogen gas is usually only temporarily retained in the void region 18 and, after formation of the void region 18, will begin to out-diffuse through the raised conductive region 23. However, as complete out-diffusion of hydrogen gas may not occur and some hydrogen gas may be retained in the void region 18, a gas-filled void region may be formed. Alternatively, the void region 18 may contain a vacuum that is essentially free from hydrogen gas.

To ensure plastic deformation of second metal layer 13, the hydrogen pressure burst resulting from the thermal treatment must first act to separate second metal layer 13 predictably from its interface with first metal layer 12 and surface 32 thereof. Although the low hydrogen solubility metals employed in the present invention are relatively inert and typically adhere poorly to most surfaces, including other metals, additional measures may be taken, such as those described with reference to FIGS. 4A-4D, to ensure adequate separation of first metal layer 12 from second metal layer 13.

Depending on the metal or combination of metals selected to form the first metal layer 12 and the second metal layer 13, hydrogen will evolve from the first metal layer 12 in a time ranging from about 1 second to about 5 minutes. During this time period, evolved hydrogen from the hydrogen containing first metal layer 12 will be temporarily trapped and unable to diffuse through the second metal layer 13, providing the force necessary to deform the second metal layer 13. Preferably, the temperature of the thermal treatment is above the hydrogen release temperature for the hydrogen containing first metal layer 12, e.g., 400° C. for a titanium hydride layer.
One skilled in the art will recognize that the time period and temperature of thermal treatment will vary depending upon the desired deformation, the hydrogen content of the metal layer 12, as well as the thickness, strength and modulus of metal layer 13, and their dependencies upon temperature. After thermal treatment, the first metal layer 12 results in a metal layer 20 which is essentially free from hydrogen. Typically, the essentially hydrogen free metal layer 20 remains on the substrate assembly surface 30. For example, if the first metal layer 12 is formed of titanium, the titanium will remain on the substrate surface 30.

Thermal treatment of the hydrogen containing first metal layer 12 can be accomplished by several methods. For example, thermal treatment may be accomplished by using a laser. Typically, the laser is localized on the structure containing the hydrogen containing first metal layer 12 and the second metal layer 13. The first and second metal layers 12 and 13 absorb energy from the laser, generating heat that is sufficient to cause the evolution of hydrogen from the first metal layer 12, thereby causing deformation in the second metal layer 13. However, the energy required to heat and release hydrogen may also be provided by the use of a focused electron beam in a vacuum system, an intense beam of broad spectrum or monochromatic light with or without the use of antireflective coatings, or by heating in a suitable furnace.

Further, for example, a hydrogen containing first metal layer 12 may be thermally treated using a thin film heater. A thin film heater may be positioned or formed over the substrate assembly surface 30 containing the first and second metal layers 12 and 13. The thin film heater passes electric current through a resistive film to generate heat.
The heat generated by the thin film heater is sufficient to cause the evolution of hydrogen from the first metal layer 12, thereby causing deformation in the second metal layer 13.

Alternatively, a heated mold may be used for thermal treatment of the hydrogen containing first metal layer 12. A heated mold is capable of controlling the dimensions of second metal layer 13 on the substrate assembly surface, and of providing heat sufficient to evolve hydrogen from the first metal layer 12. This embodiment is described further below with respect to FIGS. 3A-3B. It should be recognized by one of skill in the art that any rapid thermal treatment suitable for evolving hydrogen from the hydrogen containing first metal layer 12, at a temperature and for a time period sufficient to cause deformation of the second metal layer 13, may be employed.

The deformation examples set forth in Table 3 below demonstrate that a large variety of deflections in a second metal layer 13 may be achieved by evolving hydrogen from a titanium hydride first metal layer 12. Deflection of the second metal layer 13, such as to form raised conductive region 23, can further be described with reference to a low hydrogen solubility metal, such as copper or gold. Copper metal is known to have a moderate modulus of elasticity and excellent ductility (in excess of 50% elongation before fracture), and is capable of annealing rapidly, in about 0.5 seconds to about 8.0 seconds, at temperatures of over 300°C (approaching 1/2 the absolute melting point of that metal). Gold is a softer and more malleable metal than copper. The moduli of copper and gold are respectively about 16 megapsi and about 12 megapsi at room temperature, and about a third less above 300°C.
Deformations of various conductive structure shapes under different loads can be estimated from classical mechanics. For example, for a circular plate supported on its perimeter and subjected to a uniform pressure load, e.g., the configuration of FIGS. 2A-2B, the deformation (Smax) is greatest at its center and scales as

Smax ∝ W·R⁴/(E·T³)  (Equation 1)

where: W is the load (psi); E is the modulus of elasticity (psi); and R and T are the plate's radius (in) and thickness (in), respectively (the numerical prefactor is of order unity and depends on the edge conditions and Poisson's ratio).

Substantial volumes of hydrogen can be evolved from a thin disk of a hydrogen containing first metal layer 12, e.g., a TiH2 source. To estimate the gas volumes and attendant copper deformations, it is assumed that the in situ hydrogenation process converts a disk of titanium completely to TiH2, and that upon thermal treatment, the titanium hydride liberates all retained hydrogen. Thus, a layer holding only residual hydrogen in solid solution after thermal treatment is considered "essentially free" from hydrogen. A disk of TiH2 will therefore liberate a quantity of hydrogen (nH2):

nH2 = πR²hρ/M  (Equation 2)

where: R is the radius and h is the height of the disk; ρ is the density of TiH2 (3.9 g/cc); and M is its gram-molecular weight. The volume of hydrogen at Standard Temperature and Pressure (STP) is the product of nH2 and the molar gas volume, G = 22,400 cc/gram-mole. To estimate gas volume-copper deformation tradeoffs, it is convenient to assume that the liberated hydrogen fills a cylindrical column of radius R and height H, in which case:

H = G·ρ·h/M  (Equation 3)

A combination of Equations 1 and 3 can be used to estimate plate deflections from a given TiH2 source. Take, for example, a copper plate having a thickness of about 0.1 micron and a radius of about 10 microns, and a TiH2 disk having a thickness of about 100 Å (0.01 micron), also with a radius of 10 microns. Equation 1 shows the deflection or deformation of the copper plate is proportional to pressure. The TiH2 disk yields H = 18 microns (at STP).
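The gas-volume estimate above can be reproduced with a short calculation. The sketch below is illustrative only (the function names are hypothetical, not from the disclosure); it assumes complete conversion of the titanium disk to TiH2, complete hydrogen release, and a TiH2 gram-molecular weight of about 49.9.

```python
# Illustrative sketch reproducing the worked example above (names are
# hypothetical, not from the disclosure).  Assumes complete conversion of
# the titanium disk to TiH2 and complete release of the stored hydrogen.

G_STP = 22400.0    # molar gas volume at STP, cc/gram-mole
RHO_TIH2 = 3.9     # density of TiH2, g/cc
M_TIH2 = 49.9      # gram-molecular weight of TiH2 (Ti ~47.9 + 2 H)

def column_height_stp_um(h_source_um):
    """Height H (microns) of the evolved H2 column at STP for a TiH2
    source of thickness h (microns); the pi*R^2 factors cancel, so H
    depends only on source thickness."""
    return G_STP * RHO_TIH2 / M_TIH2 * h_source_um

def column_height_um(h_source_um, pressure_atm):
    """Constant pressure*volume tradeoff: column height at a given pressure."""
    return column_height_stp_um(h_source_um) / pressure_atm

# 100 Å (0.01 micron) TiH2 source, as in the worked example:
print(round(column_height_stp_um(0.01), 1))    # ~17.5 microns ("about 18")
print(round(column_height_um(0.01, 1.75), 1))  # ~10.0 microns at 1.75 atm
```

Note that a 0.01 micron source reproduces the "about 18 microns at STP" and "10 microns at 1.75 atmospheres" figures of the worked example, which is why the source thickness must be 100 Å rather than 0.1 micron.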
Application of Boyle's Law (pressure × volume = a constant, at fixed temperature) may be used to calculate the pressure dependence of the height (H) of the gas column. As such, at a pressure of 1.75 atmospheres, the deflection of the copper plate = gas column height (H) = 10 microns. Accordingly, this indicates that the 0.01 micron TiH2 source will cause a deflection of the copper of about 10 microns. A more accurate approximation takes into account that hydrogen will not fill a cylindrical column of copper but rather a dome-shaped one, such as shown in FIG. 2B. Accordingly, the hydrogen height is greater, e.g., by a factor of 1.5 if the dome were roughly hemispherical in shape. With this correction, the hydrogen will attain a pressure of about 2.1 atmospheres, sufficient to deflect the copper by about 12 microns.

Further, to show the variety of deformations possible, changes in T and h can produce a wide range of deformations in a plate of fixed R. Equations 3 and 4 can also be solved simultaneously to determine the pressure factor (P) required to equalize the gas column height (H) and the maximum deflection (Smax) under the resulting pressure. The resulting expression (Equation 5) can be used to estimate deflections expected for several cases, and the results are summarized in Table 3 below. Although the results are only estimates, they demonstrate that this method can produce a wide range of deformations, from about 0.1 micron or less to over 1000 microns. Finally, as discussed above in reference to Tables 1 and 2, it is realized that the deformations described above may require pressures higher than indicated in order to take place in times that are short compared to those needed for hydrogen to diffuse through the second metal layer 13. For example, the pressure adjustment can be made by suitably increasing the TiH2 mass in the above example structure.

TABLE 3
CU DIA.     CU THICK.   TiH2 THICK.   MAX. DEFLECTION
(microns)   (microns)   (microns)     (microns)
2.5         0.1         0.01          2.0
2.5         0.3         0.01          0.1
25          0.1         0.01          60
25          0.1         0.02          90
25          0.3         0.01          21
25          0.3         0.02          30
25          0.5         0.01          6
25          0.5         0.02          9
2500        25          1.0           1170
Examples of plate deformations for a round copper plate having a modulus of elasticity E = 16 megapsi, using a round TiH2 disc having the same diameter but different thicknesses.

Referring now to FIGS. 2A-2B, a conductive structure can be formed in a manner similar to that described with reference to FIGS. 1A-1B. In this embodiment, a clamping structure 35 is employed. A "clamping structure," as used herein, refers to and includes any structure used to secure at least a portion of a metal layer being deformed according to the present invention. For example, the clamping structure may be a perimeter clamping film (described with reference to FIGS. 2A-2B) or a mold (discussed with reference to FIGS. 3A-3B). Although the clamping structure 35 is described with particularity to FIGS. 2A-2B, it is to be appreciated that clamping structure 35 may optionally be employed in any of the embodiments described herein. For example, clamping structure 35 may optionally be employed when adhesion between the substrate assembly surface 30 and the second metal layer 33 is not adequate for holding a portion of the second metal layer 33 in place during deformation thereof.

As shown in FIGS. 2A-2B, a conductive structure is formed by providing a substrate assembly 14 having a substrate assembly surface 30. Formed on the substrate assembly surface 30 is a hydrogen containing first metal layer 31 having a first metal layer surface 32. As shown in FIG. 2A, first metal layer 31 is a patterned metal layer defined on a specific region of the substrate assembly surface 30.
Further, a second metal layer 33 is formed on at least a portion of the first metal layer 31, and preferably over the entire first metal layer 31 and a portion of the substrate assembly surface 30.

Clamping structure 35 is typically employed where adhesion at regions between second metal layer 33, first metal layer 31 and/or substrate assembly surface 30 may be insufficient for holding a layer in place as described. In the embodiment illustrated in FIGS. 2A-2B, clamping structure 35 serves to attach at least a portion of a perimeter 39 of second metal layer 33, and preferably a portion about the entire perimeter of second metal layer 33, to the underlying layers. For example, clamping structure 35 contacts several surfaces, including upper surface regions 37 and 41 of the second metal layer 33 and surface regions 38 of substrate assembly 14, at lower surface regions 36 of clamping structure 35. Clamping structure 35 serves to control possible lateral penetration of hydrogen gas during thermal treatment of the hydrogen containing first metal layer 31, and deformation of the second metal layer 33 beyond the lower surface region 36 of clamping structure 35.

When a clamping structure 35 is employed, there are several metals and metal alloys that are suitable for serving as the clamping structure 35. The thickness of the clamping structure 35 required is dependent upon the clamping structure's strength and modulus relative to the strength and modulus of the metal layer 33 in the temperature range selected for the deformation step. Such metals include, but are not limited to, aluminum, aluminum-copper alloy (e.g., aluminum-4% copper), tungsten, molybdenum, and platinum. These metals provide good adhesion to substrate assembly surface 30 of substrate assembly 14, and are essentially impermeable to hydrogen. Additionally, these clamping structure metals typically do not form hydrides in the presence of hydrogen, and are thermally stable.
Additionally, clamping structure 35 can be of any suitable configuration, i.e., size and/or shape.

As shown in FIGS. 2A and 2B, clamping structure 35 may function along with processes to optimize delamination of the second metal layer 33 from the substrate assembly surface 30 and first metal layer 42 during thermal treatment, e.g., use of a thin oxide layer, as further described below. Although a single clamping structure 35 is shown, it may be desirable to have multiple clamping structures on a single substrate assembly 14, or a clamping structure 35 capable of attaching multiple perimeters of second metal layers 33 to underlying layers.

Regardless of the clamping structure 35 or structures employed in the present invention, clamping structure 35 offers considerable advantages in determining the dimensions of second metal layer 33. These advantages include, but are not limited to: (a) compensating for incomplete formation of the hydrogen containing first metal layer 31 and incomplete hydrogen evolution (for example, 1/50th to 1/2 of the hydrogen stored in first metal layer 31 might be retained after the hydrogen evolution step as a result of the back pressure developed due to the inability of hydrogen to diffuse through the metal layer 33 in the time and at the temperature used to evolve it); (b) raising hydrogen pressures to achieve desired second metal layer 33 deformation rates, if necessary; (c) producing a variety of desired second metal layer 33 deformations across the substrate surface 30 using a single, uniform thermal treatment; and (d) alternately, permitting the inside dimensions of the clamping structure 35 to be altered to yield various second metal layer 33 deformations. As shown in FIG.
2B, thermal treatment of at least the hydrogen containing first metal layer 31 allows at least some, and preferably most, of the hydrogen gas to diffuse rapidly out of the hydrogen containing first metal layer 31 to form a raised conductive region 47 from second metal layer 33, supported over a void region 50, with part of the second metal layer 33 retained in position by clamping structure 35. Raised conductive region 47 is thus a result of deformation of second metal layer 33. After thermal treatment, first metal layer 42 is essentially free from hydrogen and typically remains on the substrate assembly surface 30.

Referring to another embodiment of the method of the invention shown in FIGS. 3A and 3B, a conductive structure can be formed using a mold 60 having mold surfaces 61 and 63. In a manner similar to that described with reference to FIGS. 1A-1B, a conductive structure is formed by providing a substrate assembly 14 having a substrate assembly surface 30. Formed on the substrate assembly surface 30 is a hydrogen containing first metal layer 52 having a first metal layer surface 54. The first metal layer 52 is a patterned metal layer. Further, a second metal layer 53 is formed on at least a portion of the first metal layer 52.

Mold 60, in a manner similar to that described above with regard to clamping structures in general, is typically employed to control the vertical and lateral dimensions of second metal layer 53 on substrate assembly surface 30. In other words, mold 60 is used to shape the second metal layer 53 into a desired configuration.
First metal layer 52, as described above, is preferably saturated with hydrogen prior to thermal treatment by mold 60.

As described above, mold 60 may be a heated mold to provide both heat for thermal treatment of the hydrogen containing first metal layer 52 and mold surfaces 61 and 63 for formation of raised conductive regions 67 and 68 over void regions 80 and 81, respectively, in the desired shapes. Although first metal layer 52 is shown patterned, it is not necessary to pattern the hydrogen containing first metal layer 52. Upon the evolution of hydrogen from the hydrogen containing first metal layer 52, heated mold 60 can both shape the deformations of second metal layer 53 by heated mold surfaces 61 and 63 and serve to prevent film delamination of the second metal layer 53 from the underlying layers. Depending on the specific heated mold 60 selected, a variety of deformation sizes can be achieved. For example, the deformations shown in FIG. 3B are spherical and rectangular in shape. After thermal treatment, first metal layer 72 is essentially free from hydrogen and, as stated above, typically remains on the substrate assembly surface 30.

Referring to FIGS. 4A-4E, a conductive structure 170 is formed according to another embodiment of the invention. In this embodiment, one or more additional layers are provided to further ensure separation of the second metal layer 150 from first metal layer surface 104 of the hydrogen containing first metal layer 102 during deformation of second metal layer 150. As stated above, upon thermal treatment, a pressure burst of hydrogen gas from the hydrogen containing first metal layer 102 serves to separate second metal layer 150 predictably from its interface with first metal layer surface 104.
Adhesion between the first metal layer 102 and the second metal layer 150 can be reduced and/or controlled, to enhance their separation, by inclusion of the one or more additional layers described below.

First, a thin oxidized layer 120 having an oxidized layer surface 122 can be formed on surface 104 of the hydrogen containing first metal layer 102, as shown in FIG. 4B. The oxidized layer 120 may be about 1 Å to about 20 Å in thickness, preferably about 3 Å to about 10 Å in thickness. Preferably, the oxidized layer 120 is formed by oxidizing the first metal layer surface 104 at room temperature via a controlled exposure to oxygen. The thin oxidized layer 120 typically does not represent a barrier to hydrogen absorption into the first metal layer 102. The oxidized layer 120 is typically employed to reduce adhesion between the first metal layer surface 104 and the second metal layer 150. Upon thermal treatment, the oxidized layer 120 is typically completely dissolved into the first metal layer 102.

Alternatively, the oxidized layer 120 may be formed during the formation of the first metal layer 102. For example, at a temperature of about 200°C or less, during formation of many high solubility metals, oxide growth may occur to a thickness of about 10 Å to about 200 Å.

Referring to FIG. 4C, a carbon layer 130 having a carbon layer surface 132 is formed on the oxidized layer surface 122 to further reduce the adhesion between the first metal layer 102 and the second metal layer 150.
Preferably, the carbon layer 130 is deposited by physical sputtering from a carbon source and is about 1 Å to about 25 Å in thickness, more preferably about 5 Å to about 20 Å in thickness.

It is to be appreciated that either of the above described techniques, i.e., an oxidized layer 120 alone or an oxidized layer 120 in combination with a carbon layer 130, may be used to reduce adhesion between the first metal layer 102 and the second metal layer 150. Additionally, an electrodepositing method for forming the second metal layer 150 may be used to further enhance such adhesion-reducing techniques. The electrodepositing technique typically includes forming a seed layer 140 on either the oxidized layer 120 and/or the carbon layer 130 by electroless deposition, as shown in FIG. 4D. Thereafter, the second metal layer 150 is electrodeposited on the seed layer 140, as shown in FIG. 4D. For example, a seed layer 140 having a seed layer surface 141 may be deposited on the carbon layer surface 132. The seed layer 140 may be formed, for example, by vacuum deposition. Vacuum deposition is typically accomplished by evaporation from a thermally heated source or by physical sputtering of the metal source. Seed layer 140 is preferably formed of the same metal as the second metal layer 150. Preferably, seed layer 140 is about 3 Å to about 50 Å in thickness, and more preferably about 5 Å to about 20 Å in thickness. Finally, the second metal layer 150 may be deposited on the seed layer surface 141 by electrodeposition. For example, to electroplate copper metal, the surface portion to be plated is typically immersed in an electrolytic cell containing a copper rod which serves as an anode to replenish copper ions in solution. These copper ions are then plated on the copper seed layer, which is biased negatively and serves as the cathode.
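The electroplating step can be quantified with Faraday's law, which relates plated mass to the charge passed; the sketch below is illustrative only (standard physical constants and a hypothetical current/time example, not values from the disclosure).

```python
# Illustrative Faraday's-law sketch for the copper electroplating step
# (function name and example values are hypothetical, not from the disclosure).

F_CONST = 96485.0   # Faraday constant, C/mol
M_CU = 63.55        # atomic weight of copper, g/mol
N_E = 2             # electrons per ion: Cu2+ + 2e- -> Cu
RHO_CU = 8.96       # density of copper, g/cc

def plated_thickness_um(current_a, time_s, area_cm2, efficiency=1.0):
    """Copper thickness (microns) deposited on `area_cm2` by `current_a`
    amperes flowing for `time_s` seconds at the given current efficiency."""
    mass_g = efficiency * current_a * time_s * M_CU / (N_E * F_CONST)
    return mass_g / (RHO_CU * area_cm2) * 1e4   # cm -> microns

# e.g., 10 mA over 1 cm^2 for 60 s deposits roughly 0.22 micron:
print(round(plated_thickness_um(0.010, 60.0, 1.0), 2))
```

Because deposited thickness is linear in charge, doubling either current or time doubles the plated thickness, which is why the amount plated is controlled by the current passed through the cell.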
The amount of copper plated is controlled by the amount of electrical current passed through the electrolytic cell, the chemical composition, the geometry, etc., as is known to one skilled in the art.

As shown in FIG. 4E, thermal treatment of at least the hydrogen containing first metal layer 102 allows hydrogen to diffuse rapidly out of the hydrogen containing first metal layer 102 to form a raised conductive region 170 from the second metal layer 150 supported over a void region 176. Raised conductive region 170 is thus a result of deformation of the second metal layer 150. After thermal treatment, first metal layer 160 is essentially free from hydrogen and typically remains on the substrate assembly surface 30.

Referring to FIGS. 5A-5B, thermal treatment using a heated mold is illustrated for forming a conductive structure according to the present invention. In this embodiment, an unpatterned first metal layer 200 having a first metal layer surface 201 is formed on substrate assembly surface 30 of substrate assembly 14. The first metal layer 200, as described with reference to FIGS. 1A-1B, is preferably a hydrogen containing first metal layer 200. Second metal layer 204 is provided over at least a portion of first metal layer surface 201. Depending on the selected heated mold 206, a variety of deformation sizes can be achieved. For example, the deformation shown in FIG. 5B is rectangular in shape. As described with reference to FIGS. 3A and 3B, heated mold 206 serves to provide both heat, for thermal treatment, and a mold surface 210 for shaping raised conductive region 207.

Referring now to FIGS. 6A-6B, thermal treatment using a laser 305 is illustrated for forming a conductive structure according to the present invention. In this embodiment, an unpatterned first metal layer 300 having a first metal layer surface 301 is formed on substrate assembly surface 30 of substrate assembly 14. The first metal layer 300, as described with reference to FIGS.
1A-1B, is preferably a hydrogen containing first metal layer 300. Second metal layer 304 is provided over first metal layer surface 301 of the hydrogen containing first metal layer 300.

Thermal treatment by laser 305 of at least the hydrogen containing first metal layer 300 allows hydrogen to diffuse rapidly out of the first metal layer 300 and form a raised conductive region 308 from second metal layer 304 supported over a void region 309. Raised conductive region 308 is thus a result of deformation of second metal layer 304 by laser 305, which can advantageously localize the heat to desired regions of the hydrogen containing first metal layer 300.

Although the invention has been described with particular reference to various embodiments thereof, variations and modifications of the present invention can be made within the contemplated scope of the following claims, as is readily known to one skilled in the art.
A biometric security method and apparatus for a capacitive sensor system is provided herein, where the method may include capturing a set of raw capacitive frames for a body part via the capacitive sensor system, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; creating a capacitive profile based on the set of raw capacitive frames; comparing a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and, generating an authentication signal based on a difference between the first value and the second value.
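The method summarized above can be sketched in a few lines of code. The sketch below is an illustrative interpretation only (all function and variable names are hypothetical): each raw frame is treated as a 2-D grid of capacitance levels, the set of frames is averaged into a capacitive profile, and per-row sums of the profile are compared against the enrolled template within a tolerance.

```python
# Illustrative sketch of the capacitive-profile method (names are hypothetical,
# not from the disclosure).  A "raw capacitive frame" is modeled as a 2-D grid
# of capacitance levels.

from typing import List

Frame = List[List[float]]

def capacitive_profile(frames: List[Frame]) -> Frame:
    """Combine raw frames into one frame of per-location averaged levels."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

def row_sums(profile: Frame) -> List[float]:
    """One comparison value per row: the sum of that row's capacitances."""
    return [sum(row) for row in profile]

def authenticate(profile: Frame, template: Frame, tolerance: float) -> bool:
    """Match if every row-sum differs from the template's by <= tolerance."""
    return all(abs(a - b) <= tolerance
               for a, b in zip(row_sums(profile), row_sums(template)))

frames = [[[1.0, 2.0], [3.0, 4.0]], [[3.0, 2.0], [3.0, 6.0]]]
profile = capacitive_profile(frames)   # [[2.0, 2.0], [3.0, 5.0]]
template = [[2.1, 2.0], [3.0, 4.9]]
print(authenticate(profile, template, tolerance=0.5))  # True
```

Column sums, or any per-location comparison at "a similar location," could be substituted for the row sums used here; the tolerance plays the role of the decision threshold on the difference between the first and second values.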
CLAIMS

1. A biometric security apparatus comprising:
a capacitive sensor system configured to capture a set of raw capacitive frames for a body part, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part;
means for creating a capacitive profile based on the set of raw capacitive frames; and
a processing system configured to:
compare a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and
generate an authentication signal based on a difference between the first value and the second value.

2. The biometric authentication apparatus of claim 1, wherein the means for creating the capacitive profile based on the set of raw capacitive frames comprises: means for combining the set of raw capacitive frames to create a combined capacitive frame comprising a distributed plurality of averaged capacitance levels, each of which being an average of all capacitance levels for a respective location across all raw capacitive frames in the set of capacitive frames.

3. The biometric authentication apparatus of claim 1, wherein the authentication signal indicates a reasonable match has been determined between the capacitive profile and the biometric template.

4. The biometric authentication apparatus of claim 1, wherein the authentication signal identifies the body part as the enrolled body part.

5. The biometric authentication apparatus of claim 1, wherein the first value and the second value each comprise a capacitive value determined from capacitive sensor measurements captured by a portion of a capacitive sensor array.

6. The biometric authentication apparatus of claim 5, wherein the capacitive value comprises a sum of the capacitive sensor measurements.

7.
The biometric authentication apparatus of claim 5, wherein the capacitive sensor measurements from the portion of the capacitive sensor array comprises a plurality of capacitive sensor measurements taken along one of a vertical or horizontal direction with respect to the capacitive sensor array.

8. The biometric authentication apparatus of claim 1, wherein the biometric template comprises one or more capacitive profiles, each capacitive profile generated from one or more raw capacitive frames captured by the capacitive sensor system in contact with the enrolled body part.

9. The biometric authentication apparatus of claim 1, wherein each capacitance level of the plurality of capacitance levels comprises a capacitance level sensed by a capacitive sensing element in the capacitive sensor system.

10. The biometric authentication apparatus of claim 1, wherein the capacitive sensor system comprises an arrangement of capacitive sensing elements in a particular shape and the plurality of capacitance levels contained in each raw capacitive frame are arranged in accordance with the particular shape.

11. A biometric security method for a capacitive sensor system comprising:
capturing a set of raw capacitive frames for a body part via the capacitive sensor system, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part;
creating a capacitive profile based on the set of raw capacitive frames;
comparing a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and
generating an authentication signal based on a difference between the first value and the second value.

12.
The method of claim 11, wherein the creating the capacitive profile based on the set of raw capacitive frames comprises combining the set of raw capacitive frames to create a combined capacitive frame comprising a distributed plurality of averaged capacitance levels, each of which being an average of all capacitance levels for a respective location across all raw capacitive frames in the set of capacitive frames.

13. The method of claim 11, wherein the authentication signal indicates a reasonable match has been determined between the capacitive profile and the biometric template.

14. The method of claim 11, wherein the authentication signal identifies the body part as the enrolled body part.

15. The method of claim 11, wherein the first value and the second value each comprise a capacitive value determined from capacitive sensor measurements captured by a portion of a capacitive sensor array.

16. The method of claim 15, wherein the capacitive value comprises a sum of the capacitive sensor measurements.

17. The method of claim 15, wherein the capacitive sensor measurements from the portion of the capacitive sensor array comprises a plurality of capacitive sensor measurements taken along one of a vertical or horizontal direction with respect to the capacitive sensor array.

18. The method of claim 11, wherein the biometric template comprises one or more capacitive profiles, each capacitive profile generated from one or more raw capacitive frames captured by the capacitive sensor system in contact with the enrolled body part.

19. The method of claim 11, wherein each capacitance level of the plurality of capacitance levels comprises a capacitance level sensed by a capacitive sensing element in the capacitive sensor system.

20.
The method of claim 11, wherein the capacitive sensor system comprises an arrangement of capacitive sensing elements in a particular shape and the plurality of capacitance levels contained in each raw capacitive frame are arranged in accordance with the particular shape.

21. A biometric security apparatus comprising:
a capacitive sensor system configured to capture a set of raw capacitive frames for a body part, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; and
a processor coupled to the capacitive sensor system to receive the set of raw capacitive frames, the processor configured to:
create a capacitive profile based on the set of raw capacitive frames;
compare a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and
generate an authentication signal based on a difference between the first value and the second value.

22. The apparatus of claim 21, wherein the processor is further configured to:
combine the set of raw capacitive frames to create a combined capacitive frame comprising a distributed plurality of averaged capacitance levels, each of which being an average of all capacitance levels for a respective location across all raw capacitive frames in the set of capacitive frames.

23. The apparatus of claim 21, wherein the authentication signal indicates a reasonable match has been determined between the capacitive profile and the biometric template.

24. The apparatus of claim 21, wherein the authentication signal identifies the body part as the enrolled body part.

25. The apparatus of claim 21, wherein the first value and the second value each comprise a capacitive value determined from capacitive sensor measurements captured by a portion of a capacitive sensor array.

26.
The apparatus of claim 25, wherein the capacitive value comprises a sum of the capacitive sensor measurements.

27. The apparatus of claim 25, wherein the capacitive sensor measurements from the portion of the capacitive sensor array comprises a plurality of capacitive sensor measurements taken along one of a vertical or horizontal direction with respect to the capacitive sensor array.

28. The apparatus of claim 21, wherein the biometric template comprises one or more capacitive profiles, each capacitive profile generated from one or more raw capacitive frames captured by the capacitive sensor system in contact with the enrolled body part.

29. The apparatus of claim 21, wherein each capacitance level of the plurality of capacitance levels comprises a capacitance level sensed by a capacitive sensing element in the capacitive sensor system.

30. The apparatus of claim 21, wherein the capacitive sensor system comprises an arrangement of capacitive sensing elements in a particular shape and the plurality of capacitance levels contained in each raw capacitive frame are arranged in accordance with the particular shape.

31. A computer program product comprising:
non-transitory computer-readable medium comprising instructions executable by a biometric security system to:
capture a set of raw capacitive frames for a body part via a capacitive sensor system, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part;
create a capacitive profile based on the set of raw capacitive frames;
compare a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and
generate an authentication signal based on a difference between the first value and the second value.
METHOD AND APPARATUS FOR BIOMETRIC-BASED SECURITY USING CAPACITIVE PROFILES

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to and the benefit of provisional patent application no. 62/013,496 filed in the United States Patent and Trademark Office on June 17, 2014 and non-provisional patent application no. 14/328,575 filed in the United States Patent and Trademark Office on July 10, 2014, the entire contents of which are incorporated herein by reference.

BACKGROUND

Field

[0002] Aspects of the present disclosure relate generally to touch sensor systems, and more particularly, to a method and apparatus for biometric-based security using capacitive profiles.

Background

[0003] Biometrics-based security offers convenience for users because there is nothing to be remembered and nothing to lose. Thus, a user may be authenticated or identified based on one or more biometrics belonging to the user. For example, one biometric authentication approach utilizes physical characteristics of a user's anatomy, such as handprints or fingerprints, for authenticating the user. As another example, a biometric authentication approach utilizes face recognition and voice recognition systems for authenticating the user. The approaches provided in the latter example are convenient because they do not depend on the user physically touching the security system for identification. However, they are extremely susceptible to the environmental parameters in which these systems operate, including insufficient lighting or excessive background noise. Further, for an audio-based biometric security system such as the aforementioned voice recognition system, deployment may be limited to quiet environments that are optimal for capturing a user's voice clearly but that, ironically, are intolerant of audio disturbances. These include such environments as occupied movie theaters, conference rooms, classrooms, and libraries.
Many of these aforementioned issues are exacerbated for biometric security systems utilizing mobile devices because there is often very little predictability of or control over the environments in which mobile devices will operate. [0004] Examples of biometric sensing schemes that are typically used for biometric security systems on mobile devices include physical sensing schemes and optical read schemes. In an optically based read scheme, a picture of a body part is captured as optical image data using mainly light reflection and an image sensor. In contrast, in a physically based sensing scheme, effects of a body part on sensors such as capacitive sensors are determined. Typically, these sensors are formed into an array, with a sensing resolution that depends on how densely the sensors need to be packed in the array. For example, fingerprint authentication is commonly used as a biometric authentication method, but users may not accept this approach because of its association with criminality and privacy concerns, such as where a governmental database could be linked to identify the fingerprints. In addition, the fingerprint authentication approach typically involves increased hardware implementation costs due to a need to employ high-resolution sensors to be able to accurately read fingerprints. These high-resolution sensors are typically separated from other touch-sensing devices such as touchscreens. The "Touch ID" fingerprint authentication system used in the iPhone® 5s from Apple, Inc. and the "Fingerprint Scanner" fingerprint authentication system used in the Galaxy® S5 from Samsung Electronics Co., Ltd. 
are examples of these high-resolution sensors.

[0005] In contrast to specialized sensor devices such as the aforementioned fingerprint sensors, touchscreens are commonly found in a variety of devices in today's industrial and consumer markets, including such devices as cellular phones, global positioning system (GPS) devices, set-top boxes, still and video cameras, computer screens, digital audio players, digital tablets, and the like. Because of their ubiquity and widespread implementation, touchscreens may be utilized to create biometric authentication systems that may be less costly and less complex to implement as compared to those using specialized, high-resolution types of sensors such as, for example, fingerprint sensors. In many instances, biometric security systems that utilize touchscreens would not require additional hardware.

[0006] For example, one type of touchscreen-based biometric system that is generally referred to as a behavioral biometric system uses touchscreen behavior to verify users from common touchscreen gestures, such as scroll, pinch and tap. Some behavioral biometric systems have extended this further to use parameters such as dwell-time and touch pressure to enhance user authentication. However, behavioral biometric systems are time-based in that, inherently, they can only provide authentication of a user after the user has utilized the system for a required period of time.
Further, behavioral biometric systems are generally not as accurate as another type of touchscreen-based biometric system, referred to generally as a physical biometric system, that utilizes physical biometric characteristics to verify users.

[0007] It would be desirable for a biometrics-based security solution to be low cost, offer convenience, provide accuracy, and utilize existing hardware while offering simple user interaction.

SUMMARY

[0008] The following presents a simplified summary of one or more aspects of the disclosed approach, in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.

[0009] Various aspects for providing biometric-based security using capacitive profiles are disclosed herein. In accordance with one aspect of the disclosed approach, a biometric security apparatus may be provided that includes a capacitive sensor system configured to capture a set of raw capacitive frames for a body part, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; means for creating a capacitive profile based on the set of raw capacitive frames; and, a processing system.
The processing system may be configured to compare a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and generate an authentication signal based on a difference between the first value and the second value.[0010] In accordance with another aspect of the disclosed approach, a biometric security apparatus is provided that includes a capacitive sensor system configured to capture a set of raw capacitive frames for a body part, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; and, a processor coupled to the capacitive sensor system to receive the set of raw capacitive frames. The processor is configured to create a capacitive profile based on the set of raw capacitive frames; compare a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and, generate an authentication signal based on a difference between the first value and the second value.[0011] In accordance with yet another aspect of the disclosed approach, a biometric security method for a capacitive sensor system is provided that may include capturing a set of raw capacitive frames for a body part via the capacitive sensor system, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; creating a capacitive profile based on the set of raw capacitive frames; comparing a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and, 
generating an authentication signal based on a difference between the first value and the second value.

[0012] In accordance with yet another aspect of the disclosed approach, a computer program product may be provided that includes a non-transitory computer-readable medium having instructions executable by a biometric security system to capture a set of raw capacitive frames for a body part via a capacitive sensor system, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; create a capacitive profile based on the set of raw capacitive frames; compare a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and, generate an authentication signal based on a difference between the first value and the second value.

[0013] These and other aspects of the disclosed approach will become more fully understood upon a review of the detailed description, which follows.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] These and other sample aspects of the disclosure will be described in the detailed description that follows, and in the accompanying drawings.

[0015] FIG. 1 is a flow diagram of a finger biometric-based security process for a biometric security system configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles.

[0016] FIG. 2 is a comparison of a finger capacitive profile and associated capacitive heat map image used to visually illustrate a distribution of capacitive values in a capacitive frame captured of the finger in the biometric security system configured in accordance with other various aspects of the disclosed approach for biometric-based security using capacitive profiles.

[0017] FIG.
3 is a flow diagram illustrating a biometric enrollment process in the biometric security system configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles to create a biometric template.

[0018] FIG. 4 is a flow diagram illustrating a biometric verification process in the biometric security system configured in accordance with various aspects of the disclosed approach for biometric authentication using capacitive profiles to authenticate or identify an unverified user.

[0019] FIG. 5 is a plot of several finger capacitive profiles for a finger of a user, where each finger profile is generated for that same finger at a different period of time, which illustrates a consistency of biometric parameters for the user.

[0020] FIG. 6 is a flow diagram of a hand capacitive profiling process of the biometric authentication system configured in accordance with various aspects of the disclosed approach for biometric authentication using capacitive profiles.

[0021] FIG. 7 is a comparison of a hand length profile and associated capacitive heat map image used to visually illustrate a distribution of capacitive values in a capacitive frame captured of the hand in the biometric security system configured in accordance with other various aspects of the disclosed approach for biometric-based security using capacitive profiles.

[0022] FIG. 8 is a comparison of a hand width profile and associated capacitive heat map image used to visually illustrate a distribution of capacitive values in a capacitive frame captured of the hand in the biometric security system configured in accordance with other various aspects of the disclosed approach for biometric-based security using capacitive profiles.

[0023] FIG.
9 is a comparison of a finger length profile and associated capacitive heat map image used to visually illustrate a distribution of capacitive values in a capacitive profile captured of each finger of the hand in the biometric security system configured in accordance with other various aspects of the disclosed approach for biometric-based security using capacitive profiles.

[0024] FIG. 10 is a comparison of capacitive heat maps and associated capacitive palm profiles when using a passive stylus for two users used to describe biometric authentication based on a palm profile in the biometric authentication system configured in accordance with other various aspects of the disclosed approach for biometric authentication using capacitive profiles.

[0025] FIG. 11 is a capacitive heat map and associated capacitive ear profile for a user used to describe biometric authentication based on an ear profile in the biometric authentication system configured in accordance with still other various aspects of the disclosed approach for biometric authentication using capacitive profiles.

[0026] FIG. 12 is a diagram including a plot of detected sensor signals as overlaid on a capacitive touch sensor array to describe an operation of various capacitive touch sensor arrays that may be used with the biometric authentication system configured in accordance with various aspects of the disclosed approach for biometric authentication using capacitive profiles.

[0027] FIG. 13 is a block diagram of an exemplary capacitive sensor array configured in accordance with one aspect of the disclosed approach for biometric authentication using capacitive profiles.

[0028] FIG. 14 is a cross-sectional profile view of an exemplary capacitance sensing structure that may be used to implement the capacitance sensing circuit in the capacitive sensor array of FIG. 13 in accordance with one aspect of the disclosed approach for biometric authentication using capacitive profiles.

[0029] FIG.
15 is a block diagram illustrating an example of an apparatus employing a processing system that may be used to implement the biometric authentication system configured in accordance with various aspects of the disclosed approach for biometric authentication using capacitive profiles.

[0030] In accordance with common practice, some of the drawings may be simplified for clarity. Thus, the drawings may not depict all of the components of a given apparatus (e.g., device) or method. Finally, like reference numerals may be used to denote like features throughout the specification and figures.

DETAILED DESCRIPTION

[0031] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

[0032] Various aspects of a method and apparatus for biometric-based security using capacitive profiles as described herein are provided in a biometric security system that analyzes raw capacitance sensor data received from a capacitive sensing system affected by a part of a biological entity. As used herein, a "biological entity" may refer to any biological entity, human or otherwise, of which a part thereof, such as a body part for a human, may be used for authentication or identification.
For purposes of simplifying the discussion, the term "user" may be used for the biological entity, and the term "body part" may be used to refer to the part of the biological entity that is being used for authentication or identification, as further described herein.

[0033] In one aspect of the disclosed approach, an "authentication" mode of operation may be used herein to refer to a mode of operation of the biometric security system for authenticating a user with respect to a biometric template associated with that user. It should be noted that the term "verification" and all variations thereof might be used interchangeably with the term "authentication." In another aspect of the disclosed approach, an "identification" mode of operation may be used herein to refer to a mode of operation of the biometric security system for matching a user with a single biometric template in a set of biometric templates associated with that user. In other words, a user with a known, but unconfirmed identity, also referred to herein as an "unverified user," may be verified using the authentication mode, while a user with an unknown identity, also referred to herein as an "unknown user," may be identified using the identification mode. Regardless of the distinction provided between the authentication and identification modes of operation described above, it should be understood that any reference to, or description of, various aspects of any disclosed approach for purposes of an authentication application may similarly apply to applications of the disclosed approach for purposes of an identification application.
In addition, various aspects of the disclosed approach may be used in applications where either or both authentication and identification may enable access to devices, data, or software, etc.; or other applications where either or both authentication and identification may provide any benefit in terms of security or utility.

[0034] In one aspect of the disclosed approach, raw capacitance data of the body part may be obtained directly from an arrangement of capacitive sensing elements in a capacitive sensor system. Each capture of raw capacitance data from the arrangement of capacitive sensing elements, stored in and referred to as a "raw capacitive frame," contains information regarding a distribution of relative capacitance levels sensed over the arrangement of capacitive sensing elements when the body part is placed on, or near enough to, the capacitive sensor system to affect any capacitive sensing elements.

[0035] In one aspect of the disclosed approach, a raw capacitive frame may be analyzed both by locating which capacitive sensor elements in the capacitive sensor array were affected by the body part, and also by determining a distribution of relative capacitance levels measured by those capacitive sensor elements for that same body part. The analysis of each raw capacitive frame is not limited to determining such geometric measurements of the body part being analyzed. For example, the analysis of each raw capacitive frame may also provide a relative distribution of capacitance values as measured by all sensor elements during creation of the raw capacitive frame.
Thus, the disclosed approach may use a combination of both the geometric measurements and the relative distribution of capacitance values in raw capacitive frames.

[0036] For example, where the body part is a hand, the analysis of each raw capacitive frame may provide gross geometric measurements of lengths and widths of fingers (including a thumb portion of the hand), separation between knuckles of each finger, and other anatomical measurements. The analysis may also utilize the relative distribution of capacitance values of all sensor elements of the capacitive sensor system affected by the hand. Being able to compare the variation of the relative distribution of capacitance values across raw capacitive frames allows authentication of users without being limited to use of gross geometric measurements. Thus, raw capacitive frames may be analyzed not only by locating which capacitive sensing elements in a capacitive sensor array are affected by a body part of a user, but also by determining a relative distribution over the capacitive sensor array of how much each capacitive sensing element is affected by the body part. An analysis of which capacitive sensing elements are affected would only be able to yield simple measurements, such as lengths and widths of fingers and digits. As described herein, capacitance measurements of the body part by the array of capacitive sensors, and the relative distribution thereof, which varies due to such biometric factors as skin properties and manner of contact, may be used because they are highly consistent within, yet highly unique between, subjects.

[0037] In accordance with one aspect of the disclosed approach, a capacitive profile may be created for the body part of the user based on an analysis of a set of raw capacitive frames captured for the body part. The set of raw capacitive frames may be captured as a sequential series of raw capacitive frames.
The sequential series of raw capacitive frames may be processed and combined to create the capacitive profile, as further described herein.

[0038] In accordance with one aspect of the disclosed approach, a biometric template may be created for the body part of the user during a biometric enrollment process, where the biometric template may later be used to authenticate the user. In one aspect of the disclosed approach, the biometric template may be created from one or more capacitive profiles, each of which may be generated from one or more raw capacitive frames, as discussed further herein. Once created, the biometric template may be used later for authentication of the user.

[0039] The arrangement of capacitive sensing elements in the capacitive sensor system may be implemented as a capacitive sensor array that includes multiple capacitive sensing elements. Each capacitive sensing element may be used to detect the presence and/or absence of a conductive object, although direct contact between the conductive object and the capacitive sensing element is not necessarily required. As used herein, the terms "capacitive sensor element" and "capacitive sensing element" are both meant to refer to a sensor element capable of detecting a change of capacitance due to a body part of a biological entity being placed on or near a surface of the sensor element, and the terms "measurement," "determination," "value," and "level," when used with either of the terms "capacitive" or "capacitance," refer to a capacitance level sensed by the capacitive sensor element, where the capacitance level may be provided in analog or digital form.
Further, although a portion of any description contained herein may use the terms "touches," "contacts," or any other grammatical form of the terms "touch" or "contact" to refer to actual, physical contact between a body part of a biological entity and one or more capacitive sensing elements in the capacitive sensor system, such use should be understood to encompass locating the body part of the biological entity in close enough proximity to affect a capacitance level in any of the capacitive sensing elements, unless otherwise stated. In other words, unless a description specifically notes that a physical contact or touch is required with regard to capacitive sensing elements, no such limitation should be read therein.

[0040] As further described herein, capacitive sensing elements are typically constructed of a conductive pad, a surrounding ground, and a connection to a controller. In most applications, the conductive pad is a large copper footprint and the surrounding ground is a poured fill. A native (parasitic) capacitance exists between these two objects. However, when a third conductive object, such as a human finger, is brought into proximity with the sensor, the capacitance of the system is increased by the capacitance of the stimulus. For example, when a body part such as a hand comes into contact or close proximity with the arrangement of capacitive sensing elements in the capacitive sensor system, a capacitance in one or more of the capacitive sensing elements changes.
One or more electrical circuits may measure a value of the capacitance in each of the one or more of the capacitive sensing elements and convert these measured values of the capacitive sensing elements into digital values, as further described herein.

[0041] In accordance with various aspects of the disclosed approach, the capacitive sensing system may be implemented as a part of a display device to create a capacitive touch screen, and raw data output from the capacitive touch screen may be used to generate raw capacitive frames reporting a distribution of varying capacitance levels caused by a body part of a biological entity that comes into contact with, or near, the capacitive touch screen. To provide a better understanding of the various aspects of the disclosed approach, each raw capacitive frame may be represented in various figures contained herein as an image referred to as a "capacitive heat map image" or, more simply, as a "capacitive heat map." Each capacitive heat map may visually represent information stored in a corresponding raw capacitive frame by displaying a distribution of a plurality of capacitance levels contained in the raw capacitive frame as measured by the capacitive sensing elements of the capacitive sensor array of the capacitive sensing system. In the various capacitive heat maps that may be included in the figures, each capacitive measurement of a raw capacitive frame may be represented by a single pixel in the capacitive heat map, and each level is assigned a different shade of gray as the capacitive heat map is displayed in gray scale. The capacitive heat map may also be displayed as a color image.
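The one-pixel-per-measurement grayscale mapping described above can be sketched as follows. This is a minimal illustration only, not the rendering used by the disclosed system; the function name `frame_to_heat_map` and the toy frame values are assumptions introduced for demonstration.

```python
def frame_to_heat_map(frame, levels=256):
    """Map a raw capacitive frame (a 2-D grid of capacitance levels)
    to grayscale pixel values, one pixel per capacitive measurement."""
    flat = [v for row in frame for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # a flat frame renders as all black
        return [[0] * len(row) for row in frame]
    # Linearly assign each capacitance level a shade of gray (0..levels-1).
    return [[int((v - lo) * (levels - 1) / (hi - lo)) for v in row]
            for row in frame]

# Toy 3x3 frame: capacitance is highest where a finger presses hardest.
toy_frame = [[0, 10, 0],
             [10, 40, 10],
             [0, 10, 0]]
heat = frame_to_heat_map(toy_frame)
```

Scaling relative to the frame's own minimum and maximum, rather than to fixed absolute levels, keeps the heat map informative even though raw capacitance levels differ between sensor arrays.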
It should be noted that the use of capacitive heat maps in the description associated with the various figures contained herein should not constitute a limitation of the applicability of any of the various aspects of the disclosed approach, because any capacitive heat map is used solely for the purpose of assisting in the comprehension of the descriptive material contained herein.

[0042] It should be noted that any suitable portion of the biological entity might be used as the body part for authentication purposes. For example, in addition to fingers, hands or portions of a hand of the user, such as a portion of a palm of the hand of the user that may come into contact with the touch screen when the user is using a stylus, may also be used as the body part. Another example of body parts that may be used would be ears or portions of an ear of the user that may come into contact with the touch screen when the user holds the touch screen to the ear when, for example, the user is in a phone call. Similar to the process described above for fingers and hands, capacitive profiles may be generated for these body parts to create a biometric template to later be used in performing authentication by comparing one or more capacitive profiles generated for a presented body part that may include all or a portion of the same part of the body used in the creation of the biometric template.

[0043] FIG. 1 illustrates a finger biometric-based security process 100 configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles in a biometric security system that utilizes a finger of a user as the body part for authentication.
Various aspects of an operation for each portion of the finger biometric-based security process 100, which includes a presentation and data capture portion 110, a pre-processing portion 120, a segmentation portion 130, a feature extraction portion 140, and a matching portion 150, will be described herein with reference to the biometric security system including a touch screen integrating a capacitive sensor system with a capacitive sensor array, an example of which is illustrated in FIG. 13 as a capacitive sensor system 1300 that includes a capacitive sensor array 1310 that may be configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles, as further explained herein.

[0044] In one aspect of the disclosed approach, a first part of the finger biometric-based security process 100, referred to and illustrated in FIG. 1 as a "capacitive profile creation session" 102, which includes the presentation and data capture portion 110, the pre-processing portion 120, the segmentation portion 130, and the feature extraction portion 140, may be used to create capacitive profiles, where each iteration of the capacitive profile creation session 102 may be used to create a single capacitive profile. As described further herein, each capacitive profile may be generated from a set of one or more captured raw capacitive frames and may be used for such purposes as: (1) establishing a biometric template for a body part such as the finger of the user, as further described herein with reference to FIG. 3; or (2) performing authentication using the biometric template, as further described herein with reference to FIG.
4.

[0045] A description of an iteration of the capacitive profile creation session 102 of the finger biometric-based security process 100 begins at the presentation and data capture portion 110, where the biometric security system requests and receives presentation by the user of a body part such as the finger during a presentation operation 112. Once the body part is presented, one or more raw capacitive frames may be captured of the body part using the capacitive sensor array 1310 in the capacitive sensor system 1300 during a data capture operation 114. In one aspect of the disclosed approach, the user may be prompted and is expected to place his finger on the touch screen for a duration of time such that the capacitive sensor array 1310, which operates at a particular frame rate, may capture a set of raw capacitive frames. As an example, the user may leave his finger on the touch screen for approximately 3-4 seconds and, assuming the capacitive sensor array 1310 operates at 120 frames/second, approximately 360-480 frames may be captured. An example of a distribution of various levels of capacitance measured for the finger that may be captured in a frame during the data capture operation 114 of the presentation and data capture portion 110 is visually illustrated as a capacitive heat map image 116.

[0046] In one aspect of the disclosed approach for the presentation operation 112, the user may place his hand on the screen in a relaxed manner, preferably flat with all fingers of the hand touching each other.
This method of hand placement, generally referred to as an unconstrained presentation modality, removes a need for any type of guiding posts or other means to ensure compliance with the rigid body part positioning requirements of an approach referred to generally as a constrained presentation modality.

[0047] Although the constrained presentation modality may provide such benefits as ease of finger length gauging where, for example, the body part to be analyzed is a finger and a predetermined anchor point of the finger is assumed, there are some drawbacks as well. For example, continuing with the discussion where the body part to be analyzed is the finger, rigid body part position requirements may be difficult for some people to meet and, having a fixed nature, almost eliminate extensibility to add other fingers. Further, even though a particular position of the finger may be imposed, the position of the finger may still change for each capture. More importantly, the constrained presentation modality may be very difficult to achieve on some devices, especially where those devices have thick bezels around the touch screen, which may result in partial finger measurements.

[0048] In contrast, under the unconstrained presentation modality, users may more naturally position the body part to be analyzed, such as being able to place the finger on any part of the touch screen so that the whole length of the finger is within the touch screen, without having to worry about whether a palm portion of the hand is touching the touch screen as well. This allows full finger measurement, which may generally result in more consistent measurements.
In addition, use of the unconstrained presentation modality provides extensibility of the various aspects of the disclosed approach for biometric authentication using capacitive profiles to multiple fingers, as described herein.

[0049] Once the set of raw capacitive frames is acquired during the presentation and data capture portion 110, the pre-processing portion 120 of the finger biometric-based security process 100 removes any noise or other undesirable data from the frames. In one aspect of the disclosed approach for pre-processing, a normalization operation 122 may be performed to remove noise inherent in operation of the capacitive sensor array 1310 in the capacitive sensor system 1300. This noise should be removed because it affects all captured frames as a baseline noise level of the capacitive sensor array 1310. The normalization process may include averaging data in raw capacitive frames captured from the capacitive sensor array 1310 for a short period of time while there are no objects placed on the capacitive sensor array 1310 so that the baseline noise level may be established. This baseline noise level may then be subtracted from all subsequent raw capacitive frames.

[0050] In another aspect of the disclosed approach for pre-processing, a masking operation 124 in the pre-processing portion 120 may involve creating a binary mask and identifying object contours using image dilation to create a dilated mask that may then be applied to each frame to filter out capacitive measurements that may not be of interest in the analysis. For example, the binary mask may be created for each frame by using a suitable threshold to separate touched and untouched sensors. Image dilation may then be used to enhance the edges of the capacitive image of the captured body part by obtaining the object contours to be masked before the dilated mask is applied to each frame to filter out the surrounding noise.
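The normalization operation 122 and the masking operation 124 described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the disclosed implementation: the function names (`baseline`, `normalize`, `dilated_mask`), the single-step 4-neighbour dilation, and the toy frame values are all invented for demonstration, and a real system would use a frame-appropriate threshold.

```python
def baseline(no_touch_frames):
    """Estimate the array's baseline noise level by averaging raw frames
    captured while no object is placed on the capacitive sensor array."""
    n = len(no_touch_frames)
    rows, cols = len(no_touch_frames[0]), len(no_touch_frames[0][0])
    return [[sum(f[r][c] for f in no_touch_frames) / n for c in range(cols)]
            for r in range(rows)]

def normalize(frame, noise):
    """Subtract the baseline noise level from a raw capacitive frame."""
    return [[frame[r][c] - noise[r][c] for c in range(len(frame[0]))]
            for r in range(len(frame))]

def dilated_mask(frame, threshold):
    """Binary mask of touched sensors, grown by one 4-neighbour dilation
    step so that object contours are retained."""
    rows, cols = len(frame), len(frame[0])
    mask = [[frame[r][c] > threshold for c in range(cols)] for r in range(rows)]
    return [[mask[r][c]
             or (r > 0 and mask[r - 1][c]) or (r < rows - 1 and mask[r + 1][c])
             or (c > 0 and mask[r][c - 1]) or (c < cols - 1 and mask[r][c + 1])
             for c in range(cols)] for r in range(rows)]

noise = baseline([[[1, 1], [1, 1]], [[3, 1], [1, 1]]])  # per-point average
clean = normalize([[12, 1], [1, 1]], noise)             # baseline removed
mask = dilated_mask(clean, threshold=5)                 # touched cell + neighbours
```

Applying `mask` to each normalized frame then discards measurements outside the dilated contour, which is the filtering role the dilated mask plays in the passage above.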
Although description of various operations after the pre-processing portion 120 contained herein may refer to a processing of capacitive information associated with one or more capacitive data points, measurements, or levels from a particular frame in the set of raw capacitive frames, it should be noted that unless specifically referenced, this capacitive information preferably contains only capacitive data points, measurements, or levels that remain after the pre-processing portion 120 has removed and filtered any noise. Thus, for example, even though a particular area of a frame may have capacitive measurements with levels high enough to be included in an analysis of the frame in accordance with various aspects of the disclosed approach to create a capacitive profile, these capacitive measurements may nevertheless be irrelevant if they are associated with capacitive sensing elements that have not been touched and are instead the result of noise. Such capacitive measurements should still be excluded if the particular area is not within the dilated mask, because they are not relevant to the analysis of the finger.[0051] Continuing to refer to FIG. 1, the segmentation portion 130 of the finger biometric-based security process 100, which includes a fingertip location determination operation 132, a raw capacitive frame filtering operation 134, and a Palmar Digital Crease (PDC) identification operation 136, may be used to create a capacitive profile. During the segmentation portion 130, a Palmar Digital Crease (PDC), which is a feature of the hand where the finger meets a palm portion of the hand, is identified in the capacitive profile. Identification of the PDC allows any data associated with the palm portion of the hand in the capacitive profile to be ignored, leaving only data associated with the finger in the capacitive profile to facilitate processing of the capacitive profile for feature extraction, as later described herein.
In order to facilitate the description and understanding of various operations in the segmentation portion 130, reference will be made to FIG. 2, which includes a comparison 200 of a finger capacitive profile 220 plotted in a capacitive profile chart 210 to a capacitive heat map image 250.[0052] In one aspect of the disclosed approach for segmentation, the fingertip location determination operation 132 may determine a location of the tip of the finger using a mask such as the dilated mask created as described above during the masking operation 124 in the pre-processing portion 120. An example of a fingertip location that may be identified on the finger capacitive profile 220 by the fingertip location determination operation 132 is indicated by a dotted line 222. [0053] In accordance with various aspects of the disclosed approach, the raw capacitive frame filtering operation 134 may be used to filter and process the set of raw capacitive frames once the fingertip location has been identified. In one aspect of the disclosed approach, a time filter may be applied to the set of raw capacitive frames by averaging touch sensor data across all frames in the set, generating a single averaged frame referred to herein as a time-filtered capacitive frame. Each capacitive data point in the time-filtered capacitive frame is thus created based on an average of all capacitive measurement values from a corresponding capacitive data point in the raw capacitive frames in the set of raw capacitive frames.
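The time filter described above might be sketched as follows — a minimal sketch assuming the raw frames are numpy arrays of equal shape; the function name and the optional `max_frames` cap are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def time_filter(raw_frames, max_frames=None):
    """Collapse a set of raw capacitive frames (e.g. ~360-480 frames at
    120 frames/second) into one time-filtered capacitive frame by averaging
    each sensor's value across all frames.  `max_frames` is an assumed
    optional upper limit on how many frames are combined."""
    frames = np.asarray(raw_frames, dtype=float)
    if max_frames is not None:
        frames = frames[:max_frames]
    return frames.mean(axis=0)
```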
Thus, continuing with the example above where the capacitive sensor system 1300 operates at 120 frames/second, an average of each capacitive data point may be determined from either 360 capacitive data points if the presentation of the finger lasted exactly 3 seconds (such that 360 raw capacitive frames were captured), or 480 capacitive data points if the presentation of the finger lasted exactly 4 seconds (such that 480 raw capacitive frames were captured), where each capacitive data point in a particular raw capacitive frame was captured by a respective capacitive sensing element in the capacitive sensor array 1310. Although in the above example it is assumed that a period of time of data capture coincides exactly with a period of time of presentation, it should be noted that the period of data capture may not coincide with the period of presentation. Further, in other aspects of the disclosed approach, the number of raw capacitive frames over which each capacitive data point is averaged may vary, and it is within an expected implementation of the disclosed approach that lower and/or upper limits may be placed on a number of raw capacitive frames to be processed and combined, regardless of the number of raw capacitive frames actually captured.[0054] In addition to the time filter described above, in another aspect of the disclosed approach a row filter in the raw capacitive frame filtering operation 134 may be applied to the time-filtered capacitive frame by summing all touch sensor data along each row of the time-filtered capacitive frame to create the finger capacitive profile 220. As previously noted, each capacitive heat map image is a visual representation of a distribution of various levels of capacitance measurements stored in a respective frame. In FIG.
2, the capacitive heat map image 250 is a visual representation of the distribution of various levels of capacitance measurements for the finger in the time-filtered capacitive frame generated using the time filter. In one aspect of the disclosed approach, the finger capacitive profile 220 may be generated from a summation of normalized sensor output data stored in each row of the time-filtered capacitive frame, where capacitance data from all rows of the time-filtered capacitive frame, as represented by the capacitive heat map image 250, is summed and plotted as the finger capacitive profile 220 in the capacitive profile chart 210. It should be noted that the orientation of the capacitive heat map image 250 in FIG. 2 has been rotated such that an axis representing a touchscreen display height is horizontally displaced with respect to the bottom of FIG. 2. Thus, any reference to a summation of capacitance values in each "row" of the time-filtered capacitive frame is actually a reference to a summation of capacitance values in a vertical orientation as illustrated by the capacitive heat map image 250 in FIG. 2.[0055] The PDC identification operation 136 may be used to identify the PDC in the finger capacitive profile 220 once the row filter operation has been completed. In one aspect of the disclosed approach, a portion of the finger capacitive profile 220 with a predetermined length that is at least as long as a longest previously recorded measurement of a human index finger, which is approximately 8.5 centimeters (cm), is analyzed. This predetermined length is referred to and illustrated as a PDC search scope 252 in FIG. 2, and a geometry of the capacitive sensing elements, which is known, may be used to determine how many rows of data in the frame should be analyzed. For example, if a pitch between capacitive sensing elements is 0.24 cm, then 36 rows of data in the time-filtered capacitive frame, corresponding to 36 sensors in length, where:

36 sensors x 0.24 cm/sensor = 8.64 cm, (1)

may be analyzed to meet the 8.5 cm minimum required length. A valley detector may be run over the PDC search scope 252 to identify the PDC in a base portion of the finger capacitive profile 220 where a location of a border of the palm portion is expected. Referring again to the finger capacitive profile 220, a PDC for the finger is illustrated by a dotted line 224.[0056] Once a palm portion of the capacitive profile has been identified by the segmentation portion 130, the feature extraction portion 140 of the finger biometric-based security process 100 may be used to convert the capacitive profile, such as the finger capacitive profile 220, to a z-score starting from the location of the fingertip to the PDC. The term "z-score" as used herein refers to a standard score that is a signed number of standard deviations by which an observation or datum is above the mean. In one aspect of the disclosed approach, a predetermined length is used for capacitive profile comparisons, and continuing with the example above of 36 rows in length, all fingertip capacitive profiles are aligned at the fingertip location. Then, a pad of zero values (i.e., z=0) is used to zero out the values from the location of the PDC to the origin.[0057] In accordance with various aspects of the disclosed approach, a z-score value is dependent on such factors as a number of sensors touched in each row; body part (e.g., finger) geometry such as, for example, length, width, area, etc.; pressure of the user's presentation; capacitance level of each touched sensor; and variations in conductivity across the skin, which may or may not be due to skin moisture.[0058] In one aspect of the disclosed approach, as further described herein, during an enrollment process an enrollment template that is created to authenticate a user may be generated from a number of capacitive profiles.
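The row filter, fingertip location, and valley-based PDC identification described above might be sketched as follows. This is a minimal sketch assuming a numpy time-filtered frame whose row 0 is nearest the fingertip, and using a simple minimum as the valley detector; the names and orientation assumption are illustrative, not part of the disclosure.

```python
import numpy as np

SENSOR_PITCH_CM = 0.24      # pitch from the example above
PDC_SCOPE_ROWS = 36         # 36 sensors x 0.24 cm/sensor = 8.64 cm

def finger_capacitive_profile(filtered_frame):
    """Row filter: sum the sensor data along each row of the time-filtered
    frame, yielding a 1-D profile along the finger's length."""
    return np.asarray(filtered_frame, dtype=float).sum(axis=1)

def fingertip_row(dilated_mask):
    """Fingertip location: first row of the dilated mask containing any
    touched sensor (assumes the fingertip points toward row 0)."""
    return int(np.flatnonzero(np.asarray(dilated_mask).any(axis=1))[0])

def find_pdc(profile, tip_row):
    """Simple valley detector: the minimum of the profile within the PDC
    search scope below the fingertip marks where the finger meets the
    palm portion of the hand."""
    scope = profile[tip_row:tip_row + PDC_SCOPE_ROWS]
    return tip_row + int(np.argmin(scope))
```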
For example, an exemplary number of capacitive profiles that may be averaged to create an enrollment template as used herein is three (3) capacitive profiles. However, any number of capacitive profiles may be combined, each of which is created through an operation of the capacitive profile creation session 102, which is the first part of the finger biometric-based security process 100 that includes the presentation and data capture portion 110, the pre-processing portion 120, the segmentation portion 130, and the feature extraction portion 140, as discussed above. Thus, one or more capacitive profiles, referred to as enrollment capacitive profiles, may be used to establish a biometric template for a finger of the user.[0059] In addition, during an authentication process, one or more capacitive profiles may also be captured utilizing the capacitive profile creation session 102 previously used to capture the enrollment capacitive profiles used to generate the biometric template during the enrollment process. During a validation session 104 in a second part of the finger biometric-based security process 100, an authentication capacitive profile may be compared to the biometric template to perform authentication of the user utilizing the matching portion 150 in accordance with various other aspects of the disclosed approach, as further described herein. In other words, the first part of the finger biometric-based security process 100 that was used to create the set of enrollment capacitive profiles for establishing the biometric template may also be used to create a capacitive profile referred to as an authentication capacitive profile. The authentication capacitive profile may be compared to the biometric template using a matching operation 152 in the matching portion 150 of the finger biometric-based security process 100, as described herein.
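The z-score conversion performed during feature extraction, and the averaging of several enrollment capacitive profiles into a biometric template, might be sketched as follows — a minimal sketch assuming 1-D numpy profiles; the fixed 36-row length follows the example above, and all names are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

PROFILE_LEN = 36   # assumed fixed comparison length, per the 36-row example

def to_zscore_profile(profile, tip_row, pdc_row):
    """Convert the finger segment (fingertip to PDC) to z-scores, aligned
    at the fingertip, with zero padding past the PDC so that palm data is
    ignored."""
    seg = np.asarray(profile[tip_row:pdc_row], dtype=float)
    z = (seg - seg.mean()) / seg.std()
    out = np.zeros(PROFILE_LEN)
    n = min(len(z), PROFILE_LEN)
    out[:n] = z[:n]
    return out

def make_template(enrollment_profiles):
    """Average several enrollment capacitive profiles (three in the
    example) into a single biometric template."""
    return np.mean(np.asarray(enrollment_profiles, dtype=float), axis=0)
```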
[0060] In accordance with various aspects of the disclosed approach, during an authentication operation based on the matching operation 152, each point of the authentication capacitive profile may be compared to a respective point in the biometric template using a standard matching algorithm, such as Euclidean distance, Hamming distance, etc., to evaluate if the verification data matches the biometric template at a given threshold. Thus, as used herein, the term "matched" or "matches" may refer to an authentication capacitive profile with a capacitive measurement distribution that does not have to be identical to the biometric template. If the authentication capacitive profile matches with the biometric template, then the user is authenticated, and if there is no match then authentication is denied.[0061] FIG. 3 illustrates a biometric enrollment process 300 configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles in a biometric security system to create a biometric template for a body part of a user. The biometric template may later be utilized to perform authentication of the user in accordance with various other aspects of the disclosed approach, as described herein. In the example described herein, the body part to be later utilized to perform authentication is a finger of the user and reference will be made to the finger biometric-based security process 100 as illustrated in FIG. 1. The biometric authentication system may include a touch screen integrating a capacitive sensor system with a capacitive sensor array, an example of which is illustrated in FIG.
13 as the capacitive sensor system 1300 that includes the capacitive sensor array 1310 that may be configured in accordance with various aspects of the disclosed approach for biometric authentication using capacitive profiles, as further explained herein.[0062] The biometric enrollment process 300 begins at 302, where a request is made for the user to initiate creation of a biometric template through presentation of the body part. In one aspect of the disclosed approach, as the user in the example will be enrolling a finger as the body part, the operation for presentation may follow a presentation scheme as described in the presentation and data capture portion 110 of the finger biometric-based security process 100.[0063] At 304, one or more enrollment capacitive profiles are captured. In one aspect of the disclosed approach, each enrollment capacitive profile may be generated from a respective set of raw capacitive frames as described in the finger biometric-based security process 100, where the respective set of raw capacitive frames that is used to generate the capacitive enrollment profile includes a data capture of a plurality of raw capacitive frames, such as approximately 360 to 480 frames as discussed in the data capture operation of the presentation and data capture portion 110. The set of raw capacitive frames is then processed to create the capacitive enrollment profile utilizing the operations described by the pre-processing portion 120, the segmentation portion 130, and the feature extraction portion 140. Operation then continues at 306.[0064] At 306, the capacitive enrollment profile generated by 304 is added to the biometric template.
In one aspect of the disclosed approach, if more than one capacitive enrollment profile is to be added to the biometric template, then all capacitive profiles are averaged to create the biometric template that is later used for comparison during a verification process.[0065] At 310, it is determined if the biometric template is complete. In one aspect of the disclosed approach, the biometric template may be considered complete if a sufficient number of enrollment capacitive profiles have been processed.[0066] In one aspect of the disclosed approach, the biometric enrollment process 300 may be similar in implementation to a verification process with the exception that, during the biometric enrollment process 300, the user presents their finger (or hand) a number of times to create the set of enrollment capacitive profiles needed for the capacitive profile averaging operation to form the biometric template. In another aspect of the disclosed approach, the biometric template may be considered complete once all relevant information related to extracting relevant profiles for a body part (e.g., a finger) or all portions of a body part (e.g., all fingers of a hand as well as various geometries related thereto) has been extracted.[0067] In one aspect of the disclosed approach, the operations described in the capacitive profile creation session 102, which includes the operations contained in the presentation and data capture portion 110, the pre-processing portion 120, the segmentation portion 130, and the feature extraction portion 140, may be repeated to create multiple enrollment capacitive profiles. As an example, three enrollment capacitive profiles may be used in the creation of the biometric template, with the enrollment capacitive profiles being averaged to create the biometric template, but a larger or smaller number of enrollment capacitive profiles could be used in other examples. Returning to FIG.
3, if more enrollment capacitive profiles need to be captured, operation returns to 304, where another set of raw capacitive frames may be captured to create another enrollment capacitive profile. Otherwise, once it is determined that the biometric template is complete, operation continues with 312, where the biometric template is stored.[0068] At 312, the newly created biometric template may be stored. In one aspect of the disclosed approach for biometric authentication using capacitive profiles, the newly created biometric template may be stored in a storage device of the biometric system. In another aspect of the disclosed approach, the newly created biometric template may be stored in the same integrated circuit in which the touchscreen capacitive sensor array is implemented, which increases security by reducing a likelihood that a third party may have unauthorized access to the biometric template. Once the newly created biometric template is stored, operation continues with 320.[0069] At 320, it is determined if the enrollment process 300 has completed. For example, if the user wishes to enroll another body part such as another finger or the other hand, then the enrollment process 300 may be repeated. As another example, it may be possible to enroll several users through multiple iterations of the enrollment process 300, each user having a different user ID and biometric template. If the enrollment process 300 has not completed because a user desires to add more biometric templates, say for other body parts such as another finger or another hand, then operation returns to 302, where additional biometric templates may be created as discussed above. Otherwise, operation of the enrollment process 300 ends.[0070] FIG.
4 illustrates a biometric verification process 400 in the biometric security system configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles that may be used to authenticate an unauthenticated user where, at 402, the unauthenticated user may be requested to present a body part for authentication. In one aspect of the disclosed approach, the biometric verification process 400 utilizes operations from the finger biometric-based security process 100 that was used during the enrollment process 300 to create the biometric template. For example, the description in the presentation operation 112 in the presentation and data capture portion 110 of the finger biometric-based security process 100 may be applicable to 402. As discussed previously, the body part to be used for authentication should be the same body part that was enrolled in the biometric security system. For example, if a finger was enrolled during an iteration of the enrollment process 300, then a request should be made for the same finger of the unauthenticated user to be presented. [0071] At 404, a set of raw authentication capacitive frames including one or more raw capacitive frames may be captured in accordance with the operation of the data capture operation 114 in the presentation and data capture portion 110 of the finger biometric-based security process 100. The set of raw authentication capacitive frames may be processed to create an authentication capacitive profile utilizing the operations described in the pre-processing portion 120, the segmentation portion 130, and the feature extraction portion 140 of the finger biometric-based security process 100.[0072] At 406, the authentication capacitive profile may be compared to the biometric template.
In one aspect of the disclosed approach, the authentication capacitive profile may be compared to the biometric template utilizing an operation such as the matching operation 152 of the matching portion 150 from an iteration of the validation session 104. For example, a capacitive value for each location of the authentication capacitive profile may be matched with a capacitive value at a respective location of the biometric template. A difference between each pair of respective capacitive values from the authentication capacitive profile and the biometric template may be determined and tracked.[0073] At 408, if a difference between the authentication capacitive profile as compared to the biometric template is above a predetermined threshold, referred to as an authentication threshold, which would indicate that the difference was too large and the matching operation 152 indicated that the authentication capacitive profile did not match the biometric template, then operation continues with 420. Otherwise, if the difference between the authentication capacitive profile as compared to the biometric template is below the authentication threshold, then operation may continue with 410. In accordance with various aspects of the disclosed approach, differences between particular capacitive values in the authentication capacitive profile and respective capacitive values in the biometric template may be determined and compared with the authentication threshold. In one aspect of the disclosed approach, if all the differences are below the authentication threshold then a match is signaled. In another aspect of the disclosed approach, if a sufficient number of differences are below the authentication threshold then a match is signaled.[0074] At 410, access is granted to the user, who is now authenticated.
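The threshold-based matching described above, and its extension to identifying one user among several enrolled templates, might be sketched as follows — a minimal sketch using Euclidean distance (one of the standard algorithms named in the disclosure); the function names, the dictionary-of-templates layout, and the threshold semantics are illustrative assumptions.

```python
import numpy as np

def matches(auth_profile, template, auth_threshold):
    """Compare the authentication capacitive profile to the biometric
    template point by point; signal a match when the Euclidean distance
    between them falls below the authentication threshold."""
    diff = np.asarray(auth_profile, dtype=float) - np.asarray(template, dtype=float)
    return float(np.linalg.norm(diff)) < auth_threshold

def identify_user(auth_profile, templates, auth_threshold):
    """Compare one authentication capacitive profile against every enrolled
    user's biometric template; return the best-matching user ID, or None
    when no stored template is within the authentication threshold."""
    a = np.asarray(auth_profile, dtype=float)
    best_id, best_dist = None, float("inf")
    for user_id, template in templates.items():
        dist = float(np.linalg.norm(a - np.asarray(template, dtype=float)))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist < auth_threshold else None
```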
In accordance with various aspects of the disclosed approach, granting access to the user may include providing the user with access to any systems, data, or resources protected by the biometric security system. For example, where the biometric security system is implemented as part of a touch screen of a mobile device such as a mobile phone, the user may "unlock" the phone after being authenticated. In another example, if the mobile device is a tablet or a portable computer with one or more accounts, the user may be able to access his account after being authenticated. As discussed above, a user may be identified as well as authenticated using the biometric security system. Thus, in yet another example where the biometric security system is integrated into a computing device with multiple user accounts and a biometric template enrolled for each of at least two users— including one for the user, the biometric security system may both authenticate and log in the user to his account by determining a suitable match between the authentication capacitive profile and one of the biometric templates stored by the biometric security system. In still yet another example, the computing device may be a tablet with a capacitive touch screen display that is shared among a limited number of users, each with their own access to the tablet, and the biometric security system may be implemented to allow these users to be authenticated and/or identified by measuring unique aspects of either fingers or hands of these users.[0075] At 420, in one aspect of the disclosed approach, if there is not a match between the authentication capacitive profile and the biometric template, then operation of the biometric verification process 400 ends. It should be noted that a match may not occur because of reasons other than the unauthenticated user not being the same user who created the biometric template using the body part.
For example, the user may not have presented the body part properly during the presentation and data capture portion 110, such as by moving the body part or not properly placing the body part with respect to the capacitive sensor system 1300 during data capture. In another aspect of the disclosed approach, operation of the biometric verification process 400 may optionally return to 402, where the unauthenticated user is requested to again present the body part for authentication.[0076] Once it has been determined whether a match exists between the authentication capacitive profile and the biometric template, and either access is granted or denied at 410 or 420, respectively, then operation of the biometric verification process 400 ends.[0077] It should be noted that the generation of capacitive profiles, whether it is during a verification process or a biometric enrollment process, utilizes the same operations as described with respect to the capacitive profile creation session 102. The difference between the verification process and the biometric enrollment process involves the user presenting a body part, such as his finger or hand, only once during the verification process, and more than once during the biometric enrollment process. As discussed with respect to an example operation as described in the enrollment process 300 of FIG. 3, it is preferable that the user presents the body part at least three (3) times to generate an equal number of enrollment capacitive profiles. These three enrollment capacitive profiles are then averaged to create the biometric template that is stored for later comparison with the authentication capacitive profile created during the verification process.
Those skilled in the art would know that the user could be asked to present the body part fewer or more times during either the verification process or the biometric enrollment process, and no limitations should be read into the description based on the number of times the user needs to present the body part or on the number of capacitive profiles generated therefrom.[0078] FIG. 5 illustrates a finger capacitive profile chart 500 for a collection of finger capacitive profiles 510 generated for a finger of a user over a period of time. The collection of finger capacitive profiles 510 includes finger capacitive profiles generated during different iterations of a capacitive profile creation session such as the capacitive profile creation session 102 in FIG. 1, including a finger capacitive profile generated from a first run 512, a finger capacitive profile generated from a second run 514, and a finger capacitive profile generated from a third run 516. To illustrate that capacitive profiles have excellent stability over time, creation of the collection of finger capacitive profiles 510 occurred over a two-month period on different devices used within different environments. However, it may be seen that the finger capacitive profiles created over these three runs are similar.[0079] FIG. 6 illustrates a hand biometric-based security process 600 configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles in a biometric security system that utilizes a hand of a user as the body part for authentication.
Various aspects of an operation for each portion of the hand biometric-based security process 600, which includes a presentation and data capture portion 610, a pre-processing portion 620, a segmentation portion 630, a feature extraction portion 640, and a matching portion 650, will be described herein with reference to the biometric security system including a touch screen integrating a capacitive sensor system with a capacitive sensor array, an example of which is illustrated in FIG. 13 as a capacitive sensor system 1300 that includes a capacitive sensor array 1310 that may be configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles, as further explained herein.[0080] In one aspect of the disclosed approach, similar to the finger biometric-based security process 100, a first part of the hand biometric-based security process 600, referred to and illustrated in FIG. 6 as a "capacitive profile creation session" 602 that includes the presentation and data capture portion 610, the pre-processing portion 620, the segmentation portion 630, and the feature extraction portion 640, may be used to create capacitive profiles, where each iteration of the capacitive profile creation session 602 may be used to create a single capacitive profile. As described further herein, each capacitive profile may be generated from a set of one or more captured raw capacitive frames and may be used for such purposes as: (1) establishing a biometric template for a body part such as the hand of the user, as previously described with reference to FIG. 3; or (2) performing authentication using the biometric template, as previously described with reference to FIG.
4.[0081] A description of an iteration of the capacitive profile creation session 602 of the hand biometric-based security process 600 begins at the presentation and data capture portion 610, where the biometric security system requests and receives presentation by the user of a body part such as the hand during a presentation operation 612. Once the body part is presented, one or more raw capacitive frames may be captured of the body part using the capacitive sensor array 1310 in the capacitive sensor system 1300 during a data capture operation 614. In one aspect of the disclosed approach, the user may be prompted and is expected to place his hand on the touch screen for a duration of time such that the capacitive sensor array 1310, which operates at a particular frame rate, may capture a set of raw capacitive frames. As an example, the user may leave his hand on the touch screen for approximately 3-4 seconds and, assuming the capacitive sensor array 1310 may operate at 120 frames/second, approximately 360-480 frames may be captured. An example of a distribution of various levels of capacitance measured for the hand that may be captured in a frame during the data capture operation 614 of the presentation and data capture portion 610 is visually illustrated as a capacitive heat map image 616.[0082] In one aspect of the disclosed approach for the presentation operation 612, the user may place his hand on the screen in a relaxed manner, preferably flat with all fingers of the hand touching each other.
This method of hand placement, generally referred to as an unconstrained presentation modality, removes a need for any type of guiding posts or other means to ensure compliance with rigid body part positioning requirements in an approach referred to generally as a constrained presentation modality.[0083] Although the constrained presentation modality may provide such benefits as ease of hand length gauging where, for example, the body part to be analyzed is a hand and a predetermined anchor point of the hand is assumed, there are some drawbacks as well. For example, continuing with the discussion where the body part to be analyzed is the hand, rigid body part positioning requirements may be difficult for some people to meet and, having a fixed nature, almost eliminate extensibility to add the other hand. Further, even though a particular position of the hand may be imposed, a position of the hand may still change for each capture. More importantly, the constrained presentation modality may be very difficult to achieve on some devices, especially where those devices have thick bezels around the touch screen, which may result in partial hand measurements.[0084] In contrast, under the unconstrained presentation modality, users may more naturally position the body part to be analyzed, such as being able to place the hand on any part of the touch screen so that the whole length of the hand is within the touch screen— without having to worry about whether a palm portion of the hand is touching the touch screen as well. This allows full hand measurement, which may generally result in more consistent measurements.
In addition, use of the unconstrained presentation modality provides extensibility of the various aspects of the disclosed approach for biometric authentication using capacitive profiles to multiple fingers, as described herein.[0085] Once the set of raw capacitive frames is acquired during the presentation and data capture portion 610, the pre-processing portion 620 of the hand biometric-based security process 600 removes any noise or other undesirable data from the frames. In one aspect of the disclosed approach for pre-processing, a normalization operation 622 may be performed to remove noise inherent in operation of the capacitive sensor array 1310 in the capacitive sensor system 1300. This noise should be removed because it affects all captured frames as a baseline noise level of the capacitive sensor array 1310. The normalization process may include averaging data in raw capacitive frames captured from the capacitive sensor array 1310 for a short period of time where there are no objects placed on the capacitive sensor array 1310 so that the baseline noise level may be established. This baseline noise level may then be subtracted from all subsequent raw capacitive frames. [0086] In another aspect of the disclosed approach for pre-processing, a masking operation 624 in the pre-processing portion 620 may involve creating a binary mask and identifying object contours using image dilation to create a dilated mask that may then be applied to each frame to filter out capacitive measurements that may not be of interest in the analysis. For example, the binary mask may be created for each frame by using a suitable threshold to separate touched/untouched sensors. Image dilation may then be used to enhance edges of the capacitive image of the captured body part by obtaining object contours to be masked before the dilated mask is applied to each frame to filter out the surrounding noise.
Although description of various operations after the pre-processing portion 620 contained herein may refer to a processing of capacitive information associated with one or more capacitive data points, measurements, or levels from a particular frame in the set of raw capacitive frames, it should be noted that unless specifically referenced, this capacitive information preferably contains only capacitive data points, measurements, or levels that remain after the pre-processing portion 620 has removed and filtered any noise. Thus, for example, even though a particular area of a frame may have capacitive measurements with levels high enough to be included in an analysis of the frame in accordance with various aspects of the disclosed approach to create a capacitive profile, these measurements should still be excluded if the particular area is not within the dilated mask, because they are associated with capacitive sensing elements that have not been touched and are instead caused by noise, making them irrelevant to the analysis of the hand.

[0087] Continuing to refer to FIG. 6, the segmentation portion 630 of the hand biometric-based security process 600, which includes a hand length profile determination operation 632 and a hand width profile determination operation 634, may be used to create a capacitive profile. During the hand length profile determination operation 632, a location of the PDC, which as discussed previously is a feature of the hand where the fingers meet the palm portion of the hand, is identified in the hand length profile. Identification of the PDC allows any data associated with the palm portion of the hand in the capacitive profile to be ignored, leaving only data associated with the fingers in the capacitive profile to facilitate processing of the capacitive profile for feature extraction, as later described herein.
In order to facilitate the description and understanding of various operations in the segmentation portion 630, reference will be made to FIG. 7, which includes a comparison 700 of a hand length profile 720 plotted in a capacitive profile chart 710 to a capacitive heat map image 750. Reference will also be made to FIG. 8, which includes a comparison 800 of a hand width profile 820 plotted in a capacitive profile chart 810 to a capacitive heat map image 850.

[0088] In one aspect of the disclosed approach for segmentation, the hand length profile determination operation 632 may locate the tip of the hand using a mask such as the dilated mask created as described above during the masking operation 624 in the pre-processing portion 620. An example of a hand length profile that may be identified on the hand length profile 720 by the hand length profile determination operation 632 is indicated by a dotted line 722.

[0089] In accordance with various aspects of the disclosed approach, raw capacitive frame filtering may be used to filter and process the set of raw capacitive frames once the tip of the hand has been located. In one aspect of the disclosed approach, a time filter may be applied to the set of raw capacitive frames by averaging touch sensor data across all frames in the set of raw capacitive frames to generate a single frame referred to herein as a time-filtered capacitive frame. Each capacitive data point in the time-filtered capacitive frame is thus created based on an average of all capacitive measurement values from a corresponding capacitive data point in the raw capacitive frames in the set of raw capacitive frames.
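The time filter described above amounts to a per-pixel average over the captured frame stack. A minimal Python sketch, with toy frame dimensions chosen only for illustration, might look like the following:

```python
import numpy as np

def time_filter(raw_frames):
    """Average each capacitive data point across all captured raw frames,
    producing a single time-filtered capacitive frame."""
    return np.mean(np.asarray(raw_frames), axis=0)

# At 120 frames/second, a 3-second presentation yields 360 raw frames.
frames = [np.full((8, 6), float(i)) for i in range(360)]
filtered = time_filter(frames)
```

Each point of `filtered` is the mean of the 360 corresponding capacitive data points, which suppresses frame-to-frame noise in the presentation.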
Thus, continuing with the example above where the capacitive sensor system 1300 operates at 120 frames/second, an average of each capacitive data point may be determined from either 360 capacitive data points if the presentation of the hand lasted exactly 3 seconds (such that 360 raw capacitive frames were captured), or 480 capacitive data points if the presentation of the hand lasted exactly 4 seconds (such that 480 raw capacitive frames were captured), where each capacitive data point in a particular raw capacitive frame was captured by a respective capacitive sensing element in the capacitive sensor array 1310. Although in the above example it is assumed that a period of time of data capture coincides exactly with a period of time of presentation, it should be noted that the period of data capture may not coincide with the period of presentation. Further, in other aspects of the disclosed approach, the number of raw capacitive frames over which each capacitive data point is averaged may vary, and it is within an expected implementation of the disclosed approach that lower and/or upper limits may be placed on the number of raw capacitive frames to be processed and combined, regardless of the number of raw capacitive frames actually captured.

[0090] In addition to the time filter described above, in another aspect of the disclosed approach a row filter may be applied to the time-filtered capacitive frame by summing all touch sensor data along each row of the time-filtered capacitive frame to create the hand length profile 720. As previously noted, each capacitive heat map image is a visual representation of a distribution of various levels of capacitance measurements stored in a respective frame. In FIG. 7, the capacitive heat map image 750 is a visual representation of the distribution of various levels of capacitance measurements for the hand in the time-filtered capacitive frame generated using the time filter.
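The row filter reduces the two-dimensional time-filtered frame to a one-dimensional length profile by summing each row. A sketch under assumed toy dimensions (the frame contents are illustrative only):

```python
import numpy as np

def hand_length_profile(time_filtered_frame):
    """Sum touch sensor data along each row of the time-filtered frame
    to produce a 1-D hand length profile."""
    return time_filtered_frame.sum(axis=1)

# Toy 6-row x 4-column frame: one fully touched row, one partially touched row.
frame = np.zeros((6, 4))
frame[2, :] = 1.0
frame[3, :2] = 2.0
profile = hand_length_profile(frame)
```

Each entry of `profile` corresponds to one sensor row, so rows with more (or more strongly) touched sensors produce higher profile values.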
In one aspect of the disclosed approach, the hand length profile 720 may be generated from a summation of normalized sensor output data stored in each row of the time-filtered capacitive frame, where capacitance data from all rows of the time-filtered capacitive frame, as represented by the capacitive heat map image 750, is summed and plotted as the hand length profile 720 in the capacitive profile chart 710. It should be noted that the orientation of the capacitive heat map image 750 in FIG. 7 has been rotated such that an axis representing a touchscreen display height is horizontally displaced with respect to the bottom of FIG. 7. Thus, any reference to a summation of capacitance values in each "row" of the time-filtered capacitive frame is actually a reference to a summation of capacitance values in a vertical orientation as illustrated by the capacitive heat map image 750 in FIG. 7.

[0091] A PDC identification operation may be used to identify the PDC in the hand length profile 720 once the row filter operation has been completed. In one aspect of the disclosed approach, a portion of the hand length profile 720 with a predetermined length that is at least as long as a longest previously recorded measurement of a human index finger, which is approximately 8.5 cm, is analyzed. This predetermined length is referred to and illustrated as a PDC search scope 752 in FIG. 7, and the geometry of the capacitive sensing elements, which is known, may be used to determine how many rows of data in the frame should be analyzed. For example, if a pitch between capacitive sensing elements is 0.24 cm, then 36 rows of data in the time-filtered capacitive frame, corresponding to 36 sensors in length, where:

36 sensors x 0.24 cm/sensor = 8.64 cm,    (2)

may be analyzed to meet the 8.5 cm minimum required length.
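The row count for the PDC search scope follows directly from the minimum length and the sensor pitch, as in equation (2). A small sketch of this calculation (the function name is illustrative):

```python
import math

def pdc_search_rows(min_length_cm=8.5, sensor_pitch_cm=0.24):
    """Number of sensor rows needed to cover at least the longest
    previously recorded index finger length, given the sensor pitch."""
    return math.ceil(min_length_cm / sensor_pitch_cm)

rows = pdc_search_rows()    # 36 rows
covered = rows * 0.24       # 8.64 cm, meeting the 8.5 cm minimum
```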
A valley detector may be run over the PDC search scope 752 to identify the PDC in a base portion of the hand length profile 720 where a location of a border of the palm portion is expected. Referring again to the hand length profile 720, a PDC for the hand is illustrated by a dotted line 724.

[0092] Once the location of the PDC has been identified and the palm portion has been removed by the hand length profile determination operation 632, the hand width profile determination operation 634 may operate to determine a width of the hand as well as identify a portion in the hand width profile 820 associated with each finger. The capacitive data containing the capacitance values captured by the capacitive sensor array 1310 in the capacitive sensor system 1300, as represented by the capacitive heat map 850, is summed along the columns to generate a 1-D horizontal hand width profile, which represents a width of the hand and a width of the individual fingers, as well as the relative strength of each finger peak. Thus, a hand width 852 may be determined, and then a portion associated with each finger, such as an index finger portion 812, a middle finger portion 814, a ring finger portion 816, and a little finger portion 818, may be identified. In other words, the hand width profile 820 may be sub-divided to locate portions in the capacitive heat map associated with each finger.

[0093] The feature extraction portion 640 of the hand biometric-based security process 600 may be used to convert each portion of the hand width profile 820, such as the index finger portion 812, the middle finger portion 814, the ring finger portion 816, and the little finger portion 818, to a z-score starting from the location of the finger tip to the PDC. Referring to FIG.
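The valley detection over the PDC search scope and the column summation for the width profile can be sketched in Python; the toy profiles below are assumptions chosen only to show the shape of the computation, and a real valley detector would likely smooth the profile first:

```python
import numpy as np

def find_valley(profile, search_scope):
    """Locate the minimum (valley) within the PDC search scope of a hand
    length profile; its index approximates the PDC row."""
    return int(np.argmin(profile[:search_scope]))

def hand_width_profile(frame):
    """Sum capacitance values along each column to obtain a 1-D width profile."""
    return frame.sum(axis=0)

# Toy length profile: finger mass, a dip where the fingers meet the palm,
# then palm mass.
length_profile = np.array([3.0, 4.0, 5.0, 1.0, 6.0, 7.0, 7.0])
pdc_row = find_valley(length_profile, search_scope=5)

# Toy frame with two "finger" columns of differing strength.
frame = np.zeros((4, 8))
frame[:, 1] = 2.0
frame[:, 5] = 3.0
width = hand_width_profile(frame)
```

Peaks in `width` correspond to individual fingers, so the profile can be sub-divided at the valleys between peaks to isolate each finger portion.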
9, which compares a plurality of finger capacitive profiles 910 to a plurality of capacitive heat maps 950, in one aspect of the disclosed approach a z-score value is dependent on factors such as a number of sensors touched in each row; body part (e.g., hand) geometry such as, for example, length, width, area, etc.; pressure of the user's presentation; capacitance level of each touched sensor; and variations in conductivity across the skin, which may or may not be due to skin moisture. In one aspect of the disclosed approach, the plurality of finger capacitive profiles 910 is determined by dividing the hand width profile 820 to consider each finger individually.

[0094] For example, as shown in FIG. 9, an index finger capacitive profile 912, a middle finger capacitive profile 914, a ring finger capacitive profile 916, and a little finger capacitive profile 918 may each be converted to a z-score starting from the location of the finger tip to the PDC. By summing along the touched rows of a capacitive frame for each finger, as represented visually by an index finger capacitive heat map, a middle finger capacitive heat map, a ring finger capacitive heat map, and a little finger capacitive heat map, a 1-D vertical profile for each finger may be created.

[0095] By summing only the rows and columns associated with each individual finger, a length profile is generated that represents the user's finger dimensions and capacitive signature. This 1-D representation of the finger is an accumulation of finger segment length and width as measured by the capacitive touch screen. Some users' finger 1-D capacitive profiles provide two very distinct joints, while many others do not.
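The z-score conversion of a finger profile is a standard normalization: subtract the mean and divide by the standard deviation, which removes the overall capacitance level and presentation pressure while preserving the profile's shape. A minimal sketch, with a toy finger profile as an assumption:

```python
import numpy as np

def finger_zscore_profile(finger_profile):
    """Convert a 1-D finger capacitive profile (tip to PDC) to z-scores,
    normalizing out overall pressure and capacitance level."""
    profile = np.asarray(finger_profile, dtype=float)
    return (profile - profile.mean()) / profile.std()

# Toy index finger profile from tip to PDC.
index_finger = [2.0, 4.0, 6.0, 4.0, 2.0, 6.0]
z = finger_zscore_profile(index_finger)
```

The resulting profile has zero mean and unit standard deviation, so profiles captured under different presentation pressures become directly comparable.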
By using the 1-D capacitive profile, all finger dissimilarities may be determined in the data, such as the relative level of capacitance distributed across the hand (as shown in the heat map and on the y-axis in the graph) in addition to simple measurement of finger segment lengths by using the number of touched capacitive sensors within each segment (as shown on the x-axis in the graph). Thus, not only are users differentiated based on a number of simple lengths and widths of phalangeal joints, but perhaps more powerfully on the unique capacitance levels of each point in the profile.

[0096] During an authentication operation, each point of the capacitive profile may then be used by a matching operation, such as Euclidean distance, Hamming distance, etc., to evaluate if the verification data matches the capacitive profile stored in the biometric template at a given threshold. If the biometric template matches, then the user is authenticated, and if there is no match then authentication is denied.

[0097] In one aspect of the disclosed approach, as described herein, during an enrollment process an enrollment template that is created to authenticate a user may be generated from a number of capacitive profiles. For example, an exemplary number of capacitive profiles that may be averaged to create an enrollment template as used herein is three (3) capacitive profiles. However, any number of capacitive profiles may be combined, each of which is created through an operation of the capacitive profile creation session 602, which is the first part of the hand biometric-based security process 600 that includes the presentation and data capture portion 610, the pre-processing portion 620, the segmentation portion 630, and the feature extraction portion 640, as discussed above.
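The enrollment averaging and the Euclidean-distance matching described above can be sketched as follows. The function names, the three toy enrollment profiles, and the threshold value are assumptions for illustration; a deployed system would tune the threshold empirically:

```python
import numpy as np

def enrollment_template(enrollment_profiles):
    """Average several enrollment capacitive profiles (e.g., three) into a
    single biometric template."""
    return np.mean(np.asarray(enrollment_profiles), axis=0)

def matches(candidate, template, threshold):
    """Euclidean distance between a candidate profile and the template;
    authenticate when the distance falls under the threshold."""
    distance = np.linalg.norm(np.asarray(candidate) - np.asarray(template))
    return distance <= threshold

template = enrollment_template([[1.0, 2.0, 3.0],
                                [1.2, 2.1, 2.9],
                                [0.8, 1.9, 3.1]])
authenticated = matches([1.0, 2.0, 3.0], template, threshold=0.5)
rejected = not matches([5.0, 5.0, 5.0], template, threshold=0.5)
```

A Hamming distance or another point-wise metric could be substituted for the Euclidean norm without changing the overall flow.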
Thus, one or more capacitive profiles, referred to as enrollment capacitive profiles, may be used to establish a biometric template for a hand of the user.

[0098] In addition, during an authentication process, one or more capacitive profiles may also be captured utilizing the capacitive profile creation session 602 previously used to capture the enrollment capacitive profiles used to generate the biometric template during the enrollment process. During a validation session 604 in a second part of the hand biometric-based security process 600, an authentication capacitive profile may be compared to the biometric template to perform authentication of the user utilizing the matching portion 650 in accordance with various other aspects of the disclosed approach, as further described herein. In other words, the first part of the hand biometric-based security process 600 that was used to create the set of enrollment capacitive profiles for establishing the biometric template may also be used to create a capacitive profile referred to as an authentication capacitive profile. The authentication capacitive profile may be compared to the biometric template using a matching operation 652 in the matching portion 650 of the hand biometric-based security process 600, as described herein.

[0099] In accordance with various aspects of the disclosed approach, during an authentication operation based on the matching operation 652, each point of the authentication capacitive profile may be compared to a respective point in the biometric template using a standard matching algorithm, such as Euclidean distance, Hamming distance, etc., to evaluate if the verification data matches the biometric template at a given threshold. Thus, as used herein, the term "matched" or "matches" may refer to an authentication capacitive profile with a capacitive measurement distribution that does not have to be identical to the biometric template.
If the authentication capacitive profile matches with the biometric template, then the user is authenticated, and if there is no match then authentication is denied.

[00100] As discussed herein, the hand width profile and the finger length profiles may be stored on the device, and they may collectively be used as the capacitive signature of the user's hand.

[00101] In addition to the examples provided herein, such as hands and fingers, the various aspects of the disclosed approach for biometric authentication using capacitive profiles may be applied to other body parts and authentication modalities. For example, authentication may be performed based on capacitive profiles captured using a capacitive sensing system integrated into a display device to implement a touch screen while a user interfaces with the capacitive sensing system using a stylus. FIG. 10 illustrates a comparison 1000 of two sets of capacitive heat maps and associated capacitive palm profiles 1010, 1050 for User 1 and User 2, respectively, that may be used to describe biometric authentication based on a palm profile in the biometric authentication system configured in accordance with other various aspects of the disclosed approach for biometric-based security using capacitive profiles. Referring to the comparison 1000, it may be seen that a capacitive heat map 1012 for User 1 as compared to a capacitive heat map 1052 for User 2, as well as a capacitive profile chart 1014 for User 1 as compared to a capacitive profile chart 1054 for User 2, clearly show a difference between the collected capacitive profiles and capacitive heat maps. Referring to the capacitive profile plot 1016 for User 1 as well as the capacitive profile plot 1056 for User 2, the uniqueness of each user's grip is apparent in these capacitive length profile graphs. Capacitive profile width graphs may also be created via the touch sensor columns and combined with length graphs to authenticate or identify the user.
Continuous authentication while the user is writing may also be possible using this same method.

[00102] As another example of a body part that may be used for biometric authentication, FIG. 11 illustrates a capacitive heat map 1150 and associated capacitive palm profile 1110 in another aspect of the disclosed approach, where an earprint on the touch screen device is shown with a corresponding capacitive length profile 1112. A width profile could also be produced. An earprint is another biometric modality that could make use of the flow outlined above for authentication.

[00103] In other aspects of the disclosed approach for biometric authentication using capacitive profiles, the biometric authentication approach described herein may be used in parallel with other viable biometric methods such as fingerprint, facial biometrics, voice biometrics, iris scans, or other methods. Thus, any number of biometric modalities may be fused together via a multimodal fusion algorithm for a more accurate and higher security system. In addition, the biometric authentication approach described herein may be advantageously used to both authenticate the user and wake a device that may be locked and asleep, such as in a low-power state.

[00104] Existing multi-touch approaches to authentication typically require a multi-touch output of touch controllers.
Various aspects of the disclosed approach may be either integrated into a touch controller or make use of unprocessed touch screen data, resulting in significantly improved performance.

[00105] In accordance with various aspects of the disclosed approach, because a biometric template is not limited to a particular portion of a body part but may include other portions of the body part (e.g., an enrollment process for a hand may include capture of a capacitive profile for all fingers such that portions of the biometric template for the hand may include a capacitive profile for each finger), the concepts described herein may be extended in a variety of ways. A palm portion of the handprint may also be used, which means the biometric template may include a combination of one or more fingers and the palm of the hand. The disclosed approach even contemplates being able to authenticate using any available portion of the particular part of the biological entity that is used for authentication purposes. For example, even a portion of the finger identified in a capacitive heat map captured during a biometric authentication process may be matched to the biometric template.

[00106] FIG. 12 illustrates how a capacitive touch sensor array, illustrated as a capacitive sensor array 1210, works by measuring a capacitance of each capacitive sense element 1212 in the capacitive sensor array 1210, and looking for a change in the capacitance indicating a touch or presence of a conductive object. The capacitive sensor array 1210 may be used to describe an operation of various capacitive touch sensor arrays that may be used with the biometric security system configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles.
A plot 1200 of detected sensor signals 1222, 1232 in respective Y and X axes 1220, 1230, as laid next to the capacitive touch sensor array 1210, shows that when a conductive object such as, for example, a finger, hand, or other object, illustrated as a finger 1250, comes into contact or close proximity with capacitive sense elements, such as a group of capacitive sensor elements 1250, the capacitance changes and the conductive object is thereby detected. As further described herein, an electrical circuit may be configured to measure the capacitance changes of the group of capacitive sensor elements 1250 and then convert the measured capacitances into digital values.

[00107] FIG. 13 illustrates the capacitive sensor system 1300 that may include a capacitive sensor array 1310 configured to capture raw capacitive frames in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles. In one aspect of the disclosed approach, the capacitive sensor array 1310 may include a plurality of capacitive sensor elements, where a capacitance sensing circuit 1312 is illustrated as a representative capacitive sensor element in the capacitive sensor array 1310. A pair of capacitive sensing element connection buses 1322, 1324 facilitates coupling of each capacitive sensing element of the capacitive sensor array 1310 to a processing circuit such as an analog-to-digital (A/D) converter 1330 to provide a digital representation of a capacitance value as measured by each capacitive sensing element. In one aspect of the disclosed approach, the A/D converter module 1330 allows direct reporting of capacitance values measured by the plurality of the capacitive sensing elements with a resolution of 14 bits at a sample rate of 120 frames/second.
In other words, a raw capacitive frame may be captured in as little as 1/120th of a second, with 16,384 levels of capacitance as measured by each of the capacitive sensing elements being differentiable because 14 bits are being used to quantize each capacitance level. The high rate of capture allows the sensor system 1300 to capture multiple samples of a presented body part and facilitates baseline noise filtering, as discussed herein.

[00108] Preferably, raw capacitive frames and, ultimately, capacitive profiles, may be generated as described with reference to various examples provided above, after the "raw" data received by the A/D converter module 1330 is converted to a digital representation and sent to a processing system in the biometric authentication system configured in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles. As previously discussed, providing direct access to capacitance levels as measured by the plurality of capacitive sensing elements to the detection and processing circuits in the biometric authentication system avoids the reduction of functionality and efficacy caused by approaches that use data that have already been processed by touch screen controllers and are therefore not able to provide the various benefits noted herein.

[00109] Although description of the sensor system 1300 contained herein has been predicated on the sensor system 1300 being integrated into a touch screen, it should be understood that the sensor system 1300 may be used in implementing a variety of user interfaces including, but not limited to, touchpads, trackpads, and the like. These user interfaces may be integrated into a variety of devices including, but not limited to, computer servers, desktop computers, laptops, tablet computing devices, mobile devices, music devices, video devices, cellular telephones, and smartphones.
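The relationship between the quantization depth and the number of differentiable capacitance levels, and between the sample rate and the minimum capture time, follows directly from the figures above:

```python
bits = 14
levels = 2 ** bits          # 16,384 differentiable capacitance levels
frame_rate = 120            # frames per second
frame_period_s = 1 / frame_rate   # minimum capture time per raw frame
```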
Also, it should be noted that the capacitive sensor array 1310 may be configured in various geometric shapes including, but not limited to, a square, a rectangle, a circle, or a ring. Further, the capacitive sensor array 1310 may be arranged as a shape that is not limited to two dimensions. For example, the capacitive sensor array 1310 may be arranged on a three-dimensional (3D) shape including, but not limited to, a sphere, a cylinder, and other 3D shapes including other regular or even irregular shapes.

[00110] FIG. 14 illustrates an exemplary capacitance sensing structure 1400 that may be used as part of the capacitance sensing circuit 1312 of the capacitive sensor array 1310. In one aspect of the disclosed approach for biometric-based security using capacitive profiles, the capacitance sensing structure 1400 may be configured to allow a capacitance element to be formed when a surface of the body part is brought near or in contact with the capacitance sensing structure 1400. For example, the capacitance element may be a capacitor structure created by the surface of the body part acting as a first plate of the capacitor structure and an electrode in the capacitance sensing structure 1400 acting as a second plate of the capacitor structure. The surface of the body part and the electrode may be separated by a distance that includes any materials covering the electrode and a space between the surface of the body part and the materials covering the electrode if the surface of the body part is not directly in contact with the materials covering the electrode. The distance determines a level of capacitance created in the capacitance element.

[00111] As shown in FIG. 14, the capacitance sensing structure 1400 may include multiple sensor electrodes 1406 that are formed on an inter-level insulator 1408. Each of the multiple sensor electrodes 1406 may be connected to a sensor electrode interconnection 1412 by a plug 1410 in a through hole formed in the inter-level insulator 1408.
The sensor electrode interconnection 1412 is formed on a lower insulating film 1404 on a semiconductor substrate 1402. A passivation film 1422 is formed on the inter-level insulator 1408 and covers the multiple sensor electrodes 1406.

[00112] During operation of the capacitance sensing structure 1400, when a body part surface such as a skin surface of the body part comes into contact with or is near enough to the passivation film 1422 to affect the capacitance sensing structure 1400, a capacitor structure is formed by the body part surface and each affected sensor electrode of the multiple sensor electrodes 1406, where the body part surface and each affected sensor electrode form a first plate and a second plate, respectively, of a respective capacitor structure. The first plate and the second plate of this capacitor structure are spaced apart by a distance that is minimally a thickness of the passivation film 1422 to create a capacitance level based on the distance. The capacitance that is created between the body part surface and the sensor electrode 1406 may be detected through the sensor electrode interconnection 1412.

[00113] In effect, sensing for the capacitance sensing structure 1400 is achieved on the basis of a distance difference between a skin surface as one plate and a sensor electrode as another plate, thereby detecting a capacitance level based on these two plates.
In another possible configuration of the capacitance sensing structure 1400, a second sensor electrode (not shown) may be included near the sensor electrode 1406 of the capacitance sensing structure 1400, and a difference from this second sensor electrode acting as a reference plate may be used for actual sensing.

[00114] In accordance with various aspects of the disclosed approach for biometric authentication using capacitive profiles, a plurality of capacitance sensing structures similar to the capacitance sensing structure 1400 may be mounted on an integrated circuit (IC) chip that may also include a capacitance detection circuit for detecting a capacitance level of each of the sensor electrodes 1406, and a processing circuit such as the A/D converter module 1330 for receiving and processing output from the capacitance detection circuit. The IC chip may also contain a storage circuit that stores any data necessary for the operation of the disclosed approach, as further described herein, and a comparison circuit for comparing raw capacitive profiles stored in the storage circuit with an authentication template.

[00115] Although relevant features of a hand such as length and width may be extracted with touch sensors spaced as far apart as 4.5 mm, higher density arrangements of the capacitive sensor elements in the capacitive sensor array 1310 may generally result in more features being extracted utilizing various aspects of the disclosed approach for biometric-based security using capacitive profiles. In other words, smaller distances between instances of the capacitance sensing circuit 1312 in the capacitive sensor array 1310 may generally allow capture of raw capacitive frames with higher definition and lower variability, which ultimately may generally result in capacitive profiles and enrollment templates with more information and a more robust, higher-performing system.
Using an exemplary circuit such as the capacitance sensing circuit 1312, it is preferable that a separation of about 0.85 mm or less exists between touch sensor centers, providing about 30 DPI. In general, in accordance with various aspects of the disclosed approach for biometric-based security using capacitive profiles, the capacitive sensor array 1310 preferably includes a density of the capacitance sensing circuit 1312 that is as high as possible to allow improved physiological feature extraction.

[00116] Touch screens in most current consumer products have a touch resolution of approximately 5 millimeters per pixel (mm/pixel), with the industry trending towards denser touch resolutions. For example, development and deployment of touch screens with denser touch resolutions of 2.4 mm/pixel to sub-1 mm/pixel are expected. Denser touch resolutions allow for more fine-grained resolution of variations of raw capacitive profiles to be captured, which means that raw capacitive profiles that more accurately reflect capacitive characteristics used to authenticate a biological entity may be created. It is to be noted that the touch resolution used to capture raw capacitive profiles for creation of an authentication template may not be equal to the touch resolution used to capture raw capacitive profiles for comparison to the authentication template.

[00117] FIG. 15 is a conceptual diagram illustrating an example of a hardware implementation for an apparatus 1500 employing a processing system 1510 that may utilize various aspects of the disclosed approach for biometric-based security using capacitive profiles.
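The 30 DPI figure quoted above for a 0.85 mm pitch follows from converting the center-to-center spacing to dots per inch:

```python
def dots_per_inch(pitch_mm):
    """Sensor density in DPI for a given center-to-center pitch in
    millimeters (25.4 mm per inch)."""
    return 25.4 / pitch_mm

dpi = dots_per_inch(0.85)   # roughly 30 DPI
```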
Thus, in accordance with various aspects of the disclosure, an element, or any portion of an element, or any combination of elements in the apparatus 1500 that may be used to implement any device, including a mobile device, may utilize biometric-based security using capacitive profiles described herein.

[00118] For example, the processing system 1510 includes one or more processors illustrated as a processor 1514. Examples of the processor 1514 include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. The processor 1514 may be used to provide processor functionality for the apparatus 1500. For example, if the apparatus 1500 is a tablet or a mobile device such as a mobile phone, the processor 1514 may be used to execute code and algorithms necessary to operate the apparatus 1500, such as an operating system and various applications of the tablet or mobile device. Further, the processor 1514 may be used to implement, for example, the operations described in the finger biometric-based security process 100 of FIG. 1; the biometric enrollment process 300 of FIG. 3; the biometric verification process 400 of FIG. 4; and the hand biometric-based security process 600 of FIG. 6.

[00119] The processing system 1510 may be implemented as having a bus architecture, represented generally by a bus 1512. The bus 1512 may include any number of interconnecting buses and bridges depending on the specific application of the processing system 1510 and overall design constraints. The bus 1512 links together various circuits including one or more processors (represented generally by the processor 1514), a memory 1518, and computer-readable media (represented generally by a computer-readable medium 1516).
The bus 1512 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further. A bus interface 1520 provides an interface between the bus 1512 and a transceiver 1550. The transceiver 1550 provides a means for communicating with various other apparatus over a transmission medium. Depending upon the nature of the apparatus, a user interface 1530 (e.g., keypad, display, speaker, microphone, joystick) may also be provided.[00120] The processor 1514 is responsible for managing the bus 1512 and general processing, including execution of software that may be stored on the computer-readable medium 1516 or the memory 1518. The software, when executed by the processor 1514, causes the processing system 1510 to perform the various functions described herein for any particular apparatus. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.[00121] The computer-readable medium 1516 or the memory 1518 may also be used for storing data that is manipulated by the processor 1514 when executing software; capacitive profiles and biometric templates generated by the various operations contained herein; and any other suitable data. The computer-readable medium 1516 may be a non-transitory computer-readable medium such as a computer-readable storage medium.
A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a card, a stick, or a key drive), a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer. Although illustrated as residing in the processing system 1510, the computer-readable medium 1516 may reside externally to the processing system 1510, or distributed across multiple entities including the processing system 1510. The computer-readable medium 1516 may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system.[00122] In one aspect of the disclosed approach, the processing system 1510 may include a capacitive sensor system 1502 that includes a capacitive sensing array 1504 and a controller 1506. The controller 1506 may be used to control various aspects of operation of the capacitive sensor system 1502, including capturing one or more raw capacitive frames of a presented body part as sensed by the capacitive sensing array 1504. 
The capacitive sensing array 1504 may be implemented using a capacitive sensor array such as that described as the capacitive sensor array 1310 in FIG. 13.[00123] A security module 1520 may be coupled to the capacitive sensor system 1502 to receive and process the raw capacitive frames captured using the capacitive sensing array 1504 and provided by the controller 1506. The security module 1520 may also generate one or more capacitive profiles via a capacitive profile generator 1522 implementing a capacitive profile creation process such as that described in either the capacitive profile creation session 102 of the finger biometric-based security process 100 of FIG. 1, or the capacitive profile creation session 602 of the hand biometric-based security process 600 of FIG. 6. The security module 1520 may also generate and store any number of biometric templates 1524 created using an enrollment process such as that described in the biometric enrollment process 300. Further, the security module 1520 may also compare capacitive profiles to biometric templates that were previously stored during an enrollment process to perform authentication using a verification process such as that described in the biometric verification process 400 of FIG. 4. Based on a comparison, the security module 1520 may provide an authentication signal on the bus 1512. Thus, in various aspects of the disclosed approach, the security module 1520 may be used in addition to, or instead of, the processor 1514 to provide security-specific features. Consequently, the features described for the processor 1514 may apply equally to the security module 1520.
As noted, risk of tampering is decreased and security is increased if the security module 1520 and the capacitive sensor system 1502 are integrated in a single device, such as an integrated circuit.[00124] For example, the security module 1520 may implement a biometric security procedure for a capacitive sensor system such as the capacitive sensor system 1502 that includes capturing a set of raw capacitive frames for a body part via the capacitive sensor system 1502, wherein each raw capacitive frame includes a distribution of a plurality of capacitance levels measured from the body part; creating a capacitive profile based on the set of raw capacitive frames; comparing a first value in the capacitive profile to a second value in a biometric template generated from an enrolled body part, wherein the first value and the second value are located at a similar location with respect to the capacitive profile; and, generating an authentication signal based on a difference between the first value and the second value.[00125] In one aspect of the disclosed approach, creating the capacitive profile based on the set of raw capacitive frames may include combining the set of raw capacitive frames to create a combined capacitive frame. The combined capacitive frame may include a distributed plurality of averaged capacitance levels, each of which may be an average of all capacitance levels for a respective location across all raw capacitive frames in the set of capacitive frames. In another aspect of the disclosed approach, the authentication signal may indicate a reasonable match has been determined between the capacitive profile and the biometric template. 
In yet another aspect of the disclosed approach, the authentication signal identifies the body part as the enrolled body part.[00126] The first value and the second value from the capacitor profile and the biometric template, respectively, may each include a capacitive value determined from capacitive sensor measurements captured by a portion of a capacitive sensor array such as the capacitance sensing array 1504. In one aspect of the disclosed approach, the capacitive value may include a sum of the capacitive sensor measurements. In another aspect of the disclosed approach, the capacitive sensor measurements from the portion of the capacitive sensor array may include a plurality of capacitive sensor measurements taken along one of a vertical or horizontal direction with respect to the capacitive sensor array.[00127] As discussed above, the biometric templates 1524 may be generated from one or more capacitive profiles, each capacitive profile generated from one or more raw capacitive frames captured by the capacitive sensor system 1502 in contact with the enrolled body part. Each capacitance level of the plurality of capacitance levels may include a capacitance level sensed by a capacitive sensing element in the capacitive sensor system 1504. The capacitive sensor system may include an arrangement of capacitive sensing elements in a particular shape and the plurality of capacitance levels contained in each raw capacitive frame may be arranged in accordance with the particular shape. 
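The profile creation and comparison described in paragraphs [00124] through [00127] can be sketched in software. Everything below — the nested-list frame representation, row-sum profile values, function names, and the match threshold — is an illustrative assumption, not the disclosed implementation:

```python
# Illustrative sketch (not the patented implementation): average a set of
# raw capacitive frames into a combined frame, derive profile values as
# sums along rows of the sensor array, and compare each value to the
# enrolled template within an assumed tolerance.

from typing import List

Frame = List[List[float]]  # rows x columns of measured capacitance levels

def combine_frames(frames: List[Frame]) -> Frame:
    """Per-location average of all capacitance levels across the frame set."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

def capacitive_profile(combined: Frame) -> List[float]:
    """One profile value per row: the sum of that row's sensor measurements
    (measurements taken along the horizontal direction of the array)."""
    return [sum(row) for row in combined]

def authenticate(profile: List[float], template: List[float],
                 threshold: float = 1.0) -> bool:
    """Match when every profile value is within `threshold` of the template
    value at the same location."""
    return all(abs(p - t) <= threshold for p, t in zip(profile, template))
```

A column-sum variant (measurements along the vertical direction) would work the same way; the choice of direction, like the threshold, is a design parameter of the security module.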
[00128] Various aspects of the disclosed approach may require means to perform certain functions, such as means for creating a capacitive profile based on the set of raw capacitive frames, which includes means for combining the set of raw capacitive frames to create a combined capacitive frame that includes a distributed plurality of averaged capacitance levels, each of which being an average of all capacitance levels for a respective location across all raw capacitive frames in the set of capacitive frames. These and other means may be implemented using one or more of the modules disclosed herein. For example, the security module 1520 may be used to implement the means for combining the set of raw capacitive frames to create a combined capacitive frame as well as the means for creating a capacitive profile based on the set of raw capacitive frames. It should be noted, however, that the means may also be implemented using various combinations of the processor 1514, the memory 1518, and the computer-readable medium 1516. Thus, the various hardware used in the description provided herein should not be taken as a limiting disclosure but merely examples of what elements may be used.[00129] Those of skill would further appreciate that any of the various illustrative logical blocks, modules, processors, means, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware (e.g., a digital implementation, an analog implementation, or a combination of the two, which may be designed using source coding or some other technique), various forms of program or design code incorporating instructions (which may be referred to herein, for convenience, as "software" or a "software module"), or combinations of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[00130] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented within or performed by an integrated circuit ("IC"), an access terminal, or an access point. The IC may comprise a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, electrical components, optical components, mechanical components, or any combination thereof designed to perform the functions described herein, and may execute codes or instructions that reside within the IC, outside of the IC, or both. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[00131] It is understood that any specific order or hierarchy of steps in any disclosed process is an example of a sample approach. 
Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.[00132] The steps of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module (e.g., including executable instructions and related data) and other data may reside in a data memory such as RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable storage medium known in the art. A sample storage medium may be coupled to a machine such as, for example, a computer/processor (which may be referred to herein, for convenience, as a "processor") such that the processor can read information (e.g., code) from and write information to the storage medium. A sample storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in user equipment. In the alternative, the processor and the storage medium may reside as discrete components in user equipment. Moreover, in some aspects any suitable computer-program product may comprise a computer-readable medium comprising codes (e.g., executable by at least one computer) relating to one or more of the aspects of the disclosure. In some aspects a computer program product may comprise packaging materials. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein.
Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. A phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for."
Systems, methods, and apparatuses for power efficient generation of length markers for a variable length instruction set are described. A hardware processor core includes a decoder circuit; an execution circuit; an instruction cache; an instruction length decoder circuit; a predecode cache comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; an incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in the predecode cache; and a fetch circuit to, for an incoming address of instruction data, perform a lookup in the instruction cache and the incomplete decode table, and, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, causes the instruction length decoder circuit to generate one or more predecode bits for that proper subset of sections.
CLAIMS
1. A hardware processor core comprising: a decoder circuit to decode instructions into decoded instructions; an execution circuit to execute the decoded instructions; an instruction cache; an instruction length decoder circuit; a predecode cache comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; an incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in the predecode cache; and a fetch circuit to, for an incoming address of instruction data, perform a lookup in the instruction cache and the incomplete decode table, and, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, causes the instruction length decoder circuit to generate one or more predecode bits for the proper subset of sections of the instruction data for the incoming address that has the one or more invalid predecode bits.2. The hardware processor core of claim 1, wherein the instruction length decoder circuit is enabled from a disabled state when there is the hit in the instruction cache and the hit in the incomplete decode table.3. The hardware processor core of claim 1, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to send the one or more predecode bits for the proper subset of sections of the instruction data for the incoming address to the decoder circuit.4.
The hardware processor core of claim 1, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to store the one or more predecode bits in the predecode cache and update one or more corresponding bits in the incomplete decode table as valid.5. The hardware processor core of claim 1, wherein, when there is a miss in the instruction cache for the instruction data at the incoming address, the fetch circuit causes a fetch of the instruction data from memory, the instruction length decoder circuit to generate one or more predecode bits for the instruction data at the incoming address, and an update of one or more corresponding bits in the incomplete decode table as valid.6. The hardware processor core of claim 1, wherein the decoder circuit comprises instruction length verification circuitry that updates a counter with a number of incorrect predecode bits from the predecode cache for decoded instructions, and when the counter exceeds a threshold, enables the instruction length decoder circuit from a disabled state.7. The hardware processor core of claim 1, further comprising disable logic circuitry to perform a comparison of predecode bits from the predecode cache to corresponding predecode bits generated by the instruction length decoder circuit, and disable the instruction length decoder circuit from an enabled state when the comparison indicates matches exceed a disengagement threshold number of matches.8.
The hardware processor core of any one of claims 1-7, wherein the fetch circuit is to, when there is a hit in the instruction cache for the instruction data at the incoming address and a miss in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more valid predecode bits in the predecode cache, causes the one or more valid predecode bits for the proper subset of sections of the instruction data for the incoming address from the predecode cache and corresponding instruction data from the instruction cache to be sent to the decoder circuit.9. A method comprising: receiving an incoming address of instruction data at a fetch circuit of a processor; performing a lookup in an instruction cache of the processor for the instruction data at the incoming address in response to the receiving; performing a lookup in an incomplete decode table of the processor for the instruction data at the incoming address in response to the receiving, the incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in a predecode cache of the processor comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; and generating, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, one or more predecode bits for the proper subset of sections of the instruction data for the incoming address that has the one or more invalid predecode bits by an instruction length decoder circuit of the processor.10.
The method of claim 9, further comprising enabling the instruction length decoder circuit from a disabled state when there is the hit in the instruction cache and the hit in the incomplete decode table.11. The method of claim 9, further comprising sending the one or more predecode bits for the proper subset of sections of the instruction data for the incoming address from the instruction length decoder circuit to a decoder circuit when there is the hit in the instruction cache and the hit in the incomplete decode table.12. The method of claim 9, further comprising storing the one or more predecode bits from the instruction length decoder circuit into the predecode cache and updating one or more corresponding bits in the incomplete decode table as valid when there is the hit in the instruction cache and the hit in the incomplete decode table.13. The method of claim 9, further comprising, when there is a miss in the instruction cache for the instruction data at the incoming address, fetching the instruction data from memory by the fetch circuit, generating one or more predecode bits for the instruction data at the incoming address by the instruction length decoder circuit, and updating one or more corresponding bits in the incomplete decode table as valid.14. The method of claim 9, further comprising updating a counter with a number of incorrect predecode bits from the predecode cache for decoded instructions by instruction length verification circuitry of the processor, and enabling the instruction length decoder circuit from a disabled state when the counter exceeds a threshold.15. The method of claim 9, further comprising performing a comparison of predecode bits from the predecode cache to corresponding predecode bits generated by the instruction length decoder circuit, and disabling the instruction length decoder circuit from an enabled state when the comparison indicates matches exceed a disengagement threshold number of matches.16.
The method of any one of claims 9-15, wherein, when there is a hit in the instruction cache for the instruction data at the incoming address and a miss in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more valid predecode bits in the predecode cache, sending the one or more valid predecode bits for the proper subset of sections of the instruction data for the incoming address from the predecode cache and corresponding instruction data from the instruction cache to a decoder circuit.17. An apparatus comprising: an instruction cache; an instruction length decoder circuit; a predecode cache comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; an incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in the predecode cache; and a circuit to, for an incoming address of instruction data, perform a lookup in the instruction cache and the incomplete decode table, and, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, causes the instruction length decoder circuit to generate one or more predecode bits for the proper subset of sections of the instruction data for the incoming address that has the one or more invalid predecode bits.18. 
The apparatus of claim 17, wherein the instruction length decoder circuit is enabled from a disabled state when there is the hit in the instruction cache and the hit in the incomplete decode table.19. The apparatus of claim 17, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to send the one or more predecode bits for the proper subset of sections of the instruction data for the incoming address to a decoder circuit that decodes instructions into decoded instructions for execution.20. The apparatus of claim 17, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to store the one or more predecode bits in the predecode cache and update one or more corresponding bits in the incomplete decode table as valid.21. The apparatus of claim 17, wherein, when there is a miss in the instruction cache for the instruction data at the incoming address, the circuit causes a fetch of the instruction data from memory, the instruction length decoder circuit to generate one or more predecode bits for the instruction data at the incoming address, and an update of one or more corresponding bits in the incomplete decode table as valid.22. The apparatus of claim 17, further comprising instruction length verification circuitry that updates a counter with a number of incorrect predecode bits from the predecode cache for decoded instructions, and when the counter exceeds a threshold, enables the instruction length decoder circuit from a disabled state.23. The apparatus of claim 17, wherein the circuit is to perform a comparison of predecode bits from the predecode cache to corresponding predecode bits generated by the instruction length decoder circuit, and disable the instruction length decoder circuit from an enabled state when the comparison indicates matches exceed a disengagement threshold number of matches.
24. The apparatus of any one of claims 17-23, wherein the circuit is to, when there is a hit in the instruction cache for the instruction data at the incoming address and a miss in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more valid predecode bits in the predecode cache, causes the one or more valid predecode bits for the proper subset of sections of the instruction data for the incoming address from the predecode cache and corresponding instruction data from the instruction cache to be sent to a decoder circuit that decodes instructions into decoded instructions for execution.
CIRCUITRY AND METHODS FOR POWER EFFICIENT GENERATION OF LENGTH MARKERS FOR A VARIABLE LENGTH INSTRUCTION SET

TECHNICAL FIELD
[0001] The disclosure relates generally to electronics, and, more specifically, an embodiment of the disclosure relates to circuitry for power efficient generation of length markers for a variable length instruction set.

BACKGROUND
[0002] A processor, or set of processors, executes instructions from an instruction set, e.g., the instruction set architecture (ISA). The instruction set is the part of the computer architecture related to programming, and generally includes the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term instruction herein may refer to a macro-instruction, e.g., an instruction that is provided to the processor for execution, or to a micro-instruction, e.g., an instruction that results from a processor's decoder decoding macro-instructions.

BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The present disclosure is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
[0004] Figure 1 illustrates a block diagram of a processor core having instruction length circuitry according to embodiments of the disclosure.
[0005] Figure 2 illustrates a block diagram of circuitry to perform on-demand instruction length decoding according to embodiments of the disclosure.
[0006] Figure 3 is a block diagram illustrating an example format of a predecode cache according to embodiments of the disclosure.
[0007] Figure 4 illustrates an example format of a cache line of instruction data according to embodiments of the disclosure.
[0008] Figure 5 illustrates an example format of a predecode cache entry according to embodiments of the disclosure.
[0009] Figure 6 is a block diagram illustrating an example format of an
incomplete decode table according to embodiments of the disclosure.
[0010] Figure 7 is a block flow diagram illustrating operations of a method of generating one or more predecode bits according to embodiments of the disclosure.
[0011] Figure 8A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure.
[0012] Figure 8B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure.
[0013] Figure 9A is a block diagram illustrating fields for the generic vector friendly instruction formats in Figures 8A and 8B according to embodiments of the disclosure.
[0014] Figure 9B is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 9A that make up a full opcode field according to one embodiment of the disclosure.
[0015] Figure 9C is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 9A that make up a register index field according to one embodiment of the disclosure.
[0016] Figure 9D is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 9A that make up the augmentation operation field 850 according to one embodiment of the disclosure.
[0017] Figure 10 is a block diagram of a register architecture according to one embodiment of the disclosure.
[0018] Figure 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the disclosure.
[0019] Figure 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the disclosure.
[0020] Figure 12A is a block
diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments of the disclosure.[0021] Figure 12B is an expanded view of part of the processor core in Figure 12A according to embodiments of the disclosure.[0022] Figure 13 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the disclosure.[0023] Figure 14 is a block diagram of a system in accordance with one embodiment of the present disclosure. [0024] Figure 15 is a block diagram of a more specific exemplary system in accordance with an embodiment of the present disclosure.[0025] Figure 16 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present disclosure.[0026] Figure 17 is a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present disclosure.[0027] Figure 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure.

DETAILED DESCRIPTION

[0028] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.[0029] References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic.
Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.[0030] A (e.g., hardware) processor (e.g., having one or more cores) may execute (e.g., user-level) instructions (e.g., macro-instructions) to operate on data, for example, to perform arithmetic, logic, or other functions. For example, software may include a plurality of instructions (e.g., macro-instructions) that are provided to a processor (e.g., a core or cores thereof) that then executes (e.g., decodes and executes) the plurality of instructions to perform the corresponding operations. In certain embodiments, a processor includes circuitry (e.g., one or more decoder circuits) to translate (e.g., decode) an instruction into one or more micro-operations (μops or micro-ops), for example, with these micro-operations directly executed by the hardware (e.g., by execution circuits). One or more micro-operations corresponding to an instruction (e.g., macro-instruction) may be referred to as a microcode flow for that instruction. A micro-operation may be referred to as a micro-instruction, for example, a micro-instruction that resulted from a processor’s decoding of a macro-instruction. In one embodiment, the instructions are (e.g., 64 bit and/or 32 bit) instructions of an instruction set architecture (ISA), e.g., an Intel® ISA.[0031] Certain ISAs include instructions (e.g., the instruction data itself and not the operands to be operated on by the instruction) that vary in length. Such variable length instructions may be from a complex instruction-set computing (CISC) architecture.
Additionally or alternatively, such instructions may be stored in an unaligned manner (e.g., in memory or in an instruction cache) before being decoded and executed. In certain embodiments, the length (e.g., bit width) of the instructions (e.g., the beginning and/or end of the instruction) is first determined before decoding of the instruction, e.g., in order to properly align the instruction for execution.[0032] Example instruction formats are discussed below in reference to Figure 9A. One example format of an instruction includes a prefix field (e.g., 0-4 Bytes (B)), an opcode field (e.g., 1-3 Bytes), a MOD/RM field (e.g., 0 or 1 Byte), a SIB field (e.g., 0 or 1 Byte), a displacement field (e.g., 0, 1, 2, or 4 Bytes), and an immediate field (e.g., 0, 1, 2, or 4 Bytes). Thus, an instruction according to this format may have different widths depending on which fields are included and/or which width granularities are used for included fields.[0033] In certain embodiments, the prefix appears before the opcode and may override various default attributes associated with the opcode. For example, a prefix may override the default size of the operand, the default size of the address specifier, and/or the default segment. Furthermore, the prefix may indicate a string instruction loop and/or indicate a bus lock cycle while executing an instruction. Prefixes that affect the length decoding of an instruction may include the overriding address size prefix, the overriding operand size prefix, and the repeat prefix. In certain embodiments, the operand size override prefix may alter the default size of an operand associated with an instruction. For example, a 16-bit instruction containing the operand size override prefix may contain a 32-bit operand instead of the default 16-bit operand. Conversely, a 32-bit instruction containing the operand size override prefix may contain a 16-bit operand instead of the default 32-bit operand.
In certain embodiments, the address size override prefix may alter the default size of the address associated with the instruction. For example, a 16-bit instruction containing the address size override prefix may contain a 32-bit address instead of the default 16-bit address. Conversely, a 32-bit instruction containing the address size override prefix may contain a 16-bit address instead of the default 32-bit address.[0034] In certain embodiments, the opcode identifies the operation to be performed by the instruction. The opcode may specify the number of immediate bytes, presence of the MOD/RM field, and/or displacement bytes. For example, an opcode may specify (e.g., up to 8) displacement bytes and/or (e.g., up to 8) immediate bytes in certain embodiments. [0035] In certain embodiments, the MOD/RM byte indicates the type of source and/or destination operands that are to be used in conjunction with an instruction. For example, the MOD/RM byte may indicate the existence within the instruction of (e.g., up to four) displacement bytes or a scale-index-base (SIB) byte.[0036] In certain embodiments, the SIB byte indicates other complex addressing modes. For example, the SIB byte may specify (e.g., up to four) displacement bytes.[0037] Where one or more (e.g., each) instruction may vary in length (e.g., according to the above fields), in certain embodiments it is necessary to first determine the length (e.g., beginning and/or end) of the instruction before it can be decoded and subsequently executed by a processor. [0038] In certain embodiments, an instruction length decoder is utilized to generate a prediction of the length of an instruction, for example, by generating an indication (e.g., which may be referred to as hint bits or predecode bits) of the length of the instruction.
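The additive length computation implied by the field widths above can be sketched as follows; this is a simplified illustrative model with assumed field-width choices, not a real x86 length decoder (which must also account for prefix-dependent operand sizes):

```python
# Hypothetical sketch: total length of an instruction under the simplified
# field layout described above (prefixes, opcode, MOD/RM, SIB, displacement,
# immediate). The allowed field widths are illustrative assumptions.

def instruction_length(n_prefix, n_opcode, has_modrm, has_sib, n_disp, n_imm):
    """Sum the byte widths of each (possibly absent) field."""
    assert 0 <= n_prefix <= 4 and n_opcode in (1, 2, 3)
    assert n_disp in (0, 1, 2, 4) and n_imm in (0, 1, 2, 4)
    return (n_prefix + n_opcode + (1 if has_modrm else 0)
            + (1 if has_sib else 0) + n_disp + n_imm)

# e.g., one prefix, 1-byte opcode, MOD/RM, no SIB, 1-byte disp, 4-byte imm:
# 1 + 1 + 1 + 0 + 1 + 4 = 8 bytes
```

Because each field may be present or absent and has several width granularities, two instructions with the same opcode can differ in total length, which is why the boundary of one instruction cannot be known without examining its fields.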
To efficiently decode multiple instructions per cycle (e.g., per decode cluster), certain embodiments of a decoder use the indications (e.g., predecode bits) to mark the boundaries between the variable length (e.g., x86) instructions. In certain embodiments, the instruction length decoder generates the indications, e.g., before the instruction is decoded (e.g., decoded into one or more micro-operations). In certain embodiments, the indications of the length are corresponding bits that indicate a boundary between variable length instructions in (e.g., raw) instruction data, for example, a cache line (e.g., 64 Bytes) of instruction data that may include one or a plurality of instructions therein. In certain embodiments, the indication is a predecode bit that indicates a corresponding part of the instruction data (e.g., a particular section (e.g., Byte) of multiple sections (e.g., 64 sections) of the instruction data) is an end (or beginning) of the instruction, for example, a predecode bit that indicates an end of a macro-instruction (e.g., an EOM bit). In one embodiment, one predecode bit per section (e.g., byte) of raw instruction data is cached (e.g., in a predecode cache) alongside (or as part of) a (e.g., first level (L1)) instruction cache. In certain embodiments, the predecode bits are consumed in a single cycle loop at the beginning of each decoder (e.g., decode cluster) to steer a set of raw contiguous instruction sections (e.g., bytes) to parallel decoder circuits to complete the multi-cycle (e.g., 2, 3, 4, etc. cycle total) instruction decode process.[0039] Note that in certain embodiments, the predecode bits are not required to be correct (e.g., they can be wrong by being in the wrong locations).
In certain embodiments, all predecode bits could begin at a state of all being a first value (e.g., a zero indicating a corresponding section of the instruction data is not an end of an instruction or a one indicating a corresponding section of the instruction data is an end of an instruction) or a random collection of first values (e.g., zeros) and second values (e.g., ones), for example, and certain microarchitectures will still make forward progress, e.g., because (e.g., all) predicted instruction lengths are verified during the decode pipeline such that the true length is (e.g., always) computed and verified against the predicted length. However, in certain embodiments without predecode (e.g., hint) bits to enable parallel decode, decode falls from supporting a plurality of (e.g., 3) instructions per cycle (e.g., per cluster) to supporting only one instruction every plurality of (e.g., 3) cycles per cluster. In certain embodiments, at the end of the decode pipeline the true lengths of each instruction are resolved, so such true lengths (e.g., the true EOM) can be used to resteer the decode pipeline to properly decode each instruction as necessary and/or update corresponding predecode bits in a predecode cache(s).[0040] In certain embodiments, one predecode bit is included per section (e.g., byte) of instruction data, e.g., and the predecode bits (e.g., a predecode cache storing those bits) can be attached to a (first level (L1)) instruction cache for this process to work. To improve performance, though, in certain embodiments, a second level predecode cache is utilized, e.g., a (e.g., 128 KB) second level predecode cache built as a physically hashed structure attached to a large (e.g., up to 4MB) second level (L2) cache. In certain embodiments, misses in the L1 instruction cache index into this second level predecode cache in parallel to the lookup to the L2 cache.
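A minimal software model of such a physically hashed second level predecode cache, assuming a 128 KB capacity with one 64-bit entry per 64 B cache line; the hash function and sizes below are illustrative assumptions, not the actual design:

```python
# Toy model of a second level predecode cache: indexed by a hash of the
# physical cache-line address, holding 64 predecode bits per entry.

N_ENTRIES = 16 * 1024  # e.g., 128 KB / 8 B per 64-bit entry (assumed sizing)

class L2PredecodeCache:
    def __init__(self):
        self.entries = [0] * N_ENTRIES  # 64 predecode bits per entry

    def _index(self, phys_addr):
        line = phys_addr >> 6                     # 64 B cache lines
        return (line ^ (line >> 14)) % N_ENTRIES  # made-up hash for illustration

    def lookup(self, phys_addr):
        # No tag check: a lookup always produces 64 bits (possibly stale).
        return self.entries[self._index(phys_addr)]

    def update(self, phys_addr, bits64):
        self.entries[self._index(phys_addr)] = bits64 & (2**64 - 1)
```

Because the structure has no tags, a lookup never misses; it simply returns whatever bits hash to that index, which is acceptable here because the decode pipeline verifies every predicted length anyway.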
In certain embodiments, this second level predecode cache is an untagged structure indexed via a physical address hash function, e.g., because it is untagged it always produces a result. Regardless of whether the L2 cache hit or missed for the (e.g., cache line) (e.g., 64B) instruction data, in certain embodiments, the predecode cache provides (e.g., 64 bits of) the predecode bits for the requested instruction data. In certain embodiments, each time there is an update to a first level predecode cache, the same update would be shuttled to the shared 2nd level predecode cache.[0041] In certain embodiments, when a portion of code (for example, instruction data, e.g., the ones and zeros that define the instruction itself and not the operands) is decoded for the very first time, predecode bits for this code have not been generated, e.g., and the corresponding predecode bits provided by the (e.g., second level) predecode cache will therefore likely be incorrect (e.g., and each decode cluster may be incapable of parallel decode). In certain embodiments (e.g., where the code footprint is large), even though instructions have been decoded (e.g., once) before, the (e.g., 2nd level) predecode cache can be overwhelmed such that it will be incorrect.[0042] In certain embodiments, an alternative front-end pipeline that avoids the need for caching predecode bits is built by preceding the instruction decode pipeline with a block of circuitry that explicitly extracts instruction length information, e.g., the circuitry being an instruction length decoder (ILD). In certain embodiments, the ILD logic circuitry is computationally intensive, as it scans through raw instruction data (e.g., bytes) speculatively assuming every section (e.g., byte) may be the beginning of an instruction. In one embodiment, this is pipelined using a cascade of logic circuitry with feedback loops.
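What the ILD computes can be illustrated in software; the sketch below assumes a precomputed per-offset speculative length table (standing in for the hardware's per-byte length decoders) and walks it from a known starting point, setting an EOM-style boundary bit at the last byte of each instruction:

```python
# Illustrative (non-hardware) model of ILD boundary marking: a speculative
# length is available at every byte offset, and the true boundaries are
# recovered by chaining lengths from a known start. The length table is a
# made-up input for illustration.

def mark_boundaries(lengths, start, window):
    """lengths[i] = speculative instruction length assuming a start at i.
    Returns a bitmask with one EOM bit set per instruction end byte."""
    eom_bits, i = 0, start
    while i < window:
        end = i + lengths[i] - 1   # last byte of the instruction starting at i
        if end >= window:
            break                  # instruction spills past this window
        eom_bits |= 1 << end       # set the EOM predecode bit
        i = end + 1                # the next instruction begins here
    return eom_bits
```

Note that only the lengths along the chain actually reached from `start` matter; the hardware nevertheless computes a speculative length at every byte because the chain is not known in advance, which is where the cost of an always-on ILD comes from.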
In certain embodiments, an ILD inputs a plurality of sections (e.g., 16 or 32 Bytes) and takes a plurality of (e.g., 2-5) cycles to process those sections. To counteract the long latencies, low bandwidth, and high capacitance (e.g., power consumption) associated with this decode, certain embodiments use a fully decoded cache (e.g., decoded stream buffer (DSB)) (e.g., storing micro-operations for instructions that have already been decoded). However, using a decoded cache generally comes at an increased area cost and creates its own issues around cold code starts and large code footprints where decoded caches only hold a fraction of the instructions held in the first level instruction caches. In certain embodiments, a decoded cache has poor performance when the decoded cache is insufficient or cold.[0043] In certain embodiments, a predecode cache only solution has poor performance when the predecode cache is insufficient or cold. In certain embodiments, an ILD-only solution suffers from low bandwidth, high latency, and high power.[0044] To overcome the above issues, certain embodiments herein combine usage of an instruction length decoder and a predecode cache. In certain embodiments, the ILD is an on-demand ILD (OD-ILD) that is switchable between an enabled (e.g., on) state and a disabled (e.g., off) state, for example, to merge the power and area benefits of a predecode cache while extracting the performance benefits of an OD-ILD pipeline for cold code starts and/or large code footprint scenarios. Certain embodiments herein include a front-end with multiple OD-ILDs, e.g., with one or more OD-ILDs for each decode cluster and/or core, or a front-end with a single OD-ILD for a single decode cluster or a single OD-ILD shared by a plurality of decode clusters. [0045] In certain embodiments, when the predecode cache is unable to provide correct predecode bit(s), the OD-ILD is selectively engaged to generate bits, e.g., only when necessary.
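The selective engagement policy can be sketched as a simple decision function; the observable inputs chosen here (instruction cache hit, predecode validity, a misprediction count) are assumptions about what such a controller might see, not a definitive implementation:

```python
# Hedged sketch of an OD-ILD engagement decision: engage only when the
# predecode cache cannot be trusted (cold code or stale/overwhelmed bits),
# otherwise stay disengaged to save power and latency.

def should_engage_od_ild(icache_hit, predecode_bits_known_valid,
                         bad_predecode_count, threshold):
    if not icache_hit:
        return True   # cold code: no predecode bits have been generated yet
    if not predecode_bits_known_valid:
        return True   # cache hit, but the predecode bits for this line are stale
    # The decoder has observed too many incorrect predecode bits recently.
    return bad_predecode_count > threshold
```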
This removes the power and latency of an ILD pipeline for certain execution scenarios. Certain embodiments herein are directed to determining when to engage and when to disengage the OD-ILD. In one embodiment, the OD-ILD is engaged on (e.g., L1) instruction cache misses (e.g., where there is no L2 predecode cache). In certain embodiments, the OD-ILD (e.g., pipeline) disengages once the predecode cache is warm, removing both the power and latency from the pipeline. Embodiments herein remove the issues in predecode cache only pipelines that cause a loss of decode bandwidth while preserving the power characteristics of designs without an always alive preceding ILD pipeline and the area advantages of designs without fully decoded caches (e.g., μop caches / DSBs / trace caches, etc.). In certain embodiments, the inclusion of multiple OD-ILD pipelines per cluster and per core adds some logic area within a core boundary, but the total area is smaller than using a (e.g., 128 KB) second level predecode cache.[0046] Figure 1 illustrates a block diagram of a processor core 100 having instruction length circuitry 108 according to embodiments of the disclosure. Depicted processor core 100 includes an optional branch predictor 102 (e.g., to predict one or more branches of the code (e.g., instructions) that are to be executed by the processor core 100). Depicted processor core 100 includes a fetch circuit 104 to fetch instruction data, for example, from memory via port 103, e.g., to access memory 1180 in Figure 11B.
In one embodiment, the fetch circuit 104 uses the address 101 (e.g., virtual address) of instruction data (e.g., program counter (PC) or instruction pointer (IP)) to check if a corresponding entry (e.g., mapping that address 101 to the corresponding instruction data stored at that address) containing the instruction data is already stored in (e.g., L1) instruction cache 106, for example, and if not, performing a memory access (e.g., page walk in a paged memory) (e.g., via memory port 103) to obtain the instruction data (e.g., cache line of instruction data) and if so, passing that cached instruction data forward in the processor core 100, e.g., for eventual decode and execution. Instruction cache 106 may include an instruction cache tag and/or instruction translation lookaside buffer (TLB).[0047] As discussed above, instruction data from instruction cache 106 may comprise multiple sections of raw instruction data, e.g., such that the boundaries of the instruction corresponding to address 101 are not necessarily known at that time. In certain embodiments, instruction length circuitry 108 is included, e.g., between the instruction cache 106 and decoder 118.[0048] Instruction length circuitry 108 may include a predecode cache 110 to store predecode bits, e.g., with each predecode bit indicating if a particular section (e.g., byte) of instruction data of multiple sections of instruction data in the instruction cache (e.g., a cache line (e.g., 64 Bytes)) is identified as a boundary of an instruction (e.g., a macro-instruction boundary, e.g., an EOM).
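A minimal sketch of how per-byte EOM predecode bits let a decoder carve a run of raw bytes into individual instructions for parallel decode; the byte values and bit pattern below are made up for illustration:

```python
# Illustrative use of per-byte predecode (EOM) bits: one bit per byte of a
# cache line, with a set bit marking the last byte of a macro-instruction,
# so contiguous byte runs can be steered to parallel decoder circuits.

def split_on_eom(data, eom_bits):
    """Split raw instruction bytes at bytes whose EOM bit is set."""
    insns, start = [], 0
    for i in range(len(data)):
        if (eom_bits >> i) & 1:
            insns.append(data[start:i + 1])
            start = i + 1
    return insns

raw = bytes([0x90, 0xB8, 0x05, 0x00, 0x00, 0x00])  # hypothetical bytes
eom = 0b100001  # EOM after byte 0 and after byte 5
# split_on_eom(raw, eom) -> two instruction byte strings (1 B and 5 B)
```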
An example format for predecode cache 110 is depicted in Figure 3 and an example predecode cache entry is depicted in Figure 5.[0049] Instruction length circuitry 108 may include an incomplete decode table 116 to store (e.g., incomplete decode indication) bits, e.g., with each bit indicating if a section (or a proper subset of sections) of instruction data (e.g., instruction data in instruction cache 106) has one or more invalid (e.g., stale) predecode bits in the predecode cache 110. An example format for incomplete decode table 116 is depicted in Figure 6.[0050] Instruction length circuitry 108 may include an instruction length decoder (ILD) circuit 112, for example, an on-demand instruction length decoder (OD-ILD) circuit 112 (for example, switchable between an enabled (e.g., on) state and a disabled (e.g., off) state, e.g., to conserve power).[0051] In certain embodiments, a processor core 100 is to, for an input of an address 101 of instruction (e.g., program) data, search the instruction cache 106 for that address, and if a hit, pass that cached instruction data (e.g., which may include bits from more than one instruction) to instruction length circuitry 108 to indicate if one or more sections (e.g., bytes in one example granularity) of the cached instruction data are a boundary (e.g., end) of an instruction (e.g., a probable boundary in an embodiment where the inference is incorrect).
In one embodiment, instruction length circuitry 108 is to first check if there are corresponding predecode bit or bits in predecode cache 110 that indicate one or more sections (e.g., bytes in one example granularity) of the cached instruction data are a boundary (e.g., end) of an instruction (e.g., a probable boundary in an embodiment where the inference is incorrect) and if the incomplete decode table 116 has one or more (e.g., incomplete decode indication) bits that indicate that at least one or more of those predecode bits are invalid (e.g., there are no current predecode bits in predecode cache 110 for the corresponding section(s) of the cached instruction data). In certain embodiments, if there are no invalid predecode bits (e.g., all those predecode bits are marked as valid), the instruction data from the instruction cache (e.g., for a single instruction) and their corresponding predecode bit(s) are sent to decoder 118. In certain embodiments, if there are one or more invalid predecode bits (e.g., one or more predecode bits are marked as invalid), the instruction data from the instruction cache is sent to instruction length decoder circuit 112 for instruction length decoding, e.g., and those predecode bits generated by that instruction length decoding are sent to decoder 118 and/or predecode cache 110 (e.g., with corresponding bit(s) set to valid in incomplete decode table 116).[0052] In certain embodiments, an on-demand version of ILD circuit 112 is enabled (e.g., activated) when there is an (e.g., L1) instruction cache 106 miss, when predicting that instruction data (e.g., a cache line) within (e.g., L1) instruction cache 106 does not have predecode information (e.g., via a look-up in the incomplete decode table 116), and/or when instruction length verification circuitry 120 (e.g., within the decoder 118) detects too many instructions with incorrect predecode bits, e.g., where counter 122A tracks the number of incorrect predecode bits from predecode cache 110 (e.g., or
the corresponding number of instructions) in comparison to the output from instruction length verification circuitry 120, e.g., and activates the on-demand ILD circuit 112 and/or flushes the fetch circuit 104 when threshold 122B is exceeded. Further examples of circuitry (e.g., components thereof) to perform on-demand instruction length decoding are depicted in Figure 2. [0053] Decoder 118 may be a single decoder circuit 124A, e.g., that generates the corresponding micro-operation or micro-operations for a single instruction. Decoder 118 may include multiple decode clusters 124, 126, e.g., each decode cluster having a plurality of decoder circuits in parallel. Although two are shown, three or more clusters may be utilized (e.g., where “N” is a positive integer greater than one). In certain embodiments, each decode cluster includes two or more (e.g., superscalar x86) instruction decoders capable of decoding different basic blocks of code out-of-order with respect to each other, for example, with decode cluster 124 including a first decoder circuit 124A and a second decoder circuit 124B (e.g., decoder), and decode cluster 126 including a first decoder circuit 126A and a second decoder circuit 126B. [0054] In certain embodiments, once instructions are sent to their corresponding decode cluster 124, 126 (e.g., into an instruction data queue in each decode cluster), the decode clusters begin decoding the instructions in parallel (e.g., via the parallel decoder circuits therein). In certain embodiments, the allocation circuit 128 is responsible for allocating the operations (e.g., micro-operations) to the execution circuits 130 (e.g., execution units), e.g., in program order.
Core 100 may also include a microcode sequencer 125 to load a corresponding set of one or more (e.g., a plurality of) micro-operations (μops) from the microcode sequencer’s memory (e.g., read-only memory (ROM)) into the decode pipeline (e.g., into the allocation circuit 128).[0055] Execution circuits 130 may access storage, e.g., registers 132 and/or data cache 134 (e.g., one or more levels of a cache hierarchy). Once the resultants are generated by the execution circuits 130, a retirement circuit 128 may then retire a corresponding instruction.[0056] Figure 2 illustrates a block diagram of circuitry 200 to perform on-demand instruction length decoding according to embodiments of the disclosure. One or more components here may be duplicated for each decode cluster (e.g., or for each decoder circuit thereof). For example, decoder circuit 124A is depicted in Figure 2, but one or more components of circuitry 200 may be duplicated (or shared) by other decoder circuits. For example, a processor core may include a plurality of decode clusters fed by (e.g., a queue of) raw instruction data (e.g., instruction data bytes in cache line alignment).
In one embodiment, each decode cluster can decode a plurality of (e.g., 3 or more) instructions per cycle (e.g., as the decode width of the cluster).[0057] In certain embodiments, an on-demand instruction length decoder (OD-ILD) 112 is used to selectively generate predecode bits, e.g., end of macro-instruction (EOM) markers. The following discusses possible interactions between the OD-ILD 112, decoder circuit 124A (e.g., decoder 118), and predecode cache 110.[0058] In certain embodiments, the on-demand version of ILD circuit 112 is enabled (e.g., activated) when there is an (e.g., L1) instruction cache 106 miss, when predicting that instruction data (e.g., a cache line) within (e.g., L1) instruction cache 106 does not have predecode information (e.g., via a look-up in the incomplete decode table 116), and/or when instruction length verification circuitry 120 (e.g., within the decoder circuit 124A) detects too many instructions with incorrect predecode bits, e.g., where counter 122A tracks the number of incorrect predecode bits from predecode cache 110 (e.g., or the corresponding number of instructions) in comparison to the output from instruction length verification circuitry 120, e.g., and activates the on-demand ILD circuit 112 and/or flushes the predecode cache 110 when threshold 122B is exceeded.[0059] Certain components may utilize data in a cache line width of granularity. For example, where a cache line is a block (e.g., of bytes) of memory that may be managed as a unit, e.g., for cache coherency purposes. A cache line of data may be stored in cache memory (e.g., of any level, such as, but not limited to, L1, L2, L3, L4, etc.), system memory, or combinations thereof. Cache memory may be shared by multiple cores of a processor or local (e.g., not shared) to each core of a processor.
Cache memory (e.g., a cache) may generally refer to a memory buffer inserted between one or more processors and other memory, for example, to store (e.g., hold) currently active copies of cache lines (e.g., blocks from system (main) memory). Cache memory may be local to each processor. Additionally, or alternatively, cache memory may be shared by multiple processors, e.g., separate from each processor. System memory may be separate from any cache memory, e.g., system memory that is off-die relative to a processor core. Processing elements that use (e.g., share) a cache may be processor cores of a data processor and/or graphic processors. Cache line may refer to a 64-byte sized section of memory, e.g., 64 byte granularity. Cache line coherency may generally refer to each cache (e.g., cache memory) and/or system (e.g., main) memory in the coherence domain observing all modifications of that same cache line (e.g., such that each instance of that cache line contains the same data). For example, a modification may be said to be observed by memory when any subsequent read would return the newly (e.g., currently) written value.[0060] In certain embodiments, for every entry in the (e.g., L1) instruction cache 106 (e.g., 64B) there is a corresponding entry in the (e.g., L1) predecode cache 110 (e.g., one bit in a predecode cache entry for each section (e.g., byte) of instruction data in a corresponding instruction cache entry, e.g., a single (e.g., 64 bit wide) entry in predecode cache 110 for each single entry (e.g., 64B of instruction data) in instruction cache 106). In certain embodiments, each entry is identified (e.g., indexed) by the address (or a proper subset of the address or addresses) of the corresponding instruction. In certain embodiments, when there is a miss in the (e.g., L1) instruction cache 106 for an input (e.g., an address for instruction data), OD-ILD circuit 112 is enabled.
However, in order to decode any instruction, certain embodiments need a known (e.g., valid) starting point. For example, if the incoming address of the instruction that is to be processed is not aligned at the granularity of the data that is being loaded into the instruction cache 106 (e.g., the instruction address is in the middle of a cache line), certain embodiments begin (or “seed”) the ILD circuit 112 (e.g., OD-ILD) at the target of the jump. For example, if you jumped to (hexadecimal) address 0x105C, which is within the cache line (e.g., 64 Bytes) of instruction data that begins at address 0x1040 and ends at address 0x107F, certain embodiments herein start the (e.g., 16B wide) ILD circuit 112 (e.g., OD-ILD) with the section(s) (e.g., 16 Bytes) starting from address 0x1050 and at the starting offset address of 0xC. Thus, in certain embodiments, the ILD circuit 112 (e.g., OD-ILD) can then generate predecode bits from one or more instructions that start at 0x105C, e.g., and then proceed sequentially from there (e.g., but not for the sections of the cache line before 0x105C, i.e., not for instruction data from addresses 0x1040-0x105B).[0061] However, in certain embodiments where full (e.g., cache line) predecode bit information about the multiple sections (e.g., bytes) of instruction data preceding 0x105C cannot be generated, later a walk can occur sequentially from the cache line starting at address 0x1000 to the cache line starting at address 0x1040, with the cache line starting at address 0x1040 hitting in the instruction cache 106 (e.g., as the cache line amount (e.g., 64B) of instruction data starting at address 0x1040 is cached in instruction cache 106). In certain embodiments, the cache line of starting address 0x1040 has incomplete instruction length decode information as the predecode bits from 0x1040 through 0x105B are likely incorrect.
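The seeding arithmetic in the example above (jump target 0x105C, 16 B wide ILD) reduces to aligning the target down to the ILD width and keeping the remainder as the starting byte offset; a minimal sketch:

```python
# Worked version of the seeding example: for a jump to 0x105C with a 16 B
# wide ILD, the ILD window starts at the 16 B-aligned address 0x1050 with a
# starting byte offset of 0xC within that window.

def ild_seed(target_addr, ild_width=16):
    window_start = target_addr & ~(ild_width - 1)  # align down to ILD width
    offset = target_addr - window_start            # byte offset within window
    return window_start, offset

assert ild_seed(0x105C) == (0x1050, 0xC)
```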
In certain embodiments, the OD-ILD circuit 112 is engaged in this situation, e.g., assuming it is not already engaged.[0062] In certain embodiments, an Incomplete instruction length Decode Table (IDT) (e.g., with one bit per byte covering the entire instruction cache 106) is included that specifies whether each section of instruction data (e.g., byte) has been instruction length decoded before, e.g., and the predecode cache 110 updated accordingly. By reading this new table in parallel with the instruction cache 106 and the predecode cache 110, certain embodiments herein determine if circuitry 200 (e.g., OD-ILD controller 204) is to engage the OD-ILD circuit 112 on instruction cache 106 hit cases. However, in other embodiments it is desirable to include a smaller version of the IDT 116, e.g., as a multiple (e.g., 32) entry and multiple (e.g., 4) way set associative table. See an example in Figure 6. In certain embodiments, each entry in the IDT includes a (e.g., partial) tag and a plurality of (e.g., 8) bits of data, e.g., with each data bit representing a proper subset of the sections of instruction data, e.g., with each data bit representing a multiple (e.g., 8) byte region of a larger plurality of (e.g., 64) bytes of a cache line.[0063] In certain embodiments, when an (L1) instruction cache 106 miss occurs, circuitry 200 (e.g., via OD-ILD controller 204) allocates an entry for that cache line in the IDT 116 (e.g., an IDT update), for example, by replacing an invalid entry (e.g., an entry that has all of its data bits set to valid (e.g., set to one)) or via a (e.g., least recently used (LRU)) replacement policy. In certain embodiments, upon allocation, the data bits in an entry are cleared (e.g., set to zero), which specifies that no portion of the corresponding entry in the instruction cache 106 (e.g., cache line) has valid predecode bits.
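A toy model of a single IDT entry as described above (a partial tag plus 8 data bits, each covering one 8 B region of a 64 B line); the set-associative placement and tag matching are omitted for brevity, and the bit semantics follow the text (0 means no valid predecode bits yet):

```python
# Illustrative Incomplete Decode Table entry: 8 region-valid bits per 64 B
# cache line, cleared on allocation and set as predecode updates arrive.

class IdtEntry:
    def __init__(self, tag):
        self.tag = tag
        self.region_valid = 0  # 8 bits; all cleared on allocation

    def mark_region_valid(self, byte_offset):
        # A predecode update for byte_offset validates its 8 B region.
        self.region_valid |= 1 << (byte_offset // 8)

    def has_valid_predecode(self, byte_offset):
        return bool(self.region_valid & (1 << (byte_offset // 8)))

e = IdtEntry(tag=0x41)       # hypothetical partial tag
e.mark_region_valid(0x1C)    # byte 0x1C lies in region 3
# e.has_valid_predecode(0x18) is now True; e.has_valid_predecode(0x00) is False
```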
In certain embodiments, as updates to the predecode cache 110 occur (e.g., from OD-ILD circuit 112 and/or from updates triggered by one or more decoder circuits 124A (e.g., instruction length verification circuit 120 thereof)), matching IDT 116 entries are updated for their affected region(s). In certain embodiments, circuitry 200 (e.g., coupled to fetch circuit 104 in Figure 1) looks up the IDT 116 in parallel with the instruction tag array 202 (ITAG). For example, it performs a look-up based on the input address for a corresponding cache line. In certain embodiments, an IDT 116 hit (e.g., indicating invalid predecode bit(s) in predecode cache 110) that accompanies an instruction cache 106 hit engages the OD-ILD circuit 112 to begin determining predecode bits (e.g., “OD-ILD (e.g., EOM) bits”) for at least those corresponding section(s) of the instruction data (e.g., 16B of instruction data).[0064] As mentioned previously, in certain embodiments, an ILD 112 requires a known good starting point in order to determine predecode bits (e.g., EOM markers). In certain embodiments, when walking sequentially through code and circuitry 200 (e.g., suddenly) decides to engage the OD-ILD circuit 112, the OD-ILD circuit 112 is to be “seeded” with a byte-wise starting point and code bytes from that location and sequentially forward. Using the prior example of cache line 0x1000 and cache line 0x1040, in certain embodiments this is the last (e.g., 16) bytes from cache line 0x1000 and its predecode bits. In an embodiment where the OD-ILD circuit 112 is 16 bytes wide, this would be the bytes from 0x1030. Certain embodiments herein provide for this by storing the last set of bytes and predecode bits delivered from the instruction cache 106 when the OD-ILD circuit 112 is disengaged, in case it needs to become engaged while walking sequentially through the code bytes.
Some embodiments may store this last set of bytes and predecode bits in storage elements within OD-ILD circuit 112 or within storage elements of decoder 118.

[0065] In certain embodiments, the circuitry 200 can enable the OD-ILD circuit 112, e.g., via enable logic circuitry 204A of OD-ILD controller 204. For example, enabling the OD-ILD circuit 112 to begin generating predecode bits for instruction data (e.g., from instruction cache 106) when there is a miss for an address in instruction tag 202 and/or when there is a hit in IDT 116 for an address that hits in instruction tag 202.

[0066] In certain embodiments, the decoder circuit 124A (e.g., decoder 118) itself can enable the OD-ILD circuit 112. For example, with the decoder circuit 124A (e.g., decoder 118) using a weighted counter 122A to track whether a number of recently decoded instructions have had incorrect predecode bits exceeding a threshold 122B, e.g., and when the counter 122A exceeds the (e.g., programmable) threshold 122B, the decoder circuit 124A (e.g., decoder 118) sends a clear to the fetch circuit (e.g., a “BAClear”) which restarts the fetch circuit, clears buffer 210, and/or causes engagement of the OD-ILD circuit 112.

[0067] In certain embodiments, while OD-ILD circuit 112 is engaged, all instruction bytes are processed by the OD-ILD circuit 112, e.g., to generate “OD-ILD (e.g., EOM) bits”. In certain embodiments, the OD-ILD circuit 112 processes multiple (e.g., 16-byte) sections of instruction data per cycle and generates predecode bits (e.g., EOM markers) for these sections (e.g., bytes), e.g., in multiple (e.g., 2) cycles.
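The engagement conditions above (an instruction-cache miss, an IDT hit reporting invalid regions alongside a cache hit, or the decoder's counter crossing its threshold) can be sketched as a single predicate. The function name and the list-of-bits IDT entry encoding are assumptions of this sketch:

```python
def should_engage(icache_hit, idt_entry, bad_bit_count, threshold):
    """Return True when the OD-ILD circuit should be engaged (illustrative
    combination of the triggers described in the text)."""
    # Instruction-cache miss: predecode bits must be generated from scratch.
    if not icache_hit:
        return True
    # IDT hit whose entry reports one or more regions without valid bits.
    if idt_entry is not None and not all(idt_entry):
        return True
    # Decoder-side trigger: weighted count of incorrect predecode bits.
    return bad_bit_count > threshold
```

A cache hit with no matching IDT entry and a quiet counter leaves the OD-ILD circuit disengaged, so the pipeline pays no extra length-decode latency.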
In certain embodiments, when the OD-ILD circuit 112 has generated the predecode bits for the instruction sections (e.g., bytes), the predecode cache 110 is updated with the generated bits (e.g., from update queue 212), the IDT 116 is updated, and the predecode bits are sent to the decoder (e.g., via multiplexer (mux) 208) so the decoder can begin decoding, e.g., decoding multiple instructions in parallel (e.g., from buffer 210). In certain embodiments, decoder 118 and the OD-ILD circuit 112 provide updates to the same portion of the predecode cache 110 in the same clock cycle, creating a write conflict at the predecode cache. In this situation, certain embodiments queue the update from the OD-ILD circuit 112 and/or queue the update from the decoder 118 in update queue 212, e.g., where this queue has 1 to N entries (where N is a positive integer greater than 1).

[0068] In certain embodiments, the decoder will not start decoding instruction sections (e.g., bytes) in this mode until the OD-ILD circuit 112 has produced predecode bits for those sections (e.g., bytes), e.g., where the decoder circuit 124A is to not start decoding the instruction bytes until the predecode bits are available.
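The same-cycle write conflict and update queue 212 behavior can be modeled as below. The policy shown — queuing the OD-ILD update while the decoder's write proceeds, and draining one queued update on an otherwise idle cycle — is one of the options the text permits, chosen here only for illustration; all names are hypothetical:

```python
from collections import deque

def predecode_write_cycle(cache, queue, decoder_upd=None, ild_upd=None):
    """One write cycle at the predecode cache. Updates are (addr, bits)
    tuples; cache is a dict and queue models update queue 212."""
    if decoder_upd and ild_upd and decoder_upd[0] == ild_upd[0]:
        cache[decoder_upd[0]] = decoder_upd[1]  # decoder wins the port
        queue.append(ild_upd)                   # defer the OD-ILD write
        return
    writes = [u for u in (decoder_upd, ild_upd) if u]
    if not writes and queue:
        writes.append(queue.popleft())          # drain one deferred update
    for addr, bits in writes:
        cache[addr] = bits
```

Because the deferred OD-ILD update lands on a later cycle, its bits eventually overwrite the decoder's, which matches the intent that the cache converge on the most recently generated bits.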
In certain embodiments, mux 208, e.g., via control by OD-ILD controller 204, is to send instruction data (e.g., 32B wide) and predecode bits (e.g., via predecode cache 110 or OD-ILD circuit 112) to decoder circuit 124A (e.g., decoder 118).

[0069] In certain embodiments, to disengage the OD-ILD circuit 112, the circuitry 200 (e.g., disable logic circuitry 204B of OD-ILD controller 204) compares the contents of the predecode cache 110 to the predecode bits generated by the OD-ILD circuit, for example, where, when the predecode cache’s predecode bits match the OD-ILD circuit 112 generated bits for a programmable number (e.g., threshold 206) of (e.g., consecutive) cycles, the OD-ILD circuit 112 is disengaged, e.g., removing any cycles of latency in the pipeline from the OD-ILD circuit 112.

[0070] In certain embodiments, such circuitry 200 is used with a microarchitecture that does not require predecode bits to be correct, e.g., the predecode bits from OD-ILD circuit 112 are allowed to be incorrect and/or incomplete. For example, certain (e.g., uncommon) prefixes within an ISA can trigger differences in the lengths of instructions, e.g., with such prefixes referred to as length changing prefixes (LCPs). In certain embodiments, these length characteristics can be ignored in an OD-ILD circuit 112 implementation. For example, to prevent the decoder circuit 124A from detecting a series of incorrect predecode bits and flushing the pipeline, the decoder circuit 124A (e.g., instruction length verification circuitry 120) detects an LCP and does not attempt to force OD-ILD circuit 112 engagement. Furthermore, whenever LCP prefixes are detected when the OD-ILD circuit is already engaged, the OD-ILD controller 204 causes bypassing of predecode bits from the predecode cache 110 instead of sending (e.g., “garbage”) bits down the pipeline.

[0071] Certain embodiments of circuitry 200 are utilized with a clustered front end, e.g., the decode clusters depicted in Figure 1.
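The disengagement comparison described above can be sketched as a per-cycle step function; the streak-of-consecutive-matches encoding and the function name are assumptions of this sketch:

```python
def disengage_step(streak, cache_bits, ild_bits, threshold):
    """One comparison cycle of the disable logic: return the updated count
    of consecutive cycles where the predecode cache's bits matched the
    OD-ILD-generated bits, and whether the OD-ILD circuit should now be
    disengaged (threshold plays the role of the programmable threshold 206)."""
    streak = streak + 1 if cache_bits == ild_bits else 0
    return streak, streak >= threshold
```

Any mismatch resets the streak, so disengagement only happens after the cached predecode bits have proven reliable for the full programmable window.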
In certain embodiments, an OD-ILD circuit 112 (e.g., and one or more other components) is deployed on each decoder cluster. In certain embodiments, a single OD-ILD circuit services all decode clusters.

[0072] Figure 3 is a block diagram illustrating an example format of a predecode cache 110 according to embodiments of the disclosure. In certain embodiments, each entry (e.g., entry 306) in predecode cache 110 is for a corresponding section (e.g., cache line) of instruction data (e.g., an entry) in the (e.g., L1) instruction cache 106 (e.g., 64B), for example, as a multiple (e.g., 32) entry 302 and multiple (e.g., 4) way 304 cache.

[0073] Continuing the address 0x105C example above, in certain embodiments there are (e.g., at least) three cache lines worth of instruction data stored into instruction cache 106: a first cache line (e.g., 64 Bytes) of instruction data starting at address 0x1000, a second cache line (e.g., 64 Bytes) of instruction data starting at address 0x1040, and a third cache line (e.g., 64 Bytes) of instruction data starting at address 0x1080. In certain embodiments, as the starting address of a potential instruction is for address 0x105C of the cache line having addresses of 0x1040-0x107F, only those sections (e.g., bytes) between address 0x105C and address 0x107F have corresponding predecode bits generated (e.g., via OD-ILD circuit 112), and thus those bits are valid (e.g., correct) and any predecode bits for the other sections (e.g., bytes) between address 0x1040 and 0x105B are invalid (e.g., missing).

[0074] Figure 4 illustrates an example format of a cache line of instruction data 400 according to embodiments of the disclosure.
In certain embodiments, cache line of instruction data 400 (e.g., 64B) is formed from multiple sections (e.g., bytes) of instruction data (e.g., from section 1 402, section 2 404, to section M 406, e.g., where M is any number 2 or greater), e.g., but stored as a single entry in instruction cache 106.

[0075] Figure 5 illustrates an example format of a predecode cache entry 500 according to embodiments of the disclosure. In certain embodiments, predecode cache entry 500 includes multiple bits (e.g., bit 1 502, bit 2 504, and bit M 506, e.g., where M is any number 2 or greater) and each bit represents a corresponding section of a cache line of instruction data (e.g., cache line of instruction data 400 where M is the same number for both format 400 and format 500).

[0076] Continuing the address 0x105C example above, in certain embodiments each cache line is 64 Bytes and thus the corresponding predecode cache entry 500 is to have 64 bits, e.g., with each bit indicating if a corresponding byte of the 64 Byte cache line is a last (e.g., end) byte of an instruction (e.g., EOM).

[0077] Figure 6 is a block diagram illustrating an example format of an incomplete decode table 116 according to embodiments of the disclosure. In certain embodiments, each entry in incomplete decode table 116 represents a single cache line, for example, as a multiple (e.g., 32) entry 602 and multiple (e.g., 4) way 604 table. In certain embodiments, each entry in the incomplete decode table is for a corresponding entry in predecode cache 110 and/or is for a corresponding section (e.g., cache line) of instruction data (e.g., an entry) in the (e.g., L1) instruction cache 106 (e.g., 64B). In certain embodiments, each entry is identified (e.g., indexed) by the address (or a proper subset of the address or addresses) of the corresponding instruction. The granularity of the proper subset of sections (e.g., bytes) may be one to one, or one to many (e.g., as shown in entry 608).
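The per-byte EOM encoding of a predecode cache entry (format 500) can be illustrated by building an entry from a starting offset and a list of instruction lengths; the helper and its inputs are hypothetical, not the disclosed encoding:

```python
def eom_bits(start_offset, lengths, line_bytes=64):
    """Build one predecode cache entry as a list of line_bytes bits, with a
    1 wherever that byte is the last (end-of-macro-instruction) byte of an
    instruction whose lengths walk forward from start_offset (illustrative)."""
    bits = [0] * line_bytes
    pos = start_offset
    for n in lengths:
        pos += n
        if pos > line_bytes:
            break  # instruction straddles into the next cache line
        bits[pos - 1] = 1
    return bits
```

For example, instructions of lengths 3, 2, and 5 starting at byte 0 mark bytes 2, 4, and 9 as end boundaries, and every other bit stays clear.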
In one embodiment, each cache line is 64 bytes, and the IDT uses a bit to represent every 8 bytes, e.g., where if a bit is “0” that indicates that portion of the cache line does not have known valid bits in the predecode cache (e.g., those predecode bits are presumed invalid) and where if a bit is “1” that indicates that portion of the cache line has known valid bits in the predecode cache (e.g., those predecode bits are presumed valid).

[0078] Continuing the address 0x105C example above, in certain embodiments there are (e.g., at least) three cache lines worth of instruction data stored into instruction cache 106: a first cache line (e.g., 64 Bytes) of instruction data starting at address 0x1000, a second cache line (e.g., 64 Bytes) of instruction data starting at address 0x1040, and a third cache line (e.g., 64 Bytes) of instruction data starting at address 0x1080, and thus entries 606, 608, and 610 in incomplete decode table 116, respectively. In certain embodiments, as the starting address of a potential instruction is for address 0x105C in the cache line having addresses of 0x1040-0x107F, those sections (e.g., bytes) between address 0x105C and address 0x107F have corresponding predecode bits in predecode cache 110 that are (e.g., presumed) valid (e.g., via OD-ILD circuit 112) and thus the corresponding bits (indexed from right to left as bit positions 7-3) in entry 608 are marked with a valid (e.g., “1”) and those sections (e.g., bytes) between address 0x1040 and address 0x105B have corresponding predecode bits in predecode cache 110 that are (e.g., presumed) invalid (e.g., not updated via OD-ILD circuit 112) and thus the corresponding bits (indexed from right to left as bit positions 2-0) in entry 608 are marked with an invalid (e.g., “0”) (e.g., not known to be valid) for the three following 8-byte regions (i) 0x1040-0x1047, (ii) 0x1048-0x104F, and (iii) 0x1050-0x1057 (e.g., because 0x105C has valid bits, the entire 8-byte region is marked as valid so
0x1058-0x105F would be marked as valid).

[0079] Figure 7 is a block flow diagram illustrating operations of a method of generating one or more predecode bits according to embodiments of the disclosure. Some or all of the operations 700 (or other processes described herein, or variations, and/or combinations thereof) are performed under the control of a processor core (for example, circuitry 200 thereof, e.g., OD-ILD controller 204). The operations 700 include, at block 702, receiving an incoming address of instruction data at a fetch circuit of a processor. The operations 700 further include, at block 704, performing a lookup in an instruction cache of the processor for the instruction data at the incoming address in response to the receiving. The operations 700 further include, at block 706, performing a lookup in an incomplete decode table of the processor for the instruction data at the incoming address in response to the receiving, the incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in a predecode cache of the processor comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as an end boundary of a variable length instruction. The operations 700 further include, at block 708, generating, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, one or more predecode bits for the proper subset of sections of the instruction data for the incoming address that has the one or more invalid predecode bits by an instruction length decoder circuit of the processor.

[0080] Exemplary architectures, systems, etc.
that the above may be used in are detailed below.

[0081] At least some embodiments of the disclosed technologies can be described in view of the following examples:

Example 1. A hardware processor core comprising: a decoder circuit to decode instructions into decoded instructions; an execution circuit to execute the decoded instructions; an instruction cache; an instruction length decoder circuit; a predecode cache comprising a predecode bit, for each section of multiple sections (e.g., each having a same width) of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; an incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in the predecode cache; and a fetch circuit to, for an incoming address of instruction data, perform a lookup in the instruction cache and the incomplete decode table, and, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, cause the instruction length decoder circuit to generate one or more predecode bits for the proper subset of sections of the instruction data for the incoming address that has the one or more invalid predecode bits.

Example 2. The hardware processor core of example 1, wherein the instruction length decoder circuit is enabled from a disabled state when there is the hit in the instruction cache and the hit in the incomplete decode table.

Example 3.
The hardware processor core of example 1, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to send the one or more predecode bits for the proper subset of sections of the instruction data for the incoming address to the decoder circuit.

Example 4. The hardware processor core of example 1, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to store the one or more predecode bits in the predecode cache and update one or more corresponding bits in the incomplete decode table as valid.

Example 5. The hardware processor core of example 1, wherein, when there is a miss in the instruction cache for the instruction data at the incoming address, the fetch circuit causes a fetch of the instruction data from memory, the instruction length decoder circuit to generate one or more predecode bits for the instruction data at the incoming address, and an update of one or more corresponding bits in the incomplete decode table as valid.

Example 6. The hardware processor core of example 1, wherein the decoder circuit comprises instruction length verification circuitry that updates a counter with a number of incorrect predecode bits from the predecode cache for decoded instructions, and when the counter exceeds a threshold, enables the instruction length decoder circuit from a disabled state.

Example 7. The hardware processor core of example 1, further comprising disable logic circuitry to perform a comparison of predecode bits from the predecode cache to corresponding predecode bits generated by the instruction length decoder circuit, and disable the instruction length decoder circuit from an enabled state when the comparison indicates matches exceed a disengagement threshold number of matches.

Example 8.
The hardware processor core of example 1, wherein the fetch circuit is to, when there is a hit in the instruction cache for the instruction data at the incoming address and a miss in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more valid predecode bits in the predecode cache, cause the one or more valid predecode bits for the proper subset of sections of the instruction data for the incoming address from the predecode cache and corresponding instruction data from the instruction cache to be sent to the decoder circuit.

Example 9. A method comprising: receiving an incoming address of instruction data at a fetch circuit of a processor; performing a lookup in an instruction cache of the processor for the instruction data at the incoming address in response to the receiving; performing a lookup in an incomplete decode table of the processor for the instruction data at the incoming address in response to the receiving, the incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in a predecode cache of the processor comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; and generating, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, one or more predecode bits for the proper subset of sections of the instruction data for the incoming address that has the one or more invalid predecode bits by an instruction length decoder circuit of the processor.

Example 10.
The method of example 9, further comprising enabling the instruction length decoder circuit from a disabled state when there is the hit in the instruction cache and the hit in the incomplete decode table.

Example 11. The method of example 9, further comprising sending the one or more predecode bits for the proper subset of sections of the instruction data for the incoming address from the instruction length decoder circuit to a decoder circuit when there is the hit in the instruction cache and the hit in the incomplete decode table.

Example 12. The method of example 9, further comprising storing the one or more predecode bits from the instruction length decoder circuit into the predecode cache and updating one or more corresponding bits in the incomplete decode table as valid when there is the hit in the instruction cache and the hit in the incomplete decode table.

Example 13. The method of example 9, further comprising, when there is a miss in the instruction cache for the instruction data at the incoming address, fetching the instruction data from memory by the fetch circuit, generating one or more predecode bits for the instruction data at the incoming address by the instruction length decoder circuit, and updating one or more corresponding bits in the incomplete decode table as valid.

Example 14. The method of example 9, further comprising updating a counter with a number of incorrect predecode bits from the predecode cache for decoded instructions by instruction length verification circuitry of the processor, and enabling the instruction length decoder circuit from a disabled state when the counter exceeds a threshold.

Example 15.
The method of example 9, further comprising performing a comparison of predecode bits from the predecode cache to corresponding predecode bits generated by the instruction length decoder circuit, and disabling the instruction length decoder circuit from an enabled state when the comparison indicates matches exceed a disengagement threshold number of matches.

Example 16. The method of example 9, further comprising, when there is a hit in the instruction cache for the instruction data at the incoming address and a miss in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more valid predecode bits in the predecode cache, sending the one or more valid predecode bits for the proper subset of sections of the instruction data for the incoming address from the predecode cache and corresponding instruction data from the instruction cache to a decoder circuit.

Example 17. An apparatus comprising: an instruction cache; an instruction length decoder circuit; a predecode cache comprising a predecode bit, for each section of multiple sections of instruction data, that indicates when that section is identified as a boundary of a variable length instruction; an incomplete decode table comprising a bit, for each proper subset of sections of instruction data, that indicates when that proper subset of sections has one or more invalid predecode bits in the predecode cache; and a circuit to, for an incoming address of instruction data, perform a lookup in the instruction cache and the incomplete decode table, and, when there is a hit in the instruction cache for the instruction data at the incoming address and a hit in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more invalid predecode bits in the predecode cache, cause the instruction length decoder circuit to generate one or more predecode bits for the proper subset of sections of
the instruction data for the incoming address that has the one or more invalid predecode bits.

Example 18. The apparatus of example 17, wherein the instruction length decoder circuit is enabled from a disabled state when there is the hit in the instruction cache and the hit in the incomplete decode table.

Example 19. The apparatus of example 17, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to send the one or more predecode bits for the proper subset of sections of the instruction data for the incoming address to a decoder circuit that decodes instructions into decoded instructions for execution.

Example 20. The apparatus of example 17, wherein, when there is the hit in the instruction cache and the hit in the incomplete decode table, the instruction length decoder circuit is to store the one or more predecode bits in the predecode cache and update one or more corresponding bits in the incomplete decode table as valid.

Example 21. The apparatus of example 17, wherein, when there is a miss in the instruction cache for the instruction data at the incoming address, the circuit causes a fetch of the instruction data from memory, the instruction length decoder circuit to generate one or more predecode bits for the instruction data at the incoming address, and an update of one or more corresponding bits in the incomplete decode table as valid.

Example 22. The apparatus of example 17, further comprising instruction length verification circuitry that updates a counter with a number of incorrect predecode bits from the predecode cache for decoded instructions, and when the counter exceeds a threshold, enables the instruction length decoder circuit from a disabled state.

Example 23.
The apparatus of example 17, wherein the circuit is to perform a comparison of predecode bits from the predecode cache to corresponding predecode bits generated by the instruction length decoder circuit, and disable the instruction length decoder circuit from an enabled state when the comparison indicates matches exceed a disengagement threshold number of matches.

Example 24. The apparatus of example 17, wherein the circuit is to, when there is a hit in the instruction cache for the instruction data at the incoming address and a miss in the incomplete decode table that indicates a proper subset of sections of the instruction data for the incoming address has one or more valid predecode bits in the predecode cache, cause the one or more valid predecode bits for the proper subset of sections of the instruction data for the incoming address from the predecode cache and corresponding instruction data from the instruction cache to be sent to a decoder circuit that decodes instructions into decoded instructions for execution.

[0082] In yet another embodiment, an apparatus comprises a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform any method disclosed herein. An apparatus may be as described in the detailed description. A method may be as described in the detailed description.

[0083] An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats).
For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format’s fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer’s Manual, November 2018; and see Intel® Architecture Instruction Set Extensions Programming Reference, October 2018).

Exemplary Instruction Formats

[0084] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

[0085] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations).
While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

[0086] Figures 8A-8B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the disclosure. Figure 8A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure; while Figure 8B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure. Specifically, a generic vector friendly instruction format 800 is shown for which class A and class B instruction templates are defined, both of which include no memory access 805 instruction templates and memory access 820 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

[0087] While embodiments of the disclosure will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, less and/or different vector
operand sizes (e.g., 256 byte vector operands) with more, less, or different data element widths (e.g., 128 bit (16 byte) data element widths).

[0088] The class A instruction templates in Figure 8A include: 1) within the no memory access 805 instruction templates there is shown a no memory access, full round control type operation 810 instruction template and a no memory access, data transform type operation 815 instruction template; and 2) within the memory access 820 instruction templates there is shown a memory access, temporal 825 instruction template and a memory access, non-temporal 830 instruction template. The class B instruction templates in Figure 8B include: 1) within the no memory access 805 instruction templates there is shown a no memory access, write mask control, partial round control type operation 812 instruction template and a no memory access, write mask control, vsize type operation 817 instruction template; and 2) within the memory access 820 instruction templates there is shown a memory access, write mask control 827 instruction template.

[0089] The generic vector friendly instruction format 800 includes the following fields listed below in the order illustrated in Figures 8A-8B.

[0090] Format field 840 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

[0091] Base operation field 842 - its content distinguishes different base operations.

[0092] Register index field 844 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g.
32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

[0093] Modifier field 846 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 805 instruction templates and memory access 820 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, less, or different ways to perform memory address calculations.

[0094] Augmentation operation field 850 - its content distinguishes which one of a variety of different operations to be performed in addition to the base operation. This field is context specific. In one embodiment of the disclosure, this field is divided into a class field 868, an alpha field 852, and a beta field 854. The augmentation operation field 850 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.
[0095] Scale field 860 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).[0096] Displacement Field 862A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).[0097] Displacement Factor Field 862B (note that the juxtaposition of displacement field 862A directly over displacement factor field 862B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 874 (described later herein) and the data manipulation field 854C. The displacement field 862A and the displacement factor field 862B are optional in the sense that they are not used for the no memory access 805 instruction templates and/or different embodiments may implement only one or none of the two.[0098] Data element width field 864 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
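The address-generation forms described above (base; 2^scale * index + base; 2^scale * index + base + displacement) can be sketched as follows. This is a minimal illustration only; the function name and parameter names are hypothetical and not part of any decoder implementation.

```python
# Illustrative sketch of the address-generation forms described above.
# The scale field selects a factor of 2^scale (1, 2, 4, or 8).

def effective_address(base: int, index: int = 0, scale: int = 0,
                      displacement: int = 0) -> int:
    """Compute 2^scale * index + base + displacement."""
    return (index << scale) + base + displacement

# base only
assert effective_address(base=0x1000) == 0x1000
# 2^scale * index + base: 3 * 8 + 0x1000
assert effective_address(base=0x1000, index=3, scale=3) == 0x1018
# with a (possibly negative) displacement
assert effective_address(base=0x1000, index=3, scale=3, displacement=-8) == 0x1010
```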
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.[0099] Write mask field 870 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 870 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
While embodiments of the disclosure are described in which the write mask field's 870 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 870 content indirectly identifies that masking to be performed), alternative embodiments instead or additionally allow the write mask field's 870 content to directly specify the masking to be performed.[00100] Immediate field 872 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediate and it is not present in instructions that do not use an immediate.[00101] Class field 868 - its content distinguishes between different classes of instructions. With reference to Figures 8A-B, the contents of this field select between class A and class B instructions. In Figures 8A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 868A and class B 868B for the class field 868 respectively in Figures 8A-B).

Instruction Templates of Class A

[00102] In the case of the non-memory access 805 instruction templates of class A, the alpha field 852 is interpreted as an RS field 852A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 852A.1 and data transform 852A.2 are respectively specified for the no memory access, round type operation 810 and the no memory access, data transform type operation 815 instruction templates), while the beta field 854 distinguishes which of the operations of the specified type is to be performed.
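The per-element merging- and zeroing-writemasking behaviors described above can be modeled with a short sketch. The helper below is purely illustrative; its name and signature are assumptions, not part of the disclosed format.

```python
# Illustrative model of merging- vs. zeroing-writemasking: for each element
# position, the operation result is taken where the mask bit is 1; otherwise
# the old destination value is preserved (merging) or the element is set to
# 0 (zeroing).

def apply_writemask(result, destination, mask: int, zeroing: bool):
    out = []
    for i, (r, d) in enumerate(zip(result, destination)):
        if (mask >> i) & 1:
            out.append(r)          # mask bit 1: take the operation result
        else:
            out.append(0 if zeroing else d)  # mask bit 0: zero or keep old
    return out

old = [10, 20, 30, 40]
res = [1, 2, 3, 4]
assert apply_writemask(res, old, mask=0b0101, zeroing=False) == [1, 20, 3, 40]
assert apply_writemask(res, old, mask=0b0101, zeroing=True) == [1, 0, 3, 0]
```

Note that, as the text observes, the set elements of the mask need not be consecutive, so partial vector operations over arbitrary element subsets fall out naturally.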
In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

[00103] In the no memory access full round control type operation 810 instruction template, the beta field 854 is interpreted as a round control field 854A, whose content(s) provide static rounding. While in the described embodiments of the disclosure the round control field 854A includes a suppress all floating point exceptions (SAE) field 856 and a round operation control field 858, alternative embodiments may encode both these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 858).[00104] SAE field 856 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 856 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.[00105] Round operation control field 858 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 858 allows for the changing of the rounding mode on a per instruction basis.
In one embodiment of the disclosure where a processor includes a control register for specifying rounding modes, the round operation control field's 858 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

[00106] In the no memory access data transform type operation 815 instruction template, the beta field 854 is interpreted as a data transform field 854B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).[00107] In the case of a memory access 820 instruction template of class A, the alpha field 852 is interpreted as an eviction hint field 852B, whose content distinguishes which one of the eviction hints is to be used (in Figure 8A, temporal 852B.1 and non-temporal 852B.2 are respectively specified for the memory access, temporal 825 instruction template and the memory access, non-temporal 830 instruction template), while the beta field 854 is interpreted as a data manipulation field 854C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement scale field 862B.[00108] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

[00109] Temporal data is data likely to be reused soon enough to benefit from caching.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

[00110] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

[00111] In the case of the instruction templates of class B, the alpha field 852 is interpreted as a write mask control (Z) field 852C, whose content distinguishes whether the write masking controlled by the write mask field 870 should be a merging or a zeroing.[00112] In the case of the non-memory access 805 instruction templates of class B, part of the beta field 854 is interpreted as an RL field 857A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 857A.1 and vector length (VSIZE) 857A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 812 instruction template and the no memory access, write mask control, VSIZE type operation 817 instruction template), while the rest of the beta field 854 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.[00113] In the no memory access, write mask control, partial round control type operation 812 instruction template, the rest of the beta field 854 is interpreted as a round operation field 859A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).[00114] Round operation control field 859A - just as round operation control field 858, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 859A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the disclosure where a processor includes a control register for specifying rounding modes, the round operation control field's 859A content overrides that register value. [00115] In the no memory access, write mask control, VSIZE type operation 817 instruction template, the rest of the beta field 854 is interpreted as a vector length field 859B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).[00116] In the case of a memory access 820 instruction template of class B, part of the beta field 854 is interpreted as a broadcast field 857B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 854 is interpreted as the vector length field 859B.
The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement scale field 862B.[00117] With regard to the generic vector friendly instruction format 800, a full opcode field 874 is shown including the format field 840, the base operation field 842, and the data element width field 864. While one embodiment is shown where the full opcode field 874 includes all of these fields, the full opcode field 874 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 874 provides the operation code (opcode).[00118] The augmentation operation field 850, the data element width field 864, and the write mask field 870 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.[00119] The combination of write mask field and data element width field create typed instructions in that they allow the mask to be applied based on different data element widths. [00120] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the disclosure, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the disclosure). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes.
For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core, may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the disclosure. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

[00121] Figure 9 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the disclosure. Figure 9 shows a specific vector friendly instruction format 900 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 900 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extension thereof (e.g., AVX).
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 8 into which the fields from Figure 9 map are illustrated. [00122] It should be understood that, although embodiments of the disclosure are described with reference to the specific vector friendly instruction format 900 in the context of the generic vector friendly instruction format 800 for illustrative purposes, the disclosure is not limited to the specific vector friendly instruction format 900 except where claimed. For example, the generic vector friendly instruction format 800 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 900 is shown as having fields of specific sizes. By way of specific example, while the data element width field 864 is illustrated as a one bit field in the specific vector friendly instruction format 900, the disclosure is not so limited (that is, the generic vector friendly instruction format 800 contemplates other sizes of the data element width field 864).[00123] The specific vector friendly instruction format 900 includes the following fields listed below in the order illustrated in Figure 9A.[00124] EVEX Prefix (Bytes 0-3) 902 - is encoded in a four-byte form.[00125] Format Field 840 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 840 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the disclosure).[00126] The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.[00127] REX field 905 (EVEX Byte 1, bits [7-5]) - consists of a EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [6] - X), and EVEX.B bit field (EVEX byte 1, bit [5] - B).
The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.[00128] REX' field 910 - this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the disclosure, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the disclosure do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields. [00129] Opcode map field 915 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).[00130] Data element width field 864 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W.
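The formation of R'Rrrr described above, with the EVEX.R' and EVEX.R bits stored in bit-inverted (1s complement) form, can be sketched as follows. The function name is hypothetical; the bit positions follow the description above.

```python
# Illustrative sketch: form a 5-bit register index R'Rrrr from the
# bit-inverted EVEX.R' and EVEX.R fields plus the 3-bit rrr field.
# A stored value of 1 in the inverted bits selects the lower registers.

def reg_index(evex_r_prime: int, evex_r: int, rrr: int) -> int:
    r_prime = evex_r_prime ^ 1   # stored inverted: flip back
    r = evex_r ^ 1               # EVEX.R is likewise 1s-complement encoded
    return (r_prime << 4) | (r << 3) | (rrr & 0b111)

# zmm0: stored R' and R bits are 1 (inverted zeros), rrr = 000
assert reg_index(1, 1, 0b000) == 0
# zmm31: stored R' and R bits are 0 (inverted ones), rrr = 111
assert reg_index(0, 0, 0b111) == 31
```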
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).[00131] EVEX.vvvv 920 (EVEX Byte 2, bits [6:3]-vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 920 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.[00132] EVEX.U 868 Class field (EVEX byte 2, bit [2]-U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.[00133] Prefix encoding field 925 (EVEX byte 2, bits [1:0]-pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification).
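The inverted (1s complement) storage of the EVEX.vvvv field described above can be illustrated with a short encode/decode pair. The helper names are assumptions for illustration only.

```python
# Illustrative encode/decode of the 4-bit EVEX.vvvv field, which carries a
# register specifier in inverted (1s complement) form; the reserved
# "no operand" pattern is 1111b.

def encode_vvvv(reg: int) -> int:
    return (~reg) & 0b1111       # invert the 4 low-order bits

def decode_vvvv(vvvv: int) -> int:
    return (~vvvv) & 0b1111

assert encode_vvvv(0) == 0b1111   # register 0 stores as all ones
assert encode_vvvv(15) == 0b0000
assert decode_vvvv(encode_vvvv(5)) == 5
```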
Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.[00134] Alpha field 852 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.[00135] Beta field 854 (EVEX byte 3, bits [6:4]-SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific. [00136] REX' field 910 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V', EVEX.vvvv.[00137] Write mask field 870 (EVEX byte 3, bits [2:0]-kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the disclosure, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).[00138] Real Opcode Field 930 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.[00139] MOD R/M Field 940 (Byte 5) includes MOD field 942, Reg field 944, and R/M field 946. As previously described, the MOD field's 942 content distinguishes between memory access and non-memory access operations.
The role of Reg field 944 can be summarized to two situations: encoding either the destination register operand or a source register operand, or be treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.[00140] Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 860 content is used for memory address generation. SIB.xxx 954 and SIB.bbb 956 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb. [00141] Displacement field 862A (Bytes 7-10) - when MOD field 942 contains 10, bytes 7-10 are the displacement field 862A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.[00142] Displacement factor field 862B (Byte 7) - when MOD field 942 contains 01, byte 7 is the displacement factor field 862B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 862B is a reinterpretation of disp8; when using displacement factor field 862B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range).
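The disp8*N reinterpretation described above can be sketched numerically: the stored signed byte is a factor that the hardware scales by the memory-operand size N. The function name below is hypothetical.

```python
# Illustrative decode of the disp8*N compressed displacement: sign-extend
# the stored 8-bit factor, then multiply by the memory-operand size N to
# obtain the actual byte displacement.

def decode_disp8n(disp8: int, n: int) -> int:
    factor = disp8 - 256 if disp8 >= 128 else disp8   # sign-extend 8 bits
    return factor * n

# With a 64-byte operand (N = 64) one signed byte spans -8192..+8128,
# far beyond the -128..+127 reach of a plain disp8.
assert decode_disp8n(0x01, 64) == 64
assert decode_disp8n(0x7F, 64) == 8128
assert decode_disp8n(0xFF, 64) == -64
```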
Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 862B substitutes the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 862B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field 872 operates as previously described.

Full Opcode Field

[00143] Figure 9B is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the full opcode field 874 according to one embodiment of the disclosure. Specifically, the full opcode field 874 includes the format field 840, the base operation field 842, and the data element width (W) field 864. The base operation field 842 includes the prefix encoding field 925, the opcode map field 915, and the real opcode field 930.

Register Index Field

[00144] Figure 9C is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the register index field 844 according to one embodiment of the disclosure.
Specifically, the register index field 844 includes the REX field 905, the REX' field 910, the MODR/M.reg field 944, the MODR/M.r/m field 946, the VVVV field 920, xxx field 954, and the bbb field 956.

Augmentation Operation Field

[00145] Figure 9D is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the augmentation operation field 850 according to one embodiment of the disclosure. When the class (U) field 868 contains 0, it signifies EVEX.U0 (class A 868A); when it contains 1, it signifies EVEX.U1 (class B 868B). When U=0 and the MOD field 942 contains 11 (signifying a no memory access operation), the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 852A. When the rs field 852A contains a 1 (round 852A.1), the beta field 854 (EVEX byte 3, bits [6:4]- SSS) is interpreted as the round control field 854A. The round control field 854A includes a one bit SAE field 856 and a two bit round operation field 858. When the rs field 852A contains a 0 (data transform 852A.2), the beta field 854 (EVEX byte 3, bits [6:4]- SSS) is interpreted as a three bit data transform field 854B. When U=0 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 852B and the beta field 854 (EVEX byte 3, bits [6:4]- SSS) is interpreted as a three bit data manipulation field 854C.[00146] When U=1, the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 852C. When U=1 and the MOD field 942 contains 11 (signifying a no memory access operation), part of the beta field 854 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 857A; when it contains a 1 (round 857A.1) the rest of the beta field 854 (EVEX byte 3, bit [6-5]- S2-1) is interpreted as the round operation field 859A, while when the RL field 857A contains a 0 (VSIZE 857A.2) the rest of the beta field 854 (EVEX byte 3, bit [6-5]- S2-1) is interpreted as the vector length field 859B (EVEX byte 3, bit [6-5]- L1-0). When U=1 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the beta field 854 (EVEX byte 3, bits [6:4]- SSS) is interpreted as the vector length field 859B (EVEX byte 3, bit [6-5]- L1-0) and the broadcast field 857B (EVEX byte 3, bit [4]- B).

Exemplary Register Architecture

[00147] Figure 10 is a block diagram of a register architecture 1000 according to one embodiment of the disclosure. In the embodiment illustrated, there are 32 vector registers 1010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 900 operates on these overlaid register files as illustrated in the below tables.[00148] In other words, the vector length field 859B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 859B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 900 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.[00149] Write mask registers 1015 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size.
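The zmm/ymm/xmm overlay described above (ymm as the low 256 bits of a zmm register, xmm as the low 128 bits) can be modeled with a short sketch. The class and attribute names are hypothetical; a real register file is of course not implemented this way.

```python
# Illustrative model of the register overlay: ymm is the low 256 bits of
# zmm, and xmm is the low 128 bits. A write to the low bits is visible
# through every alias.

class VectorReg:
    def __init__(self):
        self.zmm = 0  # 512-bit value held as a Python int

    @property
    def ymm(self):  # low 256 bits
        return self.zmm & ((1 << 256) - 1)

    @property
    def xmm(self):  # low 128 bits
        return self.zmm & ((1 << 128) - 1)

r = VectorReg()
r.zmm = (1 << 300) | 0xABCD
assert r.ymm == 0xABCD   # the bit at position 300 lies outside ymm
assert r.xmm == 0xABCD
```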
In an alternate embodiment, the write mask registers 1015 are 16 bits in size. As previously described, in one embodiment of the disclosure, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.[00150] General-purpose registers 1025 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.[00151] Scalar floating point stack register file (x87 stack) 1045, on which is aliased the MMX packed integer flat register file 1050 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.[00152] Alternative embodiments of the disclosure may use wider or narrower registers. Additionally, alternative embodiments of the disclosure may use more, less, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

[00153] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing.
Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput). Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

[00154] Figure 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the disclosure. Figure 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the disclosure. The solid lined boxes in Figures 11A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

[00155] In Figure 11A, a processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as a dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124.

[00156] Figure 11B shows processor core 1190 including a front end unit 1130 coupled to an execution engine unit 1150, and both are coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

[00157] The front end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, which is coupled to an instruction translation lookaside buffer (TLB) 1136, which is coupled to an instruction fetch unit 1138, which is coupled to a decode unit 1140. The decode unit 1140 (or decoder or decoder unit) may decode instructions (e.g., macroinstructions), and generate as an output one or more micro-operations, micro-code entry points, micro-instructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1140 may be implemented using various different mechanisms.
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1190 includes a microcode ROM or other medium that stores microcode for certain macro-instructions (e.g., in decode unit 1140 or otherwise within the front end unit 1130). The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

[00158] The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The scheduler unit(s) 1156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1156 is coupled to the physical register file(s) unit(s) 1158. Each of the physical register file(s) units 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1158 is overlapped by the retirement unit 1154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).
The retirement unit 1154 and the physical register file(s) unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 includes a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1156, physical register file(s) unit(s) 1158, and execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

[00159] The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174 coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170.
The instruction cache unit 1134 is further coupled to a level 2 (L2) cache unit 1176 in the memory unit 1170. The L2 cache unit 1176 is coupled to one or more other levels of cache and eventually to a main memory.

[00160] In certain embodiments, a prefetch circuit 1178 is included to prefetch data, for example, to predict access addresses and bring the data for those addresses into a cache or caches (e.g., from memory 1180).

[00161] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch 1138 performs the fetch and length decoding stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and renaming stage 1110; 4) the scheduler unit(s) 1156 performs the schedule stage 1112; 5) the physical register file(s) unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114; the execution cluster 1160 performs the execute stage 1116; 6) the memory unit 1170 and the physical register file(s) unit(s) 1158 perform the write back/memory write stage 1118; 7) various units may be involved in the exception handling stage 1122; and 8) the retirement unit 1154 and the physical register file(s) unit(s) 1158 perform the commit stage 1124.

[00162] The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
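The stage-to-unit mapping walked through in paragraph [00161] can be captured as a small ordered table. A sketch in Python; the tuple layout and helper name are illustrative, not part of the disclosure:

```python
# Ordered mapping of pipeline 1100 stages (reference numerals 1102-1124)
# to the units of core 1190 that perform them, per paragraph [00161].
PIPELINE_1100 = [
    ("fetch",                     1102, "instruction fetch 1138"),
    ("length decode",             1104, "instruction fetch 1138"),
    ("decode",                    1106, "decode unit 1140"),
    ("allocation",                1108, "rename/allocator unit 1152"),
    ("renaming",                  1110, "rename/allocator unit 1152"),
    ("schedule",                  1112, "scheduler unit(s) 1156"),
    ("register read/memory read", 1114, "physical register file(s) 1158 + memory unit 1170"),
    ("execute",                   1116, "execution cluster(s) 1160"),
    ("write back/memory write",   1118, "memory unit 1170 + physical register file(s) 1158"),
    ("exception handling",        1122, "various units"),
    ("commit",                    1124, "retirement unit 1154 + physical register file(s) 1158"),
]

def stage_order_is_monotonic(stages):
    """Check that stage reference numerals appear in program order."""
    nums = [n for _, n, _ in stages]
    return nums == sorted(nums)
```

Note that the rename/allocator unit, the memory unit, and the physical register file(s) unit(s) each appear at more than one stage, which is why the text describes them as shared across pipeline steps.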
In one embodiment, the core 1190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

[00163] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyper-Threading technology).

[00164] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1134/1174 and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

[00165] Figures 12A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

[00166] Figure 12A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1202 and with its local subset of the Level 2 (L2) cache 1204, according to embodiments of the disclosure. In one embodiment, an instruction decode unit 1200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1206 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1206, alternative embodiments of the disclosure may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

[00167] The local subset of the L2 cache 1204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1204. Data read by a processor core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip.
Each ring datapath is 1012-bits wide per direction.

[00168] Figure 12B is an expanded view of part of the processor core in Figure 12A according to embodiments of the disclosure. Figure 12B includes an L1 data cache 1206A part of the L1 cache 1204, as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1220, numeric conversion with numeric convert units 1222A-B, and replication with replication unit 1224 on the memory input. Write mask registers 1226 allow predicating resulting vector writes.

[00169] Figure 13 is a block diagram of a processor 1300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the disclosure.
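The two VPU input transforms named in paragraph [00168] - swizzling register inputs and replicating a memory input across lanes - can be modeled in a few lines of Python. This is a behavioral sketch only; the function names and 16-lane width are illustrative assumptions.

```python
# Behavioral models of two VPU input transforms on a 16-wide vector unit:
# a swizzle permutes the lanes of a register input, and replication
# broadcasts a single memory-sourced element across all lanes.
def swizzle(src, pattern):
    """Rearrange the lanes of src according to an index pattern."""
    return [src[i] for i in pattern]

def replicate(scalar, lanes=16):
    """Broadcast one element to every lane of the vector."""
    return [scalar] * lanes

v = list(range(16))
reversed_lanes = swizzle(v, list(range(15, -1, -1)))  # lane-reversal pattern
broadcast = replicate(7)                              # one element in all 16 lanes
```

Combined with the write mask registers 1226, such transformed inputs would then be written back under per-lane predication, as sketched earlier in this section.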
The solid lined boxes in Figure 13 illustrate a processor 1300 with a single core 1302A, a system agent 1310, and a set of one or more bus controller units 1316, while the optional addition of the dashed lined boxes illustrates an alternative processor 1300 with multiple cores 1302A-N, a set of one or more integrated memory controller unit(s) 1314 in the system agent unit 1310, and special purpose logic 1308.

[00170] Thus, different implementations of the processor 1300 may include: 1) a CPU with the special purpose logic 1308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1302A-N being a large number of general purpose in-order cores. Thus, the processor 1300 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1300 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

[00171] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1306, and external memory (not shown) coupled to the set of integrated memory controller units 1314.
The set of shared cache units 1306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1312 interconnects the integrated graphics logic 1308, the set of shared cache units 1306, and the system agent unit 1310/integrated memory controller unit(s) 1314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1306 and cores 1302A-N.

[00172] In some embodiments, one or more of the cores 1302A-N are capable of multithreading. The system agent 1310 includes those components coordinating and operating cores 1302A-N. The system agent unit 1310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1302A-N and the integrated graphics logic 1308. The display unit is for driving one or more externally connected displays.

[00173] The cores 1302A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

[00174] Figures 14-17 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable.
In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

[00175] Referring now to Figure 14, shown is a block diagram of a system 1400 in accordance with one embodiment of the present disclosure. The system 1400 may include one or more processors 1410, 1415, which are coupled to a controller hub 1420. In one embodiment, the controller hub 1420 includes a graphics memory controller hub (GMCH) 1490 and an Input/Output Hub (IOH) 1450 (which may be on separate chips); the GMCH 1490 includes memory and graphics controllers to which are coupled memory 1440 and a coprocessor 1445; the IOH 1450 couples input/output (I/O) devices 1460 to the GMCH 1490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1440 and the coprocessor 1445 are coupled directly to the processor 1410, and the controller hub 1420 is in a single chip with the IOH 1450. Memory 1440 may include instruction length decode code 1440A, for example, to store code that when executed causes a processor to perform any method of this disclosure.

[00176] The optional nature of additional processors 1415 is denoted in Figure 14 with broken lines. Each processor 1410, 1415 may include one or more of the processing cores described herein and may be some version of the processor 1300.

[00177] The memory 1440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
For at least one embodiment, the controller hub 1420 communicates with the processor(s) 1410, 1415 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1495.

[00178] In one embodiment, the coprocessor 1445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1420 may include an integrated graphics accelerator.

[00179] There can be a variety of differences between the physical resources 1410, 1415 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[00180] In one embodiment, the processor 1410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1445. Accordingly, the processor 1410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1445. Coprocessor(s) 1445 accept and execute the received coprocessor instructions.

[00181] Referring now to Figure 15, shown is a block diagram of a first more specific exemplary system 1500 in accordance with an embodiment of the present disclosure. As shown in Figure 15, multiprocessor system 1500 is a point-to-point interconnect system, and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. Each of processors 1570 and 1580 may be some version of the processor 1300.
In one embodiment of the disclosure, processors 1570 and 1580 are respectively processors 1410 and 1415, while coprocessor 1538 is coprocessor 1445. In another embodiment, processors 1570 and 1580 are respectively processor 1410 and coprocessor 1445.

[00182] Processors 1570 and 1580 are shown including integrated memory controller (IMC) units 1572 and 1582, respectively. Processor 1570 also includes as part of its bus controller units point-to-point (P-P) interfaces 1576 and 1578; similarly, second processor 1580 includes P-P interfaces 1586 and 1588. Processors 1570, 1580 may exchange information via a point-to-point (P-P) interface 1550 using P-P interface circuits 1578, 1588. As shown in Figure 15, IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors.

[00183] Processors 1570, 1580 may each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, 1598. Chipset 1590 may optionally exchange information with the coprocessor 1538 via a high-performance interface 1539. In one embodiment, the coprocessor 1538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[00184] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[00185] Chipset 1590 may be coupled to a first bus 1516 via an interface 1596.
In one embodiment, first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

[00186] As shown in Figure 15, various I/O devices 1514 may be coupled to first bus 1516, along with a bus bridge 1518 which couples first bus 1516 to a second bus 1520. In one embodiment, one or more additional processor(s) 1515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1516. In one embodiment, second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1520 including, for example, a keyboard and/or mouse 1522, communication devices 1527 and a storage unit 1528 such as a disk drive or other mass storage device which may include instructions/code and data 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to the second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 15, a system may implement a multi-drop bus or other such architecture.

[00187] Referring now to Figure 16, shown is a block diagram of a second more specific exemplary system 1600 in accordance with an embodiment of the present disclosure. Like elements in Figures 15 and 16 bear like reference numerals, and certain aspects of Figure 15 have been omitted from Figure 16 in order to avoid obscuring other aspects of Figure 16.

[00188] Figure 16 illustrates that the processors 1570, 1580 may include integrated memory and I/O control logic (“CL”) 1572 and 1582, respectively. Thus, the CL 1572, 1582 include integrated memory controller units and include I/O control logic.
Figure 16 illustrates that not only are the memories 1532, 1534 coupled to the CL 1572, 1582, but also that I/O devices 1614 are also coupled to the control logic 1572, 1582. Legacy I/O devices 1615 are coupled to the chipset 1590.

[00189] Referring now to Figure 17, shown is a block diagram of a SoC 1700 in accordance with an embodiment of the present disclosure. Similar elements in Figure 13 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 17, an interconnect unit(s) 1702 is coupled to: an application processor 1710 which includes a set of one or more cores 1302A-N and shared cache unit(s) 1306; a system agent unit 1310; a bus controller unit(s) 1316; an integrated memory controller unit(s) 1314; a set of one or more coprocessors 1720 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1720 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

[00190] Embodiments (e.g., of the mechanisms) disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[00191] Program code, such as code 1530 illustrated in Figure 15, may be applied to input instructions to perform the functions described herein and generate output information.
The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[00192] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[00193] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

[00194] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[00195] Accordingly, embodiments of the disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

[00196] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core.
The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

[00197] Figure 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 18 shows that a program in a high level language 1802 may be compiled using an x86 compiler 1804 to generate x86 binary code 1806 that may be natively executed by a processor with at least one x86 instruction set core 1816. The processor with at least one x86 instruction set core 1816 represents any processor that can perform substantially the same functions as an Intel® processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel® x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel® processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel® processor with at least one x86 instruction set core. The x86 compiler 1804 represents a compiler that is operable to generate x86 binary code 1806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1816.
Similarly, Figure 18 shows the program in the high level language 1802 may be compiled using an alternative instruction set compiler 1808 to generate alternative instruction set binary code 1810 that may be natively executed by a processor without at least one x86 instruction set core 1814 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1812 is used to convert the x86 binary code 1806 into code that may be natively executed by the processor without an x86 instruction set core 1814. This converted code is not likely to be the same as the alternative instruction set binary code 1810 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1806.
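The one-to-many expansion performed by an instruction converter such as 1812 can be sketched with a toy translator. Both instruction sets below are invented for illustration and do not correspond to real x86 or alternative-ISA encodings:

```python
# Toy sketch of binary translation: each "source" instruction may expand
# into several "target" instructions, as when a memory-operand form is
# rewritten for a load/store architecture. Mnemonics are hypothetical.
def convert(src_ins):
    """Translate one source-ISA instruction into a list of target-ISA
    instructions; instructions with no special handling pass through."""
    op = src_ins[0]
    if op == "ADDMEM":  # add [addr], reg  ->  load / add / store sequence
        _, addr, reg = src_ins
        return [("LOAD", "t0", addr),
                ("ADD", "t0", "t0", reg),
                ("STORE", addr, "t0")]
    return [src_ins]

translated = convert(("ADDMEM", "0x1000", "r1"))  # expands to 3 instructions
```

As the text notes, such converted code need not match what a native compiler for the target instruction set would emit; it only has to accomplish the same general operation using target-set instructions.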
Integrated circuit (IC) structures in memory devices are described. In an example, an IC structure includes a memory cell including a bit line (BL) extending along a first direction and a channel extending over the BL along a second direction diagonal to the BL. In an example, a word line (WL) extends in a third direction perpendicular to the first direction of the BL and intersects the channel to control current in the channel along a gate controlled channel length. In some examples, a channel is electrically coupled to a storage capacitor on a first side via a storage node contact (SNC) and to a bit line contact (BLC) on an underside or backside of the channel on a second side via the BL.
1. An integrated circuit (IC) structure in a memory device, comprising:

a bit line (BL) extending in a first direction;

a channel extending in a second direction diagonal to the BL; and

a word line (WL) extending in a third direction perpendicular to the first direction of the BL and intersecting the channel to control the channel along a gate-controlled channel length, wherein the channel is electrically coupled on a first side to a storage capacitor via a storage node contact (SNC) and on a second side to the BL via a bit line contact (BLC) on an underside or backside of the channel.

2. The IC structure of claim 1, wherein the IC structure comprises a one transistor-one capacitor (1T-1C) memory cell of a DRAM memory array.

3. The IC structure of claim 2, wherein the BLC on the underside or backside is located along the gate-controlled channel length, relative to a gate oxide of the transistor on a front side of the channel.

4. The IC structure of claim 1, wherein the BL is included in a back-end layer of an interlayer dielectric (ILD) of a DRAM memory array.

5. The IC structure of claim 1, further comprising an etch stop (ES) layer below the channel layer.

6. The IC structure of claim 1, 2, 3, 4, or 5, wherein the channel comprises at least one of: amorphous silicon, polycrystalline silicon (poly-Si), polycrystalline germanium (poly-Ge), polycrystalline silicon germanium (poly-SiGe), gallium nitride (GaN), indium gallium arsenide (InGaAs), a transition metal dichalcogenide, or an oxide semiconductor.

7. A method of fabricating a memory array, comprising:

forming a bit line (BL) extending along a first direction;

depositing a channel layer in a region above the BL, the channel layer extending in a second direction diagonal to the first direction of the BL; and

forming a word line (WL) extending in a third direction perpendicular to the first direction of the BL and intersecting the channel layer to control current flow through the channel layer along a gate-controlled channel length, wherein the channel layer is electrically coupled on a first side to a storage capacitor via a storage node contact (SNC) over the BL and the WL, and on a second side to the BL via a bit line contact (BLC) on an underside or backside of the channel layer.

8. The method of claim 7, wherein forming the channel layer comprises depositing, over the substrate over the BL, at least one of: amorphous silicon, polycrystalline silicon (poly-Si), polycrystalline germanium (poly-Ge), polycrystalline silicon germanium (poly-SiGe), gallium nitride (GaN), indium gallium arsenide (InGaAs), a transition metal dichalcogenide, or an oxide semiconductor.

9. The method of claim 8, further comprising depositing an etch stop (ES) layer over the substrate prior to depositing the channel layer.

10. The method of claim 7, wherein the channel layer changes direction at one end of the channel layer to extend toward the storage capacitor in a vertical direction.

11. The method of claim 9, wherein the ES layer comprises one or more of silicon nitride (SiN), silicon (Si), silicon carbide (SiC), silicon oxynitride (SiON), carbon-doped oxide (CDO), aluminum oxide (Al2O3), hafnium oxide (HfO2), and zirconium oxide (ZrO2).

12. The method of claim 7, 8, 9, 10, or 11, wherein forming the BL comprises forming the BL in a back-end layer of a DRAM memory array.

13. A computing device, comprising:

a board; and

a component coupled to the board, the component including an integrated circuit (IC) structure comprising:

a bit line (BL) extending along a first direction;

a channel extending above the BL in a second direction diagonal to the BL; and

a word line (WL) extending in a third direction perpendicular to the first direction of the BL and intersecting the channel to control the channel along a gate-controlled channel length, wherein the channel is electrically coupled on a first side to a storage capacitor via a storage node contact (SNC) over the BL and the WL, and the channel is further coupled on a second side to the BL via a bit line contact (BLC) on an underside or backside of the channel.

14. The computing device of claim 13, wherein the IC structure comprises a 1T-1C memory cell of a DRAM memory array.

15. The computing device of claim 13, further comprising the storage capacitor, and wherein the channel redirects at one end of the channel to extend in a vertical direction toward the storage capacitor.

16. The computing device of claim 13, further comprising a memory coupled to the board.

17. The computing device of claim 13, further comprising a communication chip coupled to the board.

18. The computing device of claim 13, wherein the component is a dual in-line memory module (DIMM).

19. The computing device of claim 13, wherein the component is a packaged integrated circuit die.

20. The computing device of claim 13, 14, 15, 16, 17, 18, or 19, wherein the component comprises dynamic random access memory (DRAM).
Thin film transistors with backside channel contacts for high density memory

TECHNICAL FIELD

Embodiments of the present disclosure are in the field of integrated circuit structures, and in particular thin film transistors with backside channel contacts in memory cells.

BACKGROUND

Scaling of features in integrated circuits has been a driving force behind the growing semiconductor industry over the past few decades. Scaling to smaller and smaller features increases the density of functional units on the limited die area of a semiconductor chip. For example, shrinking transistor size allows an increased number of memory or logic devices to be incorporated on a chip, enabling the manufacture of products with increased capacity. The drive for ever-larger capacity, however, is not without problems, and the need to optimize the performance of each device becomes increasingly important. For example, in conventional fabrication of dynamic random access memory (DRAM), various challenges associated with increased density can arise, such as space constraints due to capacitor reliability and interference between cells.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a cross-sectional view of an integrated circuit (IC) structure having a memory cell architecture with bit line contacts (BLCs) on the underside or backside of a channel, in accordance with an embodiment of the present disclosure.

FIGS. 2A and 2B illustrate a cross-sectional view and a corresponding top view, respectively, of the IC structure of FIG. 1, in accordance with embodiments of the present disclosure.

FIG. 3 illustrates a cross-sectional view of an IC structure having a memory cell architecture with bit line contacts (BLCs) on a first or upper side of a channel, in accordance with an example of the present disclosure.

FIG. 4 is a flowchart associated with the embodiment of FIGS.
1, 2A, and 2B, in accordance with an embodiment of the present disclosure.

FIG. 5 is a cross-sectional side view of an integrated circuit (IC) device assembly that may include one or more thin-film transistors having a bit line contact (BLC) on the underside or backside of the channel, in accordance with one or more embodiments disclosed herein.

FIG. 6 illustrates a computing device in accordance with one implementation of embodiments of the present disclosure.

DETAILED DESCRIPTION

Integrated circuit (IC) structures are described having bit line contacts (BLCs) on the underside or backside of a channel. In an embodiment, the channel is located near or above a bit line (BL) in a one transistor-one capacitor (1T-1C) memory cell of a memory device. In some embodiments, the BL is formed in a back-end interlayer dielectric (ILD) stack of the memory device. In the following description, numerous specific details are set forth, such as specific materials and tooling, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features such as single or dual damascene processing are not described in detail so as not to unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be understood that the various embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale. In some instances, various operations are described as multiple discrete operations in turn, in a manner that is most helpful in understanding the present disclosure; however, the order of description should not be construed to imply that these operations are necessarily order dependent.
In particular, the operations need not be performed in the order of presentation.

In the following description, certain terminology may also be used for reference purposes only and is therefore not intended to be limiting. For example, terms such as "under" (e.g., "underside" or "backside"), "upper", "lower", "over", "below", "bottom", and "top" refer to directions in the drawings to which reference is made. Terms such as "front", "back", "rear", and "side" describe the orientation and/or location of portions of a component within a consistent but arbitrary frame of reference, which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication, in which the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate or layer. FEOL generally covers everything up to, but not including, the deposition of metal interconnect layers. Following the FEOL operations, the result is typically a wafer with isolated transistors (e.g., without any wires).

Embodiments described herein may be directed to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication, in which the individual devices (e.g., transistors, capacitors, resistors, etc.) are interconnected with wiring (e.g., one or more metallization layers) on the wafer. BEOL includes contacts for chip-to-package connections, insulating layers (dielectrics), metal levels, and bonding sites. In the BEOL portion of the fabrication stage, contacts (pads), interconnect wires, vias, and dielectric structures are formed.
For modern IC processes, more than 10 metal layers may be added in the BEOL.

The embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

Advantages of implementing the embodiments described herein may include the ability to achieve greater memory cell density while maintaining transistor (e.g., thin film transistor (TFT)) performance. In an embodiment, a bit line contact (BLC) on the underside or backside of the channel couples the channel of a TFT in a memory array to a bit line (BL) located below the channel. In an embodiment, the memory device includes a one transistor-one capacitor (1T-1C) memory device, such as a DRAM.

In an embodiment, locating the BL below the channel, in the substrate or in a higher back-end layer of the memory array, allows the BL and the storage node contact (SNC) to sit at different levels or heights in the substrate or back-end ILD, so that the space around the BL and SNC is less constrained. In an embodiment, a word line (WL) pitch to bit line (BL) pitch ratio may be selected that allows the gate-controlled channel length of a thin film transistor in a memory cell (e.g., L1, discussed below) to be longer than in structures of similar memory cell area.

Referring now to FIG. 1, a cross-sectional view is shown of an integrated circuit (IC) structure 100 having a memory cell architecture with bit line contacts (BLCs) on the underside or backside of a channel, in accordance with an embodiment of the present disclosure.
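As a back-of-the-envelope illustration of the geometry argument above: a channel routed diagonally across a cell footprint is longer than one routed straight along a single pitch. The sketch below assumes (our assumption, not stated in this disclosure) that the gate-controlled channel spans one WL pitch by one BL pitch, and uses hypothetical pitch values chosen so that the WL/BL pitch ratio is roughly 0.87.

```python
import math

def diagonal_channel_length(wl_pitch_nm: float, bl_pitch_nm: float) -> float:
    """Length of a channel routed corner-to-corner across one cell
    footprint of wl_pitch_nm x bl_pitch_nm. Illustrative assumption:
    the gate-controlled channel spans the full cell diagonal."""
    return math.hypot(wl_pitch_nm, bl_pitch_nm)

# Hypothetical pitch values, for illustration only.
wl_pitch, bl_pitch = 40.0, 46.0   # WL/BL pitch ratio ~0.87
l_diag = diagonal_channel_length(wl_pitch, bl_pitch)
print(f"straight: {bl_pitch:.1f} nm, diagonal: {l_diag:.1f} nm "
      f"({l_diag / bl_pitch:.2f}x longer)")
```

The point of the sketch is only that the diagonal routing buys extra gate-controlled channel length for the same cell area, which is the stated advantage of this architecture.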
Note that the underside or backside of the channel may refer to the side of the channel opposite the frontside, the frontside including, e.g., the gate oxide of the channel. The cross-sectional view of FIG. 1 is taken along the channel ("gate length") of a transistor of a memory device (shown and discussed in more detail with respect to FIG. 2A). In FIG. 1, a bit line (BL) 103 is formed extending along a first direction. In an embodiment, a channel 106 extends in a second (diagonal) direction above the BL 103. In some embodiments, the channel layer 106 is deposited as a layer of channel material over a substrate, such as a silicon substrate. In other embodiments, the channel layer is deposited or formed in a higher back-end layer of the memory array (e.g., metal 4 (M4) or metal 5 (M5)).

In this embodiment, an etch stop (ES) layer 105 is located below the channel layer 106. Word lines (WL) 108A and 108B extend in a third direction (out of the page) perpendicular to the first direction of the BL 103. In an embodiment, WL 108A intersects the channel layer 106 to control the channel along a gate-controlled channel length (shown in more detail with respect to FIG. 2A). Note that gate oxides are shown at 145 and 147. Additionally, the channel layer 106 changes direction to extend vertically along portion 106A to electrically couple, on a first or upper side, to a storage capacitor 111A via a storage node contact (SNC) 109A. Similarly, WL 108B also intersects the channel layer 106 to control the channel along a gate-controlled channel length. As shown, the channel layer 106 redirects to extend vertically along portion 106B to electrically couple, on the first or upper side, to a storage capacitor 111B via a storage node contact (SNC) 109B.

As shown in FIG. 1, the channel layer 106 is electrically coupled, on a second or lower side, to a bit line contact (BLC) 104 that couples it to the BL 103 located below the channel layer 106.
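Electrically, the cell of FIG. 1 behaves as a storage capacitor that the WL-gated transistor connects to the BL through the backside BLC. A minimal charge-sharing model of a DRAM read follows; all capacitance and voltage values are hypothetical and chosen only for illustration, not taken from this disclosure.

```python
def bl_voltage_after_read(v_cell: float, v_bl_precharge: float,
                          c_cell_ff: float = 20.0,
                          c_bl_ff: float = 80.0) -> float:
    """Idealized charge sharing when the WL turns on the access
    transistor: the cell and BL capacitances equalize to one voltage.
    Capacitances (fF) and voltages (V) are illustrative values."""
    total_charge = c_cell_ff * v_cell + c_bl_ff * v_bl_precharge
    return total_charge / (c_cell_ff + c_bl_ff)

# A stored '1' (1.0 V) pulls a 0.5 V precharged BL upward;
# a stored '0' (0.0 V) pulls it downward. A sense amplifier
# resolves the sign of this small deviation.
print(bl_voltage_after_read(1.0, 0.5))  # above 0.5 V
print(bl_voltage_after_read(0.0, 0.5))  # below 0.5 V
```

The small cell-to-BL capacitance ratio in the model is why BL parasitics, and hence the placement of the BL and its contact, matter for sensing margin in a dense array.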
Note that the first or front side of the channel layer 106 may be considered the side facing the storage capacitor 111A or 111B (or the side including the gate oxides 145 and 147). In some embodiments, the channel layer 106 includes a channel material deposited over a substrate such as a silicon substrate (e.g., crystalline silicon). In various embodiments, the memory architectures described herein are enabled by channel materials that can be deposited over, for example, crystalline silicon or another foundation of the IC structure.

In an embodiment, the channel material includes one or more of the following: amorphous silicon, polycrystalline silicon (poly-Si), polycrystalline germanium (poly-Ge), polycrystalline silicon germanium (poly-SiGe), gallium nitride (GaN), indium gallium arsenide (InGaAs), transition metal dichalcogenides such as tungsten disulfide (WS2), indium selenide (InSe), molybdenum disulfide (MoS2), or molybdenum diselenide (MoSe2), black phosphorus (phosphorene), and oxide semiconductors such as IGZO (indium gallium zinc oxide), indium oxide (In2O3), zinc oxide (ZnO), copper oxide (Cu2O), tin oxide (SnOx), and indium tungsten oxide (IWO).

FIGS. 2A and 2B illustrate a cross-sectional view and a corresponding top view, respectively, of the IC structure of FIG. 1, in accordance with embodiments of the present disclosure. FIG. 2A shows the IC structure 100 of FIG. 1 in greater detail. The cross-sectional view of FIG. 2A is taken along the cut indicated by dashed line 120 in FIG. 2B, along the channel length of the IC structure 100.

As shown in FIGS. 2A and 2B, the BL 103 extends along the first direction. The top view of FIG. 2B also includes a second BL 123 (not visible in FIG. 2A) parallel to the BL 103.
In an embodiment, BL 103 is a first BL and BL 123 is a second BL of a plurality of BLs included in a memory array, such as a DRAM memory array. In FIGS. 2A and 2B, the horizontal portion of the channel layer 106 extends in the second direction (diagonally) above the BL 103. FIGS. 2A and 2B both include arrows 135, which illustrate an exemplary WL pitch between WLs 108A and 108B. In FIG. 2B, the BL pitch between BLs 103 and 123 is indicated by arrow 218.

As previously described, WL 108A intersects the channel layer 106 to control the gate-controlled channel length (e.g., L1) of a first transistor 115. Similarly, in an embodiment, WL 108B intersects the channel layer 106 to control the gate-controlled channel length (e.g., L2) of a second transistor 116. Note that the first and second transistors 115 and 116 are shown and labeled only in the view of FIG. 2A, and that certain elements of the transistors 115 and 116 (e.g., gate electrodes, source and drain regions, etc.) are not shown or described in order not to obscure the figure.

In FIG. 2B, a plurality of channel layers (e.g., channel layers 106, 126, 136, and 146) extending diagonally to the BLs 103 and 123 are shown. Multiple capacitors are also shown in the top view of FIG. 2B, only a few of which are labeled (e.g., capacitors 111A and 111B) to avoid obscuring the figure.

As shown in FIG. 2B, WLs 108A, 108B, and 108C extend in the third direction, perpendicular to the first direction of the BLs (e.g., BLs 103 and 123). In an embodiment, WLs 108A, 108B, and 108C intersect corresponding portions of the channels of, e.g., channel layers 106, 126, 136, and 146 to control the corresponding channels along gate-controlled channel lengths. For example, WL 108A intersects the channel layer 106 to control the current flow of the transistor 115 (visible in the view of FIG.
2A) along a gate-controlled channel length (e.g., L1).

The channel layer 106 is electrically coupled, on the first or upper side, to the storage capacitors 111A and 111B via respective storage node contacts (SNCs) (e.g., SNCs 109A and 109B). The channel layer 106 is electrically coupled on the second side, via the bit line contact (BLC) 104, to the BL 103 on the underside or backside of the channel 106. Note that although additional SNCs and BLCs are shown coupled to the transistors including the channel layers (e.g., 126, 136, and 146), they are not labeled in order to avoid obscuring FIG. 2B.

Note that the BL and the SNC must be electrically isolated from each other, which requires additional space around the BL and SNC. For example, an interlayer dielectric (ILD) including oxide and insulator is formed around the metal and/or metal contacts of each of the BL and SNC. Therefore, when the BL and SNC are located at similar levels in an IC structure, it may be difficult to achieve a high density of memory cells. As previously discussed, locating the BL below the channel (relative to the gate/gate oxide) allows the BL and the storage node contact (SNC) to sit at different levels or heights, so that the space around the BL and SNC is less constrained. In embodiments, a word line (WL) pitch to bit line (BL) pitch ratio may be selected that allows the gate-controlled channel lengths (e.g., L1 and L2) of the thin film transistors (TFTs) in the memory cells to be longer than in conventional structures of similar memory cell area. In some embodiments, the WL/BL pitch ratio is 0.87, or another suitable ratio that allows longer gate-controlled channel lengths than conventional structures of the same memory cell area.

Referring now to FIG. 3, a cross-sectional view is shown of an IC structure having a memory cell architecture with bit line contacts (BLCs) on a first or upper side of a channel, in accordance with an example of the present disclosure. The cross-sectional view of FIG.
3 is taken along the channel ("gate length") of an embodiment of a portion 300 of a memory device. In contrast to FIG. 1 (and FIGS. 2A and 2B), in FIG. 3 a bit line (BL) 303 is formed over a channel layer 306. In this example, the channel layer 306 extends in the second (diagonal) direction and is located below the BL 303.

Word lines (WL) 308A and 308B extend in a third direction (out of the page) perpendicular to the first direction of the BL 303. For example, in an embodiment, WL 308A intersects the channel layer 306 to control the channel, or channel layer 306, along a gate-controlled channel length (e.g., L1) of a first transistor 315. As shown, the channel layer 306 changes direction to extend vertically along portion 306A to electrically couple, on a first or upper side, to a storage capacitor 311A via a storage node contact (SNC) 309A. Similarly, WL 308B intersects the channel layer 306 to control the channel, or channel layer 306, along a gate-controlled channel length (e.g., L2) of a second transistor 316. As shown, the channel layer 306 changes direction to extend vertically along portion 306B to electrically couple, on the first or upper side, to a storage capacitor 311B via a storage node contact (SNC) 309B.

In an embodiment, a bit line contact (BLC) 381 and the SNCs 309A and 309B are located over the channel layer 306, the BLC 381 coupling to the BL 303. Note that in the example of FIG. 3, the BL 303 and the SNCs 309A and 309B are located at the same height or level, and thus require more space than in the embodiment of FIGS. 1, 2A, and 2B, where the BL is located under the channel or channel layer.

Reference is now made to FIG. 4, a flowchart illustrating a method associated with forming the integrated circuit structure 100 of FIGS. 1, 2A, and 2B, in accordance with an embodiment of the present disclosure. At start block 401, method 400 includes forming a BL (e.g., the BL 103 of FIG. 1) extending along a first direction.
In embodiments, the BL may be formed of any suitable conductive material or combination of conductive materials, such as, but not limited to, tungsten, tantalum, copper, ruthenium, titanium nitride (TiN), or tantalum nitride (TaN), which may be capped with an isolation material or dielectric. In an embodiment, at the next block 403, the method 400 includes forming or depositing a channel layer over the BL. In an embodiment, the channel layer extends along a second direction diagonal to the first direction of the BL. For example, depositing the channel layer over the BL over a substrate may include depositing amorphous silicon or another channel material over a silicon substrate, as described above. In some embodiments, method 400 includes depositing an ES layer over the substrate prior to depositing the channel layer.

At block 405, the method 400 includes forming a WL extending in a third direction perpendicular to the first direction of the BL and intersecting the channel layer to control the current in the channel layer along a gate-controlled channel length. In an embodiment, the channel layer is electrically coupled, on a first side, to a storage capacitor via a storage node contact (SNC) over the BL and the WL, and, on a second side, to the BL via a bit line contact (BLC) on the underside or backside of the channel layer.

Note that implementations of the embodiments described in FIGS. 1-4 may be formed or carried out on a substrate, such as a semiconductor substrate. In one embodiment, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or silicon-on-insulator substructure.
In other embodiments, the semiconductor substrate may be formed using alternative materials, which may or may not be combined with silicon, including but not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group III-V or group IV materials. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the present invention.

A plurality of transistors, such as metal oxide semiconductor field-effect transistors (MOSFETs, or simply MOS transistors), may be fabricated on the substrate. In various embodiments of the invention, the MOS transistors may be planar transistors, nonplanar transistors, or a combination of both. Nonplanar transistors include FinFET transistors, such as double-gate transistors and tri-gate transistors, as well as wrap-around or all-around gate transistors, such as nanoribbon and nanowire transistors. Although the embodiments described herein may illustrate only planar transistors, it should be noted that the invention may also be carried out using nonplanar transistors.

Each MOS transistor includes a gate stack formed of at least two layers, a gate dielectric layer and a gate electrode layer. The gate dielectric layer may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2), and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc.
Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, when a high-k material is used, the gate dielectric layer may be subjected to an annealing process to improve its quality.

The gate electrode layer is formed on the gate dielectric layer and may consist of at least one P-type work function metal or N-type work function metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some embodiments, the gate electrode layer may consist of a stack of two or more metal layers, where one or more of the metal layers is a work function metal layer and at least one of the metal layers is a fill metal layer.

For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides such as ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function between about 4.5 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide.
An N-type metal layer will enable the formation of an NMOS gate electrode with a work function between about 3.9 eV and about 4.5 eV.

In some embodiments, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions substantially perpendicular to the top surface of the substrate. In another embodiment, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further embodiments of the invention, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

In some embodiments of the invention, a pair of sidewall spacers may be formed on opposing sides of the gate stack, bracketing the gate stack. The sidewall spacers may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, or silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In alternative embodiments, a plurality of spacer pairs may be used; for instance, two, three, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.
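The work-function windows quoted above (about 4.5 eV to 5.2 eV for a PMOS gate electrode, about 3.9 eV to 4.5 eV for an NMOS gate electrode) can be expressed as a simple classifier. Treating the approximate ("about") bounds as hard cutoffs, and assigning the shared 4.5 eV boundary to the PMOS window, are our simplifications for illustration.

```python
def gate_type_for_work_function(wf_ev: float) -> str:
    """Classify a gate-metal work function (eV) against the ranges
    given in the text. The 'about' qualifiers in the source are treated
    here as exact boundaries, with 4.5 eV assigned to the PMOS window."""
    if 3.9 <= wf_ev < 4.5:
        return "NMOS"
    if 4.5 <= wf_ev <= 5.2:
        return "PMOS"
    return "outside the stated windows"

print(gate_type_for_work_function(4.1))  # NMOS
print(gate_type_for_work_function(5.0))  # PMOS
```

This is only a bookkeeping aid for the two stated ranges; in practice a work-function metal is chosen for the target threshold voltage rather than by a binary cutoff.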
An annealing process that activates the dopants and causes them to diffuse further into the substrate typically follows the ion implantation process. In the latter process, the substrate may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material used to fabricate the source and drain regions. In some implementations, the source and drain regions may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In further embodiments, the source and drain regions may be formed using one or more alternative semiconductor materials, such as germanium or a group III-V material or alloy. And in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source and drain regions.

One or more interlayer dielectrics (ILDs) are deposited over the MOS transistors. The ILD layers may be formed using dielectric materials known for their applicability in integrated circuit structures, such as low-k dielectric materials. Examples of dielectric materials that may be used include, but are not limited to, silicon dioxide (SiO2), carbon-doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass.
The ILD layers may include pores or air gaps to further reduce their dielectric constant.

FIG. 5 is a cross-sectional side view of an integrated circuit (IC) device assembly that may include one or more thin-film transistors having a bit line contact (BLC) on the underside or backside of the channel, in accordance with one or more embodiments disclosed herein.

Referring to FIG. 5, an IC device assembly 500 includes components having one or more of the integrated circuit structures described herein. The IC device assembly 500 includes a number of components disposed on a circuit board 502 (which may be, e.g., a motherboard). The IC device assembly 500 includes components disposed on a first face 540 of the circuit board 502 and an opposing second face 542 of the circuit board 502. Generally, components may be disposed on one or both of the faces 540 and 542. In particular, any suitable ones of the components of the IC device assembly 500 may include a number of the TFT structures disclosed herein.

In some embodiments, the circuit board 502 may be a printed circuit board (PCB) including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 502. In other embodiments, the circuit board 502 may be a non-PCB substrate.

The IC device assembly 500 illustrated in FIG. 5 includes a package-on-interposer structure 536 coupled to the first face 540 of the circuit board 502 by coupling components 516. The coupling components 516 may electrically and mechanically couple the package-on-interposer structure 536 to the circuit board 502, and may include solder balls (as shown in FIG.
5), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.

The package-on-interposer structure 536 may include an IC package 520 coupled to an interposer 504 by coupling components 518. The coupling components 518 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 516. Although a single IC package 520 is shown in FIG. 5, multiple IC packages may be coupled to the interposer 504; indeed, additional interposers may be coupled to the interposer 504. The interposer 504 may provide an intervening substrate used to bridge the circuit board 502 and the IC package 520. The IC package 520 may be or include, for example, a memory die including an IC structure (e.g., the IC structure 100 of FIGS. 1 and 2), or any other suitable component. Generally, the interposer 504 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the interposer 504 may couple the IC package 520 (e.g., a die) to a ball grid array (BGA) of the coupling components 516 for coupling to the circuit board 502. In the embodiment illustrated in FIG. 5, the IC package 520 and the circuit board 502 are attached to opposing sides of the interposer 504. In other embodiments, the IC package 520 and the circuit board 502 may be attached to a same side of the interposer 504. In some embodiments, three or more components may be interconnected by way of the interposer 504.

The interposer 504 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some embodiments, the interposer 504 may be formed of alternate rigid or flexible materials, which may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer 504 may include metal interconnects 508 and vias 510, including but not limited to through-silicon vias (TSVs) 506.
The interposer 504 may also include embedded devices, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the interposer 504. The package-on-interposer structure 536 may take the form of any of the package-on-interposer structures known in the art.

The IC device assembly 500 may include an IC package 524 coupled to the first side 540 of the circuit board 502 by coupling components 522. Coupling components 522 may take the form of any of the embodiments discussed above with reference to coupling components 516, and the IC package 524 may take the form of any of the embodiments discussed above with reference to the IC package 520.

The IC device assembly 500 shown in FIG. 5 includes a package-on-package structure 534 coupled to the second side 542 of the circuit board 502 by coupling components 528. Package-on-package structure 534 may include an IC package 526 and an IC package 532 coupled together by coupling components 530 such that the IC package 526 is disposed between the circuit board 502 and the IC package 532. Coupling components 528 and 530 may take the form of any of the embodiments of the coupling components 516 discussed above, and the IC packages 526 and 532 may take the form of any of the embodiments of the IC package 520 discussed above. The package-on-package structure 534 may be configured in accordance with any of the package-on-package structures known in the art.

Embodiments disclosed herein may be used to manufacture many different types of integrated circuits and/or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, microcontrollers, and the like.
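The pitch-spreading role of the interposer described above can be made concrete with a small arithmetic sketch. The function name and the pitch values below are illustrative assumptions chosen for this note, not values from the disclosure: a row of fine-pitch die pads is fanned out to coarse-pitch board-side balls, and the routing layers of the interposer absorb the growing lateral offset.

```python
def fan_out_positions(n_pads, die_pitch_mm, bga_pitch_mm):
    """Map a row of fine-pitch die pads to coarse-pitch BGA balls.

    Pad i sits at i * die_pitch_mm; ball i sits at i * bga_pitch_mm.
    Returns the lateral escape distance the interposer routing must
    cover for each pad (illustrative model only).
    """
    die = [i * die_pitch_mm for i in range(n_pads)]
    bga = [i * bga_pitch_mm for i in range(n_pads)]
    # Each pad i is routed to ball i; the offset grows linearly with i.
    return [round(b - d, 3) for d, b in zip(die, bga)]

# Eight pads at 0.15 mm pitch fanned out to balls at 0.8 mm pitch:
print(fan_out_positions(8, 0.15, 0.8))
# -> [0.0, 0.65, 1.3, 1.95, 2.6, 3.25, 3.9, 4.55]
```

The linearly growing offsets are why an interposer needs one or more dedicated redistribution layers rather than straight vertical vias.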
In other embodiments, semiconductor memories may be fabricated. Additionally, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the art, for example, in computer systems (e.g., desktops, laptops, servers), cellular telephones, personal electronic devices, and the like. The integrated circuits may be coupled to a bus and to other components in such systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, and so forth. Each of the processor, memory, and chipset may potentially be fabricated using the approaches disclosed herein.

FIG. 6 illustrates a computing device 600 in accordance with one embodiment of the present disclosure. Computing device 600 houses a board 602. Board 602 may include a number of components, including but not limited to a processor 604 and at least one communication chip 606. Processor 604 is physically and electrically coupled to board 602. In some embodiments, at least one communication chip 606 is also physically and electrically coupled to board 602. In other embodiments, the communication chip 606 is part of the processor 604.

Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to board 602. These other components include, but are not limited to, volatile memory (e.g., DRAM as shown, and including IC structures 100 and 300 of FIGS.
1-3), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (e.g., a hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth).

The communication chip 606 enables wireless communications for the transfer of data to and from the computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 606 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols designated as 3G, 4G, 5G, and beyond. The computing device 600 may include a plurality of communication chips 606. For instance, a first communication chip 606 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 606 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 604 of the computing device 600 includes an integrated circuit die packaged within the processor 604.
The processor may be coupled to a memory device having a memory cell architecture with bit line contacts (BLCs) on the underside or backside of the channel in accordance with embodiments of the present disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 606 also includes an integrated circuit die packaged within the communication chip 606.

In various implementations, the computing device 600 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In other implementations, the computing device 600 may be any other electronic device that processes data.

The foregoing description of illustrated implementations of embodiments of the present disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific embodiments of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize.

These modifications can be made to the present disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit the disclosure to the specific embodiments disclosed in the specification and the claims.
Rather, the scope of the present disclosure is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Example 1 includes an integrated circuit (IC) structure in a memory device, comprising: a bit line (BL) extending along a first direction; a channel extending along a second direction that is diagonal relative to the BL; and a word line (WL) extending in a third direction perpendicular to the first direction and intersecting the channel to control the channel along a gate-controlled channel length, wherein the channel is electrically coupled on a first side to a storage capacitor via a storage node contact (SNC) and is electrically coupled on a second side to the BL via a bit line contact (BLC) on the underside or backside of the channel.

Example 2 includes the IC structure of Example 1, wherein the IC structure comprises a one-transistor, one-capacitor (1T-1C) memory cell of a DRAM memory array.

Example 3 includes the IC structure of Example 2, wherein the storage node contact (SNC) is over the BL and the WL in the DRAM memory array.

Example 4 includes the IC structure of Example 1, wherein the BL is included in a back-end interlayer dielectric (ILD) layer of a DRAM memory array.

Example 5 includes the IC structure of Example 1, further including an etch stop (ES) layer below the channel layer.

Example 6 includes the IC structure of any of Examples 1-5, wherein the channel comprises at least one of amorphous silicon, polycrystalline silicon (poly-Si), polycrystalline germanium (poly-Ge), polycrystalline silicon germanium (poly-SiGe), gallium nitride (GaN), indium gallium arsenide (InGaAs), a transition metal dichalcogenide, or an oxide semiconductor.

Example 7 includes a method of fabricating a memory array, comprising: forming a bit line (BL) extending along a first direction; depositing a channel layer in a region over the BL, the channel layer extending in a second direction that is diagonal to the first direction of the BL; and forming a word line (WL)
extending in a third direction perpendicular to the first direction of the BL and intersecting the channel layer to control current flow through the channel layer along a gate-controlled channel length, wherein the channel layer is electrically coupled on a first side to a storage capacitor via a storage node contact (SNC) over the BL and the WL, and is electrically coupled on a second side to the BL via a bit line contact (BLC) on the underside or backside of the channel layer.

Example 8 includes the method of Example 7, wherein forming the channel layer comprises depositing, over a substrate over the BL, at least one of: amorphous silicon, polycrystalline silicon (poly-Si), polycrystalline germanium (poly-Ge), polycrystalline silicon germanium (poly-SiGe), gallium nitride (GaN), indium gallium arsenide (InGaAs), a transition metal dichalcogenide, or an oxide semiconductor.

Example 9 includes the method of Example 8, further comprising depositing an etch stop (ES) layer over the substrate prior to depositing the channel layer.

Example 10 includes the method of Example 7, wherein the channel layer changes direction at one end of the channel layer to extend in a vertical direction toward the storage capacitor.

Example 11 includes the method of Example 7, wherein the ES layer comprises one or more of silicon nitride (SiN), silicon (Si), silicon carbide (SiC), silicon oxynitride (SiON), carbon-doped oxide (CDO), aluminum oxide (Al2O3), hafnium oxide (HfO2), or zirconium oxide (ZrO2).

Example 12 includes the method of any of Examples 7-11, wherein forming the BL comprises forming the BL in a back-end layer of the DRAM memory array.

Example 13 includes a computing device, comprising: a board; and a component coupled to the board, the component including an integrated circuit (IC) structure comprising: a bit line (BL) extending along a first direction; a channel extending in a second direction that is diagonal to the BL; and a word line (WL) extending in a third direction perpendicular to the first direction and intersecting the channel, wherein the
channel is electrically coupled to a storage capacitor on a first side via a storage node contact (SNC) over the BL and is electrically coupled to the BL on a second side via a bit line contact (BLC) on the underside or backside of the channel.

Example 14 includes the computing device of Example 13, wherein the IC structure comprises 1T-1C memory cells of a DRAM memory array.

Example 15 includes the computing device of Example 13, further comprising a storage capacitor, and wherein the channel changes direction at one end of the channel to extend in a vertical direction toward the storage capacitor.

Example 16 includes the computing device of Example 13, further comprising a memory coupled to the board.

Example 17 includes the computing device of Example 13, further comprising a communication chip coupled to the board.

Example 18 includes the computing device of Example 13, wherein the component is a dual in-line memory module (DIMM).

Example 19 includes the computing device of Example 13, wherein the component is a packaged integrated circuit die.

Example 20 includes the computing device of any of Examples 13-19, wherein the component includes dynamic random access memory (DRAM).
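The 1T-1C DRAM cell recited in Examples 2 and 14 can be summarized behaviorally. The toy model below is written for this note only; the class name, voltage levels, and leakage rate are illustrative assumptions, not values from the examples. It shows the defining property of a dynamic cell: the storage capacitor is accessed through a single word-line-gated transistor, and the stored charge leaks away unless periodically refreshed.

```python
class OneT1C:
    """Behavioral sketch of a 1T-1C DRAM cell: a storage capacitor
    accessed through a single word-line-gated transistor."""

    def __init__(self, leak_per_ms=0.02):
        self.v_cell = 0.0          # stored capacitor voltage (V)
        self.leak = leak_per_ms    # fractional charge loss per ms

    def write(self, word_line, bit_line_v):
        """With the word line active, the bit line charges the capacitor."""
        if word_line:
            self.v_cell = bit_line_v

    def wait(self, ms):
        """Charge leaks away over time; this is why refresh is needed."""
        self.v_cell *= (1.0 - self.leak) ** ms

    def read(self, word_line, v_ref=0.55):
        """A sense amplifier compares the cell level against a reference."""
        if not word_line:
            return None
        return self.v_cell > v_ref

cell = OneT1C()
cell.write(word_line=True, bit_line_v=1.1)
print(cell.read(word_line=True))   # -> True
cell.wait(ms=20)                   # roughly two-thirds of the charge remains
print(cell.read(word_line=True))   # -> True
cell.wait(ms=20)                   # without a refresh the level keeps decaying
print(cell.read(word_line=True))   # -> False: data lost
```

A real array would rewrite (refresh) every cell before its level crossed the sense threshold; the model simply shows the decay that makes that necessary.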
A static random access memory cell comprising: a first inverter including a first p-channel pullup transistor, and a first n-channel pulldown transistor in series with the first p-channel pullup transistor; a second inverter including a second p-channel pullup transistor, and a second n-channel pulldown transistor in series with the second p-channel pullup transistor, the first inverter being cross-coupled with the second inverter, the first and second pullup transistors sharing a common active area; a first access transistor having an active terminal connected to the first inverter; a second access transistor having an active terminal connected to the second inverter; and an isolator isolating the first pullup transistor from the second pullup transistor.
1. A method of manufacturing a static random access memory cell, the method comprising: providing a silicon substrate; defining first and second inverters on the substrate, the first and second inverters respectively each including a first conductivity type of transistor having a respective gate, drain and source, defining first and second inverters, the defining including forming an active area common to the drains of the first conductivity type transistors by doping a contiguous region in the substrate extending from the drain of one of the first conductivity type transistors to the drain of the other first conductivity type transistor and a field oxide formed in the surface, surrounding and isolating the contiguous region; and defining an isolation gate relative to the common active area, between the drains of the first conductivity type transistors, by forming polysilicon on the common active area.

2. The method of claim 1, wherein defining first and second inverters includes defining first and second inverters having p-channel FETs as the first conductivity type transistors, the method further comprising defining the first and second inverters to each include respective n-channel transistors coupled in series with the p-channel transistors, the p-channel transistors including respective sources and the n-channel transistors including respective sources.

3. The method of claim 2, further comprising coupling the sources of the p-channel transistors to a first voltage, coupling the sources of the n-channel transistors to a second voltage lower than the first voltage, and coupling the isolation gate to a voltage higher than the second voltage.

4. The method of claim 3, further comprising coupling the isolation gate to the first voltage.

5.
The method of claim 1, wherein defining the first and second inverters includes defining respective outputs for the first and second inverters, the method further comprising: defining a first access transistor having a first active terminal configured to be coupled to the output of the first inverter, having a second active terminal configured to be coupled to a first bit line, and having a gate configured to be coupled to a word line; and defining a second access transistor having a first active terminal configured to be coupled to the output of the second inverter, having a second active terminal configured to be coupled to a second bit line, and having a gate configured to be coupled to the word line.

6. The method of claim 1, wherein: defining the first and second inverters comprises forming a first and a second active area each isolated by a field oxide and including an n-channel transistor, each n-channel transistor including a source configured to be coupled to a first voltage; and forming a common active area comprises forming a third active area isolated by a field oxide and including two p-channel transistors, the two p-channel transistors including respective drains configured to be coupled to a second voltage higher than the first voltage and configured to be isolated from each other by coupling the isolation gate to a voltage higher than the first voltage.

7.
A method of manufacturing a static random access memory cell, the method comprising: providing a substrate having a planar surface; defining first and second inverters on the substrate, the first and second inverters respectively each including a first conductivity type of transistor having a respective gate, drain and source, defining first and second inverters comprising forming an active area common to the drains of the first conductivity type transistors by doping a contiguous region in the substrate extending from the drain of one of the first conductivity type transistors to the drain of the other first conductivity type transistor and a field oxide formed in the planar surface surrounding and isolating the contiguous region; and defining an isolation gate relative to the common active area, between the drains of the first conductivity type transistors.

8. The method of claim 7, wherein defining an isolation gate relative to the common active area comprises forming polysilicon on the common active area.

9. The method of claim 7, wherein defining the first and second inverters includes defining respective outputs for the first and second inverters, the method further comprising: defining a first access transistor having a first active terminal configured to be coupled to the output of the first inverter, having a second active terminal configured to be coupled to a first bit line, and having a gate configured to be coupled to a word line; and defining a second access transistor having a first active terminal configured to be coupled to the output of the second inverter, having a second active terminal configured to be coupled to a second bit line, and having a gate configured to be coupled to the word line.

10.
The method of claim 7, wherein: providing a substrate comprises providing a substrate having a planar surface; forming an active area comprises forming a field oxide in the planar surface surrounding the contiguous region; and defining the first and second inverters comprises: forming n-channel active areas each including an n-channel transistor; and isolating each of the n-channel active areas by growing a field oxide surrounding each of the n-channel active areas.

11. The method of claim 7, wherein defining the first and second inverters comprises: forming n-channel active areas each including an n-channel transistor; and isolating each of the n-channel active areas by growing a field oxide surrounding each of the n-channel active areas.

12. A method of manufacturing a static random access memory cell, the method comprising: providing a substrate having a planar surface; defining first and second inverters on the substrate, the first and second inverters including respective outputs for the first and second inverters, the first and second inverters respectively each including a first conductivity type of transistor having a respective gate, drain and source, defining first and second inverters comprising forming an active area common to the drains of the first conductivity type transistors by doping a contiguous region in the substrate extending from the drain of one of the first conductivity type transistors to the drain of the other first conductivity type transistor and a field oxide formed in the planar surface surrounding and isolating the contiguous region; defining an isolation gate relative to the common active area, between the drains of the first conductivity type transistors; defining a first active area isolated by a first field oxide, the first active area including a first access transistor having a first active terminal coupled to the output of the first inverter, having a second active terminal coupled to a first bit line, and having a gate coupled to
a word line; and defining a second active area isolated by a second field oxide, the second active area including a second access transistor having a first active terminal coupled to the output of the second inverter, having a second active terminal coupled to a second bit line, and having a gate coupled to the word line.

13. The method of claim 12, wherein defining an isolation gate relative to the common active area comprises forming polysilicon on the common active area.

14. A method of manufacturing a static random access memory cell, comprising: providing a silicon substrate having a surface; defining first and second inverters on the substrate, the first and second inverters respectively including p-channel transistors having respective gates, drains and sources, defining first and second inverters, the defining including forming an active area common to the drains of the p-channel transistors by doping a contiguous region in the substrate extending from the drain of one p-channel transistor to the drain of the other p-channel transistor and forming a field oxide in the surface, surrounding and isolating the contiguous region; and defining an isolation gate relative to the common active area, between the drains of the p-channel transistors.

15. The method of claim 14, wherein defining the first and second inverters comprises forming n-channel transistors respectively in series with the p-channel transistors, sources of the p-channel transistors being configured to be coupled to a first voltage, the n-channel transistors including respective sources configured to be coupled to a second voltage lower than the first voltage, wherein the isolation gate is configured to be coupled to a voltage higher than the second voltage.

16.
The method of claim 14, wherein defining the first and second inverters comprises forming n-channel transistors respectively in series with the p-channel transistors, sources of the p-channel transistors being configured to be coupled to a first voltage, the n-channel transistors including respective sources configured to be coupled to a second voltage lower than the first voltage, wherein the isolation gate is configured to be coupled to the first voltage.

17. The method of claim 14, wherein defining the first and second inverters includes defining respective outputs for the first and second inverters, the method further comprising: defining a first access transistor having a first active terminal configured to be coupled to the output of the first inverter, having a second active terminal configured to be coupled to a first bit line and having a gate configured to be coupled to a word line; and defining a second access transistor having a first active terminal configured to be coupled to the output of the second inverter, having a second active terminal configured to be coupled to a second bit line and having a gate configured to be coupled to the word line.

18. A method of manufacturing a static random access memory cell, comprising: providing a silicon substrate having a planar surface; defining first and second inverters on the substrate, the first and second inverters respectively including p-channel transistors having respective gates, drains and sources, defining first and second inverters, the defining including forming an active area common to the drains of the p-channel transistors by doping a contiguous region in the substrate extending from the drain of one p-channel transistor to the drain of the other p-channel transistor and forming a field oxide in the planar surface, surrounding the contiguous region; and defining an isolation gate relative to the common active area, between the drains of the p-channel transistors.

19.
The method of claim 18, wherein defining the first and second inverters includes defining respective outputs for the first and second inverters; defining a first active area isolated by a field oxide, forming a first access transistor in the first active area, the first access transistor having a first active terminal configured to be coupled to the output of the first inverter, having a second active terminal configured to be coupled to a first bit line and having a gate configured to be coupled to a word line; and defining a second active area isolated by a field oxide, the second active area including a second access transistor having a first active terminal configured to be coupled to the output of the second inverter, having a second active terminal configured to be coupled to a second bit line and having a gate configured to be coupled to the word line.

20. The method of claim 18, further comprising: forming a plurality of active areas; and forming field oxide isolating each of the plurality of active areas, wherein each of the first and second inverters includes n-channel transistors each formed in a respective one of the plurality of field oxide isolated active areas.

21. The method of claim 18, wherein: defining the first and second inverters comprises forming a first and a second active area each isolated by a field oxide and including an n-channel transistor, each n-channel transistor including a source configured to be coupled to a first voltage, and wherein the respective drains of the p-channel transistors are, in operation, coupled to a second voltage higher than the first voltage and are, in operation, isolated from each other by coupling the isolation gate to a voltage higher than the first voltage.

22.
A method of manufacturing a static random access memory cell, the method comprising: providing a silicon substrate having a planar surface; defining first and second inverters on the substrate by defining respective first and second field effect transistors having respective gates, drains and sources, the first and second field effect transistors including an active area common to the drains of the first and second field effect transistors and comprising a doped contiguous region in the substrate extending from the drain of one field effect transistor to the drain of the other field effect transistor and a field oxide formed in the planar surface surrounding the contiguous region; and defining an isolation gate relative to the common active area, between the drains of the first and second field effect transistors, by forming polysilicon on the common active area.

23. The method of claim 22, wherein defining respective first and second field effect transistors includes forming p-channel transistors.

24.
A method of manufacturing a static random access memory cell, the method comprising: providing a silicon substrate having a planar surface; defining first and second inverters on the substrate, the first and second inverters respectively including a first conductivity type of transistor having a gate, drain and source, defining first and second inverters, the defining including forming an active area common to the drains of the first conductivity type transistors by doping a contiguous region in the substrate extending from the drain of one of the first conductivity type transistors to the drain of the other first conductivity type transistor and forming a field oxide in the planar surface surrounding the contiguous region, wherein defining the first and second inverters further includes forming n-channel active areas each including an n-channel transistor and isolating each of the n-channel active areas by growing a field oxide surrounding each of the n-channel active areas; and defining an isolation gate relative to the common active area, between the drains of the first conductivity type transistors, by forming polysilicon on the common active area.

25.
A method of manufacturing a static random access memory cell, comprising: providing a silicon substrate having a planar surface; defining first and second inverters on the substrate, the first and second inverters respectively including p-channel transistors having respective gates, drains and sources, defining first and second inverters, the defining including forming an active area common to the drains of the p-channel transistors by doping a contiguous region in the substrate extending from the drain of one p-channel transistor to the drain of the other p-channel transistor, and forming a field oxide surrounding the contiguous region; and defining an isolation gate relative to the common active area, between the drains of the p-channel transistors, by forming polysilicon on the common active area; wherein defining the first and second inverters comprises: forming first and second active areas; forming field oxide isolating the first and second active areas; forming a first n-channel transistor in the first active area; and forming a second n-channel transistor in the second active area.
CROSS REFERENCE TO RELATED APPLICATION

This is a Continuation of U.S. patent application Ser. No. 08/960,875, filed Oct. 30, 1997, entitled "Method of Isolating a SRAM Cell", now U.S. Pat. No. 6,103,579, which is a Continuation of U.S. patent application Ser. No. 08/819,546, filed Mar. 17, 1997, now abandoned, which is a Continuation of U.S. patent application Ser. No. 08/594,747, filed Jan. 31, 1996, now abandoned.

TECHNICAL FIELD

The invention relates to static memory devices. More particularly, the invention relates to methods of manufacturing static random access memory devices.

BACKGROUND OF THE INVENTION

One known type of static read/write memory cell is a high-density static random access memory (SRAM). A static memory cell is characterized by operation in one of two mutually-exclusive and self-maintaining operating states. Each operating state defines one of the two possible binary bit values, zero or one. A static memory cell typically has an output which reflects the operating state of the memory cell. Such an output produces a "high" voltage to indicate a "set" operating state. The memory cell output produces a "low" voltage to indicate a "reset" operating state. A low or reset output voltage usually represents a binary value of zero, while a high or set output voltage represents a binary value of one.

A static memory cell is said to be bistable because it has two stable or self-maintaining operating states, corresponding to two different output voltages. Without external stimuli, a static memory cell will operate continuously in a single one of its two operating states. It has internal feedback to maintain a stable output voltage, corresponding to the operating state of the memory cell, as long as the memory cell receives power.

The two possible output voltages produced by a static memory cell correspond generally to upper (Vcc-VT) and lower (Vss) circuit supply voltages.
Intermediate output voltages, between the upper (Vcc-VT) and lower (Vss) circuit supply voltages, generally do not occur except during brief periods of memory cell power-up and during transitions from one operating state to the other operating state.

The operation of a static memory cell is in contrast to other types of memory cells, such as dynamic cells, which do not have stable operating states. A dynamic memory cell can be programmed to store a voltage which represents one of two binary values, but requires periodic reprogramming or "refreshing" to maintain this voltage for more than very short time periods.

A dynamic memory cell has no internal feedback to maintain a stable output voltage. Without refreshing, the output of a dynamic memory cell will drift toward intermediate or indeterminate voltages, resulting in loss of data. Dynamic memory cells are used in spite of this limitation because of the significantly greater packaging densities which can be attained. For instance, a dynamic memory cell can be fabricated with a single MOSFET transistor, rather than the six transistors typically required in a static memory cell. Because of the significantly different architectural arrangements and functional requirements of static and dynamic memory cells and circuits, static memory design has developed along generally different paths than has the design of dynamic memories.

A static memory cell 10 is illustrated in FIG. 1. Static memory cell 10 generally comprises first and second inverters 12 and 14 which are cross-coupled to form a bistable flip-flop. Inverters 12 and 14 are formed by first and second n-channel pulldown (driver) transistors N1 and N2, and first and second p-channel load (pullup) transistors P1 and P2. Transistors N1 and N2 are typically metal oxide semiconductor field effect transistors (MOSFETs) formed in an underlying silicon semiconductor substrate.
P-channel transistors P1 and P2 can be thin film transistors formed above the driver transistors or bulk devices.
Driver transistors N1 and N2 have respective source regions 66 and 68 tied to a low reference or circuit supply voltage, labelled Vss, and typically referred to as "ground." Driver transistors N1 and N2 have respective drain regions 64 and 62, and respective gates. Load transistors P1 and P2 have respective source regions 78 and 80 tied to a high reference or circuit supply voltage, labelled Vcc, and have respective drain regions 70 and 72 tied to the drains 64 and 62, respectively, of the corresponding driver transistors N1 and N2. The gate of load transistor P1 is connected to the gate of driver transistor N1. The gate of load transistor P2 is connected to the gate of driver transistor N2.
Inverter 12 has an inverter output 20 formed by the drain of driver transistor N1. Similarly, inverter 14 has an inverter output 22 formed by the drain of driver transistor N2. Inverter 12 has an inverter input 76 formed by the gate of driver transistor N1. Inverter 14 has an inverter input 74 formed by the gate of driver transistor N2.
The inputs and outputs of inverters 12 and 14 are cross-coupled to form a flip-flop having a pair of complementary two-state outputs. Specifically, inverter output 20 is coupled to inverter input 74 via line 26, and inverter output 22 is coupled to inverter input 76 via line 24. In this configuration, inverter outputs 20 and 22 form the complementary two-state outputs of the flip-flop.
A memory flip-flop such as that described typically forms one memory element of an integrated array of static memory elements. A plurality of access transistors, such as access transistors 30 and 32, are used to selectively address and access individual memory elements within the array. Access transistor 30 has one active terminal 58 connected to cross-coupled inverter output 20.
Access transistor 32 has one active terminal 60 connected to cross-coupled inverter output 22. A pair of complementary column or bit lines 34 and 36 are connected to the remaining active terminals 56 and 54 of access transistors 30 and 32, respectively. A row or word line 38 is connected to the gates of access transistors 30 and 32. In the illustrated embodiment, access transistors 30 and 32 are n-channel transistors.
Reading static memory cell 10 requires activating row line 38 to connect inverter outputs 20 and 22 to column lines 34 and 36. Writing to static memory cell 10 requires complementary logic voltages on column lines 34 and 36 with row line 38 activated. This forces the outputs to the selected logic voltages, which will be maintained as long as power is supplied to the memory cell, or until the memory cell is reprogrammed.
In semiconductor processing, there is a continuing desire to make circuits denser, and to place components closer and closer together to reduce the size of circuits. However, certain processing steps employed in manufacturing static memory cells such as the static memory cell shown in FIG. 1 result in some undesirable variations between desired results and actual results in the manufacturing process. For example, there are precision limits inherent in photolithography. Another process that results in some undesirable variations between desired results and actual results is called LOCOS isolation (for LOCal Oxidation of Silicon). LOCOS isolation is a common technique for isolating devices.
Implementing a static memory cell on an integrated circuit involves connecting isolated circuit components or devices, such as inverters and access transistors, through specific electrical paths. When fabricating integrated circuits into a semiconductor substrate, devices within the substrate must be electrically isolated from other devices within the substrate.
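The read and write behavior of static memory cell 10 described above can be illustrated with a small logic-level sketch. This is purely illustrative: the class and method names are invented for this example, and transistor-level electrical behavior (Vcc, Vss, access-transistor conduction) is abstracted into Boolean state.

```python
# Hypothetical logic-level model of the six-transistor SRAM cell of FIG. 1.
# Outputs 20 and 22 are the complementary outputs of the cross-coupled
# inverters; word_line stands in for row line 38, bit34/bit36 for the
# complementary column (bit) lines 34 and 36.

class SramCell:
    def __init__(self):
        # Pick an arbitrary (but complementary) power-up state.
        self.out20 = False
        self.out22 = True

    def write(self, word_line, bit34, bit36):
        # Writing requires complementary voltages on the bit lines
        # with the word line activated; otherwise the cell is unchanged.
        if word_line and bit34 != bit36:
            self.out20, self.out22 = bit34, bit36

    def read(self, word_line):
        # Reading connects inverter outputs 20/22 to the column lines.
        if word_line:
            return self.out20, self.out22
        return None  # access transistors off: cell isolated from bit lines


cell = SramCell()
cell.write(word_line=True, bit34=True, bit36=False)
assert cell.read(word_line=True) == (True, False)
# With the word line inactive, the access transistors isolate the cell.
assert cell.read(word_line=False) is None
# Internal feedback holds the state as long as "power" (object state) persists.
assert cell.read(word_line=True) == (True, False)
```

The sketch captures the bistable property: the state persists until a subsequent write with the word line active and complementary bit-line values overwrites it.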
The devices are subsequently interconnected to create specific desired circuit configurations.
LOCOS isolation involves the formation of a semi-recessed oxide in the non-active (or field) areas of the bulk substrate. Such oxide is typically thermally grown by means of wet oxidation of the bulk silicon substrate at temperatures of around 1000[deg.] C. for two to six hours. The oxide grows where there is no masking material over other silicon areas on the substrate. A typical masking material used to cover areas where field oxide is not desired is nitride, such as Si3N4.
However, at the edges of a nitride mask, some of the oxidant also diffuses laterally immediately therebeneath. This causes oxide to grow under and lift the nitride edges. The shape of the oxide at the nitride edges is that of a slowly tapering oxide wedge that merges into a previously formed thin layer of pad oxide, and has been termed a "bird's beak". The bird's beak is generally a lateral extension of the field oxide into the active areas of devices.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention are described below with reference to the following accompanying drawings.
FIG. 1 is a circuit schematic of a static random access memory cell.
FIG. 2 is a broken away portion of a circuit layout diagram illustrating a novel layout for manufacturing a plurality of static random access memory cells including cells such as the cell shown in FIG. 1.
FIG. 3 illustrates the same layout shown in FIG. 2, except with information removed for increased clarity. For example, local interconnects that are shown in FIG. 2 are deleted in FIG. 3.
FIG. 4 illustrates pullback that results during manufacturing when using the layout shown in FIGS. 2 and 3.
FIG. 5 is a circuit schematic of an improved static memory cell embodying another novel layout.
FIG.
6 is a broken away portion of a circuit layout diagram illustrating a method of manufacturing a plurality of static random access memory cells including cells such as the cell shown in FIG. 5.
FIG. 7 is a circuit layout diagram for the layout shown in FIG. 6, with information removed for increased clarity. For example, local interconnects that are shown in FIG. 6 are deleted.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention provides a static random access memory cell comprising a first p-channel pullup transistor having a gate, drain, and source; a first n-channel pulldown transistor having a gate, drain, and source; a second p-channel pullup transistor having a gate, drain, and source; a second n-channel pulldown transistor having a gate, drain, and source; the source of the first pullup transistor being adapted to be connected to a first voltage; the source of the second pullup transistor being adapted to be connected to the first voltage; the drain of the first pulldown transistor being connected to the drain of the first pullup transistor; the drain of the second pulldown transistor being connected to the drain of the second pullup transistor; the source of the first pulldown transistor being adapted to be connected to a second voltage lower than the first voltage; the source of the second pulldown transistor being adapted to be connected to the second voltage; the gate of the first pullup transistor being connected to the gate of the first pulldown transistor; the gate of the second pullup transistor being connected to the gate of the second pulldown transistor; the first pullup transistor and the first pulldown transistor together defining a first inverter having an output defined by the drain of the first pulldown transistor and an input defined by the gate of the first pulldown transistor, the second pullup transistor and the second pulldown transistor together defining a second inverter having an output defined by the drain of the second
pulldown transistor and an input defined by the gate of the second pulldown transistor, the input of the first inverter being connected to the output of the second inverter, and the input of the second inverter being connected to the output of the first inverter; and a p-channel isolation transistor connected between the drain of the first pullup transistor and the drain of the second pullup transistor, and having a gate.
In one aspect of the invention, a static random access memory cell comprises a first inverter including a first p-channel pullup transistor, and a first n-channel pulldown transistor in series with the first p-channel pullup transistor; a second inverter including a second p-channel pullup transistor, and a second n-channel pulldown transistor in series with the second p-channel pullup transistor, the first inverter being cross-coupled with the second inverter, the first and second pullup transistors sharing a common active area; a first access transistor having an active terminal connected to the first inverter; a second access transistor having an active terminal connected to the second inverter; and an isolator isolating the first pullup transistor from the second pullup transistor.
In one aspect of the invention, a method of manufacturing a static random access memory cell including first and second cross-coupled inverters, each inverter including a p-channel transistor connected in series with an n-channel transistor, the p-channel transistors having sources that are connected to each other and that are adapted to be connected to a common first voltage, and the p-channel transistors having respective drains; the n-channel transistors having respective sources that are connected to each other and that are adapted to be connected to a common second voltage, lower than the first voltage, and the n-channel transistors having respective drains; the method comprising the following steps: providing a silicon substrate; defining the first and second
inverters relative to the substrate and including an active area common to drains of the p-channel transistors; and defining an isolation gate relative to the common active area, between the drains of the p-channel transistors.
In one aspect of the invention, a method of manufacturing a wafer including a plurality of static random access memory cells, each cell including first and second cross-coupled inverters, each inverter including a p-channel transistor connected in series with an n-channel transistor, the p-channel transistors having sources that are connected together and that are adapted to be connected to a common first voltage, and having respective drains; the n-channel transistors having sources that are connected together and that are adapted to be connected to a common second voltage, lower than the first voltage, and having respective drains; the method comprising the following steps: providing a silicon substrate; defining active areas relative to the substrate for the static random access memory cells, the active areas including an active area having the general shape of a stepladder, including two parallel, spaced apart sides, and a plurality of parallel, spaced apart portions extending between the sides, such that the sides define drains of a plurality of the p-channel transistors; and defining respective isolation gates relative to active areas, between the drains of the p-channel transistors within each static random access memory cell.
FIG. 2 is a circuit layout diagram illustrating a novel layout for manufacturing a plurality of static random access memory cells including cells such as the cell shown in FIG. 1. Circuits such as the one shown in FIG. 1 are manufactured using silicon processing techniques which are known in the art. There are many different ways of laying out any circuit on a bulk substrate.
In the layout of FIG.
2, active areas of the bulk substrate (e.g., the silicon wafer itself or doped areas beneath the wafer surface) are designated by reference numeral 42, polysilicon is designated by reference numeral 44, local interconnects (straps formed of a conductor such as Titanium Nitride) are designated by reference numeral 48, exhumed contacts are designated by reference numeral 46, Vcc metal is designated by reference numeral 50, and Vss metal is designated by reference numeral 52. The term "exhumed contact" refers to contacts which connect polysilicon to a local interconnect. This is in contrast to a buried contact.
Generally speaking, transistors are formed where polysilicon 44 intersects an active area 42. There is generally no physical distinction between the source and drain of any of the transistors shown; instead, the distinction is based on the direction of current flow when the static memory cell is connected to a power source.
In the embodiment shown in FIG. 2, the areas shown are formed in the following order: active areas, then polysilicon, then local interconnects, and then exhumed contacts.
Reference numerals are provided on FIG. 2 which correspond with reference numerals shown in FIG. 1 to illustrate how the circuit of FIG. 1 is laid out in one embodiment. For example, the source of transistor P1 is indicated by reference numeral 78 in both FIGS. 1 and 2; the drain of transistor P1 is indicated by reference numeral 70; the source of transistor P2 is indicated by reference numeral 80; the drain of transistor P2 is indicated by reference numeral 72; the source of transistor N1 is indicated by reference numeral 66; the drain of transistor N1 is indicated by reference numeral 64; the source of transistor N2 is indicated by reference numeral 68; and the drain of transistor N2 is indicated by reference numeral 62 in both FIGS. 1 and 2. Remaining white regions in these layout views (FIGS. 2-3, and 6-7) represent field oxide.
FIG.
3 illustrates the same layout shown in FIG. 2, except at an earlier processing step for increased clarity. For example, local interconnects, Vcc metal, and Vss metal shown in FIG. 2 are not included in FIG. 3.
As best seen in FIG. 3, the active areas 42 include areas in the general shape of a letter "H" (rotated 90[deg.]), as well as areas in the general shape of a dog bone (rotated 90[deg.]). The dog bone shaped areas are where the n-channel transistors N1 and N2 are formed, and the H-shaped regions are where the p-channel transistors P1 and P2 are formed. Accordingly, for an intrinsic p-type monocrystalline substrate, an elongated n-well is provided centrally; e.g., where the center of the H-shaped regions intersect with Vcc metal. Each SRAM cell is contained relative to two opposed legs of separate H's and two corners of separate but adjacent dog bones which are adjacent to those legs of the H's.
There is a problem relating to the spacing of the ends of H's relative to adjacent H's. During the manufacturing process, there is significant pullback of the H-shaped active areas that form the drains 70 and 72 of the pullup transistors P1 and P2. This is illustrated in FIG. 4, which represents two adjacent H-shaped active area regions 42 intersecting polysilicon 44. The adjacent H-shaped active area regions 42 are separated by field oxide in the layout shown in FIGS. 2 and 3.
The desired shape of the H-shaped regions 42 is indicated in FIG. 4 by outer dashed line 88. This is the shape of the active area as drawn on a reticle employed in defining the H-shaped regions 42. Inner dashed line 90 represents the shape of the area after photolithography (I-line 365 nm). Finally, the shape after aggressive LOCOS isolation (described above in the Background of the Invention) is illustrated with solid line 92. Encroachment takes place along two dimensions; i.e., along both the length and the width of the "H".
The most extreme pullback occurs at the ends of the legs of the "H" where the drains 70 and 72 of the p-channel transistors P1 and P2 are defined. Also, the polysilicon has an associated spacer (e.g., 800 angstroms wide) which reduces the size of the active area even further.
Because of these pullback effects, the lengths of the legs of the H-shaped regions must be exaggerated so that contact can be made between the drains of the p-channel transistors P1 and P2.
The transistors P1 and P2 are defined where polysilicon 44 traverses the active area 42. Active areas 42 which are not traversed by polysilicon 44 are doped to form the source and drain regions of the transistors. The drains 70 and 72 of the p-channel transistors need to be contacted with local interconnect 48 in the layout shown in FIG. 2. If the length of the H-shaped region is not sufficiently exaggerated to account for this pullback, the active areas defined by the legs of the H-shaped regions will disappear under the polysilicon 44, and it will not be possible to contact them with the local interconnect 48. On the other hand, exaggerating the size of the H-shaped regions results in a larger size for each static random access memory cell.
The layout shown in FIGS. 5-7 reduces this encroachment problem, and thus reduces the need to exaggerate the lengths of the legs of the H-shaped active areas, by interconnecting the ends of the active areas. Thus, instead of spaced apart H-shaped active areas, active areas in the general shape of a stepladder are formed (FIG. 7). Each ladder-shaped active area has two spaced apart parallel sides, and a plurality of parallel spaced apart areas ("rungs") extending transversely between the parallel sides.
This results in space saving, so that smaller static random access memory cells are produced.
Note, however, that the purpose of separating the H-shaped active areas in the first place was to provide electrical isolation between active area regions (e.g., to provide electrical isolation between the drains 70 and 72 of the p-channel transistors P1 and P2). The two p-channel transistors P1 and P2 share a common active area in the embodiment of FIGS. 5-7. More particularly, the drains 70 and 72 of the p-channel transistors P1 and P2 share a common active area in the embodiment of FIGS. 5-7.
The inventor of the present invention has accomplished the necessary isolation by providing an isolator which isolates the pullup transistor P1 from the pullup transistor P2. More particularly, the isolator comprises an isolation gate 84 defined relative to the common active area, between the drains 70 and 72 of the p-channel transistors P1 and P2. In the illustrated embodiment, polysilicon is employed to define the isolation gate 84. By causing polysilicon 44 to intersect the common active area, an isolation p-channel transistor 82 is defined (FIG. 5) between the drains 70 and 72. Similarly, an isolation p-channel transistor 83 is defined for an adjacent memory cell.
The isolation gate is adapted to be connected to a voltage higher than Vss. More particularly, the isolation gate is adapted to be connected to a voltage sufficient to turn off (tri-state) the isolation transistor, and thus isolate drain 70 from drain 72 (except for leakage current). In one embodiment, the isolation gate 84 is connected to the sources of the p-channel transistors P1 and P2. More particularly, in the illustrated embodiment, the isolation gate 84 is connected to the Vcc metal.
Other than the common active area shared by drains 70 and 72, and the isolation gate 84, the embodiment shown in FIGS. 5-7 is substantially identical to the embodiment shown in FIGS.
2-3, with like reference numerals indicating like components. The silicon processing steps employed in forming the embodiment shown in FIG. 6 are substantially identical to the silicon processing steps employed in manufacturing the embodiment shown in FIG. 2, except for the formation of the common active area (the ladder shaped active areas of FIG. 7 are formed at the same stage in the process, and in a similar manner, as the H-shaped active areas of FIG. 2). FIG. 5 also shows a parasitic transistor 40 formed because of an intersection of polysilicon with an active area, which is not shown in FIG. 1.
Thus, a layout for manufacturing static random access memory cells has been provided which results in reduced size of each cell. Each cell includes first and second cross-coupled inverters, each inverter including a first p-channel pullup transistor, and a first n-channel pulldown transistor in series with the first p-channel pullup transistor; the first and second pullup transistors sharing a common active area; and an isolator isolating the first pullup transistor from the second pullup transistor.
In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
A method and computer program for verifying a design of a circuit comprises selecting a portion of a model of the design having a plurality of inputs and outputs; providing a property for the design that defines a predetermined behavior of one or more of the outputs; determining whether a stimulus exists that, when applied to the inputs of the portion, can produce a behavior other than the predetermined behavior at the outputs of the portion; when the stimulus exists, determining whether the model of the design of the circuit can produce the stimulus at the inputs of the portion of the model of the circuit; and when the stimulus cannot be produced by the model of the design of the circuit at the inputs of the portion of the model of the circuit, preserving a description of the stimulus for analysis.
What is claimed is:
1. A method for verifying a design of a circuit, comprising:
selecting a first portion of a model of the design, wherein the first portion has a plurality of first inputs and outputs;
providing a property for the design, wherein the property defines a predetermined behavior of one or more of the first outputs;
determining whether a first stimulus exists that, when applied to the first inputs, can produce a behavior other than the predetermined behavior at the one or more first outputs;
when the first stimulus exists, determining whether the model can produce the first stimulus at the first inputs; and
when the first stimulus cannot be produced by the model at the first inputs, preserving a first description of the first stimulus for analysis.
2. The method of claim 1, further comprising:
outputting the description.
3. The method of claim 1, further comprising:
generating a counterexample describing states of the first portion and a further portion of the model for one or more cycles.
4. The method of claim 3, further comprising:
outputting the counterexample.
5. The method of claim 1, further comprising:
selecting a second portion of the model, wherein the second portion has a plurality of second inputs and outputs, and wherein at least one of the second outputs is provided to at least one of the first inputs;
determining whether a second stimulus exists that, when applied to the second inputs, can produce a behavior other than the predetermined behavior at the one or more outputs of the first portion;
when the second stimulus exists, determining whether the model can produce the second stimulus at the second inputs; and
when the second stimulus cannot be produced by the model at the second inputs of the second portion, preserving a second description of the second stimulus for analysis.
6. The method of claim 5, further comprising:
outputting the second description.
7.
The method of claim 5, further comprising:
generating a counterexample describing states of the first portion and the second portion for one or more cycles.
8. The method of claim 7, further comprising:
outputting the counterexample.
9. The method of claim 1, further comprising:
providing the model.
10. The method of claim 1, further comprising:
asserting that the first portion contains a fault when no stimulus exists that, when applied to the first inputs, can produce a behavior other than the predetermined behavior at the one or more outputs.
11. The method of claim 1, wherein:
the first portion is selected based on the property for the design.
12. A semiconductor verified by the method of claim 1.
13. A computer program stored on a tangible computer medium embodying instructions executable by a computer for verifying a design of a circuit, the computer program comprising:
selecting a first portion of a model of the design, wherein the first portion has a plurality of first inputs and outputs;
providing a property for the design, wherein the property defines a predetermined behavior of one or more of the first outputs;
determining whether a first stimulus exists that, when applied to the first inputs, can produce a behavior other than the predetermined behavior at the one or more first outputs;
when the first stimulus exists, determining whether the model can produce the first stimulus at the first inputs; and
when the first stimulus cannot be produced by the model at the first inputs, preserving a first description of the first stimulus for analysis.
14. The computer program of claim 13, further comprising:
generating a counterexample describing states of the first portion and a second portion of the model for one or more cycles.
15.
The computer program of claim 13, further comprising:
selecting a second portion of the model, wherein the second portion has a plurality of second inputs and outputs, and wherein at least one of the second outputs is provided to at least one of the first inputs;
determining whether a second stimulus exists that, when applied to the second inputs, can produce a behavior other than the predetermined behavior at the one or more first outputs;
when the second stimulus exists, determining whether the model can produce the second stimulus at the second inputs; and
when the second stimulus cannot be produced by the model at the second inputs, preserving a description of the second stimulus for analysis.
16. The computer program of claim 15, further comprising:
generating a counterexample describing states of the first portion and the second portion for one or more cycles.
17. The computer program of claim 13, further comprising:
providing the model.
18. The computer program of claim 13, further comprising:
asserting that the first portion contains a fault when no stimulus exists that, when applied to the first inputs, can produce a behavior other than the predetermined behavior at the one or more first outputs.
19. The computer program of claim 13, wherein:
the first portion is selected based on the property for the design.
20. A semiconductor verified by the computer program of claim 13.
21.
A method for verifying a design of a circuit, comprising:
selecting a portion of a model of the design, wherein the portion has a plurality of inputs and outputs;
providing a property for the design, wherein the property defines a predetermined behavior of one or more of the outputs;
determining whether a stimulus exists that, when applied to the inputs, can produce a behavior other than the predetermined behavior at the one or more outputs;
determining whether the model can produce the stimulus at the inputs based on said stimulus existence determination; and
preserving a description of the stimulus for analysis when the stimulus cannot be produced by the model.
BACKGROUND
The present invention relates generally to hardware verification for electronic circuit designs. More particularly, the present invention relates to using local reduction in model checking to identify faults in logically correct circuits.
Recent advances in the design of application specific integrated circuits (ASIC) and system-on-chip (SoC) circuits are producing circuit designs of rapidly increasing complexity. These designs are driving the search for techniques that are capable of verifying such complex designs.
One commonly-used verification technique is model checking, which employs exhaustive mathematical techniques to prove whether a property holds true for a given design. A model checker uses a model of the design to consider all possible input combinations, and covers all possible reachable states to verify a property of the design. This is possible due to efficient techniques such as symbolic model checking and Binary Decision Diagram (BDD) representation used in model checking tools that allow analysis of sets of states simultaneously, and only consider the logic in the cone of influence of the property the tool is verifying.
However, conventional model checking tools verify only the logical design of a circuit. A circuit that is logically correct can still fail due to problems with timing, crosstalk, and other electrical anomalies. Conventional model checking tools are unable to detect such failures.
SUMMARY
In general, in one aspect, the invention features a method and computer program for verifying a design of a circuit.
It comprises selecting a portion of a model of the design of the circuit, wherein the portion of the model of the circuit has a plurality of inputs and outputs; providing a property for the design, wherein the property defines a predetermined behavior of one or more of the outputs of the portion of the model of the design of the circuit; determining whether a stimulus exists that, when applied to the inputs of the portion of the model of the circuit, can produce a behavior other than the predetermined behavior at the one or more of the outputs of the portion of the model of the design of the circuit; when the stimulus exists, determining whether the model of the design of the circuit can produce the stimulus at the inputs of the portion of the model of the circuit; and when the stimulus cannot be produced by the model of the design of the circuit at the inputs of the portion of the model of the circuit, preserving a description of the stimulus for analysis.
Particular implementations can include one or more of the following features. Implementations comprise outputting the description of the stimulus. Implementations comprise generating a counterexample describing states of the portion and a further portion of the model of the design for one or more cycles. Implementations comprise outputting the counterexample.
Implementations comprise selecting a further portion of the model of the design of the circuit, wherein the further portion of the model of the circuit has a plurality of inputs and outputs, and wherein at least one of the outputs is provided to at least one of the inputs of the portion of the model of the circuit; determining whether a further stimulus exists that, when applied to the inputs of the further portion of the model of the circuit, can produce a behavior other than the predetermined behavior at the one or more of the outputs of the portion of the model of the design of the circuit; when the further stimulus exists, determining whether the model of the design of the circuit can produce the further stimulus at the inputs of the further portion of the model of the circuit; and when the further stimulus cannot be produced by the model of the design of the circuit at the inputs of the further portion of the model of the circuit, preserving a description of the further stimulus for analysis. Implementations comprise outputting the description of the further stimulus. Implementations comprise generating a further counterexample describing states of the portion and the further portion of the model of the design for one or more cycles. Implementations comprise outputting the further counterexample. Implementations comprise providing the model of the design of the circuit. Implementations comprise asserting that the portion of the model of the design of the circuit contains a fault when no stimulus exists that, when applied to the inputs of the portion of the model of the circuit, can produce a behavior other than the predetermined behavior at the one or more of the outputs of the portion of the model of the design of the circuit. The portion of the model of the design of the circuit is selected based on the property for the design.
Implementations comprise a semiconductor verified by the method or computer program.

The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.

DESCRIPTION OF DRAWINGS

FIG. 1 shows a conventional process for a model checker using localization reduction.

FIG. 2 shows a process for a model checker using localization reduction according to a preferred embodiment of the present invention.

FIG. 3 illustrates the relationship between a model and portions of the model selected by the model checker.

The leading digit(s) of each reference numeral used in this specification indicates the number of the drawing in which the reference numeral first appears.

DETAILED DESCRIPTION

Formal verification is the name for a variety of methods used for proving the correctness of electronic circuit designs. Given a formal description of a design, and a formal specification of its desired behavior, formal verification methods can be used to mathematically prove that the design complies with its specification. For example, formal verification can be used to prove that a cache control unit complies with specified cache coherency rules.

One family of formal verification tools is referred to as model checkers. A model checker is a software program that checks whether a model of the design satisfies a logical specification of the design. A model checker's output is generally "pass" or "fail," stating whether the design satisfies the specification (pass) or not (fail). When a model checker's output is "fail," it produces a "counterexample," which is a waveform that describes a fail. If no fails exist, the tool provides a "proof" that the design complies with its specification.

One problem with model checking is state explosion, which refers to the exponential increase in the number of states to be checked as the complexity of a design increases.
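To make the pass/fail/counterexample behavior concrete, a toy explicit-state model checker can be sketched in a few lines of Python. This is only a teaching sketch using breadth-first search over an abstract transition system; it is not the BDD-based symbolic engines the specification contemplates, and all names are illustrative.

```python
from collections import deque

def model_check(initial, transitions, property_holds):
    """Minimal explicit-state model checker (illustrative sketch).

    BFS over the reachable states of a transition system. Returns
    ("pass", None) if the property holds in every reachable state,
    otherwise ("fail", counterexample) where the counterexample is the
    path of states from an initial state to the violating state.
    """
    queue = deque((s, [s]) for s in initial)
    visited = set(initial)
    while queue:
        state, path = queue.popleft()
        if not property_holds(state):
            return "fail", path  # the counterexample "waveform"
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return "pass", None

# Example: a 3-bit counter; the property "the counter never reaches 5"
# fails, and the counterexample is the path of states leading to 5.
result, cex = model_check(
    initial={0},
    transitions=lambda s: {(s + 1) % 8},
    property_holds=lambda s: s != 5,
)
```

The exponential blow-up of `visited` as state variables are added is exactly the state-explosion problem described above.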
One technique for coping with state explosion is localization reduction (also referred to as local reduction), which is currently implemented in several commercially-available tools. In localization reduction, only a small portion of a design model is checked. If the result is a pass, then the check is completed in a fraction of the time. If the result is a fail, a counterexample is generated. However, because the inputs to the portion of the design checked are not constrained to logically possible values, the counterexample may not be valid. If the counterexample is not valid, the portion of the design model is refined to include more of the design model, and re-checked. In general, this localization reduction process can produce a pass or a valid counterexample without checking the entire design model, and in a fraction of the time needed to do so.

The time spent by conventional localization reduction model checking tools in generating invalid counterexamples is generally considered a waste. However, the inventors have found a use for these invalid counterexamples, as described in detail below. In particular, these invalid counterexamples, which are currently discarded because they could not possibly represent a valid logical operation of the circuit being checked, may represent a fault caused by an electrical problem in an electronic circuit.

Of course, while described in terms of electronic circuits, the techniques disclosed herein are equally applicable to other sorts of logic circuits, such as optical logic circuits and the like.

FIG. 1 shows a conventional process 100 for a model checker using localization reduction. The model checker receives a model of the design of a circuit and a property for verification (step 102). The model checker localizes the model of the design based on the property (step 104). That is, the model checker selects a portion of the model.

The model checker then verifies the property using the selected portion of the model (step 106).
If the result of the verification is "pass" (step 108), then verification process 100 is done (step 110). However, if the result of the verification is "fail" (step 108), then the model checker generates a counterexample (step 112).

The model checker then determines whether the counterexample is valid (step 114). If the counterexample is valid, then process 100 is done (step 116). However, if the counterexample is not valid, the model checker discards the counterexample (step 118).

The model checker then refines the localization (step 120), and resumes process 100 at the verification step (step 106). The model checker then verifies the refined localization (step 106). Process 100 repeats in this way until a valid counterexample is found, or the program runs out of memory or time (step 114).

FIG. 2 shows a process 200 for a model checker using localization reduction according to a preferred embodiment of the present invention. As mentioned above, the model checker can be obtained as commercially-available software, and can execute on a general-purpose or special-purpose computer.

The model checker receives a model of the design of a circuit and a property for verification (step 202). The model is a description of the design, and is preferably provided as a register-transfer-level (RTL) specification, although other descriptions can be used. The property is a description of an intended behavior of the design. For example, the property can specify that a buffer overflow can never occur. Properties are conventionally derived from the design specification for the circuit. The property is preferably provided in a language such as Property Specification Language (PSL), Sugar, or the like.

The model checker localizes the model of the design based on the property (step 204). That is, the model checker selects a portion of the model.
For example, if the property to be verified is that a bus in the circuit is not violated, the model checker could select only those circuits that provide outputs to the bus. Thus the property defines a predetermined behavior of one or more of the outputs of the portion of the model. FIG. 3 illustrates the relationship between a portion 302 selected by the model checker and a model 300.

The model checker then verifies the property using portion 302 of model 300 (step 206). That is, the model checker determines whether a stimulus exists that, when applied to the inputs of portion 302 of model 300, can produce a behavior other than the predetermined behavior defined by the property at the outputs of portion 302 of model 300. For example, the model checker generates a verification model representation based on model 300 and the property to be verified. The verification model representation can be states and transition relations represented in a BDD data structure, and the model checker can be a Symbolic Model Verifier (SMV) engine, although other techniques can be used.

If the result of the verification is "pass" (step 208), then verification process 200 is done (step 210). This means portion 302 of the circuit is sufficient for the circuit to comply with the specification; that is, no electrical fault at the boundary of portion 302 could have caused the fault being sought, and therefore any electrical fault must reside inside portion 302. This is very helpful in narrowing the search for the fault. In some embodiments, process 200 then asserts that a fault exists in portion 302.

However, if the result of the verification is "fail" (step 208), then the model checker generates a counterexample (step 212). The "fail" result indicates that a stimulus exists that, when applied to the inputs of portion 302 of model 300, can produce a behavior other than the predetermined behavior defined by the property at the outputs of portion 302 of model 300.
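The key point of step 206 is that the portion's inputs are left unconstrained, so every input combination is considered, including ones the full model could never produce. For a small combinational portion this check can be sketched as a brute-force enumeration in Python; the majority-gate example and all names below are hypothetical illustrations, not taken from the patent.

```python
from itertools import product

def find_violating_stimulus(portion, num_inputs, predetermined_behavior):
    """Determine whether a stimulus exists that drives the portion's
    output away from the predetermined behavior (step 206, sketched).

    The inputs are unconstrained, so every binary input combination is
    tried. Returns the first violating stimulus, or None on a "pass".
    """
    for stimulus in product((0, 1), repeat=num_inputs):
        if portion(stimulus) != predetermined_behavior(stimulus):
            return stimulus
    return None

# Hypothetical portion: a 3-input majority gate standing in for a
# circuit whose specified (predetermined) behavior is a 3-input AND.
portion = lambda s: int(sum(s) >= 2)    # actual behavior of the portion
spec = lambda s: s[0] & s[1] & s[2]     # predetermined behavior
stimulus = find_violating_stimulus(portion, 3, spec)
```

A symbolic engine reaches the same verdict without enumerating stimuli, which is what makes the technique scale; the enumeration above is only for intuition.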
The counterexample describes states of the portion 302 of model 300 for one or more cycles. For example, the counterexample can include a trace of the waveforms for each input, output, and internal signal for several cycles leading up to the fail state.

The model checker then determines whether the counterexample is valid (step 214). That is, the model checker determines whether model 300 can produce the stimulus at the inputs of portion 302 of model 300. If the counterexample is valid, then process 200 is done (step 216). This means a logical fault has been found rather than an electrical fault.

However, if the counterexample is not valid, meaning the stimulus cannot be produced by model 300 at the inputs of portion 302, the model checker, instead of discarding the counterexample, preserves the counterexample (step 218), or at least preserves a description of the stimulus, for further analysis.

The model checker then refines the localization (step 220), and resumes process 200 at the verification step (step 206). To refine the localization, the model checker increases the scope of the portion of the model being verified in order to increase the likelihood of finding a valid counterexample. For example, referring to FIG. 3, the model checker selects a second portion 304 of model 300 having at least one output that is provided as an input to portion 302.

The model checker then verifies the refined localization (step 206) by determining whether a stimulus exists that, when applied to the inputs of portion 304 of the model of the circuit, can produce a behavior other than the predetermined behavior defined by the property at the outputs of portion 302 of model 300. Process 200 repeats in this way until a valid counterexample is found (step 214). That is, referring again to FIG.
3, if no valid counterexample is found using portion 304, the model checker refines the localization to include a portion 306 that includes circuits in the logic cone of portion 304, and so on with each iteration to include further portions 30N and, if necessary, the entire model 300. The model checker then outputs the preserved counterexamples for further analysis and testing.

The invalid counterexamples produced in this manner can be extremely useful in debugging a circuit because they represent sensitivities of the circuit to faults other than logical faults, such as electrical faults and the like. For example, the invalid counterexamples can be used to guide debugging a hardware implementation of the circuit, thus saving significant test time.

The invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Apparatus of the invention can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps of the invention can be performed by a programmable processor executing a program of instructions to perform functions of the invention by operating on input data and generating output. The invention can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors.
Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).

A number of implementations of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. An immediate application of embodiments of the present invention is in debugging hardware in the post-silicon phase. In this phase the debugger cannot look into the internal behavior of the chip and can only monitor the behavior of the signals on the chip boundaries. If the chip violates any part of its logical specification, the debugger will have a trace of the bad scenario that violated the chip specification, but this trace will only show the signals on the chip boundaries. The debugger then has to find which fault in the chip's internal circuit could have caused such a violation. The debugger can now phrase the violated specification in a specification description language such as PSL, take a description of the circuit in RTL, gate level, or any other description, and employ the techniques described in this document to find the electrical fault. Accordingly, other implementations are within the scope of the following claims.
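Collecting the steps of process 200, the overall localize/verify/validate/refine loop, with invalid counterexamples preserved rather than discarded, might be sketched as follows. The helper names (`verify`, `is_valid`) and the string counterexample are illustrative stand-ins for the symbolic machinery described above, not part of the patented method's interface.

```python
def localize_and_verify(portions, verify, is_valid):
    """Sketch of process 200: verify successively larger localizations,
    preserving invalid counterexamples instead of discarding them.

    `portions` is an ordered list of model portions (302, 304, 306, ...);
    `verify(portion)` returns None on "pass" or a counterexample on "fail";
    `is_valid(cex)` checks whether the full model can produce the stimulus.
    """
    preserved = []  # invalid counterexamples, kept for electrical-fault analysis
    for portion in portions:
        cex = verify(portion)
        if cex is None:
            return "pass", preserved       # steps 208-210
        if is_valid(cex):
            return cex, preserved          # step 216: a logical fault was found
        preserved.append(cex)              # step 218: preserve, then refine
    return "inconclusive", preserved

# Hypothetical run: portion 302 yields an invalid counterexample, the
# refined portion 304 passes; the invalid counterexample is preserved.
result, preserved = localize_and_verify(
    portions=["302", "304"],
    verify=lambda p: "cexA" if p == "302" else None,
    is_valid=lambda cex: False,
)
```

The preserved list is exactly the output that guides the post-silicon debugger toward electrical faults.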
The present disclosure is directed to systems and methods for performing one or more operations on a two dimensional tile register using an accelerator that includes a tiled matrix multiplication unit (TMU). The processor circuitry includes reservation station (RS) circuitry to communicatively couple the processor circuitry to the TMU. The RS circuitry coordinates the operations performed by the TMU. TMU dispatch queue (TDQ) circuitry in the TMU maintains the operations received from the RS circuitry in the order that the operations are received from the RS circuitry. Since the duration of each operation is not known prior to execution by the TMU, the RS circuitry maintains shadow dispatch queue (RS-TDQ) circuitry that mirrors the operations in the TDQ circuitry. Communication between the RS circuitry and the TMU provides the RS circuitry with notification of successfully executed operations and allows the RS circuitry to cancel operations where the operations are associated with branch mispredictions and/or non-retired speculatively executed instructions.
Core circuitry comprising: processor circuitry; re-order buffer (ROB) circuitry coupled to the processor circuitry; and reservation station (RS) circuitry that includes matrix multiplication unit dispatch shadow queue (RS-TDQ) circuitry, the RS circuitry to: dispatch at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatch the at least one first matrix operation to the RS-TDQ circuitry; receive a dispatch indication from the TMU upon execution of the at least one first matrix operation by the TMU; communicate, to the ROB circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; cause the ROB circuitry to commit the at least one first matrix operation; and cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.

The core circuitry of claim 1, further comprising: cache memory circuitry communicatively coupled to the processor circuitry; wherein to cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry, the RS circuitry to further: cause the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to the cache memory circuitry.

The core circuitry of any of claims 1 or 2 wherein to cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry, the RS circuitry to further: cause the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to system memory circuitry communicatively coupled to the core circuitry.

The core circuitry of any of claims 1 through 3, the RS circuitry to further: dispatch at least one second matrix operation to the TDQ circuitry,
the second matrix operation dependent upon the data from the at least one first matrix operation; dispatch the at least one second matrix operation to the RS-TDQ circuitry; receive a dispatch indication from the TMU upon execution of the at least one second matrix operation by the TMU; communicate, to the ROB circuitry, a signal that includes information indicative of a completion of the at least one second matrix operation by the TMU; cause the ROB circuitry to commit the at least one second matrix operation; and cause a transfer of data from the at least one second matrix operation from the TMB circuitry to memory circuitry.

The core circuitry of claim 4 wherein to dispatch at least one second matrix operation to the TDQ circuitry, the RS circuitry to further: dispatch at least one second matrix operation to the TDQ circuitry responsive to dispatch of the at least one first matrix operation to the TDQ circuitry.

The core circuitry of any of claims 4 or 5 wherein to dispatch at least one second matrix operation to the TDQ circuitry, the RS circuitry to further: dispatch at least one second matrix operation to the TDQ circuitry responsive to receipt of an indication of the dispatch of the at least one first matrix operation to matrix multiplication (TMM) circuitry in the TMU.

The core circuitry of any of claims 4 through 6 wherein to cause a transfer of data from the at least one second matrix operation from TMU buffer (TMB) circuitry to memory circuitry, the RS circuitry to further: cause the transfer of a two dimensional array generated by the at least one second matrix operation from the TMB circuitry to memory circuitry that includes at least one of: processor cache circuitry or system memory circuitry.

A method of performing one or more matrix operations, the method comprising: dispatching, by reservation station (RS) circuitry, at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS
circuitry; dispatching, by the RS circuitry, the at least one first matrix operation to the RS-TDQ circuitry; communicating, by the RS circuitry, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; causing, by the RS circuitry, the ROB circuitry to commit the at least one first matrix operation; and causing, by the RS circuitry, a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.

The method of claim 8 wherein causing the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry further comprises: causing, by the RS circuitry, the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to processor cache memory circuitry communicatively coupled to core circuitry.

The method of any of claims 8 or 9 wherein causing the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry further comprises: causing, by the RS circuitry, the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to system memory circuitry communicatively coupled to core circuitry.

The method of any of claims 8 through 10, further comprising: dispatching, by the RS circuitry, at least one second matrix operation to the TDQ circuitry, the second matrix operation using at least a portion of the data from the at least one first matrix operation; dispatching, by the RS circuitry, the at least one second matrix operation to the RS-TDQ circuitry; communicating, by the RS circuitry, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one second matrix operation by the TMU; causing, by the RS circuitry, the ROB
circuitry to commit the at least one second matrix operation; and causing, by the RS circuitry, a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry.

The method of claim 11 wherein dispatching at least one second matrix operation to the TDQ circuitry further comprises: dispatching, by the RS circuitry, the at least one second matrix operation to the TDQ circuitry responsive to dispatching the at least one first matrix operation to the TDQ circuitry.

The method of any of claims 11 or 12 wherein dispatching at least one second matrix operation to the TDQ circuitry further comprises: dispatching, by the RS circuitry, the at least one second matrix operation to the TDQ circuitry responsive to receiving an indication of the dispatch of the at least one first matrix operation to matrix multiplication (TMM) circuitry in the TMU.

The method of any of claims 11 through 13 wherein causing a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry further comprises: causing, by the RS circuitry, the transfer of a two dimensional array generated by the at least one second matrix operation from the TMB circuitry to memory circuitry that includes at least one of: processor cache circuitry or system memory circuitry.

At least one non-transitory machine-readable storage device that includes instructions that, in response to execution by a computing device, cause the computing device to carry out the method according to any of claims 8 through 14.
TECHNICAL FIELD

The present disclosure relates to accelerator circuitry, specifically to accelerator circuitry used in conjunction with processor core circuitry having out-of-order execution capabilities.

BACKGROUND

Accelerators improve system performance by offloading repetitive or time-consuming tasks from other system hardware, such as the processor circuitry in a central processing unit (CPU). Typically, processor circuitry will either transfer or cause a transfer of input data to an accelerator circuit; the accelerator circuit will perform one or more operations, such as matrix multiplication or convolution operations, using the input data to generate output data that is either communicated to the processor circuitry or stored in memory circuitry. Modern CPUs frequently include processor circuitry and instruction sets that perform speculative execution of instructions and/or instruction branch prediction to improve system speed, efficiency and responsiveness. Accelerator circuitry must be able to accommodate processor circuitry speed and efficiency enhancements such as speculative execution of instructions, out-of-order (OOO) instruction execution, and instruction branch prediction.

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of various embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals designate like parts, and in which:

FIG 1 is a block diagram of an illustrative system that includes a tiled matrix multiplication unit (TMU) communicatively coupled to processor circuitry, in accordance with at least one embodiment described herein;

FIG 2 is a block diagram of a system depicting the flow of commands and/or data between the TMU and the processor circuitry, in accordance with at least one embodiment described herein;

FIG 3 is a schematic diagram of an illustrative electronic, processor-based, device that includes a TMU and processor
circuitry, in accordance with at least one embodiment described herein;

FIG 4 is a high-level flow diagram of an illustrative method of performing one or more operations on a two-dimensional tile register using an accelerator that includes a TMU, in accordance with at least one embodiment described herein;

FIGs 5A and 5B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention;

FIGs 6A, 6B, 6C and 6D are block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention;

FIG 7 is a block diagram of a register architecture according to one embodiment of the invention;

FIG 8A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;

FIG 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;

FIGs 9A and 9B illustrate block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;

FIG 10 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention;

FIGs 11, 12, 13, and 14 are block diagrams of exemplary computer architectures; and

FIG 15 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

DETAILED DESCRIPTION

A tile register includes a set of two-dimensional registers, each having
a plurality of rows and a plurality of columns that represent a packed region of memory. A tile register multiplication (TMUL) instruction set provides an instruction set architecture (ISA) that improves machine learning (ML) performance. The TMUL ISA is used in performing matrix multiplication of tile registers. The systems and methods disclosed herein include a TMU having self-contained accelerator circuitry that is communicatively coupled to processor circuitry, such as OOO core circuitry. The TMU includes TMU buffer (TMB) circuitry to accommodate the transfer of tiles into and out of the TMU while simultaneously performing the matrix multiplication operations - all without assistance from the OOO core/processor core circuitry (hereinafter "processor circuitry"). The TMU includes tiled matrix multiplication (TMM) circuitry containing the MAC computation grid. The TMU includes TMU control circuitry (TMC) to communicate between the processor circuitry, the TMM circuitry, and the TMB circuitry thereby improving the overall efficiency of the TMU while further offloading processor circuitry.

The systems and methods disclosed herein provide an autonomous TMU communicatively coupled to the processor circuitry and capable of transmitting and receiving tiled data from the processor circuitry.
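The computation the TMM circuitry's MAC grid performs on a pair of tile registers is an ordinary matrix multiplication over two-dimensional arrays. The following sketch shows that dataflow in plain Python lists; the real TMUL instructions operate on packed hardware tile registers, so this is only an illustration of the arithmetic, not of the ISA.

```python
def tile_matmul(tile_a, tile_b):
    """Illustrative sketch of the matrix multiplication performed by the
    TMM circuitry on two tile registers (two-dimensional arrays).

    tile_a is R x K, tile_b is K x C; the output tile is R x C. Each
    innermost step is one multiply-accumulate (MAC), mirroring one
    element of the MAC computation grid.
    """
    rows, inner, cols = len(tile_a), len(tile_b), len(tile_b[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for k in range(inner):
                out[r][c] += tile_a[r][k] * tile_b[k][c]
    return out

# A 2x2 example: the output tile is what the TMB circuitry would hold
# before its transfer to cache or system memory.
out_tile = tile_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```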
The processor circuitry and the TMU monitor the execution and completion of matrix operations performed by the TMM circuitry.

The systems and methods disclosed herein beneficially provide a continuous stream of matrix multiplication operations while allowing direct access to the tile register on completion of the operation.

The systems and methods disclosed herein beneficially dispatch tiled matrix multiplication operations for execution, resolve dependencies, and allow direct processor circuitry access to the tiles once the multiplication operation completes.

The systems and methods disclosed herein provide a TMU that includes TMC (TMU control) circuitry; TMB (TMU buffer) circuitry; TMU dispatch queue (TDQ) circuitry; and TMM (matrix multiplication) circuitry. Processor circuitry communicably coupled to the TMU includes reorder buffer (ROB) circuitry; reservation station (RS) circuitry; and TMU shadow queue (RS-TDQ) circuitry.

In operation, the RS circuitry dispatches matrix multiplication operations to the TDQ circuitry in the TMU; each of the operations occupies one entry within the TDQ circuitry. The matrix multiplication operations remain in the TDQ circuitry until the TMC circuitry generates a writeback (WB) indication and schedules the execution of the matrix multiplication operation by the TMM circuitry. When a matrix multiplication operation is available, the TMC circuitry dispatches the matrix multiplication operation to the TMM circuitry and generates a TDQ dispatch notification that is communicated to the RS circuitry.

Within the processor circuitry, the RS circuitry issues a completion indication to the ROB circuitry. The ROB circuitry commits the matrix multiplication operation and permits the reclamation of the renamed copy of the tile register.
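The TDQ/RS-TDQ pairing can be modeled as two queues kept in lock-step: the RS dispatches each operation to both, a writeback retires the head of both, and mispredicted ("bogus") entries are canceled from both. The class below is a simplified behavioral assumption for illustration; its method names and the bogus flag are not the hardware protocol.

```python
from collections import deque

class ShadowedDispatchQueue:
    """Behavioral sketch of the TDQ and its RS-TDQ shadow (names and
    semantics are simplifying assumptions, not the hardware interface)."""

    def __init__(self):
        self.tdq = deque()      # dispatch queue inside the TMU
        self.rs_tdq = deque()   # shadow copy maintained by the RS

    def dispatch(self, op, bogus=False):
        """RS dispatches one operation to both queues (one entry each)."""
        entry = (op, bogus)
        self.tdq.append(entry)
        self.rs_tdq.append(entry)

    def writeback(self):
        """TMC signals a writeback; the head retires from both queues and
        the RS can forward the completion to the ROB for commit."""
        op, _ = self.tdq.popleft()
        self.rs_tdq.popleft()
        return op

    def cancel_bogus(self):
        """Clear entries from a mispredicted branch out of both queues,
        keeping the shadow consistent with the TDQ."""
        self.tdq = deque(e for e in self.tdq if not e[1])
        self.rs_tdq = deque(e for e in self.rs_tdq if not e[1])

# One good operation commits; one issued down a mispredicted path is
# canceled from both queues and never reaches the ROB.
q = ShadowedDispatchQueue()
q.dispatch("tmul0")
q.dispatch("tmul1", bogus=True)
q.cancel_bogus()
committed = q.writeback()
```

The lock-step discipline is the point: because the RS mirrors every entry, it can reason about in-flight TMU work (and cancel it) even though operation durations are unknown before execution.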
Where a dependent matrix multiplication operation occurs, the RS circuitry dispatches the dependent matrix multiplication operation from the processor circuitry to the TMU either when the operation was written to the TDQ circuitry or based on the dispatch of the matrix multiplication operation to the TMM circuitry. On completion of the matrix multiplication operation, the RS circuitry dispatches a dependent tile READ into memory operation.

To maintain a consistent architectural state, the RS circuitry clears bogus entries from the TDQ circuitry and the RS-TDQ circuitry in response to a branch misprediction or nuke. The RS-TDQ circuitry generates the request to cancel such operations from the TDQ circuitry and communicates the request to the TMC circuitry. The TMC circuitry responds to the RS circuitry using a writeback request on each canceled matrix multiplication operation. In such instances, the RS circuitry does not communicate the writeback to the ROB circuitry and the matrix multiplication operation is dropped.

Core circuitry is provided.
The core circuitry includes: processor circuitry; re-order buffer (ROB) circuitry coupled to the processor circuitry; and reservation station (RS) circuitry that includes matrix multiplication unit dispatch shadow queue (RS-TDQ) circuitry, the RS circuitry to: dispatch at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatch the at least one first matrix operation to the RS-TDQ circuitry; receive a dispatch indication from the TMU upon execution of the at least one first matrix operation by the TMU; communicate, to the ROB circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; cause the ROB circuitry to commit the at least one first matrix operation; and cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.

A method of performing one or more tiled register matrix multiplication operations is provided. The method may include: dispatching, by reservation station (RS) circuitry, at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatching, by the RS circuitry, the at least one first matrix operation to the RS-TDQ circuitry; communicating, by the RS circuitry, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; causing, by the RS circuitry, the ROB circuitry to commit the at least one first matrix operation; and causing, by the RS circuitry, a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.

A system for performing one or more tiled register matrix multiplication operations is provided.
The system may include: means for dispatching at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU); means for dispatching the at least one first matrix operation to RS-TDQ circuitry; means for communicating to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; means for causing the ROB circuitry to commit the at least one first matrix operation; and means for causing a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.

A non-transitory storage device is provided. The non-transitory storage device may include instructions that, when executed by reservation station (RS) circuitry, cause the RS circuitry to: dispatch at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatch the at least one first matrix operation to the RS-TDQ circuitry; communicate to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; cause the ROB circuitry to commit the at least one first matrix operation; and cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.

A TMU is provided.
The TMU may include: TMU data storage buffer (TMB) circuitry; TMU operation queue (TMQ) circuitry; TMU matrix multiplication (TMM) circuitry; TMU control (TMC) circuitry coupled to the TMB circuitry, the TMQ circuitry, and the TMM circuitry, the TMC circuitry to: cause the TMB circuitry to store at least one tile register in the TMB circuitry, the at least one tile register received from reservation station (RS) circuitry communicatively coupled to a core circuit, wherein each of the at least one tile registers includes a respective two-dimensional data array; cause the TMQ circuitry to store at least one first matrix multiplication operation using the at least one tile register; cause the TMM circuitry to execute the at least one first matrix multiplication operation on the one or more tile registers to generate at least one first output tile register; cause the TMB circuitry to store the at least one first output tile register; and cause a transfer of the at least one first output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry.

A non-transitory storage device is provided.
The non-transitory storage device may include instructions that, when executed by TMU control (TMC) circuitry, cause the TMC circuitry to: cause the TMB circuitry to store at least one tile register in the TMB circuitry, the at least one tile register received from reservation station (RS) circuitry communicatively coupled to a core circuit, wherein each of the at least one tile registers includes a respective two-dimensional data array; cause the TMQ circuitry to store at least one first matrix multiplication operation using the at least one tile register; cause the TMM circuitry to execute the at least one first matrix multiplication operation on the one or more tile registers to generate at least one first output tile register; cause the TMB circuitry to store the at least one first output tile register; and cause a transfer of the at least one first output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry.

FIG 1 is a block diagram of an illustrative system 100 that includes a TMU 110 communicatively coupled to processor circuitry 130, in accordance with at least one embodiment described herein. As depicted in FIG 1, the TMU 110 includes TMU control (TMC) circuitry 112, TMU dispatch queue (TDQ) circuitry 114, TMU buffer (TMB) circuitry 116, and tile matrix multiplication (TMM) circuitry 118. As depicted in FIG 1, the processor circuitry 130 may include processor core circuitry, re-order buffer (ROB) circuitry 132, reservation station (RS) circuitry 134, and RS shadow TDQ (RS-TDQ) circuitry 136. A high bandwidth interconnect 140 communicatively couples or links the TMU 110 to the processor circuitry 130.

The TMU 110 autonomously performs matrix multiplication operations using input data, such as two-dimensional (2D) tile registers transferred from system memory circuitry and/or processor cache memory circuitry to the TMB circuitry 116.
The TMU 110 receives input data and instructions to perform various matrix operations using the received input data to generate output data. In embodiments, the TMB circuitry 116 includes a physical register file (PRF) that stores, contains, or otherwise retains either or both the received input data and the generated output data. The tiles used by the TMM circuitry 118 may be transferred from external memory circuitry (system memory circuitry, processor cache memory circuitry, etc.) and the tiles generated by the TMM circuitry 118 may be stored in the TMB circuitry 116 prior to transfer to the processor circuitry 130 and/or other external memory circuitry. In embodiments where the TMU 110 performs multiple dependent tiled matrix operations, the TMB circuitry 116 may store or otherwise retain the intermediate tiled matrices generated by the TMM circuitry 118. The processor circuitry 130 converts the tiled matrix multiplication (TMUL) instruction into a plurality of operations that include, but are not limited to: the transfer of the input data to the TMB circuitry 116 and the transfer of tiled matrix multiplication operations to the TDQ circuitry 114, including managing the destination in all its forms, from allocation and renaming to reclamation. The TMC circuitry 112 manages the communication between the processor circuitry 130, the TDQ circuitry 114, the TMB circuitry 116, and the TMM circuitry 118.

While the TDQ circuitry 114 stores or otherwise retains the TMUL operations received from the processor circuitry 130, the RS circuitry 134 includes TMU shadow queue (RS-TDQ) circuitry 136. The RS-TDQ circuitry 136 controls the TMUL operations and the associated tile registers. The TMU 110 includes three interface circuits. The first interface circuit provides access to the TMB circuitry 116 for tiles loaded from memory circuitry. The second interface circuit provides access to the processor circuitry 130 to store tile data read from the TMB circuitry 116.
The third interface circuit provides access to the TDQ circuitry 114 to receive TMUL operations from the processor circuitry 130.

The processor circuitry 130 causes memory load/transfer operations that cause the transfer of tiled input data from system or processor memory circuitry to the TMB circuitry 116. The RS circuitry 134 maintains the dependency of the loaded tiled input data transferred to the TMB circuitry 116. Upon completion of the tiled input data load to the TMB circuitry 116 (for a non-dependent TMUL operation) and/or the completion of a prior TMUL operation (for a dependent TMUL operation), the RS circuitry 134 causes the transfer of the tiled input data to the TMM circuitry 118. The RS circuitry 134 dispatches the TMUL operation to the TDQ circuitry 114 and maintains a shadow copy of the TMUL operation in the RS-TDQ circuitry 136. The RS circuitry 134 tracks the progress of the TMUL operation to determine when a new TMUL operation may be dispatched to the TMU 110 and when the output data generated by the TMM circuitry 118 may be transferred from the TMB circuitry 116.

In embodiments, the RS circuitry 134 handles each TMUL operation as a long latency operation. The RS circuitry 134 determines wakeup timing as a function of the dimensions of the tiled input data provided to the TMU 110. However, the RS circuitry 134 cannot predict either or both the completion of the TMUL operation and/or the timing of when the TMUL operation will be ready for dispatch by the TMU 110.

The TMC circuitry 112 controls the performance of the TMUL operation by the TMM circuitry 118. In embodiments, the TMC circuitry 112 performs tiled register source bypass control, tile register READ/WRITE operations to/from the TMB circuitry 116, scheduling operations, resolving dependencies on late accumulation, etc. The TMU 110 and the processor circuitry 130 co-manage the physical register files of the TMB circuitry 116.
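The long-latency wakeup estimate described above, in which the RS circuitry 134 derives a wakeup time from the dimensions of the tiled input data, can be sketched as follows. This is a purely illustrative model: the function name, per-cycle MAC throughput, and fixed overhead are assumptions, not architectural constants from the source.

```python
def estimate_wakeup_cycles(m: int, k: int, n: int,
                           macs_per_cycle: int = 256,
                           fixed_overhead: int = 20) -> int:
    """Estimate when a TMUL operation on an (m x k) by (k x n) tile
    pair might complete.  The throughput and overhead values are
    illustrative assumptions, not architectural constants."""
    total_macs = m * k * n                       # one MAC per output element per k-step
    compute = -(-total_macs // macs_per_cycle)   # ceiling division
    return fixed_overhead + compute
```

As the text notes, such an estimate cannot predict actual completion or dispatch readiness; the writeback from the TMC circuitry 112 remains the authoritative signal.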
The TMC circuitry 112 advantageously improves the efficiency of the TMU 110 by managing the communication between the TMU 110 and the processor circuitry 130. In embodiments, the TMC circuitry 112 receives one or more operations dispatched by the RS circuitry 134 and stores the one or more operations in the TDQ circuitry 114. The RS circuitry 134 stores a copy (or shadow) of the one or more operations in the RS-TDQ circuitry 136. The RS circuitry 134 uses the information stored in the RS-TDQ circuitry 136 to track the completion of the one or more operations dispatched to the TMU 110 by the RS circuitry 134. In embodiments, the TDQ circuitry 114 functions as a first in/first out (FIFO) data storage circuit; consequently, the TDQ circuitry 114 causes the execution of the operations transferred by the RS circuitry 134 in the order in which the operations were received by the TDQ circuitry 114. The TDQ circuitry 114 tracks the head and tail pointers. Upon dispatch by the RS circuitry 134, the TMC circuitry 112 generates an identifier associated with the respective operation by binding the operation to a respective entry stored in the TDQ circuitry 114. Each communication between the TMC circuitry 112 and the RS circuitry 134 involving an operation includes the identifier associated with the respective operation. Including the identifier allows the proper wakeup, writeback request, and release of the entry by the RS circuitry 134. Each operation is able to begin and conclude as TMU resources become available, allowing simultaneous execution of several operations, beneficially improving the efficiency of the TMU 110.

The RS circuitry 134 resolves any dependencies within the TMUL operations.
In some embodiments, the RS circuitry 134 manages a wakeup indication to the TMU 110 that permits the dispatch of two dependent operations such that the operations are aligned within the TMU by the TMC circuitry 112. The TMC circuitry 112 communicates a notification message to the RS circuitry 134 upon dispatch of a TMUL operation from the TDQ circuitry 114 to the TMM circuitry 118. In other embodiments, upon receipt of such a notification, the RS circuitry 134 may dispatch dependent TMUL operations.

The TMU 110 includes the TMC circuitry 112, the TDQ circuitry 114, the TMB circuitry 116, and the TMM circuitry 118. In embodiments, the TMU may be formed or otherwise disposed on a semiconductor chiplet that is, in turn, integrated into a semiconductor package, such as a multi-chip module, that contains the processor circuitry 130. In other embodiments, the TMU 110 may be formed or otherwise disposed on a portion of a semiconductor die, such as a system-on-chip (SoC) that includes the processor circuitry 130. In embodiments, the TMU 110 directly or indirectly communicates with system memory circuitry via one or more high bandwidth connections or communications links; such a connection enables, for example, the exchange of input data and output data between the TMU 110 and system memory without further burdening the processor memory management unit (MMU) circuitry.

The TMC circuitry 112 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of bidirectional communication with the processor circuitry 130, the TDQ circuitry 114, the TMB circuitry 116, and the TMM circuitry 118. The TMC circuitry 112 receives operations from the RS circuitry 134 and loads the received operations into the TDQ circuitry 114.
In embodiments, the TMC circuitry 112 loads the received operations into the TDQ circuitry 114 in the order that the operations are dispatched by the RS circuitry 134. The TMC circuitry 112 handles one or more operations associated with the matrix multiplication performed by the TMM circuitry 118. These operations include but are not limited to: tile register source bypass control, tile register READ from and WRITE to the TMB circuitry 116, scheduling operations in the TMM circuitry 118, and resolving dependencies on late accumulation.

The TDQ circuitry 114 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of storing operations received from the TMC circuitry 112, causing the execution of the operations by the TMM circuitry 118, and tracking the head and tail pointers as the operations are executed by the TMM circuitry 118. In embodiments, the TDQ circuitry 114 causes execution of the received operations on a FIFO basis. In embodiments, the TDQ circuitry 114 dispatches operations to the TMM circuitry 118 in response to receipt of a dispatch command generated by the TMC circuitry 112.

The TMB circuitry 116 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of receiving and storing or otherwise retaining input data transferred to the TMU 110 and receiving and storing or otherwise retaining output data generated by the TMM circuitry 118 prior to transferring the output data to system memory circuitry and/or processor cache memory circuitry.
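The FIFO ordering, head/tail pointer tracking, and entry-identifier binding performed by the TDQ circuitry 114 and the TMC circuitry 112 can be sketched as a behavioral model. The class and method names below are illustrative assumptions, not taken from the source; the sketch only captures the bookkeeping described in the text.

```python
class TileDispatchQueue:
    """Behavioral sketch of FIFO dispatch-queue bookkeeping:
    operations are bound to an entry identifier on enqueue and
    dispatched strictly in order of receipt."""

    def __init__(self, depth: int = 8):
        self.entries = [None] * depth
        self.head = 0   # next entry to dispatch
        self.tail = 0   # next free entry
        self.count = 0

    def enqueue(self, op) -> int:
        """Store op and return the identifier binding it to its entry."""
        if self.count == len(self.entries):
            raise RuntimeError("TDQ full")
        ident = self.tail
        self.entries[ident] = op
        self.tail = (self.tail + 1) % len(self.entries)
        self.count += 1
        return ident

    def dispatch(self):
        """Dispatch the oldest operation (FIFO order) with its identifier."""
        if self.count == 0:
            return None
        ident = self.head
        op = self.entries[ident]
        self.entries[ident] = None
        self.head = (self.head + 1) % len(self.entries)
        self.count -= 1
        return ident, op
```

Returning the identifier from `enqueue` mirrors the binding step described above: every later wakeup, writeback request, and entry release can then name the same identifier.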
In embodiments, the RS circuitry 134 reads output data from the TMB circuitry 116 at the conclusion of a TMUL operation and/or the completion of a tile load operation.

The TMM circuitry 118 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of performing operations and/or mathematical functions on input data transferred from the TMB circuitry 116 to produce output data for transfer back to the TMB circuitry 116. In embodiments, the input data transferred from the TMB circuitry 116 to the TMM circuitry 118 includes one or more two-dimensional tile registers. In embodiments, the output data transferred from the TMM circuitry 118 to the TMB circuitry 116 includes one or more two-dimensional tile registers. In embodiments, the TMM circuitry 118 may include one or more MAC computation grids.

The processor circuitry 130 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of executing machine readable instruction sets. In embodiments, the processor circuitry 130 may provide all or a portion of the ROB circuitry 132, all or a portion of the RS circuitry 134, and/or all or a portion of the RS-TDQ circuitry 136. In embodiments, the processor circuitry 130 may include any number of single- or multi-thread processor core circuits. In embodiments, the processor circuitry 130 may include a system-on-chip (SoC) or multi-chip module (MCM) architecture.
In embodiments, the processor circuitry 130 may include, but is not limited to, any number and/or combination of: controllers, digital signal processors, microcontrollers, microprocessors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), reduced instruction set computers (RISCs), and similar.

The ROB circuitry 132 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of storing the original sequence or order of execution of instructions such that instructions executed out of order are committed, by the processor circuitry, in the original order. The ROB circuitry 132 is disposed at least partially within the processor circuitry 130. In embodiments, the processor circuitry 130 may provide at least a portion of the ROB circuitry 132. In embodiments, the ROB circuitry 132 may include a circular buffer circuit in which instructions are added to a first end of the buffer circuit when dispatched and removed from an opposite end of the buffer circuit when completed.

The RS circuitry 134 includes any number and/or combination of any currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of directly or indirectly causing the transfer of operations, instructions, and/or data from the processor circuitry 130 and/or memory circuitry to the TMU 110. The RS circuitry 134 is disposed at least partially within the processor circuitry 130. In embodiments, the processor circuitry 130 may provide at least a portion of the RS circuitry 134.

In embodiments, the RS circuitry 134 includes RS-TDQ circuitry 136. In embodiments, as the RS circuitry 134 communicates instructions to the TDQ circuitry 114, the RS circuitry 134 stores or otherwise retains a shadow copy of each instruction in the RS-TDQ circuitry 136.
Thus, the contents of the RS-TDQ circuitry 136 mirror the contents of the TDQ circuitry 114 in the TMU 110. The RS circuitry 134 monitors the progress of the operations performed by the TMM circuitry 118 via the TMC circuitry 112. Based on the progress of the operations in the TMU 110, the RS circuitry 134 may dispatch dependent operations to the TDQ circuitry 114 and/or additional/new operations to the TDQ circuitry 114. The RS circuitry 134 and the TMC circuitry 112 exchange bidirectional notifications. Thus, for example, if an operation previously communicated to the TMU 110 is later determined to be erroneously communicated (e.g., a mis-predicted branch or an incorrect speculative execution), the RS circuitry 134 can cause the TMC circuitry 112 to remove the operation (and any dependent operations) from the TDQ circuitry 114.

The interconnect 140 facilitates bidirectional communication between the TMU 110 and the processor circuitry 130. In embodiments, the interconnect 140 may include a plurality of communication channels or paths. In embodiments, the interconnect 140 may include a high-bandwidth connection communicatively coupling the TMU 110 with the processor circuitry 130.

FIG 2 is a block diagram of a system 200 depicting the flow of commands and/or data between the TMU 110 and the processor circuitry 130, in accordance with at least one embodiment described herein. As depicted in FIG 2, the RS circuitry 134 dispatches 220 an operation (e.g., a TMUL operation) that is stored in, or otherwise occupies, one entry in the TDQ circuitry 114, in the order received from the RS circuitry 134. Simultaneously, the RS circuitry 134 dispatches 222 a shadow copy of the operation to the RS-TDQ circuitry 136. The RS circuitry 134 uses the RS-TDQ circuitry 136 to track and/or monitor the operations communicated to the TDQ circuitry 114.

Responsive to receipt of a writeback 240, the TMC circuitry 112 schedules the execution of the operation by the TMM circuitry 118.
Upon availability of a TMUL operation, the TMC circuitry 112 dispatches the operation to the TMM circuitry 118 and issues a dispatch indication to the RS circuitry 134. The RS circuitry 134 issues a completion indication 224 to the ROB circuitry 132. The ROB circuitry 132 commits the operation and reclaims the output tile register from the TMB circuitry 116. In embodiments, the output tile register may be transferred to and stored in memory circuitry 250, such as system memory circuitry or processor cache circuitry. The RS circuitry 134 may dispatch dependent operations, either when written to the TDQ circuitry 114 or based on the dispatch of the operation to the TMM circuitry 118. Upon completion, the RS circuitry 134 may dispatch a dependent operation to read the tile register into system memory 250.

To maintain architectural state consistency, the RS circuitry 134 clears cancelled operations (e.g., operations generated by branch mispredictions or incorrect speculatively executed instructions) from the TDQ circuitry 114 and the RS-TDQ circuitry 136. The RS circuitry 134 generates the request to cancel operations and communicates 230 the request to the TMC circuitry 112. The request includes one or more unique identifiers associated with each of the cancelled operations. In embodiments, the TMC circuitry 112 responds to the RS circuitry 134 with writeback requests on each of the cancelled operations. In such instances, the RS circuitry 134 will not communicate a writeback indicative of successful completion of the operation to the ROB circuitry 132, thereby resulting in the ROB circuitry 132 dropping the operation.

FIG 3 is a schematic diagram of an illustrative electronic, processor-based, device 300 that includes a TMU 110 and processor circuitry 130, in accordance with at least one embodiment described herein. The processor-based device 300 may additionally include graphical processing unit (GPU) circuitry 312.
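The dispatch, shadow-copy, writeback, and cancellation flow between the RS circuitry 134 and the TMU 110 described above can be sketched as a behavioral model. All class and method names here are illustrative assumptions; the model only mirrors the flow of FIG 2: each dispatched operation occupies one TDQ entry and one RS-TDQ shadow entry, a successful writeback is forwarded so the ROB commits the operation, and a cancelled operation's writeback is not forwarded, so the ROB never commits it.

```python
class DispatchProtocolModel:
    """Sketch of the RS/TMU co-management flow: shadow entries in the
    RS-TDQ mirror TDQ entries; cancelled operations produce a writeback
    that is NOT forwarded to the ROB, so the ROB drops them."""

    def __init__(self):
        self.tdq = {}        # TMU-side entries, keyed by identifier
        self.rs_tdq = {}     # shadow copies held by the RS circuitry
        self.committed = []  # operations the ROB has committed
        self.next_id = 0

    def dispatch(self, op) -> int:
        ident = self.next_id
        self.next_id += 1
        self.tdq[ident] = op      # entry in the TMU's dispatch queue
        self.rs_tdq[ident] = op   # simultaneous shadow copy
        return ident

    def cancel(self, ident) -> None:
        """Branch misprediction / bad speculation: clear both queues.
        The TMU still answers with a writeback, but it is not
        forwarded to the ROB, so the operation is dropped."""
        self.tdq.pop(ident, None)
        self.rs_tdq.pop(ident, None)

    def writeback(self, ident) -> None:
        """Successful completion: forward to the ROB, which commits."""
        if ident in self.rs_tdq:   # still a live (non-cancelled) op
            self.committed.append(self.rs_tdq.pop(ident))
            self.tdq.pop(ident, None)
```

In this sketch the identifier returned by `dispatch` plays the role of the unique identifier carried in every RS/TMC communication, which is what lets a cancel request and a later writeback name the same entry.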
The processor-based device 300 may additionally include one or more of the following: a wireless input/output (I/O) interface 320, a wired I/O interface 330, system memory circuitry 340, power management circuitry 350, a non-transitory storage device 360, and a network interface 370 used to communicatively couple the processor-based device 300 to one or more external devices (e.g., a cloud-based server) 390 via one or more networks 380.

The following discussion provides a brief, general description of the components forming the illustrative processor-based device 300. Example, non-limiting processor-based devices 300 may include, but are not limited to: autonomous motor vehicles, semi-autonomous motor vehicles, manually controlled motor vehicles, smartphones, wearable computers, portable computing devices, handheld computing devices, desktop computing devices, blade server devices, workstations, and similar.

Those skilled in the relevant art will appreciate that the illustrated embodiments as well as other embodiments may be practiced with other processor-based device configurations, including portable electronic or handheld electronic devices, for instance smartphones, portable computers, wearable computers, consumer electronics, personal computers ("PCs"), network PCs, minicomputers, server blades, mainframe computers, and the like. The processor circuitry 130 may include any number of hardwired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements that are disposed partially or wholly in a PC, server, or other computing system capable of executing machine-readable instructions.
In embodiments, the processor circuitry 130 may include ROB circuitry 132, RS circuitry 134, and RS-TDQ circuitry 136.

The processor-based device 300 includes a bus or similar communications link 316 that communicably couples and facilitates the exchange of information and/or data between various system components including the processor circuitry 130, the graphics processor circuitry 312, one or more wireless I/O interfaces 320, one or more wired I/O interfaces 330, the system memory 340, one or more storage devices 360, and/or the network interface circuitry 370. The processor-based device 300 may be referred to in the singular herein, but this is not intended to limit the embodiments to a single processor-based device 300, since in certain embodiments, there may be more than one processor-based device 300 that incorporates, includes, or contains any number of communicably coupled, collocated, or remote networked circuits or devices.

The processor circuitry 130 may include any number, type, or combination of currently available or future developed devices capable of executing machine-readable instruction sets. The processor circuitry 130 may include but is not limited to any current or future developed single- or multi-core processor or microprocessor, such as: one or more systems on a chip (SoCs); central processing units (CPUs); digital signal processors (DSPs); graphics processing units (GPUs); application-specific integrated circuits (ASICs), programmable logic units, field programmable gate arrays (FPGAs), and the like. Unless described otherwise, the construction and operation of the various blocks shown in FIG 3 are of conventional design. Consequently, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art.
The bus 316 that interconnects at least some of the components of the processor-based device 300 may employ any currently available or future developed serial or parallel bus structures or architectures.

The system memory 340 may include read-only memory ("ROM") 342 and random access memory ("RAM") 346. A portion of the ROM 342 may be used to store or otherwise retain a basic input/output system ("BIOS") 344. The BIOS 344 provides basic functionality to the processor-based device 300, for example by causing the processor circuitry 130 to load and/or execute one or more machine-readable instruction sets 314. In embodiments, at least some of the one or more machine-readable instruction sets 314 cause at least a portion of the processor circuitry 130 to provide, create, produce, transition, and/or function as a dedicated, specific, and particular machine.

The processor-based device 300 may include at least one wireless input/output (I/O) interface 320. The at least one wireless I/O interface 320 may be communicably coupled to one or more physical output devices 322 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wireless I/O interface 320 may communicably couple to one or more physical input devices 324 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The at least one wireless I/O interface 320 may include any currently available or future developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to: BLUETOOTH®, near field communication (NFC), and similar.

The processor-based device 300 may include one or more wired input/output (I/O) interfaces 330. The at least one wired I/O interface 330 may be communicably coupled to one or more physical output devices 322 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.).
The at least one wired I/O interface 330 may be communicably coupled to one or more physical input devices 324 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The wired I/O interface 330 may include any currently available or future developed I/O interface. Example wired I/O interfaces include but are not limited to: universal serial bus (USB), IEEE 1394 ("FireWire"), and similar.

The processor-based device 300 may include one or more communicably coupled, non-transitory, data storage devices 360. The data storage devices 360 may include one or more hard disk drives (HDDs) and/or one or more solid-state storage devices (SSDs). The one or more data storage devices 360 may include any current or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 360 may include, but are not limited to, any current or future developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more electro-resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof. In some implementations, the one or more data storage devices 360 may include one or more removable storage devices, such as one or more flash drives, flash memories, flash storage units, or similar appliances or devices capable of communicable coupling to and decoupling from the processor-based device 300.

The one or more data storage devices 360 may include interfaces or controllers (not shown) communicatively coupling the respective storage device or system to the bus 316.
The one or more data storage devices 360 may store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logical structures, and/or other data useful to the processor circuitry 130 and/or graphics processor circuitry 312 and/or one or more applications executed on or by the processor circuitry 130 and/or graphics processor circuitry 312. In some instances, one or more data storage devices 360 may be communicably coupled to the processor circuitry 130, for example via the bus 316 or via one or more wired communications interfaces 330 (e.g., Universal Serial Bus or USB); one or more wireless communications interfaces 320 (e.g., Bluetooth®, Near Field Communication or NFC); and/or one or more network interfaces 370 (IEEE 802.3 or Ethernet, IEEE 802.11 or WiFi®, etc.).

Machine-readable instruction sets 314 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 340. Such instruction sets 314 may be transferred, in whole or in part, from the one or more data storage devices 360. The instruction sets 314 may be loaded, stored, or otherwise retained in system memory 340, in whole or in part, during execution by the processor circuitry 130 and/or graphics processor circuitry 312.

The processor-based device 300 may include power management circuitry 350 that controls one or more operational aspects of the energy storage device 352. In embodiments, the energy storage device 352 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In embodiments, the energy storage device 352 may include one or more supercapacitors or ultracapacitors. In embodiments, the power management circuitry 350 may alter, adjust, or control the flow of energy from an external power source 354 to the energy storage device 352 and/or to the processor-based device 300.
The power source 354 may include, but is not limited to, a solar power system, a commercial electric grid, a portable generator, an external energy storage device, or any combination thereof.

For convenience, the processor circuitry 130, the GPU circuitry 312, the wireless I/O interface 320, the wired I/O interface 330, the system memory circuitry 340, the power management circuitry 350, the storage device 360, and the network interface 370 are illustrated as communicatively coupled to each other via the bus 316, thereby providing connectivity between the above-described components. In alternative embodiments, the above-described components may be communicatively coupled in a different manner than illustrated in FIG 3. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other, via one or more intermediary components (not shown). In another example, one or more of the above-described components may be integrated into the processor circuitry 130 and/or the graphics processor circuitry 312. In some embodiments, all or a portion of the bus 316 may be omitted and the components are coupled directly to each other using suitable wired or wireless connections.

FIG 4 is a high-level flow diagram of an illustrative method 400 of performing one or more operations on a two-dimensional tile register using an accelerator that includes a tiled matrix multiplication unit (TMU) 110, in accordance with at least one embodiment described herein. The processor circuitry 130 includes reservation station (RS) circuitry 134 to communicatively couple the processor circuitry 130 to the TMU 110. The RS circuitry 134 coordinates the operations performed by the TMU 110. TMU dispatch queue (TDQ) circuitry 114 in the TMU 110 maintains the operations received from the RS circuitry 134 in the order that the operations are received from the RS circuitry 134.
Since the duration of each operation is not known prior to execution by the TMU 110, the RS circuitry 134 maintains shadow dispatch queue (RS-TDQ) circuitry 136 that mirrors the operations in the TDQ circuitry 114. Communication between the RS circuitry 134 and the TMU 110 provides the RS circuitry 134 with notification of successfully executed operations and allows the RS circuitry 134 to cancel operations where the operations are associated with branch mispredictions and/or non-retired speculatively executed instructions. The method 400 beneficially improves the performance of the host system by offloading substantially all of the overhead associated with management of the TMU 110 from the processor circuitry 130 to the RS circuitry 134 and/or the TMU control (TMC) circuitry 112 in the TMU. The method 400 commences at 402.

At 404, the RS circuitry 134 dispatches 220 a first matrix operation to the TDQ circuitry 114 in the TMU 110. In embodiments, the RS circuitry 134 may dispatch 220 the first matrix operation to TMC circuitry 112 disposed in the TMU 110. The TMC circuitry 112 may then dispatch the operation to the TDQ circuitry 114 where the operations are stored in the order received from the RS circuitry 134.

At 406, the RS circuitry dispatches 222 the first matrix operation to the RS-TDQ circuitry 136. In embodiments, the RS circuitry 134 dispatches 220 the first matrix operation to the TDQ circuitry 114 simultaneous with the dispatch 222 of the first matrix operation to the RS-TDQ circuitry 136.
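The dual dispatch at 404/406 and the lock-step mirroring between the TDQ and the RS-TDQ can be sketched in Python. This is a minimal illustration of the queue discipline only; the class and method names (`TMUDispatchQueue`, `ReservationStation`, and so on) are hypothetical and do not appear in the figures:

```python
from collections import deque

class TMUDispatchQueue:
    """Sketch of the TDQ circuitry: holds matrix operations in arrival order."""
    def __init__(self):
        self.ops = deque()

    def dispatch(self, op_id):
        self.ops.append(op_id)

    def complete_oldest(self):
        # Operations execute in the order received from the RS circuitry.
        return self.ops.popleft()

    def cancel(self, op_id):
        self.ops = deque(o for o in self.ops if o != op_id)

class ReservationStation:
    """Sketch of RS circuitry with a shadow queue (RS-TDQ) mirroring the TDQ."""
    def __init__(self, tdq):
        self.tdq = tdq
        self.shadow = deque()   # RS-TDQ mirror of the TDQ contents
        self.committed = []     # operations reported to the ROB for commit

    def dispatch(self, op_id):
        # Dispatch to the TDQ and the shadow queue simultaneously (404/406).
        self.tdq.dispatch(op_id)
        self.shadow.append(op_id)

    def on_completion(self):
        # TMC reports completion of the oldest operation (408); commit it (410).
        done = self.tdq.complete_oldest()
        assert done == self.shadow.popleft()  # the two queues stay in lock-step
        self.committed.append(done)

    def cancel(self, op_id):
        # A mispredicted-branch or non-retired speculative operation is
        # cancelled in both the TDQ and the shadow queue.
        self.tdq.cancel(op_id)
        self.shadow = deque(o for o in self.shadow if o != op_id)
```

Because the shadow queue mirrors the TDQ exactly, the RS can match each completion notification to the operation at the head of its own queue without querying the TMU.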
The matrix operations stored in the RS-TDQ circuitry 136 mirror the matrix operations stored in the TDQ circuitry 114 and allow the RS circuitry 134 to monitor the status of the first matrix operation - beneficially permitting the RS circuitry 134 to receive a writeback indication upon successful completion of the first matrix operation and also allowing the RS circuitry to cancel the first matrix operation in the event of a speculative (e.g., an unretired speculatively executed instruction) or out-of-order execution error (e.g., a mis-predicted branch instruction). At the conclusion of the first matrix operation, the TMM circuitry 118 transfers output data to the TMB circuitry 116.

At 408, the TMC circuitry 112 communicates completion of the first matrix operation to the RS circuitry 134. The RS circuitry 134 communicates completion of the first matrix operation to reorder buffer (ROB) circuitry 132.

At 410, responsive to receipt of the communication indicative of the completion of the first matrix operation, the ROB circuitry 132 commits the first matrix operation.

At 412, the TMC circuitry 112 causes the transfer of the output data from the TMB circuitry 116 to memory circuitry, such as system memory circuitry or processor cache memory circuitry. The method concludes at 414.

The figures below detail exemplary architectures and systems to implement embodiments of the above. In some embodiments, one or more hardware components and/or instructions described above are emulated as detailed below or implemented as software modules.

Embodiments of the instruction(s) detailed above may be embodied in a "generic vector friendly instruction format" which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used, however, the description below of the writemask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc.
is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands.
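As an illustration of fields defined by bit count and bit position, the following sketch extracts the fields of the ADD example from a hypothetical 16-bit toy encoding. The field layout here is invented for the example and is not the encoding of any real ISA:

```python
def field(word, lo, width):
    # Extract the bit field occupying bits [lo+width-1:lo] of an encoded word.
    return (word >> lo) & ((1 << width) - 1)

# Toy ADD encoding: opcode in bits [15:12], source1/destination register in
# bits [11:8], source2 register in bits [7:4]; the low nibble is unused.
insn = 0x1370                  # an occurrence of ADD r3, r7 (opcode 0x1)
opcode = field(insn, 12, 4)    # the opcode field selects the operation
src1_dst = field(insn, 8, 4)   # operand field: source1/destination register
src2 = field(insn, 4, 4)       # operand field: source2 register
```

Every occurrence of the instruction shares the same format (field positions), while the operand-field contents vary per occurrence.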
A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGs 5A and 5B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. FIG 5A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention; while FIG 5B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format 500 is shown for which are defined class A and class B instruction templates, both of which include no memory access 505 instruction templates and memory access 520 instruction templates.
The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, less and/or different vector operand sizes (e.g., 256 byte vector operands) with more, less, or different data element widths (e.g., 128 bit (16 byte) data element widths).

The class A instruction templates in FIG 5A include: 1) within the no memory access 505 instruction templates there is shown a no memory access, full round control type operation 510 instruction template and a no memory access, data transform type operation 515 instruction template; and 2) within the memory access 520 instruction templates there is shown a memory access, temporal 525 instruction template and a memory access, non-temporal 530 instruction template.
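The operand-size/element-width combinations listed above determine how many data elements a vector operand holds (e.g., a 64 byte vector with 4 byte elements holds 16 doublewords). A one-line sketch of that arithmetic:

```python
def element_count(vector_bytes, element_bytes):
    # Number of data elements in a vector operand of the given size; the
    # operand length must be a whole multiple of the element width.
    assert vector_bytes % element_bytes == 0
    return vector_bytes // element_bytes

assert element_count(64, 4) == 16   # 64 byte vector of doubleword-size elements
assert element_count(64, 8) == 8    # 64 byte vector of quadword-size elements
assert element_count(16, 2) == 8    # 16 byte vector of 16 bit elements
```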
The class B instruction templates in FIG 5B include: 1) within the no memory access 505 instruction templates there is shown a no memory access, write mask control, partial round control type operation 512 instruction template and a no memory access, write mask control, vsize type operation 517 instruction template; and 2) within the memory access 520 instruction templates there is shown a memory access, write mask control 527 instruction template.

The generic vector friendly instruction format 500 includes the following fields listed below in the order illustrated in FIGs 5A and 5B.

Format field 540 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 542 - its content distinguishes different base operations.

Register index field 544 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g. 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 546 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 505 instruction templates and memory access 520 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, less, or different ways to perform memory address calculations.

Augmentation operation field 550 - its content distinguishes which one of a variety of different operations to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 568, an alpha field 552, and a beta field 554.
The augmentation operation field 550 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 560 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale ∗ index + base).

Displacement Field 562A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale ∗ index + base + displacement).

Displacement Factor Field 562B (note that the juxtaposition of displacement field 562A directly over displacement factor field 562B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale ∗ index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operands total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 574 (described later herein) and the data manipulation field 554C. The displacement field 562A and the displacement factor field 562B are optional in the sense that they are not used for the no memory access 505 instruction templates and/or different embodiments may implement only one or none of the two.

Data element width field 564 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 570 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 570 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
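The merging/zeroing distinction can be modeled element-wise. The following list-based sketch is purely illustrative of the semantics described above:

```python
def apply_writemask(dest, result, mask, zeroing):
    # Per-element writemask semantics: a mask bit of 1 takes the new result;
    # a mask bit of 0 either preserves the old destination element (merging)
    # or sets the element to 0 (zeroing).
    out = []
    for old, new, m in zip(dest, result, mask):
        if m:
            out.append(new)
        elif zeroing:
            out.append(0)
        else:
            out.append(old)
    return out

dest   = [10, 20, 30, 40]   # destination before the operation
result = [1, 2, 3, 4]       # per-element result of the base/augmentation op
mask   = [1, 0, 1, 0]       # write mask: elements 0 and 2 are written

assert apply_writemask(dest, result, mask, zeroing=False) == [1, 20, 3, 40]
assert apply_writemask(dest, result, mask, zeroing=True)  == [1, 0, 3, 0]
```

Note that the masked elements need not be consecutive, which is what permits the partial vector operations mentioned above.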
While embodiments of the invention are described in which the write mask field's 570 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 570 content indirectly identifies that masking to be performed), alternative embodiments instead or additionally allow the write mask field's 570 content to directly specify the masking to be performed.

Immediate field 572 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.

Class field 568 - its content distinguishes between different classes of instructions. With reference to FIGs 5A and 5B, the contents of this field select between class A and class B instructions. In FIGs 5A and 5B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 568A and class B 568B for the class field 568 respectively in FIGs 5A and 5B).

Instruction Templates of Class A

In the case of the non-memory access 505 instruction templates of class A, the alpha field 552 is interpreted as an RS field 552A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 552A.1 and data transform 552A.2 are respectively specified for the no memory access, round type operation 510 and the no memory access, data transform type operation 515 instruction templates), while the beta field 554 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 505 instruction templates, the scale field 560, the displacement field 562A, and the displacement scale field 562B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access full round control type operation 510 instruction template, the beta field 554 is interpreted as a round control field 554A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 554A includes a suppress all floating point exceptions (SAE) field 556 and a round operation control field 558, alternative embodiments may encode both these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 558).

SAE field 556 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 556 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

Round operation control field 558 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 558 allows for the changing of the rounding mode on a per instruction basis.
In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 550 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access data transform type operation 515 instruction template, the beta field 554 is interpreted as a data transform field 554B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 520 instruction template of class A, the alpha field 552 is interpreted as an eviction hint field 552B, whose content distinguishes which one of the eviction hints is to be used (in FIG 5A, temporal 552B.1 and non-temporal 552B.2 are respectively specified for the memory access, temporal 525 instruction template and the memory access, non-temporal 530 instruction template), while the beta field 554 is interpreted as a data manipulation field 554C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 520 instruction templates include the scale field 560, and optionally the displacement field 562A or the displacement scale field 562B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred being dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

Temporal data is data likely to be reused soon enough to benefit from caching.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 552 is interpreted as a write mask control (Z) field 552C, whose content distinguishes whether the write masking controlled by the write mask field 570 should be a merging or a zeroing.

In the case of the non-memory access 505 instruction templates of class B, part of the beta field 554 is interpreted as an RL field 557A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 557A.1 and vector length (VSIZE) 557A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 512 instruction template and the no memory access, write mask control, VSIZE type operation 517 instruction template), while the rest of the beta field 554 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 505 instruction templates, the scale field 560, the displacement field 562A, and the displacement scale field 562B are not present.

In the no memory access, write mask control, partial round control type operation 512 instruction template, the rest of the beta field 554 is interpreted as a round operation field 559A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

Round operation control field 559A - just as round operation control field 558, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 559A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 550 content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 517 instruction template, the rest of the beta field 554 is interpreted as a vector length field 559B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

In the case of a memory access 520 instruction template of class B, part of the beta field 554 is interpreted as a broadcast field 557B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 554 is interpreted as the vector length field 559B.
The memory access 520 instruction templates include the scale field 560, and optionally the displacement field 562A or the displacement scale field 562B.

With regard to the generic vector friendly instruction format 500, a full opcode field 574 is shown including the format field 540, the base operation field 542, and the data element width field 564. While one embodiment is shown where the full opcode field 574 includes all of these fields, the full opcode field 574 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 574 provides the operation code (opcode).

The augmentation operation field 550, the data element width field 564, and the write mask field 570 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

The combination of write mask field and data element width field create typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes.
For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core, may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

FIG 6 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. FIG 6 shows a specific vector friendly instruction format 600 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 600 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extension thereof (e.g., AVX).
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from FIG 5 into which the fields from FIG 6 map are illustrated.

It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 600 in the context of the generic vector friendly instruction format 500 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 600 except where claimed. For example, the generic vector friendly instruction format 500 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 600 is shown as having fields of specific sizes. By way of specific example, while the data element width field 564 is illustrated as a one bit field in the specific vector friendly instruction format 600, the invention is not so limited (that is, the generic vector friendly instruction format 500 contemplates other sizes of the data element width field 564).

The specific vector friendly instruction format 600 includes the following fields listed below in the order illustrated in FIG 6A.

EVEX Prefix (Bytes 0-3) 602 - is encoded in a four-byte form.

Format Field 540 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 540 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 605 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [6] - X), and EVEX.B bit field (EVEX byte 1, bit [5] - B).
The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e. ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 610 - this is the first part of the REX' field 610 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 615 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 564 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W.
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 620 (EVEX Byte 2, bits [6:3]-vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 620 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 568 Class field (EVEX byte 2, bit [2]-U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 625 (EVEX byte 2, bits [1:0]-pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification).
Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 552 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 554 (EVEX byte 3, bits [6:4]-SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 610 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 570 (EVEX byte 3, bits [2:0]-kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 630 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M Field 640 (Byte 5) includes MOD field 642, Reg field 644, and R/M field 646. As previously described, the MOD field's 642 content distinguishes between memory access and non-memory access operations.
The role of Reg field 644 can be summarized into two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 646 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the content of the scale field 550 is used for memory address generation. SIB.xxx 654 and SIB.bbb 656 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 562A (Bytes 7-10) - when MOD field 642 contains 10, bytes 7-10 are the displacement field 562A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 562B (Byte 7) - when MOD field 642 contains 01, byte 7 is the displacement factor field 562B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address byte offsets between -128 and 127; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 562B is a reinterpretation of disp8; when using displacement factor field 562B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). 
Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 562B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 562B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field 572 operates as previously described.

Full Opcode Field
FIG 6B is a block diagram illustrating the fields of the specific vector friendly instruction format 600 that make up the full opcode field 574 according to one embodiment of the invention. Specifically, the full opcode field 574 includes the format field 540, the base operation field 542, and the data element width (W) field 564. The base operation field 542 includes the prefix encoding field 625, the opcode map field 615, and the real opcode field 630.

Register Index Field
FIG 6C is a block diagram illustrating the fields of the specific vector friendly instruction format 600 that make up the register index field 544 according to one embodiment of the invention. Specifically, the register index field 544 includes the REX field 605, the REX' field 610, the MODR/M.reg field 644, the MODR/M.r/m field 646, the VVVV field 620, xxx field 654, and the bbb field 656.

Augmentation Operation Field
FIG 6D is a block diagram illustrating the fields of the specific vector friendly instruction format 600 that make up the augmentation operation field 550 according to one embodiment of the invention. 
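The disp8*N scheme described above amounts to one sign extension and one multiply. A minimal sketch (function names are illustrative, not from any real implementation):

```python
def sign_extend8(byte):
    """Sign-extend an 8-bit value (0..255) to a signed integer."""
    return byte - 256 if byte >= 128 else byte

def disp8_times_n(disp_byte, n):
    """Actual displacement under disp8*N: the sign-extended 8-bit
    displacement factor scaled by the memory operand size N."""
    return sign_extend8(disp_byte) * n
```

With a 64-byte memory operand (N=64), a single displacement byte then spans -8192 to +8128 in 64-byte steps, versus only -128 to +127 for plain disp8.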
When the class (U) field 568 contains 0, it signifies EVEX.U0 (class A 568A); when it contains 1, it signifies EVEX.U1 (class B 568B). When U=0 and the MOD field 642 contains 11 (signifying a no memory access operation), the alpha field 552 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 552A. When the rs field 552A contains a 1 (round 552A.1), the beta field 554 (EVEX byte 3, bits [6:4]- SSS) is interpreted as the round control field 554A. The round control field 554A includes a one bit SAE field 556 and a two bit round operation field 558. When the rs field 552A contains a 0 (data transform 552A.2), the beta field 554 (EVEX byte 3, bits [6:4]- SSS) is interpreted as a three bit data transform field 554B. When U=0 and the MOD field 642 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 552 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 552B and the beta field 554 (EVEX byte 3, bits [6:4]- SSS) is interpreted as a three bit data manipulation field 554C.

When U=1, the alpha field 552 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 552C. When U=1 and the MOD field 642 contains 11 (signifying a no memory access operation), part of the beta field 554 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 557A; when it contains a 1 (round 557A.1) the rest of the beta field 554 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 559A, while when the RL field 557A contains a 0 (VSIZE 557A.2) the rest of the beta field 554 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 559B (EVEX byte 3, bits [6-5] - L1-0). 
When U=1 and the MOD field 642 contains 00, 01, or 10 (signifying a memory access operation), the beta field 554 (EVEX byte 3, bits [6:4]- SSS) is interpreted as the vector length field 559B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 557B (EVEX byte 3, bit [4] - B).

Exemplary Register Architecture
FIG 7 is a block diagram of a register architecture 700 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 710 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 600 operates on these overlaid register files as illustrated in the table below.

Adjustable Vector Length | Class | Operations | Registers
Instruction templates that do not include the vector length field 559B | A (Figure 5A; U=0) | 510, 515, 525, 530 | zmm registers (the vector length is 64 bytes)
Instruction templates that do not include the vector length field 559B | B (Figure 5B; U=1) | 512 | zmm registers (the vector length is 64 bytes)
Instruction templates that do include the vector length field 559B | B (Figure 5B; U=1) | 517, 527 | zmm, ymm, or xmm registers (the vector length is 64, 32, or 16 bytes) depending on the vector length field 559B

In other words, the vector length field 559B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 559B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 600 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. 
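The halving behavior of the vector length field 559B can be illustrated with a small sketch (names are illustrative; the maximum length of 64 bytes matches the 512-bit zmm registers above, with ymm/xmm aliasing the lower 256/128 bits):

```python
MAX_VECTOR_BYTES = 64  # zmm registers are 512 bits wide

def vector_length_bytes(ll, max_bytes=MAX_VECTOR_BYTES):
    """Each successive encoding of the vector length field selects half
    the preceding length: 64, 32, or 16 bytes here."""
    assert ll in (0, 1, 2)
    return max_bytes >> ll
```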
Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

Write mask registers 715 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 715 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 725 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 745, on which is aliased the MMX packed integer flat register file 750 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures
Processor cores may be implemented in different ways, for different purposes, and in different processors. 
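The write-mask behavior described above (the k0 encoding selecting a hardwired all-ones mask, and masked elements either keeping their old value or being zeroed) can be modeled with a short sketch. Helper names are hypothetical, and 16 one-bit mask elements are assumed to match the 0xFFFF hardwired mask:

```python
def effective_write_mask(kkk, mask_registers, n_elements=16):
    """kkk == 0 selects no masking: a hardwired all-ones mask is used
    instead of register k0 (0xFFFF for 16 elements)."""
    return (1 << n_elements) - 1 if kkk == 0 else mask_registers[kkk]

def apply_mask(dest, src, mask, zeroing=False):
    """Merge-masking keeps the old destination element where the mask
    bit is 0; the zeroing variant writes 0 there instead."""
    return [s if (mask >> i) & 1 else (0 if zeroing else d)
            for i, (d, s) in enumerate(zip(dest, src))]
```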
For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures
In-order and out-of-order core block diagram
FIG 8A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. 
Figure 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGs 8A and 8B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG 8A , a processor pipeline 800 includes a fetch stage 802, a length decode stage 804, a decode stage 806, an allocation stage 808, a renaming stage 810, a scheduling (also known as a dispatch or issue) stage 812, a register read/memory read stage 814, an execute stage 816, a write back/memory write stage 818, an exception handling stage 822, and a commit stage 824.

FIG 8B shows processor core 890 including a front end unit 830 coupled to an execution engine unit 850, and both are coupled to a memory unit 870. The core 890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 890 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 830 includes a branch prediction unit 832 coupled to an instruction cache unit 834, which is coupled to an instruction translation lookaside buffer (TLB) 836, which is coupled to an instruction fetch unit 838, which is coupled to a decode unit 840. 
The decode unit 840 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 840 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 840 or otherwise within the front end unit 830). The decode unit 840 is coupled to a rename/allocator unit 852 in the execution engine unit 850.

The execution engine unit 850 includes the rename/allocator unit 852 coupled to a retirement unit 854 and a set of one or more scheduler unit(s) 856. The scheduler unit(s) 856 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 856 is coupled to the physical register file(s) unit(s) 858. Each of the physical register file(s) units 858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 858 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. 
The physical register file(s) unit(s) 858 is overlapped by the retirement unit 854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 854 and the physical register file(s) unit(s) 858 are coupled to the execution cluster(s) 860. The execution cluster(s) 860 includes a set of one or more execution units 862 and a set of one or more memory access units 864. The execution units 862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 856, physical register file(s) unit(s) 858, and execution cluster(s) 860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 864). 
It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 864 is coupled to the memory unit 870, which includes a data TLB unit 872 coupled to a data cache unit 874 coupled to a level 2 (L2) cache unit 876. In one exemplary embodiment, the memory access units 864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 872 in the memory unit 870. The instruction cache unit 834 is further coupled to the level 2 (L2) cache unit 876 in the memory unit 870. The L2 cache unit 876 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 800 as follows: 1) the instruction fetch 838 performs the fetch and length decoding stages 802 and 804; 2) the decode unit 840 performs the decode stage 806; 3) the rename/allocator unit 852 performs the allocation stage 808 and renaming stage 810; 4) the scheduler unit(s) 856 performs the schedule stage 812; 5) the physical register file(s) unit(s) 858 and the memory unit 870 perform the register read/memory read stage 814; 6) the execution cluster 860 performs the execute stage 816; 7) the memory unit 870 and the physical register file(s) unit(s) 858 perform the write back/memory write stage 818; 8) various units may be involved in the exception handling stage 822; and 9) the retirement unit 854 and the physical register file(s) unit(s) 858 perform the commit stage 824.

The core 890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the 
instruction(s) described herein. In one embodiment, the core 890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 834/874 and a shared L2 cache unit 876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture
FIGs 9A and 9B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. 
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG 9A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 902 and with its local subset of the Level 2 (L2) cache 904, according to embodiments of the invention. In one embodiment, an instruction decoder 900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 906 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 908 and a vector unit 910 use separate register sets (respectively, scalar registers 912 and vector registers 914) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 906, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 904 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 904. Data read by a processor core is stored in its L2 cache subset 904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. 
Each ring data-path is 1012 bits wide per direction.

FIG 9B is an expanded view of part of the processor core in FIG 9A according to embodiments of the invention. FIG 9B includes an L1 data cache 906A, part of the L1 cache 906, as well as more detail regarding the vector unit 910 and the vector registers 914. Specifically, the vector unit 910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 928), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 920, numeric conversion with numeric convert units 922A-B, and replication with replication unit 924 on the memory input. Write mask registers 926 allow predicating resulting vector writes.

FIG 10 is a block diagram of a processor 1000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG 10 illustrate a processor 1000 with a single core 1002A, a system agent 1010, and a set of one or more bus controller units 1016, while the optional addition of the dashed lined boxes illustrates an alternative processor 1000 with multiple cores 1002A-N, a set of one or more integrated memory controller unit(s) 1014 in the system agent unit 1010, and special purpose logic 1008.

Thus, different implementations of the processor 1000 may include: 1) a CPU with the special purpose logic 1008 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1002A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1002A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1002A-N being a large number of general purpose in-order 
cores. Thus, the processor 1000 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1000 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1012 interconnects the integrated graphics logic 1008, the set of shared cache units 1006, and the system agent unit 1010/integrated memory controller unit(s) 1014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1006 and cores 1002A-N.

In some embodiments, one or more of the cores 1002A-N are capable of multi-threading. The system agent 1010 includes those components coordinating and operating cores 1002A-N. The system agent unit 1010 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1002A-N and the integrated graphics logic 1008. 
The display unit is for driving one or more externally connected displays.

The cores 1002A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures
FIG 11 , FIG 12 , FIG 13 , and FIG 14 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

Referring now to FIG 11 , shown is a block diagram of a system 1100 in accordance with one embodiment of the present invention. The system 1100 may include one or more processors 1110, 1115, which are coupled to a controller hub 1120. In one embodiment, the controller hub 1120 includes a graphics memory controller hub (GMCH) 1190 and an Input/Output Hub (IOH) 1150 (which may be on separate chips); the GMCH 1190 includes memory and graphics controllers to which are coupled memory 1140 and a coprocessor 1145; the IOH 1150 couples input/output (I/O) devices 1160 to the GMCH 1190. 
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1140 and the coprocessor 1145 are coupled directly to the processor 1110, and the controller hub 1120 is in a single chip with the IOH 1150.

The optional nature of additional processors 1115 is denoted in FIG 11 with broken lines. Each processor 1110, 1115 may include one or more of the processing cores described herein and may be some version of the processor 1000.

The memory 1140 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1120 communicates with the processor(s) 1110, 1115 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1195.

In one embodiment, the coprocessor 1145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1120 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1110, 1115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1145. Accordingly, the processor 1110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1145. 
Coprocessor(s) 1145 accept and execute the received coprocessor instructions.

Referring now to FIG 12 , shown is a block diagram of a first more specific exemplary system 1200 in accordance with an embodiment of the present invention. As shown in FIG 12 , multiprocessor system 1200 is a point-to-point interconnect system, and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. Each of processors 1270 and 1280 may be some version of the processor 1000. In one embodiment of the invention, processors 1270 and 1280 are respectively processors 1110 and 1115, while coprocessor 1238 is coprocessor 1145. In another embodiment, processors 1270 and 1280 are respectively processor 1110 and coprocessor 1145.

Processors 1270 and 1280 are shown including integrated memory controller (IMC) units 1272 and 1282, respectively. Processor 1270 also includes as part of its bus controller units point-to-point (P-P) interfaces 1276 and 1278; similarly, second processor 1280 includes P-P interfaces 1286 and 1288. Processors 1270, 1280 may exchange information via a point-to-point (P-P) interface 1250 using P-P interface circuits 1278, 1288. As shown in FIG 12 , IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

Processors 1270, 1280 may each exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point to point interface circuits 1276, 1294, 1286, 1298. Chipset 1290 may optionally exchange information with the coprocessor 1238 via a high-performance interface 1239. 
In one embodiment, the coprocessor 1238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1290 may be coupled to a first bus 1216 via an interface 1296. In one embodiment, first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG 12 , various I/O devices 1214 may be coupled to first bus 1216, along with a bus bridge 1218 which couples first bus 1216 to a second bus 1220. In one embodiment, one or more additional processor(s) 1215, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, for example, graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1216. In one embodiment, second bus 1220 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1220 including, for example, a keyboard and/or mouse 1222, communication devices 1227, and a storage unit 1228 such as a disk drive or other mass storage device which may include instructions/code and data 1230, in one embodiment. Further, an audio I/O 1224 may be coupled to the second bus 1220. Note that other architectures are possible. 
For example, instead of the point-to-point architecture of FIG 12, a system may implement a multi-drop bus or other such architecture.
Referring now to FIG 13, shown is a block diagram of a second more specific exemplary system 1300 in accordance with an embodiment of the present invention. Like elements in FIGs 12 and 13 bear like reference numerals, and certain aspects of FIG 12 have been omitted from FIG 13 in order to avoid obscuring other aspects of FIG 13.
FIG 13 illustrates that the processors 1270, 1280 may include integrated memory and I/O control logic ("CL") 1272 and 1282, respectively. Thus, the CL 1272, 1282 include integrated memory controller units and include I/O control logic. FIG 13 illustrates that not only are the memories 1232, 1234 coupled to the CL 1272, 1282, but also that I/O devices 1314 are also coupled to the control logic 1272, 1282. Legacy I/O devices 1315 are coupled to the chipset 1290.
Referring now to FIG 14, shown is a block diagram of a SoC 1400 in accordance with an embodiment of the present invention. Similar elements in FIG 10 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG 14, an interconnect unit(s) 1402 is coupled to: an application processor 1410 which includes a set of one or more cores 202A-N and shared cache unit(s) 1006; a system agent unit 1010; a bus controller unit(s) 1016; an integrated memory controller unit(s) 1014; a set of one or more coprocessors 1420 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1420 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code, such as code 1230 illustrated in FIG 12, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores", may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.
Emulation (including binary translation, code morphing, etc.)
In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part on and part off processor.
FIG 15 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG 15 shows that a program in a high level language 1502 may be compiled using an x86 compiler 1504 to generate x86 binary code 1506 that may be natively executed by a processor with at least one x86 instruction set core 1516. The processor with at least one x86 instruction set core 1516 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1504 represents a compiler that is operable to generate x86 binary code 1506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1516.
Similarly, FIG 15 shows that the program in the high level language 1502 may be compiled using an alternative instruction set compiler 1508 to generate alternative instruction set binary code 1510 that may be natively executed by a processor without at least one x86 instruction set core 1514 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1512 is used to convert the x86 binary code 1506 into code that may be natively executed by the processor without an x86 instruction set core 1514. This converted code is not likely to be the same as the alternative instruction set binary code 1510 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1506.
While FIGs 8 and 9 illustrate various operations according to one or more embodiments, it is to be understood that not all of the operations depicted in FIGs 8 and 9 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIGs 8 and 9, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure.
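The instruction conversion described for FIG 15 can be illustrated with a minimal software sketch. The mnemonics, the mapping table, and the one-to-many expansion below are invented for illustration only; a real binary translator such as instruction converter 1512 operates on binary machine code and is far more involved:

```python
# Toy illustration of a software instruction converter: rewrite each
# instruction of a hypothetical source instruction set into one or more
# instructions of a hypothetical target instruction set.
CONVERSION_TABLE = {
    # one source instruction may expand to several target instructions
    "PUSH": ["SUB sp, sp, #4", "STR {0}, [sp]"],
    "MOV": ["MOV {0}, {1}"],
}

def convert(source_program):
    """Translate each source instruction via the mapping table."""
    target_program = []
    for line in source_program:
        opcode, *operands = line.split()
        operands = [op.rstrip(",") for op in operands]
        for template in CONVERSION_TABLE[opcode]:
            target_program.append(template.format(*operands))
    return target_program

print(convert(["MOV r1, r2", "PUSH r1"]))
# → ['MOV r1, r2', 'SUB sp, sp, #4', 'STR r1, [sp]']
```

The table-driven expansion mirrors the point made above: the converted code need not match what an alternative instruction set compiler would emit, but it accomplishes the same general operation using target-set instructions.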
Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.
As used in this application and in the claims, a list of items joined by the term "and/or" can mean any combination of the listed items. For example, the phrase "A, B and/or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. As used in this application and in the claims, a list of items joined by the term "at least one of" can mean any combination of the listed terms. For example, the phrase "at least one of A, B or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C.
As used in any embodiment herein, the terms "system" or "module" may refer to, for example, software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage mediums and/or devices. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices.
As used in any embodiment herein, the term "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry or future computing paradigms including, for example, massive parallelism, analog or quantum computing, hardware embodiments of accelerators such as neural net processors and non-silicon implementations of the above.
The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc.
Any of the operations described herein may be implemented in a system that includes one or more mediums (e.g., non-transitory storage mediums) having stored therein, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), embedded multimedia cards (eMMCs), secure digital input/output (SDIO) cards, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software executed by a programmable control device.
Thus, the present disclosure is directed to systems and methods for performing one or more operations on a two dimensional tile register using an accelerator that includes a tiled matrix multiplication unit (TMU). The processor circuitry includes reservation station (RS) circuitry to communicatively couple the processor circuitry to the TMU.
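This RS-to-TMU coupling, in which the RS keeps a shadow dispatch queue (RS-TDQ) in lockstep with the TMU's dispatch queue (TDQ), can be sketched as a simplified software model. The class, method, and operation names below are invented for illustration; the actual RS and TDQ are hardware structures, not software objects:

```python
from collections import deque

# Hypothetical model of the RS shadow queue (RS-TDQ) mirroring the TMU
# dispatch queue (TDQ). Operations dispatch in order to both queues; the
# TMU's completion notifications let the RS retire operations, and the RS
# may cancel queued operations tied to branch mispredictions.
class ReservationStation:
    def __init__(self):
        self.tdq = deque()      # stands in for the TDQ inside the TMU
        self.rs_tdq = deque()   # shadow copy maintained by the RS
        self.retired = []

    def dispatch(self, op):
        # the same operation goes to both queues, in dispatch order
        self.tdq.append(op)
        self.rs_tdq.append(op)

    def on_tmu_complete(self):
        # completion notification from the TMU retires the oldest operation
        op = self.tdq.popleft()
        assert op == self.rs_tdq.popleft()  # queues stay in lockstep
        self.retired.append(op)
        return op

    def cancel(self, op):
        # cancellation (e.g., after a branch misprediction) removes the
        # operation from both the shadow queue and the TMU's queue
        self.tdq.remove(op)
        self.rs_tdq.remove(op)

rs = ReservationStation()
rs.dispatch("tmul A,B")
rs.dispatch("tmul C,D")
rs.on_tmu_complete()    # first operation retires
rs.cancel("tmul C,D")   # second operation cancelled before execution
print(rs.retired, list(rs.tdq))
# → ['tmul A,B'] []
```

Because the duration of each TMU operation is unknown before execution, this mirrored bookkeeping is what lets the RS track in-flight operations and unwind speculative ones.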
The RS circuitry coordinates the operations performed by the TMU. TMU dispatch queue (TDQ) circuitry in the TMU maintains the operations received from the RS circuitry in the order that the operations are received from the RS circuitry. Since the duration of each operation is not known prior to execution by the TMU, the RS circuitry maintains shadow dispatch queue (RS-TDQ) circuitry that mirrors the operations in the TDQ circuitry. Communication between the RS circuitry 134 and the TMU provides the RS circuitry with notification of successfully executed operations and allows the RS circuitry to cancel operations where the operations are associated with branch mispredictions and/or non-retired speculatively executed instructions.
The following examples pertain to further embodiments. The following examples of the present disclosure may comprise subject material such as at least one device, a method, at least one machine-readable medium for storing instructions that when executed cause a machine to perform acts based on the method, means for performing acts based on the method, and/or a system for decomposing systolic array circuitry to perform one or more operations on a two dimensional tile register using an accelerator that includes a tiled matrix multiplication unit (TMU).
According to example 1, there is provided core circuitry.
The core circuitry includes: processor circuitry; re-order buffer (ROB) circuitry coupled to the processor circuitry; and reservation station (RS) circuitry that includes matrix multiplication unit dispatch shadow queue (RS-TDQ) circuitry, the RS circuitry to: dispatch at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatch the at least one first matrix operation to the RS-TDQ circuitry; receive a dispatch indication from the TMU upon execution of the at least one first matrix operation by the TMU; communicate, to the ROB circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; cause the ROB circuitry to commit the at least one first matrix operation; and cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.
Example 2 may include elements of example 1, and the core circuitry may further include: cache memory circuitry communicatively coupled to the processor circuitry; wherein to cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry, the RS circuitry to further: cause the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to the cache memory circuitry.
Example 3 may include elements of any of examples 1 or 2 where to cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry, the RS circuitry to further: cause the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to system memory circuitry communicatively coupled to the core circuitry.
Example 4 may include elements of any of examples 1 through 3 where the RS
circuitry may further: dispatch at least one second matrix operation to the TDQ circuitry, the second matrix operation dependent upon the data from the at least one first matrix operation; dispatch the at least one second matrix operation to the RS-TDQ circuitry; receive a dispatch indication from the TMU upon execution of the at least one second matrix operation by the TMU; communicate, to the ROB circuitry, a signal that includes information indicative of a completion of the at least one second matrix operation by the TMU; cause the ROB circuitry to commit the at least one second matrix operation; and cause a transfer of data from the at least one second matrix operation from the TMB circuitry to memory circuitry.
Example 5 may include elements of any of examples 1 through 4 where to dispatch at least one second matrix operation to the TDQ circuitry, the RS circuitry may further: dispatch at least one second matrix operation to the TDQ circuitry responsive to dispatch of the at least one first matrix operation to the TDQ circuitry.
Example 6 may include elements of any of examples 1 through 5 where to dispatch at least one second matrix operation to the TDQ circuitry, the RS circuitry may further: dispatch at least one second matrix operation to the TDQ circuitry responsive to receipt of an indication of the dispatch of the at least one first matrix operation to matrix multiplication (TMM) circuitry in the TMU.
Example 7 may include elements of any of examples 1 through 6 where to cause a transfer of data from the at least one second matrix operation from TMU buffer (TMB) circuitry to memory circuitry, the RS circuitry may further: cause the transfer of a two dimensional array generated by the at least one second matrix operation from the TMB circuitry to memory circuitry that includes at least one of: processor cache circuitry or system memory circuitry.
Example 8 may include elements of any of examples 1 through 7 where the RS circuitry to further, responsive
to receipt of an indication, from the processor circuitry, of at least one of: a mis-speculation associated with the at least one first matrix operation or a mis-prediction associated with the at least one first matrix operation: cause a cancellation of the at least one first matrix operation from the RS-TDQ circuitry; and cause a cancellation of the at least one first matrix operation from the TDQ circuitry.
According to example 9, there is provided a method of performing one or more matrix operations. The method may include: dispatching, by reservation station (RS) circuitry, at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatching, by the RS circuitry, the at least one first matrix operation to the RS-TDQ circuitry; communicating, by the RS circuitry, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; causing, by the RS circuitry, the ROB circuitry to commit the at least one first matrix operation; and causing, by the RS circuitry, a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.
Example 10 may include elements of example 9 where causing the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry may further include: causing, by the RS circuitry, the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to processor cache memory circuitry communicatively coupled to core circuitry.
Example 11 may include elements of any of examples 9 or 10 where causing the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry may further include: causing, by the RS circuitry, the transfer of an output tile register that
includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to system memory circuitry communicatively coupled to core circuitry.
Example 12 may include elements of any of examples 9 through 11, and the method may additionally include: dispatching, by the RS circuitry, at least one second matrix operation to the TDQ circuitry, the second matrix operation using at least a portion of the data from the at least one first matrix operation; dispatching, by the RS circuitry, the at least one second matrix operation to the RS-TDQ circuitry; communicating, by the RS circuitry, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one second matrix operation by the TMU; causing, by the RS circuitry, the ROB circuitry to commit the at least one second matrix operation; and causing, by the RS circuitry, a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry.
Example 13 may include elements of any of examples 9 through 12 where dispatching at least one second matrix operation to the TDQ circuitry may further include: dispatching, by the RS circuitry, the at least one second matrix operation to the TDQ circuitry responsive to dispatching the at least one first matrix operation to the TDQ circuitry.
Example 14 may include elements of any of examples 9 through 13 where dispatching at least one second matrix operation to the TDQ circuitry may further include: dispatching, by the RS circuitry, the at least one second matrix operation to the TDQ circuitry responsive to receiving an indication of the dispatch of the at least one first matrix operation to matrix multiplication (TMM) circuitry in the TMU.
Example 15 may include elements of any of examples 9 through 14 where causing a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry may
further include: causing, by the RS circuitry, the transfer of a two dimensional array generated by the at least one second matrix operation from the TMB circuitry to memory circuitry that includes at least one of: processor cache circuitry or system memory circuitry.
Example 16 may include elements of any of examples 9 through 15, and the method may additionally include: responsive to receipt of an indication, from the processor circuitry, of at least one of: a mis-speculation associated with the at least one first matrix operation or a mis-prediction associated with the at least one first matrix operation: causing, by the RS circuitry, a cancellation of the at least one first matrix operation from the RS-TDQ circuitry; and causing, by the RS circuitry, a cancellation of the at least one first matrix operation from the TDQ circuitry.
According to example 17, there is provided a system for performing one or more matrix operations. The system may include: means for dispatching at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU); means for dispatching the at least one first matrix operation to RS-TDQ circuitry; means for communicating, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; means for causing the ROB circuitry to commit the at least one first matrix operation; and means for causing a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.
Example 18 may include elements of example 17 where the means for causing the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry may further include: means for causing the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to processor cache memory circuitry communicatively
coupled to core circuitry.
Example 19 may include elements of any of examples 17 or 18 where the means for causing the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry may further include: means for causing the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to system memory circuitry communicatively coupled to core circuitry.
Example 20 may include elements of any of examples 17 through 19, and the system may further include: means for dispatching at least one second matrix operation to the TDQ circuitry, the second matrix operation using at least a portion of the data from the at least one first matrix operation; means for dispatching the at least one second matrix operation to the RS-TDQ circuitry; means for communicating, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one second matrix operation by the TMU; means for causing the ROB circuitry to commit the at least one second matrix operation; and means for causing a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry.
Example 21 may include elements of any of examples 17 through 20 where the means for dispatching at least one second matrix operation to the TDQ circuitry may further include: means for dispatching the at least one second matrix operation to the TDQ circuitry responsive to dispatching the at least one first matrix operation to the TDQ circuitry.
Example 22 may include elements of any of examples 17 through 21 where the means for dispatching at least one second matrix operation to the TDQ circuitry may further include: means for dispatching the at least one second matrix operation to the TDQ circuitry responsive to receiving an indication of the dispatch of the at least one first matrix operation to matrix
multiplication (TMM) circuitry in the TMU.
Example 23 may include elements of any of examples 17 through 22 where the means for causing a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry may further include: means for causing the transfer of a two dimensional array generated by the at least one second matrix operation from the TMB circuitry to memory circuitry that includes at least one of: processor cache circuitry or system memory circuitry.
Example 24 may include elements of any of examples 17 through 23, and the system may additionally include: responsive to receipt of an indication, from the processor circuitry, of at least one of: a mis-speculation associated with the at least one first matrix operation or a mis-prediction associated with the at least one first matrix operation: means for causing a cancellation of the at least one first matrix operation from the RS-TDQ circuitry; and means for causing a cancellation of the at least one first matrix operation from the TDQ circuitry.
According to example 25, there is provided a non-transitory storage device.
The non-transitory storage device includes instructions that, when executed by reservation station (RS) circuitry, cause the RS circuitry to: dispatch at least one first matrix operation to TDQ circuitry disposed in a matrix multiplication unit (TMU) communicatively coupled to the RS circuitry; dispatch the at least one first matrix operation to the RS-TDQ circuitry; communicate, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one first matrix operation by the TMU; cause the ROB circuitry to commit the at least one first matrix operation; and cause a transfer of data from the at least one first matrix operation from TMU buffer (TMB) circuitry to memory circuitry.
Example 26 may include elements of example 25 where the instructions that cause the RS circuitry to cause the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry further cause the RS circuitry to: cause the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to processor cache memory circuitry communicatively coupled to core circuitry.
Example 27 may include elements of any of examples 25 or 26 where the instructions that cause the RS circuitry to cause the transfer of data from the at least one first matrix operation from the TMB circuitry to the memory circuitry further cause the RS circuitry to: cause the transfer of an output tile register that includes a two dimensional array generated by the at least one first matrix operation from the TMB circuitry to system memory circuitry communicatively coupled to core circuitry.
Example 28 may include elements of any of examples 25 through 27 where the instructions, when executed by the RS circuitry, further cause the RS circuitry to: dispatch at least one second matrix operation to the TDQ circuitry, the second matrix operation using at least a
portion of the data from the at least one first matrix operation; dispatch the at least one second matrix operation to the RS-TDQ circuitry; communicate, to reorder buffer (ROB) circuitry, a signal that includes information indicative of a completion of the at least one second matrix operation by the TMU; cause the ROB circuitry to commit the at least one second matrix operation; and cause a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry.
Example 29 may include elements of any of examples 25 through 28 where the instructions that cause the RS circuitry to dispatch at least one second matrix operation to the TDQ circuitry further cause the RS circuitry to: dispatch the at least one second matrix operation to the TDQ circuitry responsive to dispatching the at least one first matrix operation to the TDQ circuitry.
Example 30 may include elements of any of examples 25 through 29 where the instructions that cause the RS circuitry to dispatch at least one second matrix operation to the TDQ circuitry further cause the RS circuitry to: dispatch the at least one second matrix operation to the TDQ circuitry responsive to receiving an indication of the dispatch of the at least one first matrix operation to matrix multiplication (TMM) circuitry in the TMU.
Example 31 may include elements of any of examples 25 through 30 where the instructions that cause the RS circuitry to cause a transfer of data from the at least one second matrix operation from the TMU buffer (TMB) circuitry to the memory circuitry further cause the RS circuitry to: cause the transfer of a two dimensional array generated by the at least one second matrix operation from the TMB circuitry to memory circuitry that includes at least one of: processor cache circuitry or system memory circuitry.
Example 32 may include elements of any of examples 25 through 31 where the instructions, when executed by the RS circuitry, further cause the RS
circuitry to, responsive to receipt of an indication, from the processor circuitry, of at least one of: a mis-speculation associated with the at least one first matrix operation or a mis-prediction associated with the at least one first matrix operation: cause a cancellation of the at least one first matrix operation from the RS-TDQ circuitry; and cause a cancellation of the at least one first matrix operation from the TDQ circuitry.
According to example 33, there is provided a matrix multiplication unit (TMU). The TMU may include: TMU data storage buffer (TMB) circuitry; TMU operation queue (TMQ) circuitry; TMU matrix multiplication (TMM) circuitry; and TMU control (TMC) circuitry coupled to the TMB circuitry, the TMQ circuitry, and the TMM circuitry, the TMC circuitry to: cause the TMB circuitry to store at least one tile register in the TMB circuitry, the at least one tile register received from reservation station (RS) circuitry communicatively coupled to a core circuit, wherein each of the at least one tile registers includes a respective two-dimensional data array; cause the TMQ circuitry to store at least one first matrix multiplication operation using the at least one tile register; cause the TMM circuitry to execute the at least one first matrix multiplication operation on the one or more tile registers to generate at least one first output tile register; cause the TMB circuitry to store the at least one first output tile register; and cause a transfer of the at least one first output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry.
Example 34 may include elements of example 33, and the TMQ circuitry may further: cancel the at least one first matrix multiplication operation responsive to receipt of a cancellation request from the RS circuitry.
Example 35 may include elements of any of examples 33 or 34 where to cause a transfer of the at least one output tile register to memory circuitry external to the
TMU responsive to a receipt of a request by the RS circuitry, the TMC circuitry may further: cause the TMB circuitry to transfer the at least one output tile register to cache memory circuitry, the cache memory circuitry communicatively coupled to the processor circuitry.

Example 36 may include elements of any of examples 33 through 35 where to cause a transfer of the at least one output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry, the TMC circuitry may further: cause the TMB circuitry to transfer the at least one output tile register to system memory circuitry, the system memory circuitry communicatively coupled to the processor circuitry.

Example 37 may include elements of any of examples 33 through 36 where the TMC circuitry may further: cause the TMQ circuitry to store at least one second matrix multiplication operation, the second matrix operation using, as an input, the first output tile register; cause the TMM circuitry to execute the at least one second matrix multiplication operation on the first output tile register to generate the at least one second output tile register; cause the TMB circuitry to store the at least one second output tile register; and cause a transfer of the at least one second output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry.

Example 38 may include elements of any of examples 33 through 37 where to cause the TMQ circuitry to store at least one second matrix multiplication operation, the TMC circuitry may further: cause the TMQ circuitry to store the at least one second matrix operation responsive to causing the TMM circuitry to execute the at least one first matrix multiplication operation on the one or more tile registers.

According to example 39, there is provided a non-transitory storage device that includes instructions that, when executed by TMU control (TMC) circuitry, cause the TMC circuitry
to: cause the TMB circuitry to store at least one tile register in the TMB circuitry, the at least one tile register received from reservation station (RS) circuitry communicatively coupled to a core circuit, wherein each of the at least one tile registers includes a respective two-dimensional data array; cause the TMQ circuitry to store at least one first matrix multiplication operation using the at least one tile register; cause the TMM circuitry to execute the at least one first matrix multiplication operation on the one or more tile registers to generate at least one first output tile register; cause the TMB circuitry to store the at least one first output tile register; and cause a transfer of the at least one first output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry.

Example 40 may include elements of example 39 where the instructions further cause the TMC circuitry to: cause the TMQ circuitry to cancel the at least one first matrix multiplication operation responsive to receipt of a cancellation request from the RS circuitry.

Example 41 may include elements of any of examples 39 or 40 where the instructions that cause the TMC circuitry to cause a transfer of the at least one output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry further cause the TMC circuitry to: cause the TMB circuitry to transfer the at least one output tile register to cache memory circuitry, the cache memory circuitry communicatively coupled to the processor circuitry.

Example 42 may include elements of any of examples 39 through 41 where the instructions that cause the TMC circuitry to cause a transfer of the at least one output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry further cause the TMC circuitry to: cause the TMB circuitry to transfer the at least one output tile register to
system memory circuitry, the system memory circuitry communicatively coupled to the processor circuitry.

Example 43 may include elements of any of examples 39 through 42 where the instructions further cause the TMC circuitry to: cause the TMQ circuitry to store at least one second matrix multiplication operation, the second matrix operation using, as an input, the first output tile register; cause the TMM circuitry to execute the at least one second matrix multiplication operation on the first output tile register to generate the at least one second output tile register; cause the TMB circuitry to store the at least one second output tile register; and cause a transfer of the at least one second output tile register to memory circuitry external to the TMU responsive to a receipt of a request by the RS circuitry.

Example 44 may include elements of any of examples 39 through 43 where the instructions that cause the TMC circuitry to cause the TMQ circuitry to store at least one second matrix multiplication operation further cause the TMC circuitry to: cause the TMQ circuitry to store the at least one second matrix operation responsive to causing the TMM circuitry to execute the at least one first matrix multiplication operation on the one or more tile registers.

According to example 45, there is provided a tiled register matrix multiplication system, the system being arranged to perform the method of any of examples 9 through 16.

According to example 46, there is provided a chipset arranged to perform the method of any of examples 9 through 16.

According to example 47, there is provided at least one machine-readable storage device that includes a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method according to any of examples 9 through 16.

According to example 48, there is provided a device that includes a tiled register matrix multiplication system, the device being arranged to perform the
method of any of examples 9 through 16.

The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.

As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
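As a rough illustration of the tile-register flow enumerated in examples 33 through 44 above (store two-dimensional input tiles in a buffer, execute a first matrix multiplication, then chain a second operation that consumes the first output tile), the following is a minimal Python sketch. The `TileBuffer` class, the tile names, and the `matmul` helper are all illustrative assumptions and do not correspond to any actual TMU interface.

```python
# Minimal sketch of chained tile-register matrix operations: a TMB-like
# buffer stores named 2-D tiles, a TMM-like step multiplies them, and a
# second operation consumes the first output tile. All names here are
# illustrative assumptions, not part of any real TMU interface.

def matmul(a, b):
    """Plain 2-D matrix multiply on nested lists (stand-in for TMM circuitry)."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

class TileBuffer:
    """Stand-in for TMB circuitry: holds named 2-D tile registers."""
    def __init__(self):
        self.tiles = {}

    def store(self, name, tile):
        self.tiles[name] = tile

    def load(self, name):
        return self.tiles[name]

# First matrix operation: t2 = t0 x t1.
tmb = TileBuffer()
tmb.store("t0", [[1, 2], [3, 4]])
tmb.store("t1", [[5, 6], [7, 8]])
tmb.store("t2", matmul(tmb.load("t0"), tmb.load("t1")))

# Second matrix operation chained on the first output: t3 = t2 x t1,
# mirroring the "second matrix operation using, as an input, the first
# output tile register" language of the examples.
tmb.store("t3", matmul(tmb.load("t2"), tmb.load("t1")))

print(tmb.load("t2"))  # first output tile
print(tmb.load("t3"))  # second output tile
```

A transfer of `t2` or `t3` out of the buffer would correspond to the examples' "transfer of the output tile register to memory circuitry external to the TMU".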
The instruction set architecture (ISA) for an application specific signal processor (ASSP) is tailored to digital signal processing applications. The instruction set architecture implemented with the ASSP is adapted to DSP algorithmic structures. The instruction word of the ISA is typically 20 bits but can be expanded to 40 bits to control two instructions to be executed in series or parallel. All DSP instructions of the ISA are dyadic DSP instructions performing two operations with one instruction in one cycle. The DSP instructions or operations in the preferred embodiment include a multiply instruction (MULT) (504), an addition instruction (ADD) (510), a minimize/maximize instruction (MIN/MAX) also referred to as an extrema instruction, and a no operation instruction (NOP), each having an associated operation code ("opcode"). The present invention efficiently executes DSP instructions by means of the instruction set architecture and the hardware architecture of the application specific signal processor.
CLAIMS

What is claimed is:

1. A signal processor for performing dyadic digital signal processing instructions having main operations and sub operations, the signal processor comprising: at least one signal processing unit including, a first multiplier and a first adder to execute a main operation of a dyadic digital signal processing instruction, a second multiplier and a second adder to execute a sub operation of the dyadic digital signal processing instruction, each of the first and second adders and the first and second multipliers having a multiplexer at its input to configure the signal processing unit to execute the main operation and the sub operation of the dyadic digital signal processing instruction, and an accumulator having registers to couple to the first multiplier or the first adder to provide operands or store intermediate results therefrom and to couple to the second multiplier or the second adder to provide an operand for the sub operation of the dyadic digital signal processing instruction and to store results of the sub operation, the accumulator having a register to couple to a buffer memory to store the digital signal processed output generated by the dyadic digital signal processing instruction.

2. The signal processor of claim 1 for performing dyadic digital signal processing instructions, the signal processor further comprising: a reduced instruction set computer (RISC) control unit and a pipeline controller to predecode the dyadic digital signal processing instruction into a plurality of preliminary instruction execution signals, and wherein the at least one signal processing unit further includes a plurality of final decoders coupled to a plurality of multiplexers, each of the first and second adders and first and second multipliers having an input multiplexer from the plurality of multiplexers to receive operands responsive to the selection by those of the plurality of final decoders coupled thereto.

3.
The signal processor of claim 1 for performing dyadic digital signal processing instructions, wherein the main operation of the dyadic digital signal processing instruction is one of the set of multiplication, addition, comparison with a minimum or maximum value, and no operation.

4. The signal processor of claim 3 for performing dyadic digital signal processing instructions, wherein the sub operation of the dyadic digital signal processing instruction is one of the set of multiplication, addition, comparison with a minimum or maximum value, and no operation which differs from the main operation.

5. The signal processor of claim 1 for performing dyadic digital signal processing instructions, wherein the main operation of the dyadic digital signal processing instruction is selected to be one of the set of multiplication, addition, comparison with a minimum or maximum value, and no operation and the sub operation of the dyadic digital signal processing instruction is selected to be a no operation.

6. The signal processor of claim 2 for performing dyadic digital signal processing instructions, wherein the RISC control unit includes three adders, a memory address generator, a multiplier, and a barrel shifter to predecode the dyadic digital signal processing instruction into the plurality of preliminary instruction execution signals.

7. The signal processor of claim 1 for performing dyadic digital signal processing instructions, the signal processor further comprising: a data memory coupled to the RISC control unit and the at least one signal processing unit for storing operands and results of the execution of the dyadic digital signal processing instruction, and a program memory coupled to the pipeline controller, the program memory to store dyadic digital signal processing instructions for execution by the at least one signal processing unit.

8.
The signal processor of claim 1 for performing dyadic digital signal processing instructions, the signal processor further comprising: a host interface to interface to an external host computer, an external memory interface to read and write data to an external memory, a clock and phase-locked loop to control the timing of operations of the application specific signal processor, a memory movement engine coupled to the buffer memory to transceive data thereto and therefrom, and wherein the at least one signal processing unit further includes, a data typer and aligner to order the bits of the operands for execution with the main operation, a third adder to add operands together, and a compressor to compress more than two operands into a pair of operands.

9. A method of performing dyadic digital signal processing (DSP) instructions, the method comprising: fetching a dyadic DSP instruction having a main operation and a sub operation; predecoding the dyadic DSP instruction to generate predecoded instruction signals; and decoding the predecoded instruction signals to generate select signals to select the inputs of multiplexers of DSP functional blocks to execute the main operation and the sub operation.

10. The method of claim 9 of performing dyadic digital signal processing (DSP) instructions, wherein the main operation and the sub operation are performed in parallel during the same cycle.

11. The method of claim 9 of performing dyadic digital signal processing (DSP) instructions, wherein the main operation and the sub operation are performed sequentially during different cycles.

12. The method of claim 9 of performing dyadic digital signal processing (DSP) instructions, wherein the main operation and the sub operation are two different operations selected from the set of multiplication, addition, comparison with a minimum or maximum value, and no operation.

13.
The method of claim 9 of performing dyadic digital signal processing (DSP) instructions, wherein the DSP functional blocks include a first and second adder and a first and second multiplier, the DSP functional blocks to perform addition, subtraction, and a comparison with a minimum value or a maximum value.

14. An instruction set architecture (ISA) for execution of operations within a digital signal processor, the instruction set architecture comprising: a set of instructions for operation within a digital signal processor wherein each instruction includes a first operand accessed directly from memory, a second operand accessed directly from memory or a local register, and a destination register to store results, the set of instructions including, a 20-bit DSP instruction, and a 40-bit DSP instruction, the set of instructions to accelerate calculations within the digital signal processor of the type where D = [(A operation one B) operation two C] where operation one and operation two are separate signal processing operations.

15. The instruction set architecture (ISA) of claim 14 for execution of operations within a digital signal processor, wherein the 20-bit instruction uses mode bits in control registers (i.e., mode registers) and the 40-bit instruction has a control extension to override mode registers.

16. The instruction set architecture (ISA) of claim 14 for execution of operations within a digital signal processor, wherein the set of instructions further includes a dyadic instruction to execute two operations in one instruction.

17. The instruction set architecture (ISA) of claim 16 for execution of operations within a digital signal processor, wherein the two operations of the dyadic instruction for execution in one instruction are DSP operations.

18.
The instruction set architecture (ISA) of claim 17 for execution of operations within a digital signal processor, wherein the DSP operations are of the set of operations of multiplication, addition, extremum, and no operation.

19. A dyadic digital signal processing (DSP) instruction for execution in a digital signal processor, the dyadic DSP instruction comprising: a main DSP operation and a sub DSP operation to be executed in one processor cycle; a first field indicating execution of the main DSP operation and the sub DSP operation to be executed in sequence serially or executed substantially simultaneously in parallel; and a second field indicating a first operand, a third field indicating a second operand, and a fourth field indicating a destination.

20. The dyadic digital signal processing (DSP) instruction of claim 19 for execution in a digital signal processor, wherein the DSP operations are of the set of operations of multiplication, addition, extremum, and no operation.
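Claim 19 above enumerates the fields of a dyadic DSP instruction: a serial/parallel field, a main operation, a sub operation, two operand fields, and a destination. As a purely illustrative sketch, such fields could be packed into a 20-bit word and unpacked again as follows; the bit widths, field ordering, and names chosen here are assumptions for demonstration and are not the patented encoding.

```python
# Illustrative packing of the dyadic-instruction fields named in claim 19:
# a serial/parallel bit, main op, sub op, two operand fields, and a
# destination. Field widths and ordering are arbitrary assumptions that
# happen to total 20 bits, matching the typical instruction-word size.

FIELDS = [("parallel", 1), ("main_op", 2), ("sub_op", 2),
          ("src1", 5), ("src2", 5), ("dest", 5)]  # 1+2+2+5+5+5 = 20 bits

def encode(values):
    """Pack the field dictionary into a single integer instruction word."""
    word = 0
    for name, width in FIELDS:
        word = (word << width) | (values[name] & ((1 << width) - 1))
    return word

def decode(word):
    """Unpack an instruction word back into its field dictionary."""
    values = {}
    for name, width in reversed(FIELDS):
        values[name] = word & ((1 << width) - 1)
        word >>= width
    return values

instr = {"parallel": 1, "main_op": 0, "sub_op": 1,
         "src1": 3, "src2": 7, "dest": 12}
assert decode(encode(instr)) == instr  # round trip preserves every field
```

The round trip demonstrates only that distinct fields can coexist in one word; the real bitmaps are given in Figures 6C and 6D.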
METHOD AND APPARATUS FOR INSTRUCTION SET ARCHITECTURE HAVING DYADIC DIGITAL SIGNAL PROCESSING INSTRUCTIONS

FIELD OF THE INVENTION

This invention relates generally to the instruction set architectures (ISA) of processors. More particularly, the invention relates to instruction set architectures for the execution of operations within a signal processing integrated circuit.

BACKGROUND OF THE INVENTION

Single chip digital signal processing devices (DSPs) are relatively well known. DSPs generally are distinguished from general purpose microprocessors in that DSPs typically support accelerated arithmetic operations by including a dedicated multiplier and accumulator (MAC) for performing multiplication of digital numbers. The instruction set for a typical DSP device usually includes a MAC instruction for performing multiplication of new operands and addition with a prior accumulated value stored within an accumulator register. A MAC instruction is typically the only instruction provided in prior art digital signal processors where two DSP operations, multiply followed by add, are performed by the execution of one instruction. However, when performing signal processing functions on data it is often desirable to perform other DSP operations in varying combinations.

An area where DSPs may be utilized is in telecommunication systems. One use of DSPs in telecommunication systems is digital filtering. In this case a DSP is typically programmed with instructions to implement some filter function in the digital or time domain. The mathematical algorithm for a typical finite impulse response (FIR) filter may look like the equation Yn = h0X0 + h1X1 + h2X2 + ... + hNXN, where hn are fixed filter coefficients numbering from 0 to N and Xn are the data samples. The equation Yn may be evaluated by using a software program. However, in some applications, it is necessary that the equation be evaluated as fast as possible.
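The FIR computation above is simply a sum of coefficient-sample products. This short Python sketch restates the equation directly; the coefficients and samples used are arbitrary illustrative values, not filter designs from the specification.

```python
# Direct evaluation of the FIR filter equation from the text:
#   Yn = h0*X0 + h1*X1 + ... + hN*XN
# The coefficients and samples below are arbitrary illustrative values.

def fir_output(h, x):
    """One FIR output: the dot product of coefficients h and samples x."""
    return sum(hn * xn for hn, xn in zip(h, x))

h = [0.5, 0.25, 0.25]      # fixed filter coefficients
x = [1.0, 2.0, 4.0]        # data samples
print(fir_output(h, x))    # 0.5*1 + 0.25*2 + 0.25*4 = 2.0
```

Each product-and-accumulate term in this sum is exactly one MAC operation, which is why hardware MAC support matters for filters of this form.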
One way to do this is to perform the computations using hardware components such as a DSP device programmed to compute the equation Yn. In order to further speed the process, it is desirable to vectorize the equation and distribute the computation amongst multiple DSPs such that the final result is obtained more quickly. The multiple DSPs operate in parallel to speed the computation process. In this case, the multiplication of terms is spread across the multipliers of the DSPs equally for simultaneous computations of terms. The adding of terms is similarly spread equally across the adders of the DSPs for simultaneous computations. In vectorized processing, the order of processing terms is unimportant since the combination is associative. If the processing order of the terms is altered, it has no effect on the final result expected in a vectorized processing of a function.

In typical microprocessors, a MAC operation would require a multiply instruction and an add instruction to perform both multiplication and addition. To perform these two instructions would require two processing cycles. Additionally, a program written for the typical microprocessor would require a larger program memory in order to store the extra instructions necessary to perform the MAC operation.

In prior art DSP devices, if a DSP operation other than a MAC DSP instruction need be performed, the operation requires separate arithmetic instructions programmed into program memory. These separate arithmetic instructions in prior art DSPs similarly require increased program memory space and processing cycles to perform the operation when compared to a single MAC instruction. It is desirable to reduce the number of processing cycles when performing DSP operations. It is desirable to reduce program memory requirements as well.

DSPs are often programmed in a loop to continuously perform accelerated arithmetic functions including a MAC instruction using different operands.
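The vectorized distribution described above can be sketched in a few lines: the multiply-accumulate terms are split evenly across several accumulators (standing in for separate DSPs), each accumulates its share, and the partial sums are combined. Because addition is associative, the split does not change the result. The "DSPs" here are just Python slices, for illustration only.

```python
# Sketch of vectorized MAC evaluation: terms are split evenly across
# num_dsps accumulators (stand-ins for parallel DSP devices), each
# accumulates its share, and the partial sums are combined at the end.
# Associativity of addition guarantees the same final result.

def mac(h, x, acc=0):
    """Multiply-accumulate: acc + sum(h[i] * x[i]) in one pass."""
    for hn, xn in zip(h, x):
        acc += hn * xn
    return acc

def vectorized_mac(h, x, num_dsps):
    """Distribute the terms across num_dsps accumulators, then combine."""
    partials = [mac(h[i::num_dsps], x[i::num_dsps]) for i in range(num_dsps)]
    return sum(partials)

h = [1, 2, 3, 4, 5, 6]
x = [6, 5, 4, 3, 2, 1]
assert mac(h, x) == vectorized_mac(h, x, num_dsps=3)  # same result either way
print(mac(h, x))  # 56
```

On real hardware each partial accumulation would run on its own multiplier/adder pair simultaneously, cutting the number of serial cycles roughly by `num_dsps`.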
Oftentimes, multiple arithmetic instructions are programmed in a loop to operate on the same data set. The same arithmetic instruction is often executed over and over in a loop using different operands. Additionally, each time one instruction is completed, another instruction is fetched from the program stored in memory during a fetch cycle. Fetch cycles require one or more cycle times to access a memory before instruction execution occurs. Because circuits change state during a fetch cycle, power is consumed and thus it is desirable to reduce the number of fetch cycles. Typically, approximately twenty percent of power consumption may be utilized in the set up and clean up operations of a loop in order to execute DSP instructions. Typically, the loop execution where signal processing of data is performed consumes approximately eighty percent of power consumption, with a significant portion being due to instruction fetching. Additionally, because the data sets that a DSP device processes are usually large, it is also desirable to speed instruction execution by avoiding frequent fetch cycles to memory.

Additionally, the quality of service over a telephone system often relates to the processing speed of signals. That is particularly the case when a DSP is to provide voice processing, such as voice compression, voice decompression, and echo cancellation for multiple channels. More recently, processing speed has become even more important because of the desire to transmit voice aggregated with data in a packetized form for communication over packetized networks. Delays in processing the packetized voice signal tend to result in the degradation of signal quality on receiving ends. It is desirable to provide improved processing of voice and data signals to enhance the quality of voice and data communication over packetized networks. It is desirable to improve the efficiency of using computing resources when performing signal processing functions.
BRIEF SUMMARY OF THE INVENTION

Briefly, the present invention includes a method, apparatus and system as described in the claims. Multiple application specific signal processors (ASSPs) having the instruction set architecture of the present invention, including the dyadic DSP instructions, are provided within gateways in communication systems to provide improved voice and data communication over a packetized network. Each ASSP includes a serial interface, a buffer memory, and four core processors in order to simultaneously process multiple channels of voice or data. Each core processor preferably includes a reduced instruction set computer (RISC) processor and four signal processing units (SPs). Each SP includes multiple arithmetic blocks to simultaneously process multiple voice and data communication signal samples for communication over IP, ATM, Frame Relay, or other packetized network. The four signal processing units can execute the digital signal processing algorithms in parallel. Each ASSP is flexible and can be programmed to perform many network functions or data/voice processing functions, including voice and data compression/decompression in telecommunications systems (such as CODECs), particularly packetized telecommunication networks, simply by altering the software program controlling the commands executed by the ASSP.

An instruction set architecture for the ASSP is tailored to digital signal processing applications including audio and speech processing such as compression/decompression and echo cancellation. The instruction set architecture implemented with the ASSP is adapted to DSP algorithmic structures. This adaptation of the ISA of the present invention to DSP algorithmic structures balances the ease of implementation, processing efficiency, and programmability of DSP algorithms.
The instruction set architecture may be viewed as having two component parts, one (RISC ISA) corresponding to the RISC control unit and another (DSP ISA) to the DSP datapaths of the signal processing units 300. The RISC ISA is a register based architecture including 16 registers within the register file 413, while the DSP ISA is a memory based architecture with efficient digital signal processing instructions. The instruction word for the ASSP is typically 20 bits but can be expanded to 40 bits to control two instructions to be executed in series or parallel, such as two RISC control instructions and extended DSP instructions.

The instruction set architecture of the ASSP has four distinct types of instructions to optimize the DSP operational mix. These are (1) a 20-bit DSP instruction that uses mode bits in control registers (i.e., mode registers), (2) a 40-bit DSP instruction having control extensions that can override mode registers, (3) a 20-bit dyadic DSP instruction, and (4) a 40-bit dyadic DSP instruction. These instructions are for accelerating calculations within the core processor of the type where D = [(A op1 B) op2 C] and each of "op1" and "op2" can be a multiply, add, extremum (min/max) or other primitive DSP class of operation on the three operands A, B, and C. The ISA of the ASSP which accelerates these calculations allows efficient chaining of different combinations of operations.

All DSP instructions of the instruction set architecture of the ASSP are dyadic DSP instructions to execute two operations in one instruction with one cycle throughput. A dyadic DSP instruction is a combination of two basic DSP operations in one instruction and includes a main DSP operation (MAIN OP) and a sub DSP operation (SUB OP). Generally, the instruction set architecture of the present invention can be generalized to combining any pair of basic DSP operations to provide very powerful dyadic instruction combinations.
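The D = [(A op1 B) op2 C] form above can be modeled as a dispatch over the primitive operations. A minimal Python sketch follows; the operation table is illustrative only and does not reflect the actual opcode assignments of the ISA.

```python
# Model of the dyadic instruction form D = [(A op1 B) op2 C], where each
# of op1 and op2 is one of the primitive DSP operations named in the text.
# The table below is illustrative, not the actual opcode encoding.

OPS = {
    "MULT": lambda a, b: a * b,
    "ADD":  lambda a, b: a + b,
    "MIN":  min,
    "MAX":  max,
    "NOP":  lambda a, b: a,   # pass the first operand through unchanged
}

def dyadic(op1, op2, a, b, c):
    """One dyadic instruction: D = (A op1 B) op2 C."""
    return OPS[op2](OPS[op1](a, b), c)

# A multiply-accumulate expressed as a single dyadic instruction:
print(dyadic("MULT", "ADD", 3, 4, 10))   # (3*4)+10 = 22
# A different chaining: clamp a sum against a limit value:
print(dyadic("ADD", "MIN", 3, 4, 5))     # min(3+4, 5) = 5
```

The point of the dyadic form is visible here: any pairing of the primitives yields a useful fused operation, of which the classic MAC is just one case.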
The DSP arithmetic instructions or operations in the preferred embodiment include a multiply instruction (MULT), an addition instruction (ADD), a minimize/maximize instruction (MIN/MAX) also referred to as an extrema instruction, and a no operation instruction (NOP), each having an associated operation code ("opcode"). The present invention efficiently executes these dyadic DSP instructions by means of the instruction set architecture and the hardware architecture of the application specific signal processor. For example, the DSP instructions can process vector data or scalar data automatically using a single instruction and provide the appropriate vector or scalar output results.

BRIEF DESCRIPTIONS OF THE DRAWINGS

Figure 1A is a block diagram of a system utilizing the present invention.

Figure 1B is a block diagram of a printed circuit board utilizing the present invention within the gateways of the system in Figure 1A.

Figure 2 is a block diagram of the Application Specific Signal Processor (ASSP) of the present invention.

Figure 3 is a block diagram of an instance of the core processors within the ASSP of the present invention.

Figure 4 is a block diagram of the RISC processing unit within the core processors of Figure 3.

Figure 5A is a block diagram of an instance of the signal processing units within the core processors of Figure 3.

Figure 5B is a more detailed block diagram of Figure 5A illustrating the bus structure of the signal processing unit.

Figure 6A is an exemplary instruction sequence illustrating a program model for DSP algorithms employing the instruction set architecture of the present invention.

Figure 6B is a chart illustrating the permutations of the dyadic DSP instructions.

Figure 6C is an exemplary bitmap for a control extended dyadic DSP instruction.

Figure 6D is an exemplary bitmap for a non-extended dyadic DSP instruction.

Figures 6E and 6F list the set of 20-bit instructions for the ISA of the present invention.
Figure 6G lists the set of extended control instructions for the ISA of the present invention.

Figure 6H lists the set of 40-bit DSP instructions for the ISA of the present invention.

Figure 6I lists the set of addressing instructions for the ISA of the present invention.

Figure 7 is a block diagram illustrating the instruction decoding and configuration of the functional blocks of the signal processing units.

Like reference numbers and designations in the drawings indicate like elements providing similar functionality. A letter after a reference designator number represents an instance of an element having the reference designator number.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention. Furthermore, the present invention will be described in particular embodiments but may be implemented in hardware, software, firmware, or a combination thereof.

Multiple application specific signal processors (ASSPs) having the instruction set architecture of the present invention, including dyadic DSP instructions, are provided within gateways in communication systems to provide improved voice and data communication over a packetized network. Each ASSP includes a serial interface, a buffer memory and four core processors in order to simultaneously process multiple channels of voice or data. Each core processor preferably includes a reduced instruction set computer (RISC) processor and four signal processing units (SPs).
Each SP includes multiple arithmetic blocks to simultaneously process multiple voice and data communication signal samples for communication over IP, ATM, Frame Relay, or other packetized network. The four signal processing units can execute digital signal processing algorithms in parallel. Each ASSP is flexible and can be programmed to perform many network functions or data/voice processing functions, including voice and data compression/decompression in telecommunication systems (such as CODECs), particularly packetized telecommunication networks, simply by altering the software program controlling the commands executed by the ASSP.

An instruction set architecture for the ASSP is tailored to digital signal processing applications including audio and speech processing such as compression/decompression and echo cancellation. The instruction set architecture implemented with the ASSP is adapted to DSP algorithmic structures. This adaptation of the ISA of the present invention to DSP algorithmic structures balances the ease of implementation, processing efficiency, and programmability of DSP algorithms.

The instruction set architecture may be viewed as having two component parts, one (RISC ISA) corresponding to the RISC control unit and another (DSP ISA) to the DSP datapaths of the signal processing units 300. The RISC ISA is a register based architecture including 16 registers within the register file 413, while the DSP ISA is a memory based architecture with efficient digital signal processing instructions. The instruction word for the ASSP is typically 20 bits but can be expanded to 40 bits to control two instructions to be executed in series or parallel, such as two RISC control instructions and extended DSP instructions. The instruction set architecture of the ASSP has four distinct types of instructions to optimize the DSP operational mix. These are (1) a 20-bit DSP instruction that uses mode bits in control registers (i.e.
mode registers), (2) a 40-bit DSP instruction having control extensions that can override mode registers, (3) a 20-bit dyadic DSP instruction, and (4) a 40-bit dyadic DSP instruction. These instructions are for accelerating calculations within the core processor of the type where D = [(A op1 B) op2 C] and each of "op1" and "op2" can be a multiply, add, or extremum (min/max) class of operation on the three operands A, B, and C. The ISA of the ASSP which accelerates these calculations allows efficient chaining of different combinations of operations. All DSP instructions of the instruction set architecture of the ASSP are dyadic DSP instructions to execute two operations in one instruction with one cycle throughput. A dyadic DSP instruction is a combination of two DSP instructions or operations in one instruction and includes a main DSP operation (MAIN OP) and a sub DSP operation (SUB OP). Generally, the instruction set architecture of the present invention can be generalized to combining any pair of basic DSP operations to provide very powerful dyadic instruction combinations. The DSP arithmetic operations in the preferred embodiment include a multiply instruction (MULT), an addition instruction (ADD), a minimize/maximize instruction (MIN/MAX) also referred to as an extremum instruction, and a no-operation instruction (NOP), each having an associated operation code ("opcode"). The present invention efficiently executes these dyadic DSP instructions by means of the instruction set architecture and the hardware architecture of the application specific signal processor. Referring now to Figure 1A, a voice and data communication system 100 is illustrated. The system 100 includes a network 101 which is a packetized or packet-switched network, such as IP, ATM, or frame relay. The network 101 allows the communication of voice/speech and data between endpoints in the system 100, using packets. Data may be of any type including audio, video, email, and other generic forms of data.
At each end of the system 100, the voice or data requires packetization when transceived across the network 101. The system 100 includes gateways 104A, 104B, and 104C in order to packetize the information received for transmission across the network 101. A gateway is a device for connecting multiple networks and devices that use different protocols. Voice and data information may be provided to a gateway 104 from a number of different sources in a variety of digital formats. In system 100, analog voice signals are transceived by a telephone 108. In system 100, digital voice signals are transceived at private branch exchanges (PBX) 112A and 112B, which are coupled to multiple telephones, fax machines, or data modems. Digital voice signals are transceived between PBX 112A and PBX 112B with gateways 104A and 104C, respectively. Digital data signals may also be transceived directly between a digital modem 114 and a gateway 104A. Digital modem 114 may be a Digital Subscriber Line (DSL) modem or a cable modem. Data signals may also be coupled into system 100 by a wireless communication system by means of a mobile unit 118 transceiving digital signals or analog signals wirelessly to a base station 116. Base station 116 converts analog signals into digital signals or directly passes the digital signals to gateway 104B. Data may be transceived by means of modem signals over the plain old telephone system (POTS) 107B using a modem 110. Modem signals communicated over POTS 107B are traditionally analog in nature and are coupled into a switch 106B of the public switched telephone network (PSTN). At the switch 106B, analog signals from the POTS 107B are digitized and transceived to the gateway 104B by time division multiplexing (TDM), with each time slot representing a channel and one DS0 input to gateway 104B. At each of the gateways 104A, 104B, and 104C, incoming signals are packetized for transmission across the network 101.
Signals received by the gateways 104A, 104B, and 104C from the network 101 are depacketized and transcoded for distribution to the appropriate destination. Referring now to Figure 1B, a network interface card (NIC) 130 of a gateway 104 is illustrated. The NIC 130 includes one or more application-specific signal processors (ASSPs) 150A-150N. The number of ASSPs within a gateway is expandable to handle additional channels. Line interface devices 131 of NIC 130 provide interfaces to various devices connected to the gateway, including the network 101. In interfacing to the network 101, the line interface devices packetize data for transmission out on the network 101 and depacketize data which is to be received by the ASSP devices. Line interface devices 131 process information received by the gateway on the receive bus 134 and provide it to the ASSP devices. Information from the ASSP devices 150 is communicated on the transmit bus 132 for transmission out of the gateway. A traditional line interface device is a multi-channel serial interface or a UTOPIA device. The NIC 130 couples to a gateway backplane/network interface bus 136 within the gateway 104. Bridge logic 138 transceives information between bus 136 and NIC 130. Bridge logic 138 transceives signals between the NIC 130 and the backplane/network interface bus 136 onto the host bus 139 for communication to either one or more of the ASSP devices 150A-150N, a host processor 140, or a host memory 142. Optionally coupled to each of the one or more ASSP devices 150A through 150N (generally referred to as ASSP 150) are optional local memories 145A through 145N (generally referred to as optional local memory 145), respectively. Digital data on the receive bus 134 and transmit bus 132 is preferably communicated in bit-wide fashion.
While internal memory within each ASSP may be sufficiently large to be used as a scratchpad memory, optional local memory 145 may be used by each of the ASSPs 150 if additional memory space is necessary. Each of the ASSPs 150 provides signal processing capability for the gateway. The type of signal processing provided is flexible because each ASSP may execute differing signal processing programs. Typical signal processing and related voice packetization functions for an ASSP include (a) echo cancellation; (b) video, audio, and voice/speech compression/decompression (voice/speech coding and decoding); (c) delay handling (packets, frames); (d) loss handling; (e) connectivity (LAN and WAN); (f) security (encryption/decryption); (g) telephone connectivity; (h) protocol processing (reservation and transport protocols, RSVP, TCP/IP, RTP, UDP for IP, and AAL2, AAL1, AAL5 for ATM); (i) filtering; (j) silence suppression; (k) length handling (frames, packets); and other digital signal processing functions associated with the communication of voice and data over a communication system. Each ASSP 150 can perform other functions in order to transmit voice and data to the various endpoints of the system 100 within a packet data stream over a packetized network. Referring now to Figure 2, a block diagram of the ASSP 150 is illustrated. At the heart of the ASSP 150 are four core processors 200A-200D. Each of the core processors 200A-200D is respectively coupled to a data memory 202A-202D and a program memory 204A-204D. Each of the core processors 200A-200D communicates with outside channels through the multi-channel serial interface 206, the multi-channel memory movement engine 208, buffer memory 210, and data memory 202A-202D. The ASSP 150 further includes an external memory interface 212 to couple to the external optional local memory 145.
The ASSP 150 includes an external host interface 214 for interfacing to the external host processor 140 of Figure 1B. Further included within the ASSP 150 are timers 216, clock generators and a phase-lock loop 218, miscellaneous control logic 220, and a Joint Test Action Group (JTAG) test access port 222 for boundary scan testing. The multi-channel serial interface 206 may be replaced with a UTOPIA parallel interface for some applications such as ATM. The ASSP 150 further includes a microcontroller 223 to perform process scheduling for the core processors 200A-200D and the coordination of data movement within the ASSP, as well as an interrupt controller 224 to assist in interrupt handling and the control of the ASSP 150. Referring now to Figure 3, a block diagram of the core processor 200 is illustrated coupled to its respective data memory 202 and program memory 204. Core processor 200 is the block diagram for each of the core processors 200A-200D. Data memory 202 and program memory 204 refer to a respective instance of data memory 202A-202D and program memory 204A-204D, respectively. The core processor 200 includes four signal processing units SP0 300A, SP1 300B, SP2 300C, and SP3 300D. The core processor 200 further includes a reduced instruction set computer (RISC) control unit 302 and a pipeline control unit 304. The signal processing units 300A-300D perform the signal processing tasks on data while the RISC control unit 302 and the pipeline control unit 304 perform control tasks related to the signal processing function performed by the SPs 300A-300D. The control provided by the RISC control unit 302 is coupled with the SPs 300A-300D at the pipeline level to yield a tightly integrated core processor 200 that keeps the utilization of the signal processing units 300 at a very high level.
The signal processing tasks are performed on the datapaths within the signal processing units 300A-300D. The nature of DSP algorithms is such that they are inherently vector operations on streams of data that have minimal temporal locality (data reuse). Hence, a data cache with demand paging is not used because it would not function well and would degrade operational performance. Therefore, the signal processing units 300A-300D are allowed to access vector elements (the operands) directly from data memory 202 without the overhead of issuing a number of load and store instructions into memory, resulting in very efficient data processing. Thus, the instruction set architecture of the present invention, having a 20-bit instruction word which can be expanded to a 40-bit instruction word, achieves better efficiencies than VLIW architectures using 256-bit or higher instruction widths by adapting the ISA to DSP algorithmic structures. The adapted ISA leads to very compact and low-power hardware that can scale to higher computational requirements. The operands that the ASSP can accommodate are varied in data type and data size. The data type may be real or complex, an integer value or a fractional value, with vectors having multiple elements of different sizes. The data size in the preferred embodiment is 64 bits, but larger data sizes can be accommodated with proper instruction coding. Referring now to Figure 4, a detailed block diagram of the RISC control unit 302 is illustrated. RISC control unit 302 includes a data aligner and formatter 402, a memory address generator 404, three adders 406A-406C, an arithmetic logic unit (ALU) 408, a multiplier 410, a barrel shifter 412, and a register file 413. The register file 413 points to a starting memory location from which memory address generator 404 can generate addresses into data memory 202.
The RISC control unit 302 is responsible for supplying addresses to data memory so that the proper data stream is fed to the signal processing units 300A-300D. The RISC control unit 302 is a register-to-register organization with load and store instructions to move data to and from data memory 202. Data memory addressing is performed by the RISC control unit using a 32-bit register as a pointer that specifies the address, post-modification offset, and type and permute fields. The type field allows a variety of natural DSP data to be supported as a "first class citizen" in the architecture. For instance, the complex type allows direct operations on complex data stored in memory, removing a number of bookkeeping instructions. This is useful in supporting QAM demodulators in data modems very efficiently. Referring now to Figure 5A, a block diagram of a signal processing unit 300 is illustrated which represents an instance of the SPs 300A-300D. Each of the signal processing units 300 includes a data typer and aligner 502, a first multiplier M1 504A, a compressor 506, a first adder A1 510A, a second adder A2 510B, an accumulator register 512, a third adder A3 510C, and a second multiplier M2 504B. Adders 510A-510C are similar in structure and are generally referred to as adder 510. Multipliers 504A and 504B are similar in structure and are generally referred to as multiplier 504. Each of the multipliers 504A and 504B has a multiplexer 514A and 514B, respectively, at its input stage to multiplex different inputs from different busses into the multipliers. Each of the adders 510A, 510B, and 510C also has a multiplexer 520A, 520B, and 520C, respectively, at its input stage to multiplex different inputs from different busses into the adders. These multiplexers and other control logic allow the adders, multipliers, and other components within the signal processing units 300A-300D to be flexibly interconnected by proper selection of multiplexers.
In the preferred embodiment, multiplier M1 504A, compressor 506, adder A1 510A, adder A2 510B, and accumulator 512 can receive inputs directly from external data buses through the data typer and aligner 502. In the preferred embodiment, adder A3 510C and multiplier M2 504B receive inputs from the accumulator 512 or the outputs from the execution units multiplier M1 504A, compressor 506, adder A1 510A, and adder A2 510B. Program memory 204 couples to the pipe control 304, which includes an instruction buffer that acts as a local loop cache. The instruction buffer in the preferred embodiment has the capability of holding four instructions. The instruction buffer of the pipe control 304 reduces the power consumed in accessing the main memories to fetch instructions during the execution of program loops. Referring now to Figure 5B, a more detailed block diagram of the functional blocks and the bus structure of the signal processing unit is illustrated. Dyadic DSP instructions are possible because of the structure and functionality provided in each signal processing unit. Output signals are coupled out of the signal processor 300 on the Z output bus 532 through the data typer and aligner 502. Input signals are coupled into the signal processor 300 on the X input bus 531 and Y input bus 533 through the data typer and aligner 502. Internally, the data typer and aligner 502 has a different data bus to couple to each of multiplier M1 504A, compressor 506, adder A1 510A, adder A2 510B, and accumulator register AR 512. While the data typer and aligner 502 could have data busses coupling to the adder A3 510C and the multiplier M2 504B, in the preferred embodiment it does not, in order to avoid extra data lines and conserve area usage of an integrated circuit.
Output data is coupled from the accumulator register AR 512 into the data typer and aligner 502. Multiplier M1 504A has buses to couple its output into the inputs of the compressor 506, adder A1 510A, adder A2 510B, and the accumulator registers AR 512. Compressor 506 has buses to couple its output into the inputs of adder A1 510A and adder A2 510B. Adder A1 510A has a bus to couple its output into the accumulator registers 512. Adder A2 510B has buses to couple its output into the accumulator registers 512. Accumulator registers 512 have buses to couple their output into multiplier M2 504B, adder A3 510C, and data typer and aligner 502. Adder A3 510C has buses to couple its output into the multiplier M2 504B and the accumulator registers 512. Multiplier M2 504B has buses to couple its output into the inputs of the adder A3 510C and the accumulator registers AR 512. INSTRUCTION SET ARCHITECTURE The instruction set architecture of the ASSP 150 is tailored to digital signal processing applications including audio and speech processing such as compression/decompression and echo cancellation. In essence, the instruction set architecture implemented with the ASSP 150 is adapted to DSP algorithmic structures. The adaptation of the ISA of the present invention to DSP algorithmic structures is a balance between ease of implementation, processing efficiency, and programmability of DSP algorithms. The ISA of the present invention provides for data movement operations, DSP/arithmetic/logical operations, program control operations (such as function calls/returns, unconditional/conditional jumps and branches), and system operations (such as privilege, interrupt/trap/hazard handling, and memory management control). Referring now to Figure 6A, an exemplary instruction sequence 600 is illustrated for a DSP algorithm program model employing the instruction set architecture of the present invention. The instruction sequence 600 has an outer loop 601 and an inner loop 602.
Because DSP algorithms tend to perform repetitive computations, instructions 605 within the inner loop 602 are executed more often than others. Instructions 603 are typically parameter setup code to set the memory pointers, provide for the setup of the outer loop 601, and other 2x20 control instructions. Instructions 607 are typically context save and function return instructions or other 2x20 control instructions. Instructions 603 and 607 are often considered overhead instructions which are typically infrequently executed. Instructions 604 typically provide the setup for the inner loop 602, other control through 2x20 control instructions, or offset extensions for pointer backup. Instructions 606 typically provide tear-down of the inner loop 602, other control through 2x20 control instructions, and combining of datapath results within the signal processing units. Instructions 605 within the inner loop 602 typically provide inner loop execution of DSP operations, control of the four signal processing units 300 in a single instruction multiple data execution mode, memory access for operands, dyadic DSP operations, and other DSP functionality through the 20/40-bit DSP instructions of the ISA of the present invention. Because instructions 605 are so often repeated, significant improvement in operational efficiency may be had by providing the DSP instructions, including general dyadic instructions and dyadic DSP instructions, within the ISA of the present invention. The instruction set architecture of the ASSP 150 can be viewed as having two component parts, one (RISC ISA) corresponding to the RISC control unit and another (DSP ISA) corresponding to the DSP datapaths of the signal processing units 300. The RISC ISA is a register-based architecture including sixteen registers within the register file 413, while the DSP ISA is a memory-based architecture with efficient digital signal processing instructions.
The instruction word for the ASSP is typically 20 bits but can be expanded to 40 bits to control two RISC or DSP instructions to be executed in series or parallel, such as a RISC control instruction executed in parallel with a DSP instruction, or a 40-bit extended RISC or DSP instruction. The instruction set architecture of the ASSP 150 has four distinct types of instructions to optimize the DSP operational mix. These are (1) a 20-bit DSP instruction that uses mode bits in control registers (i.e., mode registers), (2) a 40-bit DSP instruction having control extensions that can override mode registers, (3) a 20-bit dyadic DSP instruction, and (4) a 40-bit dyadic DSP instruction. These instructions are for accelerating calculations within the core processor 200 of the type where D = [(A op1 B) op2 C] and each of "op1" and "op2" can be a multiply, add, or extremum (min/max) class of operation on the three operands A, B, and C. The ISA of the ASSP 150 which accelerates these calculations allows efficient chaining of different combinations of operations. Because these types of operations require three operands, they must be available to the processor. However, because the device size places limits on the bus structure, bandwidth is limited to two vector reads and one vector write each cycle into and out of data memory 202. Thus one of the operands, such as B or C, needs to come from another source within the core processor 200. The third operand can be placed into one of the registers of the accumulator 512 or the RISC register file 413. In order to accomplish this within the core processor 200, there are two subclasses of the 20-bit DSP instructions, which are (1) A and B specified by a 4-bit specifier, and C and D by a 1-bit specifier, and (2) A and C specified by a 4-bit specifier, and B and D by a 1-bit specifier. Instructions for the ASSP are always fetched 40 bits at a time from program memory, with bits 39 and 19 indicating the type of instruction.
After fetching, the instruction is grouped into two sections of 20 bits each for execution of operations. In the case of 20-bit control instructions with parallel execution (bit 39=0, bit 19=0), the two 20-bit sections are control instructions that are executed simultaneously. In the case of 20-bit control instructions for serial execution (bit 39=0, bit 19=1), the two 20-bit sections are control instructions that are executed serially. In the case of 20-bit DSP instructions for serial execution (bit 39=1, bit 19=1), the two 20-bit sections are DSP instructions that are executed serially. In the case of 40-bit DSP instructions (bit 39=1, bit 19=0), the two 20-bit sections form one extended DSP instruction which is executed simultaneously. The ISA of the ASSP 150 is fully predicated, providing for predicated execution. Within the 20-bit RISC control instruction word and the 40-bit extended DSP instruction word there are 2 bits of each instruction specifying one of four predicate registers within the RISC control unit 302. Depending upon the condition of the predicate register, instruction execution can conditionally change based on its contents. In order to access operands within the data memory 202 or registers within the accumulator 512 or register file 413, a 6-bit specifier is used in the DSP extended instructions to access operands in memory and registers. Of the six-bit specifier used in the extended DSP instructions, the MSB (Bit 5) indicates whether the access is a memory access or a register access. In the preferred embodiment, if Bit 5 is set to logical one, it denotes a memory access for an operand. If Bit 5 is set to a logical zero, it denotes a register access for an operand. If Bit 5 is set to 1, the contents of a specified register (rX where X: 0-7) are used to obtain the effective memory address and post-modify the pointer field by one of two possible offsets specified in one of the specified rX registers.
If Bit 5 is set to 0, Bit 4 determines what register set has the contents of the desired operand. If Bit 4 is set to 0, then the remaining specified bits 3:0 control access to the registers within the register file 413 or to registers within the signal processing units 300. DSP INSTRUCTIONS There are four major classes of DSP instructions for the ASSP 150; these are:
1) Multiply (MULT): Controls the execution of the main multiplier connected to data buses from memory. Controls: rounding, sign of multiply. Operates on vector data specified through the type field in the address register. Second operation: add, sub, min, max in vector or scalar mode.
2) Add (ADD): Controls the execution of the main adder. Controls: absolute value control of the inputs, limiting the result. Second operation: add, add-sub, mult, mac, min, max.
3) Extremum (MIN/MAX): Controls the execution of the main adder. Controls: absolute value control of the inputs, global or running max/min with T register, TR register recording control. Second operation: add, sub, mult, mac, min, max.
4) Misc: type-match and permute operations.
The ASSP 150 can execute these DSP arithmetic operations in vector or scalar fashion. In scalar execution, a reduction or combining operation is performed on the vector results to yield a scalar result. It is common in DSP applications to perform scalar operations, which are efficiently performed by the ASSP 150. The 20-bit DSP instruction words have 4-bit operand specifiers that can directly access data memory using 8 address registers (r0-r7) within the register file 413 of the RISC control unit 302. The method of addressing by the 20-bit DSP instruction word is register indirect, with the address register specifying the pointer into memory, post-modification value, type of data accessed, and permutation of the data needed to execute the algorithm efficiently.
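The 6-bit operand specifier decoding described above can be modeled as a short sketch. The text gives only the roles of Bit 5 and Bit 4; the positions assumed here for the rX pointer bits and the offset-select bit are hypothetical, chosen purely for illustration:

```python
def decode_operand_specifier(spec):
    """Illustrative model of the 6-bit operand specifier.

    Bit 5 selects memory vs. register access per the text; the
    placement of the rX pointer bits (assumed bits 2:0) and the
    offset-select bit (assumed bit 3) is hypothetical.
    """
    if (spec >> 5) & 1:
        # Memory access: an address register rX (r0-r7) supplies the
        # effective address and is post-modified by one of two offsets.
        reg = spec & 0x7              # assumed position of rX
        offset_sel = (spec >> 3) & 1  # assumed offset-select bit
        return ("memory", "r%d" % reg, offset_sel)
    if (spec >> 4) & 1 == 0:
        # Bits 3:0 select a register in the RISC register file 413
        # or within a signal processing unit.
        return ("register", spec & 0xF, None)
    # Bit 4 = 1: the other register set (which set is not detailed here).
    return ("register-alt", spec & 0xF, None)

print(decode_operand_specifier(0b100010))  # ('memory', 'r2', 0)
```

The sketch only separates the access classes named in the text; a real decoder would also feed the post-modification and type/permute machinery of the address registers.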
All of the DSP instructions control the multipliers 504A-504B, adders 510A-510C, compressor 506, and the accumulator 512, the functional units of each signal processing unit 300A-300D. In the 40-bit instruction word, the type of extension from the 20-bit instruction word falls into five categories:
1) Control and specifier extensions that override the control bits in mode registers;
2) Type extensions that override the type specifier in address registers;
3) Permute extensions that override the permute specifier for vector data in address registers;
4) Offset extensions that can replace or extend the offsets specified in the address registers;
5) DSP extensions that control the lower rows of functional units within a signal processing unit 300 to accelerate block processing.
The 40-bit control instructions with the 20-bit extensions further allow a large immediate value (16 to 20 bits) to be specified in the instruction and provide powerful bit manipulation instructions. Efficient DSP execution is provided with 2x20-bit DSP instructions, with the first 20 bits controlling the top functional units (adders 510A and 510B, multiplier 504A, compressor 506) that interface to data buses from memory and the second 20 bits controlling the bottom functional units (adder 510C and multiplier 504B) that use internal or local data as operands. The top functional units, also referred to as main units, reduce the inner loop cycles in the inner loop 602 by parallelizing across consecutive taps or sections. The bottom functional units cut the outer loop cycles in the outer loop 601 in half by parallelizing block DSP algorithms across consecutive samples. Efficient DSP execution is also improved by the hardware architecture of the present invention. In this case, efficiency is improved in the manner that data is supplied to and from data memory 202 to feed the four signal processing units 300 and the DSP functional units therein.
The data highway is comprised of two buses, X bus 531 and Y bus 533, for X and Y source operands, and one Z bus 532 for a result write. All buses, including X bus 531, Y bus 533, and Z bus 532, are preferably 64 bits wide. The buses are unidirectional to simplify the physical design and reduce transit times of data. In the preferred embodiment, when in a 20-bit DSP mode, if the X and Y buses are both carrying operands read from memory for parallel execution in a signal processing unit 300, the parallel load field can only access registers within the register file 413 of the RISC control unit 302. Additionally, the four signal processing units 300A-300D in parallel provide four parallel MAC units (multiplier 504A, adder 510A, and accumulator 512) that can make simultaneous computations. This reduces the cycle count from the 4 cycles ordinarily required to perform four MACs to only one cycle. DYADIC DSP INSTRUCTIONS All DSP instructions of the instruction set architecture of the ASSP 150 are dyadic DSP instructions within the 20-bit or 40-bit instruction word. A dyadic DSP instruction informs the ASSP in one instruction and one cycle to perform two operations. Referring now to Figure 6B, a chart illustrates the permutations of the dyadic DSP instructions. The dyadic DSP instruction 610 includes a main DSP operation 611 (MAIN OP) and a sub DSP operation 612 (SUB OP), a combination of two DSP instructions or operations in one dyadic instruction. Generally, the instruction set architecture of the present invention can be generalized to combining any pair of basic DSP operations to provide very powerful dyadic instruction combinations. Compound DSP operational instructions can provide uniform acceleration for a wide variety of DSP algorithms, not just multiply-accumulate intensive filters.
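The dyadic MAIN OP / SUB OP chaining described above, D = [(A op1 B) op2 C], can be expressed as a small functional model. This is an illustrative sketch of the arithmetic semantics only, not of the hardware datapath:

```python
# Functional model of D = (A op1 B) op2 C, where each op is a
# multiply, add, or extremum (min/max) operation per the text.
OPS = {
    "mult": lambda x, y: x * y,
    "add": lambda x, y: x + y,
    "min": min,
    "max": max,
    "nop": lambda x, y: x,  # NOP passes the MAIN OP result through
}

def dyadic(op1, op2, a, b, c):
    """One dyadic instruction: MAIN OP then SUB OP, one result."""
    return OPS[op2](OPS[op1](a, b), c)

# A multiply-accumulate (MAC) is one such MAIN OP/SUB OP pairing:
print(dyadic("mult", "add", 3, 4, 10))  # 3*4 + 10 -> 22
```

Any pair of the basic operations can be combined this way, which is why the text notes that compound instructions accelerate a wide variety of algorithms beyond multiply-accumulate filters.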
The DSP instructions or operations in the preferred embodiment include a multiply instruction (MULT), an addition instruction (ADD), a minimize/maximize instruction (MIN/MAX) also referred to as an extremum instruction, and a no-operation instruction (NOP), each having an associated operation code ("opcode"). Any two DSP instructions can be combined together to form a dyadic DSP instruction. The NOP instruction is used for the MAIN OP or SUB OP when a single DSP operation is desired to be executed by the dyadic DSP instruction. There are variations of the general DSP instructions such as vector and scalar operations of multiplication or addition, positive or negative multiplication, and positive or negative addition (i.e., subtraction). Referring now to Figure 6C and Figure 6D, bitmap syntax for an exemplary dyadic DSP instruction is illustrated. Figure 6C illustrates bitmap syntax for a control extended dyadic DSP instruction while Figure 6D illustrates bitmap syntax for a non-extended dyadic DSP instruction. In the non-extended bitmap syntax the instruction word is the twenty most significant bits of a forty-bit word, while the extended bitmap syntax has an instruction word of forty bits. The three most significant bits (MSBs), bits numbered 37 through 39, in each indicate the MAIN OP instruction type, while the SUB OP is located near the middle or end of the instruction bits at bits numbered 20 through 22. In the preferred embodiment, the MAIN OP instruction codes are 000 for NOP, 101 for ADD, 110 for MIN/MAX, and 100 for MULT. The SUB OP code for the given DSP instruction varies according to what MAIN OP code is selected. In the case of MULT as the MAIN OP, the SUB OPs are 000 for NOP, 001 or 010 for ADD, 100 or 011 for a negative ADD or subtraction, 101 or 110 for MIN, and 111 for MAX. In the preferred embodiment, the MAIN OP and the SUB OP are not the same DSP instruction, although alterations to the hardware functional blocks could accommodate it.
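The opcode placement just described (MAIN OP in bits 39:37, SUB OP in bits 22:20 of the 40-bit word) can be sketched as follows. Only these two fields are modeled; the rest of the instruction word layout is omitted:

```python
# Sketch of the dyadic opcode fields per the text: MAIN OP codes are
# 000 NOP, 100 MULT, 101 ADD, 110 MIN/MAX, held in bits 39:37; the
# SUB OP code sits in bits 22:20. All other bits are left zero here.
MAIN_OPS = {"NOP": 0b000, "MULT": 0b100, "ADD": 0b101, "MINMAX": 0b110}

def pack_ops(main_code, sub_code):
    return (main_code << 37) | (sub_code << 20)

def unpack_ops(word40):
    return (word40 >> 37) & 0b111, (word40 >> 20) & 0b111

# MULT as MAIN OP with SUB OP code 001 (an ADD, per the MULT table):
word = pack_ops(MAIN_OPS["MULT"], 0b001)
print(unpack_ops(word))  # (4, 1)
```

In the actual instruction word, the meaning of a SUB OP code depends on the selected MAIN OP, so a full decoder would index a per-MAIN-OP table rather than a single flat mapping.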
The lower twenty bits of the control extended dyadic DSP instruction, the extended bits, control the signal processing unit to perform rounding, limiting, absolute value of inputs for SUB OP, or a global MIN/MAX operation with a register value. The bitmap syntax of the dyadic DSP instruction can be converted into text syntax for program coding. Using the multiplication or MULT non-extended instruction as an example, its text syntax for multiplication or MULT is
(vmul|vmuln).(vadd|vsub|vmax|sadd|ssub|smax) da, sx, sa, sy [,(ps0|ps1)]
The "vmul|vmuln" field refers to either positive vector multiplication or negative vector multiplication being selected as the MAIN OP. The next field, "vadd|vsub|vmax|sadd|ssub|smax", refers to either vector add, vector subtract, vector maximum, scalar add, scalar subtract, or scalar maximum being selected as the SUB OP. The next field, "da", refers to selecting one of the registers within the accumulator for storage of results. The field "sx" refers to selecting a register within the RISC register file 413 which points to a memory location in memory as one of the sources of operands. The field "sa" refers to selecting the contents of a register within the accumulator as one of the sources of operands. The field "sy" refers to selecting a register within the RISC register file 413 which points to a memory location in memory as another one of the sources of operands. The field "[,(ps0|ps1)]" refers to pair selection of keyword PS0 or PS1 specifying which are the source-destination pairs of a parallel-store control register. Referring now to Figures 6E and 6F, lists of the set of 20-bit DSP and control instructions for the ISA of the present invention are illustrated. Figure 6G lists the set of extended control instructions for the ISA of the present invention. Figure 6H lists the set of 40-bit DSP instructions for the ISA of the present invention. Figure 6I lists the set of addressing instructions for the ISA of the present invention.
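As a concrete illustration of the text syntax above, a positive vector multiply paired with a vector add might be written as below. The specific register operands (a0, r1, a1, r2) are hypothetical, and the string splitting is a sketch, not an actual assembler:

```python
# Pull apart a dyadic instruction written in the text syntax
# "(vmul|vmuln).(vadd|vsub|vmax|sadd|ssub|smax) da, sx, sa, sy".
instr = "vmul.vadd a0, r1, a1, r2"

mnemonic, operand_str = instr.split(" ", 1)
main_op, sub_op = mnemonic.split(".")  # MAIN OP and SUB OP mnemonics
da, sx, sa, sy = [f.strip() for f in operand_str.split(",")]

print(main_op, sub_op)  # vmul vadd
print(da, sx, sa, sy)   # a0 r1 a1 r2
```

Here "a0" would name the accumulator destination register (da), "r1" and "r2" RISC file pointer registers for the memory operands (sx, sy), and "a1" an accumulator source (sa), matching the field roles the text assigns.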
Referring now to Figure 7, a block diagram illustrates the instruction decoding for configuring the blocks of the signal processing unit 300. The signal processor 300 includes the final decoders 704A through 704N, and multiplexers 720A through 720N. The multiplexers 720A through 720N are representative of the multiplexers 514, 516, 520, and 522 in Figure 5B. The predecoding 702 is provided by the RISC control unit 302 and the pipe control 304. An instruction, such as a dyadic DSP instruction 600, is provided to the predecoding 702. The predecoding 702 provides preliminary signals to the appropriate final decoders 704A through 704N on how the multiplexers 720A through 720N are to be selected for the given instruction. Referring back to Figure 5B, in a dyadic DSP instruction the MAIN OP, if not a NOP, is generally performed by the blocks of the multiplier M1 504A, compressor 506, adder A1 510A, and adder A2 510B. The result is stored in one of the registers within the accumulator register AR 512. In the dyadic DSP instruction the SUB OP, if not a NOP, is generally performed by the blocks of the adder A3 510C and the multiplier M2 504B. For example, if the dyadic DSP instruction is to perform an ADD and a MULT, then the ADD operation of the MAIN OP is performed by the adder A1 510A and the SUB OP is performed by the multiplier M2 504B. The predecoding 702 and the final decoders 704A through 704N appropriately select the respective multiplexers 720A through 720N so that the MAIN OP is performed by the adder A1 510A and the SUB OP is performed by the multiplier M2 504B. In the exemplary case, multiplexer 520A selects inputs from the data typer and aligner 502 in order for adder A1 510A to perform the ADD operation, multiplexer 522 selects the output from adder A1 510A for accumulation in the accumulator 512, and multiplexer 514B selects outputs from the accumulator 512 as its inputs to perform the MULT SUB OP.
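The two-stage decode described above can be sketched in data-structure form. This is a hypothetical model, not the patent's actual decoder logic: the block and multiplexer names mirror the Figure 5B labels, but the routing table itself is illustrative.

```python
# Hypothetical sketch of the two-stage decode: predecoding maps an
# instruction's MAIN OP / SUB OP pair to the functional blocks that execute
# them, and final decoding turns that plan into multiplexer selections.
# Names mirror the Figure 5B labels; the routing table is illustrative.

PREDECODE = {
    ("ADD", "MULT"): {"main_block": "adder_A1", "sub_block": "multiplier_M2"},
    ("MULT", "ADD"): {"main_block": "multiplier_M1", "sub_block": "adder_A3"},
}

def final_decode(plan: dict) -> dict:
    """Translate a predecode plan into per-multiplexer input selections."""
    sel = {}
    if plan["main_block"] == "adder_A1":
        sel["mux_520A"] = "data_typer_aligner_502"  # operands for the ADD
        sel["mux_522"] = "adder_A1_output"          # accumulate the result
    if plan["sub_block"] == "multiplier_M2":
        sel["mux_514B"] = "accumulator_512"         # SUB OP reads accumulator
    return sel

sel = final_decode(PREDECODE[("ADD", "MULT")])
assert sel["mux_514B"] == "accumulator_512"
```

Because the table is consulted each cycle, a model like this also captures how the data path can be reconfigured instruction by instruction.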
The MAIN OP and SUB OP can be either executed sequentially (i.e. serial execution on parallel words) or in parallel (i.e. parallel execution on parallel words). If implemented sequentially, the result of the MAIN OP may be an operand of the SUB OP. The final decoders 704A through 704N have their own control logic to properly time the sequence of multiplexer selection for each element of the signal processor 300 to match the pipeline execution of how the MAIN OP and SUB OP are executed, including sequential or parallel execution. The RISC control unit 302 and the pipe control 304, in conjunction with the final decoders 704A through 704N, pipeline instruction execution by pipelining the instruction itself and by providing pipelined control signals. This allows the data path to be reconfigured by the software instructions each cycle. As those of ordinary skill will recognize, the present invention has many advantages. One advantage of the present invention is that the ISA is adapted to DSP algorithmic structures, providing compact hardware that consumes low power and can be scaled to higher computational requirements. Another advantage of the present invention is that the signal processing units have direct access to operands in memory to reduce processing overhead associated with load and store instructions. Another advantage of the present invention is that pipelined instruction execution is provided so that instructions may be issued every cycle. Another advantage of the present invention is that the signal processing units can be configured cycle by cycle. The preferred embodiments of the present invention are thus described. While the present invention has been described in particular embodiments, it may be implemented in hardware, software, firmware or a combination thereof and utilized in systems, subsystems, components or subcomponents thereof.
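The serial case, where the MAIN OP result feeds the SUB OP, can be sketched as follows. This is a minimal functional model under assumed operand semantics; the pipeline timing and control signals are omitted.

```python
# Minimal sketch (assumed semantics) of serial dyadic execution: the MAIN OP
# runs first and its result becomes an operand of the SUB OP, as described
# above. Mnemonics match the ISA; pipeline/control timing is omitted.

OPS = {
    "ADD": lambda a, b: a + b,
    "MULT": lambda a, b: a * b,
    "MIN": min,
    "MAX": max,
    "NOP": lambda a, b: a,   # NOP passes its first operand through
}

def execute_dyadic_serial(main, sub, x, y, z):
    main_result = OPS[main](x, y)    # MAIN OP on the first operand pair
    return OPS[sub](main_result, z)  # SUB OP consumes the MAIN OP result

# e.g. a multiply-accumulate: (x * y) + z
assert execute_dyadic_serial("MULT", "ADD", 3, 4, 5) == 17
```

Pairing a DSP operation with NOP as the SUB OP reduces the dyadic instruction to a single operation, matching the text's use of NOP when only one operation is desired.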
When implemented in software, the elements of the present invention are essentially the code segments to perform the necessary tasks. The program or code segments can be stored in a processor-readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link. The "processor-readable medium" may include any medium that can store or transfer information. Examples of the processor-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, a fiber optic medium, a radio frequency (RF) link, etc. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc. The code segments may be downloaded via computer networks such as the Internet, an Intranet, etc. In any case, the present invention should not be construed as limited by such embodiments, but rather construed according to the claims that follow below.
Memory devices, systems, and associated methods with per die temperature-compensated refresh control are disclosed herein. In one embodiment, a memory device includes a plurality of memory cells and a sensor configured to measure a temperature of the memory device. The memory device determines a frequency at which it is receiving refresh commands. The memory device is further configured to skip refresh operations of the memory cells based, at least in part, on the determination and on the temperature of the memory device.
CLAIMS
What is claimed is:
1. A memory device, comprising: a plurality of memory cells; and a sensor configured to measure a temperature of the memory device, wherein the memory device is configured to: determine a frequency at which refresh commands are being received by the memory device; and skip refresh operations of the memory cells based, at least in part, on the determination and on the temperature of the memory device.
2. The memory device of claim 1, wherein the memory device is configured to: skip a first number of refresh operations when the temperature is below a threshold temperature value; and skip a second number of refresh operations when the temperature is at or above the threshold temperature value.
3. The memory device of claim 2, wherein the first number corresponds to skipping every other refresh operation.
4. The memory device of claim 2, wherein the second number is zero.
5. The memory device of claim 2, wherein the threshold temperature value is equal to or greater than 70° C.
6. The memory device of claim 2, wherein: the threshold temperature value is a first threshold temperature value; the memory device is further configured to skip a third number of refresh operations when the temperature is below a second threshold temperature value; the second threshold temperature value is less than the first threshold temperature value; and the third number is greater than the first number.
7. The memory device of claim 6, wherein the second threshold temperature value is between 45° C and 60° C.
8. The memory device of claim 1, wherein the memory device is configured to skip every third refresh operation when the temperature is below a threshold temperature value.
9. The memory device of claim 1, wherein the memory device is configured to determine the frequency by receiving a notification indicating the frequency at which refresh commands are being received by the memory device.
10.
The memory device of claim 9, wherein the memory device is configured to receive the notification in at least one address bit of an address signal received by the memory device when the memory device registers a refresh command.
11. The memory device of claim 1, wherein, to skip refresh operations, the memory device is further configured to: receive one or more refresh commands instructing the memory device to execute one or more refresh operations; and in response to receiving the one or more refresh commands, refrain from executing at least a subset of the one or more refresh operations.
12. The memory device of claim 1, wherein the memory device is an individual dynamic random-access memory (DRAM) memory die.
13. A method, comprising: determining a frequency at which refresh commands are being received by a memory device, wherein the memory device is one of a plurality of memory devices of a dual in-line memory module (DIMM); determining a temperature of the memory device; and skipping refresh operations of memory cells of the memory device based, at least in part, on the determination and on the temperature of the memory device.
14. The method of claim 13, wherein the skipping includes skipping a number of refresh operations when the temperature is below a threshold temperature value.
15. The method of claim 14, wherein the number corresponds to skipping every other or every third refresh operation.
16. The method of claim 14, wherein the number is a first number, and wherein the skipping includes skipping a second number of refresh operations when the temperature is at or above the threshold temperature value.
17. The method of claim 16, wherein the second number of refresh operations is zero.
18.
The method of claim 14, wherein: the threshold temperature value is a first threshold temperature value; the number of refresh operations is a first number of refresh operations; the method further comprises skipping a second number of refresh operations when the temperature is below a second threshold temperature value; the second threshold temperature value is less than the first threshold temperature value; and the second number is greater than the first number.
19. The method of claim 13, wherein determining the frequency includes: receiving a notification via an address signal when the memory device registers a refresh command, wherein the notification indicates the frequency at which the memory device is receiving refresh commands; and monitoring at least one address bit of the address signal.
20. The method of claim 13, wherein determining the temperature includes determining the temperature of the memory device using a sensor internal to the memory device.
21. The method of claim 13, wherein skipping the refresh operations includes: receiving one or more refresh commands instructing the memory device to execute one or more refresh operations; and in response to receiving the one or more refresh commands, refraining from executing at least a subset of the one or more refresh operations.
22. A memory system, comprising: a memory controller; and a memory device including: a plurality of memory cells; and a sensor configured to measure a temperature of the memory device, wherein the memory device is configured to: receive a notification from the memory controller indicating a frequency at which the memory controller is issuing refresh commands; and skip refresh operations of the memory cells based, at least in part, on the notification and on the temperature of the memory device.
23.
The memory system of claim 22, wherein the memory device is configured to determine the frequency by monitoring at least one address bit of an address signal received by the memory device when the memory device registers a refresh command.
24. The memory system of claim 22, wherein the memory device is configured to: skip a first number of refresh operations when the temperature is at or above a first threshold temperature value; skip a second number of refresh operations when the temperature is below the first threshold temperature value; and skip a third number of refresh operations when the temperature is below a second threshold temperature value, the second threshold temperature value being less than the first threshold temperature value.
25. The memory system of claim 22, wherein: the memory device is further configured to send an indication of the temperature to the memory controller; and the memory controller is configured to increase the frequency based, at least in part, on the temperature.
MEMORY WITH PER DIE TEMPERATURE-COMPENSATED REFRESH CONTROL
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the benefit of U.S. Patent Application No. 16/921,729, filed July 6, 2020, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present disclosure is related to memory systems, devices, and associated methods. In particular, the present disclosure is related to memory devices with per die temperature-compensated refresh control, and associated systems and methods.
BACKGROUND
[0003] Memory devices are widely used to store information related to various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Memory devices are frequently provided as internal, semiconductor, integrated circuits, and/or external removable devices in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory, including static random-access memory (SRAM), dynamic random-access memory (DRAM), and synchronous dynamic random-access memory (SDRAM), among others, may require a source of applied power to maintain its data. Non-volatile memory, by contrast, can retain its stored data even when not externally powered. Non-volatile memory is available in a wide variety of technologies, including flash memory (e.g., NAND and NOR), phase change memory (PCM), ferroelectric random-access memory (FeRAM), resistive random-access memory (RRAM), and magnetic random-access memory (MRAM), among others. Improving memory devices, generally, may include increasing memory cell density, increasing read/write speeds or otherwise reducing operational latency, increasing reliability, increasing data retention, reducing power consumption, or reducing manufacturing costs, among other metrics.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale. Instead, emphasis is placed on illustrating clearly the principles of the present disclosure. The drawings should not be taken to limit the disclosure to the specific embodiments depicted, but are for explanation and understanding only.
[0005] Figure 1 is a block diagram schematically illustrating a memory system configured in accordance with various embodiments of the present technology.
[0006] Figure 2 is a block diagram schematically illustrating a memory device configured in accordance with various embodiments of the present technology.
[0007] Figure 2A is a block diagram schematically illustrating a refresh control circuit configured in accordance with various embodiments of the present technology.
[0008] Figures 3 and 4 are flow diagrams illustrating routines of memory devices configured in accordance with various embodiments of the present technology.
[0009] Figure 5 is a schematic view of a system that includes a memory device configured in accordance with various embodiments of the present technology.
DETAILED DESCRIPTION
[0010] The technology disclosed herein relates to memory devices (and associated systems and methods) configured to skip refresh operations based, at least in part, on their internal temperatures. In one embodiment, a memory system comprises a memory controller and a plurality of memory devices. The memory controller is configured to monitor at least one temperature of the memory system. When a monitored temperature reaches or exceeds a threshold temperature value, the memory controller increases the frequency at which the controller sends refresh commands to the memory devices. The controller further notifies the memory devices of the frequency increase.
In turn, each memory device monitors its internal temperature and skips refresh operations when its internal temperature does not meet or exceed the threshold temperature value. In this manner, those memory devices of the memory system having internal temperatures greater than or equal to the threshold temperature value are refreshed at the higher frequency, thereby reducing the amount of power consumed or required by the memory system during refresh operations.
[0011] To improve data retention at higher temperatures, memory systems can be configured to increase the frequency at which their memory devices are refreshed when temperatures of the systems exceed a threshold temperature value. Memory systems often, however, monitor temperature sensors that generate temperature measurements representing temperatures of groups of memory devices (e.g., on dual in-line memory modules (DIMMs)). That is, memory systems can be configured to increase the frequency at which refresh commands are issued to every memory device of the system based on a temperature measurement that is generically associated with a group of memory devices. Large temperature variations (e.g., 10-30° C or more), however, are often observed across memory devices of these groups. Thus, the generic temperature measurements are often not accurate indications of the internal temperatures of every memory device in a group. As a result, several memory devices (e.g., one or more placements of DRAM on a DIMM that contains sixteen or eighteen such placements) may be operating at internal temperatures below the threshold temperature value even when the generic temperature measurement exceeds the threshold temperature value. Because data loss due to temperature is less of a concern in memory devices having internal temperatures below the threshold temperature value, increasing the frequency at which these cooler memory devices are refreshed constitutes a waste of power.
Moreover, memory devices are typically not notified when the frequency at which they are receiving refresh commands from the memory controller has been increased. As a result, memory devices are often unable to determine whether it is appropriate to skip the additional refresh commands received from the memory controller under the increased frequency. As future generations of memory systems include a greater number of memory devices and/or include memory devices positioned at tighter pitches relative to one another, these future memory systems are expected to demand or consume an even greater amount of power and/or to run at higher temperatures. Accordingly, drawbacks of the foregoing refresh approaches involve significant power demands for future memory systems with greater refresh frequencies, more memory devices, and/or higher power density.
[0012] To address these concerns, several embodiments of the present technology are directed to memory devices (e.g., volatile memory devices), systems including memory devices (e.g., DIMMs), and methods of operating memory devices in which individual memory devices are configured to monitor their internal temperatures and to skip refresh operations at least when (a) refresh commands are issued at a greater-than-normal frequency and (b) their internal temperatures are below a threshold temperature value. In particular, an individual memory device of the present technology is configured to receive an indication (e.g., from a memory controller) that refresh commands are being issued in accordance with a faster-than-normal refresh rate. In turn, the individual memory device is configured to compare (i) a temperature measurement generated by a temperature sensor internal to the memory device to (ii) a threshold temperature value.
When the temperature measurement exceeds (and/or meets) the threshold temperature value, the memory device is configured to refresh memory cells of the memory device in accordance with the faster-than-normal refresh rate to improve data retention at the higher temperatures. Otherwise, the memory device is configured to skip executing at least a subset (e.g., every other, every third, etc.) of refresh commands received (e.g., from a memory controller), thereby reducing the amount of power required or consumed by the memory device during refresh operations.
[0013] In the illustrated embodiments below, the memory systems and devices are primarily described in the context of devices incorporating DRAM storage media. Memory systems and devices configured in accordance with other embodiments of the present technology, however, can include other types of memory systems and devices incorporating other types of storage media, including PCM, SRAM, FRAM, RRAM, MRAM, read only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), ferroelectric, magnetoresistive, and other storage media, including non-volatile, flash (e.g., NAND and/or NOR) storage media. Furthermore, a person skilled in the art will understand that the technology may have additional embodiments and that the technology may be practiced without several of the details of the embodiments described below with reference to Figures 1-5.
[0014] As used herein, the terms "memory system" and "memory device" refer to systems and devices configured to temporarily and/or permanently store information related to various electronic devices. Accordingly, the term "memory device" can refer to a single memory die and/or to a memory package containing one or more memory dies.
Similarly, the term "memory system" can refer to a system including one or more memory dies (e.g., a memory package) and/or to a system (e.g., a dual in-line memory module (DIMM)) including one or more memory packages.
[0015] Figure 1 is a block diagram schematically illustrating a memory system 100 (e.g., a dual in-line memory module (DIMM)) configured in accordance with various embodiments of the present technology. The memory system 100 can be connected to any one of a number of electronic devices that is capable of utilizing memory for the temporary or persistent storage of information, or a component thereof. For example, the memory system 100 can be operably connected to a host device (not shown). The host device may be a computing device such as a desktop or portable computer, a server, a hand-held device (e.g., a mobile phone, a tablet, a digital reader, a digital media player), or some component thereof (e.g., a central processing unit, a co-processor, a dedicated memory controller, etc.). The host device may be a networking device (e.g., a switch, a router, etc.) or a recorder of digital images, audio, and/or video; a vehicle; an appliance; a toy; or any one of a number of other products. In one embodiment, the host device may be connected directly to the memory system 100, although, in other embodiments, the host device may be indirectly connected to the memory system 100 (e.g., over a networked connection or through intermediary devices).
[0016] As shown, the memory system 100 includes a memory controller 101 (e.g., a field-programmable gate array (FPGA) or other suitable memory controller) and one or more memory devices 104 (e.g., one or more dynamic random-access memory (DRAM) devices) electrically connected to the memory controller 101 via a printed circuit board (PCB) 102 (e.g., via one or more electrical contacts and/or traces). The memory controller 101 can be configured to control one or more operations of the memory system 100.
For example, the memory controller 101 can control refresh operations of the memory devices 104. In particular, the memory controller 101 can issue a refresh command to direct one or more of the memory devices 104 to initiate their respective refresh operations.
[0017] Individual memory devices 104 of the memory system 100 can include a package substrate 103 and one or more memory dies 200. As illustrated in Figure 1, each of the memory devices 104 includes a first memory die 200a attached to the package substrate 103, and a second memory die 200b stacked on top of the first memory die 200a. In some embodiments, the first and second memory dies 200a and 200b are each electrically connected to the package substrate 103 (e.g., via one or more electrical contacts and/or traces), which in turn can be electrically connected to the PCB 102. Although the devices 104 illustrated in Figure 1 are dual die packages (DDP), one or more memory devices 104 configured in accordance with other embodiments of the present technology can include a greater or lesser number of memory dies 200 (e.g., one memory die or more than two memory dies) than illustrated. In these and other embodiments, the orientation of the memory dies included in a memory device 104 can vary. For example, the first and second memory dies 200a and 200b illustrated in Figure 1 are each oriented face down (e.g., toward the package substrate 103) in a back-to-face orientation. In other embodiments, the first memory die 200a and/or the second memory die 200b can be oriented face up (e.g., away from the package substrate 103) such that the first and second memory dies 200a and 200b are arranged in a face-to-back, face-to-face, and/or back-to-back orientation on a package substrate 103.
In these and still other embodiments, the first and second memory dies 200a and 200b can be arranged side-by-side on the package substrate 103, as opposed to the stacked arrangement illustrated in Figure 1.
[0018] In some embodiments, the memory system 100 can further include one or more temperature sensors. In the illustrated embodiment, the memory system 100 includes a temperature sensor 108 positioned within the PCB 102. In other embodiments, the temperature sensor 108 can be positioned at other locations within the memory system 100. For example, the temperature sensor 108 can be positioned within one or more of the package substrates 103, within one or more of the memory dies 200 (e.g., within the first and/or second memory dies 200a and/or 200b), within the memory controller 101, and/or within another component (not shown) of the memory system 100. As described in greater detail below with respect to Figure 2, the memory system 100 additionally or alternatively includes one or more temperature sensors (not shown in Figure 1) internal to all or a subset of the memory dies 200 of the memory system 100. Such internal temperature sensors are expected to provide more accurate indications of an individual memory device's 104 and/or an individual memory die's 200 internal temperature than provided by the temperature sensor 108.
[0019] In operation, the temperature sensor 108 is configured to generate one or more temperature measurements indicating the temperature of the memory system 100. In the illustrated embodiment, the temperature sensor 108 corresponds to each of the memory devices 104 of the memory system 100. Thus, temperature measurements generated by the temperature sensor 108 generically represent the temperature of each of the memory devices 104.
In other embodiments, the temperature sensor 108 can correspond to a subset of the memory devices 104 and/or to one or more other components (e.g., the memory controller 101, the PCB 102, or another component) of the memory system 100. In these and still other embodiments, the memory system can include one or more other temperature sensors similar to the temperature sensor 108, each of which can correspond to a subset of the memory devices 104 and/or to one or more other components (e.g., the memory controller 101, the PCB 102, or another component) of the memory system 100. In contrast, the one or more temperature sensors internal to the memory dies 200 of the memory system 100 are configured to generate temperature measurements representing the temperature of only a corresponding memory die 200.
[0020] The temperature sensor 108 and/or the one or more temperature sensors internal to the memory die(s) 200 can be electrically connected to the memory controller 101 such that temperature measurements generated by the temperature sensor(s) are communicated to the memory controller 101. In these embodiments, the memory controller 101 can monitor the temperature measurements to determine a frequency at which to send refresh commands to the memory devices 104. As a specific example, the memory controller 101 can compare (i) a temperature measurement generated by the temperature sensor 108 and/or by one or more temperature sensors internal to the memory die(s) 200 to (ii) a threshold temperature value (e.g., 85° C). In some embodiments, the threshold temperature value can indicate a temperature threshold above (and/or at) which there is a concern that data retention issues will arise due to the temperatures of the memory devices 104 (e.g., of the memory dies 200) at a given refresh rate (e.g., at a standard, default, or normal refresh rate).
Stated another way, the threshold temperature value can indicate a temperature threshold below (and/or at) which there is little concern that data retention issues will arise due to the temperatures of the memory devices 104 at the given refresh rate. Thus, if the memory controller 101 determines that a temperature measurement generated by the temperature sensor 108 and/or the temperature sensor(s) internal to the memory die(s) 200 is less than (and/or equal to) the threshold temperature value, the memory controller 101 can issue refresh commands to the memory devices 104 in accordance with the given refresh rate. For example, the memory controller 101 can issue refresh commands to the memory devices 104 every 7.8 µs for double data rate fourth generation (DDR4) memory devices or every 3.9 µs for double data rate fifth generation (DDR5) memory devices. The term "1X refresh mode" will be used hereinafter to indicate that the memory controller 101 is issuing (or that a memory device 104 or a memory die 200 is receiving) refresh commands in accordance with a normal or default refresh rate.
[0021] On the other hand, if the memory controller 101 determines that the temperature measurement is greater than (and/or equal to) the threshold temperature value, the memory controller 101 can issue refresh commands to the memory device 104 in accordance with a refresh rate faster than the given refresh rate. For the sake of clarity and understanding, the faster refresh rate will be discussed hereinafter in the context of being twice the given refresh rate, such that refresh commands are issued to the memory devices 104 twice as fast as under the given refresh rate. Thus, the term "2X refresh mode" will be used hereinafter to indicate that the memory controller 101 is issuing (or that a memory device 104 or memory die 200 is receiving) refresh commands in accordance with a faster refresh rate.
Continuing with the numbers from the above example, the memory controller 101 in the 2X refresh mode can issue a refresh command to the memory devices 104 every 3.9 µs for DDR4 memory devices or every 1.95 µs for DDR5 memory devices. A person of ordinary skill in the art will readily recognize that a faster refresh rate can be another multiple (other than twice) faster than the given refresh rate. Such other multiples and corresponding refresh rates fall within the scope of the present technology.
[0022] In some embodiments, the memory controller 101 can be configured to issue refresh commands to the memory devices 104 in accordance with the 1X refresh mode by default. Additionally, or alternatively, the memory controller 101 can be configured to issue refresh commands to each of the memory devices 104 in accordance with the 2X refresh mode when a specified number (e.g., one or more) of temperature measurements generated by the temperature sensor 108 are greater than (or equal to) a specified threshold temperature value (e.g., 85° C). In these and other embodiments, the memory controller 101 can be configured to issue refresh commands to each of the memory devices 104 in accordance with the 2X refresh mode when a specified number (e.g., one or more) of temperature measurements generated by a specified number (e.g., one or more) of temperature sensors positioned internal to a specified number (e.g., one or more) of memory dies 200 are greater than (or equal to) a specified threshold temperature value (e.g., 85° C or another threshold temperature value).
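The controller-side scheduling described above can be sketched as a simple interval calculation. This is an illustrative model, not the patent's actual controller logic: the interval values are the DDR4/DDR5 figures quoted in the text, and the 85° C threshold matches the example value given.

```python
# Illustrative sketch of the controller's refresh scheduling: issue refresh
# commands at the 1X interval by default, and halve the interval (2X mode)
# when a monitored temperature meets or exceeds the threshold. Interval and
# threshold values are the example figures quoted in the text.

TREFI_1X_US = {"DDR4": 7.8, "DDR5": 3.9}  # base refresh interval, microseconds
THRESHOLD_C = 85

def refresh_interval_us(device_type: str, temperature_c: float) -> float:
    """Return the refresh-command interval the controller should use."""
    base = TREFI_1X_US[device_type]
    # 2X mode: commands are issued twice as often once the threshold is met.
    return base / 2 if temperature_c >= THRESHOLD_C else base

assert refresh_interval_us("DDR4", 90) == 3.9   # hot module: 2X mode
assert refresh_interval_us("DDR5", 40) == 3.9   # cool module: 1X mode
```

Other multiples (e.g., 4X) would simply divide the base interval by the chosen factor.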
In some embodiments, the memory controller 101 is configured to continue issuing refresh commands to each of the memory devices 104 in accordance with the 2X refresh mode until a specified number (e.g., one or more) of temperature measurements generated by the temperature sensor 108 and/or by a specified number (e.g., one or more) of temperature sensors positioned internal to a specified number (e.g., one or more) of memory dies 200 are less than (or equal to) a specified threshold temperature value (e.g., 85° C or another threshold temperature value). In these and still other embodiments, the memory controller 101 can be configured to issue refresh commands to each of the memory devices 104 in accordance with the 1X refresh mode and/or the 2X refresh mode in response to one or more commands received from a host device or another component communicatively connected to the memory controller 101.[0023] As discussed above, large temperature variations (e.g., 10-30° C or more) are often observed across memory devices of a memory system and/or of a memory DIMM. Thus, the temperature measurements generated by the temperature sensor 108 of Figure 1 may be accurate indications of the internal temperatures of only a subset of the memory dies 200 of the memory system 100. In other words, one or more of the memory dies 200 may have internal temperatures below the threshold temperature value even when a temperature measurement generated by the temperature sensor 108 exceeds (and/or meets) a specified threshold temperature value. Because data loss due to temperature is less of a concern in memory dies 200 having internal temperatures below the threshold temperature value, increasing the frequency at which these cooler memory dies 200 are refreshed would constitute a waste of power.
That is, consider a cooler memory die 200 having an internal temperature below (and/or at) the specified threshold temperature value that is sufficiently refreshed under the 1X refresh mode. If the refresh scheme of the cooler memory die 200 is changed to the 2X refresh mode because temperature measurements generated by the temperature sensor 108 and/or by one or more temperature sensors positioned internal to one or more other memory devices 104 or memory dies 200 of the memory system 100 indicate that those other memory devices 104 or memory dies 200 have temperatures above (and/or at) the specified threshold temperature value, then the cooler memory die 200 would be refreshed at least twice as often as required. In other words, at least every other refresh operation of the cooler memory die 200 under the 2X refresh mode would constitute a waste of power. Furthermore, without an indication that the refresh scheme was changed from the 1X refresh mode to the 2X refresh mode, this cooler memory die 200 would not be able to determine that it is being refreshed under the 2X refresh mode and that it is appropriate to skip at least a subset of the refresh commands received from the memory controller 101 given the cooler memory die 200’s temperature.[0024] To address these concerns (and as discussed in greater detail below with respect to Figures 2 and 3), the memory controller 101 is configured to notify the memory devices 104 and/or the memory dies 200 of the current refresh scheme (i.e., whether the memory controller 101 is issuing refresh commands in accordance with the 1X refresh mode or in accordance with the 2X refresh mode).
In turn, the memory devices 104 and/or memory dies 200 of the memory system 100 are able to use the notification to determine when it is appropriate to skip refresh operations based, at least in part, on their respective internal temperatures.[0025] Figure 2 is a block diagram schematically illustrating a memory device 200 (e.g., a memory die 200, such as a first memory die 200a and/or a second memory die 200b of Figure 1) configured in accordance with various embodiments of the present technology. The memory device 200 may employ a plurality of external terminals that include command and address terminals coupled to a command bus and an address bus to receive command signals CMD and address signals ADDR, respectively. The memory device may further include a chip select terminal to receive a chip select signal CS, clock terminals to receive clock signals CK and CKF, data clock terminals to receive data clock signals WCK and WCKF, data terminals DQ, RDQS, DBI, and DMI to receive data signals, and power supply terminals VDD, VSS, and VDDQ.[0026] The power supply terminals of the memory device 200 may be supplied with power supply potentials VDD and VSS. These power supply potentials VDD and VSS can be supplied to an internal voltage generator circuit 270. The internal voltage generator circuit 270 can generate various internal potentials VPP, VOD, VARY, VPERI, and the like based on the power supply potentials VDD and VSS. The internal potential VPP can be used in the row decoder 240, the internal potentials VOD and VARY can be used in sense amplifiers included in the memory array 250 of the memory device 200, and the internal potential VPERI can be used in many other circuit blocks.[0027] The power supply terminals may also be supplied with power supply potential VDDQ. The power supply potential VDDQ can be supplied to the IO circuit 260 together with the power supply potential VSS. 
The power supply potential VDDQ can be the same potential as the power supply potential VDD in an embodiment of the present technology. The power supply potential VDDQ can be a different potential from the power supply potential VDD in another embodiment of the present technology. However, the dedicated power supply potential VDDQ can be used for the IO circuit 260 so that power supply noise generated by the IO circuit 260 does not propagate to the other circuit blocks.[0028] The clock terminals and data clock terminals may be supplied with external clock signals and complementary external clock signals. The external clock signals CK, CKF, WCK, WCKF can be supplied to a clock input circuit 220. The CK and CKF signals can be complementary, and the WCK and WCKF signals can also be complementary. Complementary clock signals can have opposite clock levels and transition between the opposite clock levels at the same time. For example, when a clock signal is at a low clock level a complementary clock signal is at a high clock level, and when the clock signal is at a high clock level the complementary clock signal is at a low clock level. Moreover, when the clock signal transitions from the low clock level to the high clock level the complementary clock signal transitions from the high clock level to the low clock level, and when the clock signal transitions from the high clock level to the low clock level the complementary clock signal transitions from the low clock level to the high clock level.[0029] Input buffers included in the clock input circuit 220 can receive the external clock signals. For example, when enabled by a CKE signal from a command decoder 215, an input buffer can receive the CK and CKF signals and the WCK and WCKF signals. The clock input circuit 220 can receive the external clock signals to generate internal clock signals ICLK. The internal clock signals ICLK can be supplied to an internal clock circuit 230.
The internal clock circuit 230 can provide various phase and frequency controlled internal clock signals based on the received internal clock signals ICLK and a clock enable signal CKE from the command decoder 215. For example, the internal clock circuit 230 can include a clock path (not shown in Figure 2) that receives the internal clock signal ICLK and provides various clock signals to the command decoder 215. The internal clock circuit 230 can further provide input/output (IO) clock signals. The IO clock signals can be supplied to an input/output (IO) circuit 260 and can be used as a timing signal for determining an output timing of read data and the input timing of write data. The IO clock signals can be provided at multiple clock frequencies so that data can be output from and input into the memory device 200 at different data rates. A higher clock frequency may be desirable when high memory speed is desired. A lower clock frequency may be desirable when lower power consumption is desired. The internal clock signals ICLK can also be supplied to a timing generator 235 and thus various internal clock signals can be generated that can be used by the command decoder 215, the column decoder 245, and/or other components of the memory device 200.[0030] The memory device 200 may include an array of memory cells, such as memory array 250. The memory cells of the memory array 250 may be arranged in a plurality of memory regions, and each memory region may include a plurality of word lines (WL), a plurality of bit lines (BL), and a plurality of memory cells arranged at intersections of the word lines and the bit lines. In some embodiments, a memory region can be one or more memory banks or another arrangement of memory cells. In these and other embodiments, the memory regions of the memory array 250 can be arranged in one or more groups (e.g., groups of memory banks, one or more logical memory ranks or dies, etc.).
Memory cells in the memory array 250 can include any one of a number of different memory media types, including capacitive, magnetoresistive, ferroelectric, phase change, or the like. The selection of a word line WL may be performed by a row decoder 240, and the selection of a bit line BL may be performed by a column decoder 245. Sense amplifiers (SAMP) may be provided for corresponding bit lines BL and connected to at least one respective local I/O line pair (LIOT/B), which may in turn be coupled to at least one respective main I/O line pair (MIOT/B), via transfer gates (TG), which can function as switches. The memory array 250 may also include plate lines and corresponding circuitry for managing their operation.[0031] The command terminals and address terminals may be supplied with an address signal and a bank address signal from outside the memory device 200. The address signal and the bank address signal supplied to the address terminals can be transferred, via a command/address input circuit 205, to an address decoder 210. The address decoder 210 can receive the address signals and supply a decoded row address signal (XADD) to the row decoder 240, and a decoded column address signal (YADD) to the column decoder 245. The address decoder 210 can also receive the bank address signal (BADD) and supply the bank address signal to both the row decoder 240 and the column decoder 245.[0032] The command and address terminals can be supplied with command signals CMD, address signals ADDR, and chip selection signals CS (e.g., from the memory controller 101 and/or a host device). The command signals may represent various memory commands (e.g., including access commands, which can include read commands and write commands). The chip selection signals CS may be used to select the memory device 104 and/or the memory device 200 to respond to commands and addresses provided to the command and address terminals.
When an active CS signal is provided to the memory device 200, the commands and addresses can be decoded and memory operations can be performed. The command signals CMD may be provided as internal command signals ICMD to a command decoder 215 via the command/address input circuit 205. The command decoder 215 may include circuits to decode the internal command signals ICMD to generate various internal signals and commands for performing memory operations, for example, a row command signal to select a word line and a column command signal to select a bit line. The internal command signals can also include output and input activation commands, such as a clocked command CMDCK (not shown) to the command decoder 215.[0033] The command decoder 215 may further include one or more registers 218 for tracking various counts or values (e.g., counts of refresh commands received by the memory device 200 or self-refresh operations performed by the memory device 200) and/or for storing various operating conditions for the memory device 200 to perform certain functions, features, and modes (e.g., refresh modes, test modes, etc.). As such, in some embodiments, registers 218 (or a subset of the registers 218) may be referred to as mode registers. As a specific example, the memory device 200 may be placed into a refresh mode by programming certain bits of the registers 218. Once the memory device 200 is placed into the refresh mode, the memory device 200 can use certain address bits received in the address signals ADDR to determine the frequency at which refresh commands are being sent to the memory device 200 (e.g., by a memory controller 101). For example, the memory device 200 can monitor a specific address bit in ADDR signals received by the memory device 200 (e.g., from the memory controller 101) to determine a current refresh scheme (e.g., 1X refresh mode, 2X refresh mode, etc.) of the corresponding memory system 100.
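The address-bit monitoring in this example can be sketched as a simple bit test. The sketch below is only illustrative: the monitored bit position, the function name, and the use of an integer for the ADDR bits are assumptions, not details taken from this disclosure.

```python
# Hedged sketch: decoding a refresh-mode notification from one address bit.
# REFRESH_MODE_BIT is a hypothetical bit position chosen for illustration.

REFRESH_MODE_BIT = 8  # assumed position of the monitored address bit

def decode_refresh_mode(addr_bits: int) -> str:
    """Return '1X' when the monitored bit is in a first (high) state and
    '2X' when it is in a second (low) state, per the convention above."""
    bit = (addr_bits >> REFRESH_MODE_BIT) & 1
    return "1X" if bit == 1 else "2X"

# A controller can thus notify the device of its refresh scheme in-band.
assert decode_refresh_mode(1 << REFRESH_MODE_BIT) == "1X"
assert decode_refresh_mode(0) == "2X"
```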
Continuing with this example, the memory device 200 can determine that the memory device 200 is receiving refresh commands in accordance with the 1X refresh mode when the address bit in the ADDR signals received by the memory device 200 is in a first state (e.g., a “high” or “1” state) or that the memory device 200 is receiving refresh commands in accordance with the 2X refresh mode when the address bit is in a second state (e.g., a “low” or “0” state). In other words, a memory controller 101 or other component of a memory system that includes the memory device 200 can notify the memory device 200 of the frequency at which refresh commands are being sent to the memory device 200 using one or more address bits in the ADDR signals sent to the memory device 200.[0034] When a read command is issued to the memory device 200, and a row address and a column address are timely supplied to the memory device 200, read data can be read from memory cells in the memory array 250 designated by the row address and the column address. The read command may be received by the command decoder 215, which can provide internal commands to the IO circuit 260 so that read data can be output from the data terminals DQ, RDQS, DBI, and DMI via read/write (RW) amplifiers 255 and the IO circuit 260 according to the RDQS clock signals. The read data may be provided at a time defined by read latency information RL that can be programmed in the memory device 200, for example, in the mode register 218. The read latency information RL can be defined in terms of clock cycles of the CK clock signal.
For example, the read latency information RL can be a number of clock cycles of the CK signal after the read command is received by the memory device 200 when the associated read data is provided.[0035] When a write command is issued to the memory device 200, and a row address and a column address are timely supplied to the memory device 200, write data can be supplied to the data terminals DQ, DBI, and DMI over DQ lines connected to the memory device 200 according to the WCK and WCKF clock signals. The write command may be received by the command decoder 215, which can provide internal commands to the IO circuit 260 so that the write data can be received by data receivers in the IO circuit 260, and supplied via the IO circuit 260 and the RW amplifiers 255 to the memory array 250 over IO lines of the memory device 200. The write data may be written in the memory cell designated by the row address and the column address. The write data may be provided to the data terminals at a time that is defined by write latency WL information. The write latency WL information can be programmed in the memory device 200, for example, in the mode register 218. The write latency WL information can be defined in terms of clock cycles of the CK clock signal. For example, the write latency information WL can be a number of clock cycles of the CK signal after the write command is received by the memory device 200 when the associated write data is received.[0036] The memory array 250 may be refreshed or maintained to prevent data loss, either due to charge leakage or imprint effects. A refresh operation may be initiated by the memory device 200, by the memory system 100 (e.g., by the memory controller 101 of Figure 1), and/or by a host device, and may include accessing one or more rows (e.g., WL) and discharging cells of the accessed row to a corresponding SAMP.
While the row is opened (e.g., while the accessed WL is energized), the SAMP may compare the voltage resulting from the discharged cell to a reference. The SAMP may then write back a logic value (e.g., charge the cell) to a nominal value for the given logic state. In some cases, this write back process may increase the charge of the cell to ameliorate the discharge issues discussed above. In other cases, the write back process may invert the data state of the cell (e.g., from high to low or low to high), to ameliorate hysteresis shift, material depolarization, or the like. Other refresh schemes or methods may also be employed.[0037] In one approach, the memory device 200 may be configured to refresh the same row of memory cells in every memory bank of the memory array 250 simultaneously. In another approach, the memory device 200 may be configured to refresh the same row of memory cells in every memory bank of the memory array 250 sequentially. In still another approach, the memory device 200 can further include circuitry (e.g., one or more registers, latches, embedded memories, counters, etc.) configured to track row (e.g., word line) addresses, each corresponding to one of the memory banks in the memory array 250. In this approach, the memory device 200 is not constrained to refresh the same row in each memory bank of the memory array 250 before refreshing another row in one of the memory banks.[0038] Regardless of the refresh approach, the memory device 200 can be configured to refresh memory cells in the memory array 250 within a given refresh rate or time window (e.g., 32ms, 28ms, 25ms, 23ms, 21ms, 18ms, 16ms, 8ms, etc.), known as tREF. In these embodiments, a corresponding memory device 104 and/or the memory system 100 can be configured to supply refresh commands to the memory device 200 in accordance with a specified minimum cadence tREFI.
For example, the memory device 104 and/or the memory system 100 can be configured to supply one or more refresh commands to the memory device 200 in accordance with the 1X refresh mode at least every 7.8µs for DDR4 memory devices (3.9µs for DDR5) such that an approximate minimum of 4000 refresh commands (8000 refresh commands for DDR5) are supplied to the memory device 200 within a 32ms time window. As another example, the memory device 104 and/or the memory system 100 can be configured to supply one or more refresh commands to the memory device 200 in accordance with the 2X refresh mode at least every 3.9µs for DDR4 memory devices (1.95µs for DDR5) such that an approximate minimum of 8000 refresh commands (16000 refresh commands for DDR5) are supplied to the memory device 200 within a 32ms time window.[0039] As discussed above, refresh operations of a memory system 100 and/or a memory device 200 are current-intensive operations that demand and consume a large amount of power. Furthermore, data loss due to temperature is less of a concern in memory devices 200 having internal temperatures below one or more threshold temperature values. Therefore, increasing the frequency at which cooler memory devices 200 are refreshed to ameliorate data retention in other memory devices 200 having higher temperatures would constitute a waste of power.[0040] To address this concern, the memory device 200 illustrated in Figure 2 includes at least one internal temperature sensor 243. The temperature sensor 243 is configured to generate temperature measurements representing the temperature of only the memory device 200. In some embodiments, the temperature measurements are communicated to the command decoder 215 and/or to a memory controller 101 communicatively coupled to the temperature sensor 243.
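The approximate refresh-command counts given for paragraph [0038]'s examples follow from dividing the 32ms refresh window by the command cadence tREFI. A small worked sketch (function name is illustrative):

```python
# Worked arithmetic behind the approximate command counts above:
# commands per window = window length / refresh-command interval.

def commands_per_window(trefi_us: float, window_ms: float = 32.0) -> int:
    """Number of refresh commands at cadence trefi_us within window_ms."""
    return int(window_ms * 1000 / trefi_us)

assert commands_per_window(7.8) == 4102    # ~4000 for DDR4 in 1X mode
assert commands_per_window(3.9) == 8205    # ~8000 for DDR5 1X / DDR4 2X
assert commands_per_window(1.95) == 16410  # ~16000 for DDR5 in 2X mode
```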
As discussed in greater detail below with respect to Figure 3, the memory device 200 can determine whether to skip one or more refresh operations based, at least in part, on the internal temperature of the memory device 200 and on the frequency at which the memory device 200 is receiving refresh commands (indicated in the notification received via the address bits of address signals ADDR received by the memory device 200). As such, each memory device 200 of the memory system 100 can determine whether to skip refresh operations based on its own temperature such that only memory devices 200 having internal temperatures above a threshold temperature value are refreshed in accordance with the 2X refresh mode. In this manner, the total amount of power required or consumed by the memory device 200 for refresh operations is reduced and/or minimized, thereby reducing the total amount of power required or consumed by a corresponding memory system 100.[0041] Figure 2A is a block diagram schematically illustrating a refresh control circuit 280 of a memory device 200 (Figure 2), configured in accordance with various embodiments of the present technology. As shown, the refresh control circuit 280 includes a delay element 281; inverters 282, 283, and 287; a NAND gate 284; AND gates 285, 288, and 290; a 1/2 divider circuit 286; and an OR gate 289. A person of ordinary skill in the art will readily appreciate that the refresh control circuit 280 illustrated in Figure 2A is but one implementation of a refresh control circuit configured in accordance with various embodiments of the present technology. Therefore, a person of ordinary skill in the art will also readily recognize that a refresh control circuit can include other components in addition to or in lieu of any one or more of the components of the refresh control circuit 280, and still fall within the scope of the present technology. 
For example, the refresh control circuit 280 is illustrated with a 1/2 divider circuit 286 to enable the memory device to skip every other refresh command, but the refresh control circuit 280 can include other divider circuits (e.g., a 1/3 divider circuit to skip every third refresh command) in other embodiments in addition to or in lieu of the 1/2 divider circuit 286.[0042] As illustrated in Figure 2A, a refresh command signal REF (e.g., a refresh clock signal REFCLK) is received (e.g., from a command decoder 215 (Figure 2) of the memory device 200) and is input (i) into the AND gate 285 via the delay element 281 and (ii) into the AND gate 290. The AND gate 285 additionally receives an inverted refresh management signal RFM, an inverted logical product of an inverted mode register signal and a command address bit signal (illustrated as mode register signal MR4[3] and command address bit signal CA8 in Figure 2A), and a same bank in progress signal. In some embodiments, the command address bit signal CA8 is asserted when the memory device 200 is in 1X refresh mode and is not asserted when the memory device 200 is in 2X refresh mode. The output of the AND gate 285 is input into the 1/2 divider circuit 286 in addition to a set signal SET. In turn, the output of the 1/2 divider circuit 286 is input into the OR gate 289 in addition to the refresh management signal RFM and the logical product of the inverted mode register signal and the command address bit signal (again, illustrated as mode register signal MR4[3] and command address bit signal CA8 in Figure 2A).
The output of the OR gate 289 is input into the AND gate 290, and the logical product of the output of the OR gate 289 and the refresh command signal REF is output from the AND gate 290 as an internal refresh signal REF Internal.[0043] In operation, the refresh control circuit 280 is configured to enable the memory device 200 to execute a refresh command (e.g., to output the internal refresh signal REF Internal in a high state) whenever (i) the refresh management signal RFM is asserted, (ii) the mode register signal MR4[3] is not asserted while the command address bit signal CA8 is asserted, and/or (iii) the output of the 1/2 divider circuit 286 is high. The output of the 1/2 divider circuit 286 is high whenever (i) the output of the AND gate 285 is high (e.g., when (a) the refresh command signal REF is asserted, (b) the same bank in progress signal is asserted, (c) the refresh management signal is not asserted, and (d) the mode register signal MR4[3] is not asserted while the command address bit signal CA8 is asserted) and (ii) the set signal SET is asserted. In some embodiments, the set signal SET is asserted whenever the mode register signal MR4[3] is not asserted, the temperature of the memory device 200 is greater than or equal to a threshold temperature value (e.g., 85° C), during set refresh (SFR) mode of the memory device 200, and/or upon entering and exiting fine granular refresh (FGR) mode of the memory device 200. In other words, the refresh control circuit 280 is configured to cause the memory device 200 to skip every other refresh command whenever the refresh management signal is not asserted, the memory device is in 2X refresh mode, and the temperature of the memory device 200 is below the threshold temperature value. 
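The skip condition summarized in the preceding sentence can be modeled behaviorally as follows. This is a deliberate simplification, not a gate-accurate model of Figure 2A: the class and argument names are illustrative assumptions, and the 1/2 divider circuit 286 is reduced to a single toggle.

```python
# Behavioral sketch (assumed names) of when the device executes vs. skips a
# received refresh command: in 2X mode, with RFM deasserted and the device
# below the threshold temperature, every other refresh command is skipped.

class RefreshGateSketch:
    def __init__(self):
        self._pending_skip = False  # stands in for the 1/2 divider state

    def should_execute(self, rfm_asserted: bool, in_2x_mode: bool,
                       temp_below_threshold: bool) -> bool:
        """Return True when the received refresh command should be executed."""
        if rfm_asserted or not in_2x_mode or not temp_below_threshold:
            self._pending_skip = False
            return True  # execute every refresh command received
        # 2X mode, RFM deasserted, device cool: skip every other command.
        self._pending_skip = not self._pending_skip
        return not self._pending_skip

gate = RefreshGateSketch()
decisions = [gate.should_execute(False, True, True) for _ in range(4)]
assert decisions == [False, True, False, True]  # alternating skip/execute
```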
A person of ordinary skill in the art will readily recognize, however, that the refresh control circuit 280 can be configured to cause the memory device 200 to skip refresh commands/operations upon the occurrence of other events in addition to or in lieu of the events discussed above.[0044] Figure 3 is a flow diagram illustrating a routine 300 of a memory device configured in accordance with various embodiments of the present technology. The routine 300 is illustrated as a set of steps, blocks, operations, or processes 301-311. In some embodiments, one or more of the steps 301-311 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine readable media. All or a subset of the steps 301-311 of the routine 300 can be executed, at least in part, by various components of a memory device (e.g., a memory device 104 of Figure 1 and/or a memory device 104/200 of Figures 1 and 2). In these and other embodiments, one or more of the steps 301-311 can be executed, at least in part, by one or more components of a memory system, such as a memory controller, a PCB, a package substrate, and/or a memory die. In these and still other embodiments, one or more of the steps 301-311 can be executed, at least in part, by a host device operably connected to the memory system, by a manufacturer, by an end user, or by an intermediary party.[0045] As shown, the routine 300 begins at step 301 by determining a temperature of the memory device. For example, the routine 300 determines a temperature of the memory device using one or more temperature measurements generated by a temperature sensor internal to the memory device.[0046] At step 302, the routine 300 receives an indication of a frequency at which the memory device is receiving refresh commands. Such an indication can, for example, notify the routine 300 of a refresh scheme or mode (e.g., a 1X refresh mode, a 2X refresh mode, etc.)
of a memory system (e.g., of a memory controller and/or a host device) that includes or is communicatively connected to the memory device. In some embodiments, a refresh mode of the memory device is enabled by programming certain bits of a mode register of the memory device. In these embodiments, the routine 300 can receive the indication via one or more address bits in address signals received by the memory device (e.g., when the memory device registers a refresh command, after the mode register is programmed to place the memory device into the refresh mode, etc.).[0047] At step 303, the routine 300 determines the frequency at which the memory device is receiving refresh commands. In some embodiments, the routine 300 determines the frequency based, at least in part, on the indication received at step 302. For example, the routine 300 can monitor the one or more address bits in address signals received by the memory device to determine the frequency. Continuing with this example, the routine 300 can determine that the memory device is receiving refresh commands at a frequency consistent with a 2X refresh mode (step 303, “Y”) when a specified address bit in the address signals received by the memory device is in a first state (e.g., a “low” or “0” state), and the routine 300 can accordingly proceed to step 304 to compare the temperature of the memory device determined at step 301 to a first threshold temperature value. 
Alternatively, the routine 300 can determine that the memory device is receiving refresh commands at a frequency consistent with a 1X refresh mode (step 303, “N”) when the specified address bit in the address signals received by the memory device is in a second state (e.g., a “high” or “1” state), and the routine 300 can accordingly proceed to step 309 to compare the temperature of the memory device determined at step 301 to a third threshold temperature value.[0048] For the sake of clarity and understanding, the frequency at which the memory device is receiving refresh commands in the illustrated embodiment is consistent with one of only two possible refresh schemes (e.g., the 1X refresh mode and the 2X refresh mode). In other embodiments, however, the frequency can be consistent with any number of other possible refresh schemes. In such embodiments, the flow diagram of Figure 3 can include a corresponding number of additional decision steps (similar to step 303) for determining the frequency at which the memory device is receiving refresh commands, as well as a corresponding number of additional steps (similar to steps 304-311) for determining a number and/or pattern of refresh commands/operations to skip.[0049] At step 304, the routine 300 determines whether the temperature of the memory device determined at step 301 is greater than or equal to a first threshold temperature value. In some embodiments, the first threshold temperature value can be a temperature above (and/or at) which there is a concern that data retention issues will arise in memory cells of the memory device. For example, the first threshold temperature value can be equal to or greater than 70° C (e.g., 85° C).
In some embodiments, the first threshold temperature value can be the same as, or similar to, the threshold temperature value used by a memory controller to determine the frequency at which to send refresh commands to the memory device, as discussed in greater detail above with respect to Figure 1. If the routine 300 determines that the temperature of the memory device is greater than or equal to the first threshold temperature value, the routine 300 can accordingly proceed to step 305 to refresh the memory device in accordance with the 2X refresh mode. In other words, the routine 300 can execute a refresh operation for every refresh command received by the memory device. On the other hand, if the routine 300 determines that the temperature of the memory device determined at step 301 is not greater than or equal to the first threshold temperature value, the routine 300 can accordingly proceed to step 306.[0050] At step 306, the routine 300 determines whether the temperature of the memory device determined at step 301 is greater than or equal to a second threshold temperature value. In some embodiments, the second threshold temperature value can be a temperature above (and/or at) which there is concern that data retention issues will arise if the memory device is not regularly refreshed according to a specified refresh rate (e.g., according to the 1X refresh mode). For example, the second threshold temperature value can be between about 45° C and about 60° C. If the routine 300 determines that the temperature of the memory device is greater than or equal to the second threshold temperature value, the routine 300 can accordingly proceed to step 307 to skip a first number of refresh operations. In some embodiments, the first number of refresh operations corresponds to skipping half of the refresh commands received by the memory device in accordance with the 2X refresh mode.
Thus, the routine 300 can execute a refresh operation for every other refresh command (or another pattern of refresh commands) received by the memory device to effectively refresh the memory device in accordance with the 1X refresh mode. As a specific example, in response to receiving a first refresh command directing the memory device to execute a refresh operation, the memory device can execute a refresh operation. In response to receiving a second refresh command directing the memory device to execute a refresh operation, however, the memory device ignores and/or masks the second refresh command such that the memory device refrains from executing a refresh operation. In other embodiments, the first number of refresh operations can correspond to skipping a different number of refresh commands received by the memory device in accordance with the 2X refresh mode.[0051] In some embodiments, the routine 300 can track refresh commands received by the memory device. For example, the routine 300 can (i) use a counter (e.g., a register of the memory device) to track whether a last refresh command received by the memory device was executed or ignored, (ii) use a value of the counter to determine whether to perform a refresh operation in response to receiving a current refresh command, and/or (iii) update the value of the counter accordingly after skipping or executing the current refresh command. As another example, when skipping every nth refresh command, the routine 300 can (i) use a counter to track how many refresh operations have been executed in response to receiving refresh commands, (ii) skip executing a refresh operation in response to receiving a refresh command after a value of the counter reaches n-1, and (iii) reset the value of the counter to zero after skipping a refresh operation in response to receiving the nth refresh command. In these and other embodiments, the routine 300 can use more than one counter to skip various patterns of refresh operations.
For example, the routine 300 can (i) use a first counter to refrain from executing a refresh operation in response to receiving every other refresh command and (ii) use a second counter to refrain from executing an additional one of every three refresh operations that would otherwise be executed in response to receiving every other refresh command when monitoring only the first counter.[0052] Referring again to block 306, if the routine 300, on the other hand, determines that the temperature of the memory device is not greater than or equal to the second threshold temperature value, the routine 300 can accordingly proceed to step 308 to skip a second number of refresh operations. The second number of refresh operations can be greater than the first number of refresh operations. For example, the second number of refresh operations can correspond to skipping every other refresh command received by the memory device in accordance with the 2X refresh mode, as well as skipping an additional one of every three refresh commands that would otherwise be executed by the routine 300 at step 307. Continuing with this example, for every group of six consecutive refresh commands received by the memory device, the routine 300 can skip the sixth refresh command received by the memory device in addition to the first, third, and fifth refresh commands received by the memory device. In other words, the routine 300 (at step 308) in this example would execute two of every three refresh commands that would otherwise be executed by the routine 300 at step 307. 
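The counter-based tracking described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patented implementation; the `RefreshSkipper` name and the single-counter layout are hypothetical.

```python
# Minimal sketch of the counter-based skip tracking described in
# paragraph [0051]. The class name and layout are hypothetical; the
# source only requires a counter that decides which commands to skip.

class RefreshSkipper:
    """Skips every nth refresh command received by the memory device."""

    def __init__(self, skip_every_nth):
        self.n = skip_every_nth
        self.executed = 0  # refresh operations executed since the last skip

    def on_refresh_command(self):
        """Return True to execute the refresh operation, False to skip it."""
        if self.executed == self.n - 1:
            self.executed = 0  # the nth command arrives: skip it, reset
            return False
        self.executed += 1
        return True


skipper = RefreshSkipper(skip_every_nth=2)
decisions = [skipper.on_refresh_command() for _ in range(6)]
# Executes every other command: [True, False, True, False, True, False]
```

With `skip_every_nth=2` the device executes every other command it receives (the effective 1X rate under a 2X command stream, as at step 307); with `skip_every_nth=3` it skips one third of the commands, as at step 311. More elaborate patterns, such as the step-308 pattern, can be built by composing two such counters.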
In other embodiments, the second number of refresh operations can correspond to skipping a different number and/or pattern of refresh commands received by the memory device in accordance with the 2X refresh mode.[0053] Returning again to step 303, if the routine 300 determines that the memory device is receiving refresh commands at a frequency consistent with the 1X refresh mode (step 303, “N”), the routine 300 proceeds to step 309. At step 309, the routine 300 determines whether the temperature of the memory device determined at step 301 is greater than or equal to a third threshold temperature value. In some embodiments, the third threshold temperature value can be a temperature above (and/or at) which there is concern that data retention issues will arise if the memory device is not regularly refreshed according to a specified refresh rate (e.g., according to the 1X refresh mode). For example, the third threshold temperature value can be the same as the second threshold temperature value. In other embodiments, the third threshold temperature value can be a different threshold temperature value than the second threshold temperature value.[0054] If the routine 300 determines that the temperature of the memory device is greater than or equal to the third threshold temperature value, the routine 300 can accordingly proceed to step 310 to refresh the memory device according to the 1X refresh mode. In other words, the routine 300 can execute a refresh operation for every refresh command received by the memory device. On the other hand, if the routine 300 determines that the temperature of the memory device determined at step 301 is not greater than or equal to the third threshold temperature value, the routine can accordingly proceed to step 311.[0055] At step 311, the routine 300 skips a third number of refresh commands received by the memory device.
In some embodiments, the third number of refresh operations corresponds to skipping one third of the refresh commands received by the memory device in accordance with the 1X refresh mode. Thus, in these embodiments, the routine 300 can skip every third refresh command (or another pattern of refresh commands) received by the memory device. In other embodiments, the third number of refresh operations can correspond to skipping a different number of refresh commands received by the memory device in accordance with the 1X refresh mode.[0056] Although the steps 301-311 of the routine 300 are discussed and illustrated in a particular order, the method illustrated by the routine 300 in Figure 3 is not so limited. In other embodiments, the steps 301-311 can be performed in a different order. For example, any of the steps 301-311 of the routine 300 can be performed before, during, and/or after any of the other steps 301-311 of the routine 300. As a specific example, step 302 and/or step 303 can be performed before and/or during step 301. Moreover, a person of ordinary skill in the relevant art will readily recognize that the illustrated method can be altered and still remain within these and other embodiments of the present technology. For example, one or more of the steps 301-311 of the routine 300 illustrated in Figure 3 can be omitted and/or repeated in some embodiments. As a specific example, steps 306 and/or step 308 can be omitted in some embodiments such that the routine 300 proceeds to step 307 after determining that the temperature of the memory device is not greater than or equal to the first threshold temperature. As another specific example, steps 309 and/or step 311 can be omitted in some embodiments such that the routine 300 proceeds to step 310 after determining that the refresh scheme is not the 2X refresh mode at step 303. In these and other embodiments, one or more of the steps 301-311 of the routine 300 can be combined (at least in part).
In these and still other embodiments, the routine 300 can include additional steps beyond those shown in Figure 3. For example, the routine 300 can compare the temperature of the memory device determined at step 301 to one or more additional threshold temperatures (e.g., one or more additional threshold temperatures above and/or below the first, second, and/or third threshold temperatures) and/or can accordingly skip different numbers of refresh operations (e.g., numbers greater than and/or less than the first, second, and/or third numbers of refresh operations) based, at least in part, on the additional comparisons.[0057] Figure 4 is a flow diagram illustrating a routine 400 of a memory device configured in accordance with various embodiments of the present technology. The routine 400 is illustrated as a set of steps, blocks, operations, or processes 410, 420, and 430. In some embodiments, one or more of the steps 410, 420, and 430 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine readable media. All or a subset of the steps 410, 420, and 430 of the routine 400 can be executed, at least in part, by various components of a memory device (e.g., a memory device 104 of Figure 1 and/or a memory device 104/200 of Figures 1 and 2). In these and other embodiments, one or more of the steps 410, 420, and 430 can be executed, at least in part, by one or more components of a memory system, such as a memory controller, a PCB, a package substrate, and/or a memory die. In these and still other embodiments, one or more of the steps 410, 420, and 430 can be executed, at least in part, by a host device operably connected to the memory system, by a manufacturer, by an end user, or by an intermediary party.[0058] The routine 400 begins at block 410 by determining a temperature of the memory device.
In some embodiments, the routine 400 determines a temperature of the memory device using one or more temperature measurements generated by a temperature sensor internal to the memory device.[0059] At block 420, the routine 400 continues by determining a frequency at which refresh commands are being received by and/or sent to the memory device. In some embodiments, the routine 400 determines the frequency by receiving a notification indicating the frequency. For example, the routine 400 can receive the notification via one or more address bits in address signals received by the memory device (e.g., when the memory device registers a refresh command, after a mode register of the memory device is programmed to place the memory device into a refresh mode, etc.). Continuing with this example, the routine 400 can determine the frequency by monitoring the one or more address bits in the address signals. In one embodiment, the routine 400 can determine that the frequency is a first frequency when a specified address bit in the address signals received by the memory device is in a first state (e.g., a “low” or “0” state), and can determine that the frequency is a second frequency when the specified address bit is in a second state (e.g., a “high” or “1” state).[0060] At block 430, the routine 400 continues by skipping refresh operations based, at least in part, on the temperature of the memory device determined at block 410 and on the frequency at which the memory device is receiving and/or is being sent refresh commands.
For example, when the memory device is receiving refresh commands at a higher frequency, the routine 400 can (i) skip a first number (e.g., zero) of refresh operations when the temperature is at or above a first threshold temperature value, (ii) skip a second number of (e.g., corresponding to every other) refresh operations when the temperature is below the first threshold temperature value, and/or (iii) skip a third number (e.g., greater than the second number) of refresh operations when the temperature is below a second threshold temperature value (e.g., less than the first threshold temperature value). Additionally, or alternatively, when the memory device is receiving refresh commands at a lower frequency, the routine 400 can (i) skip a first number (e.g., zero) of refresh operations when the temperature is below the first threshold temperature value and/or (ii) skip a second number (e.g., greater than the first number, corresponding to every third, etc.) of refresh operations when the temperature is below a second threshold temperature value (e.g., less than the first threshold temperature value). In some embodiments, skipping refresh operations includes (i) receiving one or more refresh commands directing the memory device to execute one or more refresh operations and (ii) in response to receiving the one or more refresh commands, refraining from executing at least a subset of the one or more refresh operations.[0061] Although the steps 410, 420, and 430 of the routine 400 are discussed and illustrated in a particular order, the method illustrated by the routine 400 in Figure 4 is not so limited. In other embodiments, the steps 410, 420, and 430 can be performed in a different order. For example, any of the steps 410, 420, and 430 of the routine 400 can be performed before, during, and/or after any of the other steps 410, 420, and 430 of the routine 400. As a specific example, step 420 can be performed before and/or during step 410. 
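As a rough sketch, the temperature- and frequency-based selection at block 430 (paralleling steps 303-311 of routine 300) might look like the following. The threshold constants, the address-bit mapping, and the function names are illustrative assumptions; the source fixes only the relative ordering of the thresholds and skip counts, not these specific values.

```python
# Hedged sketch of the block-430 selection logic. Threshold values and
# names here are assumptions chosen to match the examples in the text.

FIRST_THRESHOLD_C = 70.0   # at/above: no skipping under the 2X stream
SECOND_THRESHOLD_C = 45.0  # 2X-stream lower bound for 1X-rate skipping
THIRD_THRESHOLD_C = 45.0   # 1X-stream bound (may equal the second)


def receiving_2x_commands(specified_address_bit):
    """Per the Figure 3 discussion, a "0" on the specified address bit
    indicates the 2X command stream and a "1" indicates the 1X stream."""
    return specified_address_bit == 0


def refreshes_to_skip_per_six(temp_c, specified_address_bit):
    """Return how many of every six refresh commands to skip."""
    if receiving_2x_commands(specified_address_bit):
        if temp_c >= FIRST_THRESHOLD_C:
            return 0  # step 305: execute every command (full 2X rate)
        if temp_c >= SECOND_THRESHOLD_C:
            return 3  # step 307: skip every other command (1X rate)
        return 4      # step 308: also skip one of every remaining three
    if temp_c >= THIRD_THRESHOLD_C:
        return 0      # step 310: execute every command (full 1X rate)
    return 2          # step 311: skip one third of the commands
```

For example, `refreshes_to_skip_per_six(85.0, 0)` returns 0 (hot device, full 2X rate), while `refreshes_to_skip_per_six(30.0, 0)` returns 4, matching the four-of-six pattern described at step 308.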
Moreover, a person of ordinary skill in the relevant art will readily recognize that the illustrated method can be altered and still remain within these and other embodiments of the present technology. For example, one or more of the steps 410, 420, and 430 of the routine 400 illustrated in Figure 4 can be omitted and/or repeated in some embodiments. In these and other embodiments, one or more of the steps 410, 420, and 430 of the routine 400 can be combined (at least in part). In these and still other embodiments, the routine 400 can include additional steps beyond those shown in Figure 4.[0062] Figures 3 and 4 and the corresponding discussion above assume that a single refresh operation executed in response to a single refresh command received by a memory device refreshes every memory cell in a memory array of the memory device. Refresh operations executed in response to a single refresh command in other embodiments, however, can refresh a single memory region representing a subset of the memory cells in the memory array of the memory device. For example, a single refresh operation executed in response to receiving a single refresh command in some embodiments can refresh a single memory bank in every memory bank group of the memory array. In these embodiments, the routine 300 of Figure 3 and/or the routine 400 of Figure 4 can skip greater numbers of refresh operations to achieve a desired refresh rate of the entire memory array. For example, the routine 300 can skip greater numbers of refresh operations at steps 307, 308, and/or 311 to achieve a desired refresh rate of the entire memory array. For the sake of clarity and understanding, consider the following numerical example.
Assuming that the memory array of a memory device includes four memory banks in each of four memory bank groups, the memory device would need to execute four refresh operations in response to receiving four consecutive refresh commands to refresh the entire memory array. Thus, in this example, each executed refresh operation refreshes a single memory bank in every memory bank group. Therefore, to skip the first number (e.g., half) of the refresh operations, the routine 300 at step 307 could alternate between (i) executing four refresh operations in a row in response to receiving four consecutive refresh commands and (ii) skipping four refresh operations in a row in response to receiving the next four consecutive refresh commands. [0063] Figure 5 is a schematic view of a system that includes a memory device configured in accordance with various embodiments of the present technology. Any one of the foregoing memory systems, devices, and/or dies described above with reference to Figures 1-4 can be incorporated into any of a myriad of larger and/or more complex systems, a representative example of which is system 590 shown schematically in Figure 5. The system 590 can include a semiconductor device assembly 500, a power source 592, a driver 594, a processor 596, and/or other subsystems and components 598. The semiconductor device assembly 500 can include features generally similar to those of the memory systems, devices, and/or dies described above with reference to Figures 1-4, and can, therefore, include various features of the temperature- and frequency-based refresh schemes described above. The resulting system 590 can perform any of a wide variety of functions, such as memory storage, data processing, and/or other suitable functions. Accordingly, representative systems 590 can include, without limitation, hand-held devices (e.g., mobile phones, tablets, digital readers, and digital audio players), computers, vehicles, appliances, and other products.
Components of the system 590 may be housed in a single unit or distributed over multiple, interconnected units (e.g., through a communications network). The components of the system 590 can also include remote devices and any of a wide variety of computer readable media.

Conclusion

[0064] The above detailed descriptions of embodiments of the technology are not intended to be exhaustive or to limit the technology to the precise form disclosed above. Although specific embodiments of, and examples for, the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology, as those skilled in the relevant art will recognize. For example, while steps are presented and/or discussed in a given order, alternative embodiments can perform steps in a different order. Furthermore, the various embodiments described herein can also be combined to provide further embodiments.[0065] From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the technology. Where the context permits, singular or plural terms can also include the plural or singular term, respectively. Moreover, unless the word "or" is expressly limited to mean only a single item exclusive from the other items in reference to a list of two or more items, then the use of "or" in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list.
Additionally, the terms "comprising," “including,” “having” and “with” are used throughout to mean including at least the recited feature(s) such that any greater number of the same feature and/or additional types of other features are not precluded. As used herein, the phrase “and/or” as in “A and/or B” refers to A alone, B alone, and both A and B.[0066] From the foregoing, it will also be appreciated that various modifications can be made without deviating from the technology. For example, various components of the technology can be further divided into subcomponents, or various components and functions of the technology can be combined and/or integrated. Furthermore, although advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments can also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
The present invention includes a method and device for controlling the data length of read and write operations performed on a memory device. The method includes determining a first number of channels available to a memory controller operatively coupled to the memory device; determining a second number representative of the number of populated channels; calculating a burst length based on the first and second numbers; and programming the memory controller to use the burst length as the data length of read and write operations performed on the memory device.
1. A system comprising:
a memory device; and
a memory controller operable to control the memory device, the memory controller comprising:
a routine configured to determine a first number of channels available to the controller and determine a second number of the first number of channels which are populated;
logic configured to calculate a burst length based on the first number and the second number;
a control register configured to receive and store the burst length; and
a state machine operable to perform one or more commands on the memory device using the burst length as a data length for the one or more commands.
2. The system of claim 1, further comprising:
a data processor operable to initiate a read or write request on the memory device, wherein the control register is configured to receive and store the burst length in response to the read or write request initiated from the data processor.
3. The system of claim 1, wherein the one or more commands includes a read or a write command.
4. The system of claim 3, wherein the logic is operable to calculate an optimum burst length, the optimum burst length being a minimum burst length required to minimize a number of read or write commands performed on the memory device.
5. The system of claim 4, wherein the state machine is operable to perform the read or write commands on the memory device using the optimum burst length as the data length for the read or write commands.
6. The system of claim 5, wherein the memory device is synchronous dynamic random access memory (SDRAM).
7. The system of claim 5, wherein the memory device is double data rate (DDR) memory.
8. The system of claim 1, wherein the logic is operable to calculate the burst length directly proportional to the first number and inversely proportional to the second number.
9. The system of claim 8, wherein the burst length is equal to the first number divided by the second number, and further multiplied by a constant.
10. The system of claim 9, wherein the constant represents a minimum burst length required by the memory device or required by the memory controller.
11. The system of claim 1, wherein the logic is implemented in hardware.
CROSS-REFERENCE TO RELATED APPLICATIONS

Under 35 U.S.C. § 120, this application is a continuation application of and claims priority to U.S. patent application Ser. No. 10/041,679, filed Jan. 7, 2002, now U.S. Pat. No. 6,766,385, the entire contents of which are incorporated by reference herein.

BACKGROUND

This invention relates to data communications in a computer system, and more particularly to a memory controller operable to issue variable length read and write commands.

Modern computer systems typically include a host processor coupled to a host bridge. The host bridge interfaces the processor to the rest of the computer system. The host bridge may include a memory controller that is coupled to a system memory, for example Dynamic Random Access Memory (DRAM). A single memory controller can support a plurality of memory channels, where each memory channel is an electrically independent interface with its own data bus connecting it to the memory controller. The larger the number of memory channels, the larger the aggregate bandwidth (amount of information transferred per second between the DRAM and the memory controller). Increasing the number of memory channels also increases the aggregate storage capacity of the memory subsystem by allowing more memory modules/devices to be connected to a single controller.

Most memory controllers perform read and write commands in fixed size amounts of data. This amount of data is called a "line". A line contains L bytes of data. For example, when the memory controller performs a read operation, the controller receives a single line of data (L bytes) for each read command issued. Likewise, when the memory controller performs a write operation, the memory controller transmits a line of data (L bytes) for each write command issued. In an n-channel implementation, each of the channels returns a line of data for each read command.
The total amount of data returned to the controller is L*n bytes if all channels are populated. For write commands, the controller transmits L*n bytes, with L bytes being written to each usable memory channel.

Referring to FIG. 1, timing diagram 100 illustrates the operation of a memory controller supporting two channels 101 and 102 with a fixed burst length L=4. As shown, only channel 101 is populated. Assuming a requesting agent requests R=8 bytes of data 104A-H, the controller would be required to issue two read commands 105A and 105B. The first read command 105A would issue at the rising edge of clock 0 106, and the second read command 105B would issue at the rising edge of clock 4 107. In contrast, assuming both channels are populated and the controller uses multiple channels in a lock-step fashion (i.e., each channel receives the same read and write commands and the data is split between the channels), the controller would only be required to issue one read command of length L=4. The single read command would enable the controller to receive the full L*n or 8 bytes. By requiring a larger number of read or write commands in the event that all channels are not populated, conventional memory controllers suffer performance and efficiency losses.

DESCRIPTION OF DRAWINGS

FIG. 1 is a timing diagram for a prior art memory controller.
FIG. 2 is a diagram of a computer system employing a memory controller supporting variable numbers of channels.
FIG. 3A is a diagram of the memory subsystem (memory controller and main memory) of FIG. 2.
FIG. 3B is a diagram of a memory controller implementing calculation logic.
FIG. 4 is a diagram of a command length control register and a memory controller state machine of the memory controller of FIG. 2.
FIG. 5A is a timing diagram illustrating commands utilized in read requests in a memory controller supporting two channels, both of which are populated.
FIG. 5B is a timing diagram illustrating commands utilized in read requests in a memory controller supporting two channels, only one of which is populated.
FIG. 6 illustrates a transition state diagram depicting the operation of the memory controller state machine of FIG. 4.
FIG. 7 is a flow chart illustrating the burst length optimization process.

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

Referring to FIG. 2, a computer system 200 includes a main memory 201 controlled by a memory controller 202. Memory controller 202 may be a discrete chip or part of another controller, such as a host bridge 203 interfacing between a central processing unit (processor) 204 and a hub interface 205. Main memory 201 includes memory components. The memory components may be DIMM modules that may contain memory devices such as SDRAM or DDR memory. Memory controller 202 is connected to n memory channels 206A-n, connecting the memory controller to the memory components of main memory 201. Memory channels 206A-n between main memory 201 and memory controller 202 carry control signals, address signals, and data signals.

Host bridge 203 and main memory 201 both interface with an Input/Output (I/O) bridge 207, which provides an interconnection between various peripheral components within the system (e.g., a keyboard, disk drive, scanner, and/or a mouse 216). I/O bridge 207 includes a system management (SM) bus interface 210 for coupling to an SM bus 211. SM bus interface 210 may support the serial presence detect protocol to access predefined storage locations in main memory 201 to determine how many channels 206A-n have memory components which are populated with memory devices. The serial presence detect protocol is a standard set by the Joint Electron Device Engineering Council (JEDEC).
The standard is referred to as JEDEC Standard 21-C, Configurations for Solid State Memories, published by JEDEC September 2000.

Buffers 212 are provided, via expansion bus 213, between I/O bridge 207 and one or more components, such as a nonvolatile memory (NVRAM) 215. NVRAM 215 stores a basic input/output system (BIOS) routine, which is executed in the computer system 200 during initial start-up. In operation, the BIOS routine may be copied to main memory 201.

Referring to FIGS. 2 and 3A, main memory 201 includes, for each channel 206A-n, memory components 300A-r and 301A-r. Memory controller 202 may provide one or more commands operable to interface with memory components 300A-r and 301A-r. Each memory component 300A-r and 301A-r includes an NVRAM 303A-r and 304A-r configured according to the serial presence detect protocol. The information stored in the NVRAM indicates the type of memory module used, e.g., memory data width, memory size, DDR or SDRAM. During start-up, a BIOS routine executed by processor 204 determines the total number of channels n (206A-n) connected to memory controller 202. The BIOS routine may also program SM bus interface 210 in I/O bridge 207, accessing predetermined locations in NVRAMs 303A-r and 304A-r to determine whether or not memory components 300A-r and 301A-r are populated with memory. Based on the accessed information, the number of populated channels m (the total number of channels 206A-n that contain memory components 300A-r and 301A-r populated with memory devices) is determined. The BIOS routine may also calculate an optimum burst length L based on n and m using the formula L=(n/m)*I, where I is a minimum burst length required by the memory interface that is hard-coded into the initialization software and L is the optimum burst length. The optimum burst length L is the minimum burst length required to minimize the number of read or write commands.
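The burst-length formula above can be illustrated with a short sketch. The function name is hypothetical; integer division is assumed because m divides n in the two-channel examples discussed with FIGS. 5A and 5B.

```python
# A small illustration of the formula L = (n/m) * I, where n is the
# number of channels available to the controller, m is the number of
# populated channels, and I is the minimum burst length required by
# the memory interface.

def optimum_burst_length(n_channels, m_populated, min_burst):
    """Minimum burst length that services a request in one command."""
    return (n_channels // m_populated) * min_burst


# Both of two channels populated: L = (2/2) * 4 = 4 (the fixed minimum
# burst suffices, as in FIG. 5A).
burst_all_populated = optimum_burst_length(2, 2, 4)
# Only one of two channels populated: L = (2/1) * 4 = 8 (the burst is
# doubled so one read command still returns all L*n bytes, as in FIG. 5B).
burst_one_populated = optimum_burst_length(2, 1, 4)
```

The calculation is simple enough to live either in initialization firmware, as the BIOS-based aspect above describes, or in the hardware calculation logic block described next.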
The lower limit for the value of the minimum burst length can be the minimum burst length required by the memory devices and/or the memory controller. Memory controller 202 may include a channel configuration register 351 and a populated channel configuration register 352, described in greater detail below, which are programmable by the BIOS routine to configure memory controller 202 to provide the correct read or write burst length L to memory components 300A-r and 301A-r that are populated with memory.

Referring to FIG. 3B, the optimum burst length L may alternatively be calculated by a calculation logic block 350. The BIOS routine may alternatively program, via SM bus 211, the value of n into a channel configuration register 351 and the value of m into a populated channel configuration register 352. Channel configuration register 351 and populated channel configuration register 352 may be included within memory controller 202. Channel configuration register 351 and populated channel configuration register 352 may send n and m, respectively, as inputs into calculation logic block 350 of memory controller 202. Calculation logic block 350 calculates the optimum burst length L using a preprogrammed value of I. Calculation logic block 350 can be implemented in hardware.

Referring to FIG. 4, memory controller 202 (FIG. 2) includes in part a command length control register 400 and a state machine 401. Command length control register 400 may contain a two-bit value [1:0] 402 representing the optimum burst length L determined by the BIOS routine or calculation logic block 350. After the optimum burst length L has been calculated, it is programmed into bits [1:0] 402 of command length control register 400 and then sent to state machine 401 to be used for controlling the length of read and write commands. The operation of state machine 401 will be explained in greater detail below. Referring to FIGS.
2, 3, and 5A, a timing diagram 500 illustrates the operation of memory controller 202 connected to two channels 501 and 502. Both channels 501 and 502 contain DIMM modules populated with memory devices. In one aspect, at startup the BIOS routine determines that there are two channels 501 and 502 connected to the memory controller 202 and assigns an n value of 2 (n=2). The BIOS routine also accesses the predetermined locations in NVRAMS 303A-r and 304A-r and determines that DIMM modules on both channels 501 and 502 contain memory devices; the BIOS routine assigns an m value of 2 (m=2). Assuming a requesting agent requests R=8 bytes of data, the BIOS routine calculates the optimum burst length L as L=(n/m)*I (where I=4), therefore, L=(2/2)*4=4. Thus, memory controller 202 issues a single read command 505 at the rising edge of clock 0 506 to accommodate the 8 bytes requested, 4 bytes 504A-D from the first channel 501 and 4 bytes 507A-D from the second channel 502. Note that because all channels (in this case both channels 501 and 502) are populated with memory devices, the optimum burst length L is the same as the fixed burst length of FIG. 1. Because all channels are populated, using the smaller, fixed-size burst length results in a need for only one read operation and thus the smaller, fixed-size burst length is the optimum burst length.Referring to FIGS. 2, 3, and 5B, a timing diagram 550 illustrates the operation of memory controller 202 in a computer system 200 with only one of two channels populated. Although two channels 551 and 552 are connected to memory controller 202, only the first channel 551 contains DIMM modules populated with memory devices. In one aspect, at startup the BIOS routine determines that there are two channels 551 and 552 connected to memory controller 202 and assigns an n value of 2 (n=2). 
The BIOS routine also accesses the predetermined locations in NVRAMs 303A-r and 304A-r, determines that only the DIMM modules on channel 551 contain memory devices, and assigns an m value of 1 (m=1). In one aspect, again assuming a requesting agent requests R=8 bytes of data, the BIOS routine calculates the optimum burst length as L=(n/m)*I (where I=4); therefore, L=(2/1)*4=8. Thus, memory controller 202 issues a single read command 555 at the rising edge of clock 0 556 to accommodate the 8 bytes requested: 8 bytes 554A-H from the first channel 551. Because only one of the two channels is populated, memory controller 202 adjusts the burst length to accommodate all 8 bytes in one read operation. Because the 8 bytes cannot be distributed over two channels and read as two four-byte words, memory controller 202 calculates and uses a burst length of 8, allowing the read operation to read one eight-byte word. This burst length is considered the optimum burst length because it is the minimum burst length required to consolidate the read operation into one read command.

Referring to FIG. 6, transition state diagram 600 depicts the operation of the memory controller state machine 401 (FIG. 4). Referring to FIGS. 2, 4, and 6, in one aspect, nine states are used to generate two read or write command lengths: a length of 4 for an optimum burst length of 4 and a length of 8 for an optimum burst length of 8. Transition logic in memory controller state machine 401 uses access information in command length control register 400 to steer accesses to either an optimum burst length of 4 bytes or of 8 bytes.

State 1 (IDLE) corresponds to the idle state of memory controller state machine 401. When in the IDLE state, memory controller 202 is not performing a read or write command. Memory controller state machine 401 transitions to state 2 (RD0) when a read or write cycle is initiated by processor 204. Memory controller state machine 401 then transitions through the next three states 3-5, or (RD1), (RD2), and (RD3). By the time memory controller state machine 401 transitions to state 5 (RD3), memory controller 202 has accumulated 4 bytes of data. If the optimum burst length L stored in bits [1:0] 402 of command length control register 400 is 4, then memory controller state machine 401 transitions back to the IDLE state 1. If the optimum burst length L stored in command length control register 400 is 8, then memory controller state machine 401 transitions to state 7 (RD4) and through the next three states 8-10, or (RD5), (RD6), and (RD7). Once in state 10 (RD7), memory controller 202, which has accumulated 8 bytes of data corresponding to the optimum burst length of 8, transitions back to the IDLE state 1.

In the present invention, memory controller state machine 401, using information in command length control register 400, can adjust the length of a read or write command depending on the calculated optimum burst length L. Therefore, the present invention minimizes the number of read and write commands that have to be executed by processor 204, enhancing the performance of the memory interface.

Referring to FIG. 7, a method 700 of implementing the burst length optimization process is illustrated. First, memory controller 202 determines how many channels n are available in the computer system 200 (step 710). After determining the number of channels available, memory controller 202 determines how many channels m are populated with memory devices (step 720). Next, memory controller 202 calculates an optimum burst length L based on n and m (step 730). Finally, memory controller 202 stores the optimum burst length L (step 740) and resumes normal operation (step 750).

Although the present invention has been described herein with reference to a specific preferred embodiment, many modifications and variations therein will readily occur to those skilled in the art.
Accordingly, all such variations and modifications are included within the intended scope of the present invention as defined by the following claims.
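As a recap of the state machine of FIG. 6 described above, its IDLE/RD0-RD7 sequencing can be modeled in a few lines (an illustrative software model only; one byte is accumulated per RD state, and the function name is an assumption):

```python
# Illustrative model of memory controller state machine 401 (FIG. 6).
# The machine sits in IDLE, steps through RD0..RD3 for a burst of 4,
# continues through RD4..RD7 for a burst of 8, then returns to IDLE.
def run_burst(optimum_burst_length):
    """Walk the state machine for one read and return the visited states."""
    states = ["IDLE"]
    for i in range(optimum_burst_length):   # one byte accumulated per RD state
        states.append("RD%d" % i)
    states.append("IDLE")                   # back to IDLE once L bytes accumulated
    return states

assert run_burst(4) == ["IDLE", "RD0", "RD1", "RD2", "RD3", "IDLE"]
assert run_burst(8)[-2] == "RD7"            # a burst of 8 passes through RD7 first
```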
Embodiments involving core-to-core offload are detailed herein. For example, a processor is described that includes a first core comprising: decode circuitry to decode an instruction having fields for at least an opcode to indicate an offload request availability operation is to be performed, and execution circuitry to execute the decoded instruction to cause a generation and transmission of an offload availability request to one or more cores of the processor, the offload availability request to include at least one of an identification of the requesting core and an indication of the type of availability requested from the one or more cores of the processor, wherein a core receiving the offload availability request is to determine whether that receiving core is able to act as a helper core for the first core to perform one or more tasks on behalf of the first core.
1. A processor comprising:
a plurality of cores including at least a first core and a second core;
the first core including:
decode circuitry to decode an instruction having fields for at least an opcode and one or more operands, the opcode to indicate an offload request availability operation is to be performed and the one or more operands to provide information for that operation; and
execution circuitry to execute the decoded instruction to:
cause an offload availability request to be transmitted to one or more cores of the processor, the offload availability request to include at least one of an identification of the requesting core and an indication of the type of availability requested from the one or more cores of the processor, wherein a core receiving the offload availability request is to determine whether that receiving core is able to act as a helper core for the first core to perform one or more tasks on behalf of the first core; and
the second core including:
performance monitoring circuitry to monitor performance of the second core.
2. The processor of claim 1, wherein the indication of the type of availability requested from the one or more cores of the processor is one of compute, memory, and input/output.
3. The processor of any one of claims 1-2, wherein a response to the offload availability request from the one or more cores of the processor is generated based at least in part on state information stored by the performance monitoring circuitry.
4. The processor of any one of claims 1-3, wherein the first core further comprises:
an offload phase tracker to maintain state information regarding at least the first core, any tasks that have been offloaded from the first core, and any tasks the first core is performing as a helper.
5. The processor of claim 4, wherein the offload phase tracker is maintained by a core-to-core finite state machine.
6. The processor of any one of claims 1-5, wherein the performance monitoring circuitry is to track events including one or more of:
a number of instructions of any type retired;
a number of unhalted core cycles;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of branch misses retired; and
a number of available slots.
7. The processor of any one of claims 1-6, further comprising:
an interconnect to couple the first core and the second core.
8. The processor of any one of claims 1-7, further comprising:
core-to-core offload circuitry to: receive responses to the offload availability request from the one or more cores of the processor; and update an offload phase value for the one or more responding cores.
9. A processor comprising:
a plurality of cores including at least a first core and a second core;
the first core including:
decode circuitry to decode an instruction having a field for at least an opcode, the opcode to indicate an offload request availability operation is to be performed; and
execution circuitry to execute the decoded instruction to generate an offload availability request and transmit the offload availability request to one or more cores of the processor, the offload availability request including at least one of an identification of the requesting core and an indication of the type of availability requested from the one or more cores of the processor, wherein a core receiving the offload availability request is to determine whether that receiving core is able to act as a helper core for the first core to perform one or more tasks on behalf of the first core; and
the second core including:
performance monitoring circuitry to monitor performance of the second core.
10. The processor of claim 9, wherein the indication of the type of availability requested from the one or more cores of the processor is one of compute, memory, and input/output.
11. The processor of any one of claims 9-10, wherein a response to the offload availability request from the one or more cores of the processor is generated based at least in part on state information stored by the performance monitoring circuitry.
12. The processor of any one of claims 9-11, wherein the first core further comprises:
an offload phase tracker to maintain state information regarding at least the first core, any tasks that have been offloaded from the first core, and any tasks the first core is performing as a helper.
13. The processor of claim 12, wherein the offload phase tracker is maintained by a core-to-core finite state machine.
14. The processor of any one of claims 9-13, wherein the performance monitoring circuitry is to track events including one or more of:
a number of instructions of any type retired;
a number of unhalted core cycles;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of branch misses retired; and
a number of available slots.
15. The processor of any one of claims 9-14, further comprising:
an interconnect to couple the first core and the second core.
16. The processor of any one of claims 9-15, further comprising:
core-to-core offload circuitry to: receive responses to the offload availability request from the one or more cores of the processor; and update an offload phase value for the one or more responding cores.
17. A method comprising:
decoding an instruction having a field for at least an opcode, the opcode to indicate an offload request availability operation is to be performed; and
executing the decoded instruction to cause an offload availability request to be generated and transmitted to one or more cores of a processor, the offload availability request including at least one of an identification of the requesting core and an indication of the type of availability requested from the one or more cores of the processor, wherein a core receiving the offload availability request is to determine whether that receiving core is able to act as a helper core.
18. The method of claim 17, further comprising:
receiving responses to the offload availability request from one or more cores of the processor; and
updating an offload phase value for the one or more responding cores.
19. The method of claim 17, further comprising:
maintaining state information regarding at least a first core, any tasks offloaded from the first core, and any tasks the first core is performing as a helper.
Core-to-core "monitor" instruction variants

Technical background

There are several examples of moving work or tasks from one processor core to a different processor core or to an accelerator. Typically, the operating system is the entity that causes the movement. For example, because the operating system scheduler can see what is executing across the entire system, it can shift work as the load on particular components changes. The shift may include powering down the original performer of the work. In other examples, cores with different capabilities are paired: when demand is high, the more complex core runs the code, and when demand is low, the less complex core runs the code. Further, thread priority as known by the operating system can affect what is executing at a given moment.

Description of the drawings

Various embodiments of the present disclosure will be described with reference to the accompanying drawings, in which:

FIG. 1(A) illustrates an example of code for execution on a single core.

FIG. 1(B) illustrates an example of the code of FIG. 1(A), but with part of that code potentially to be executed as a task by a second core.

FIG. 2 illustrates an embodiment of at least two cores and common components shared by those cores, where one of the two cores is requesting an indication of offload availability from the other core.

FIG. 3 illustrates an embodiment of at least three cores and common components shared by those cores, where one of the three cores is requesting an indication of offload availability from the other cores.

FIG. 4 illustrates embodiments of various variants of the offload availability request instruction.

FIG. 5 illustrates an embodiment of a data structure for the offload phase tracker. Although multiple fields are shown, depending on the implementation, not all fields are utilized, or additional fields may be included.

FIG. 6 illustrates an embodiment of a method of processing an offload availability request instruction (OFFLOADREQ*).

FIG. 7 illustrates an embodiment of a method of processing an offload availability request instruction (OFFLOADREQ*).

FIG. 8 illustrates an embodiment of a method of handling an offload availability request at a receiving core.

FIG. 9 illustrates an embodiment of at least three cores and common components shared by those cores, where one of the three cores is sending offload availability updates to the other cores.

FIG. 10 illustrates an example of an offload availability announcement according to some embodiments.

FIG. 11 illustrates an embodiment of a method of generating a core announcement.

FIG. 12 illustrates an embodiment of a method of handling receipt of an offload availability announcement at a core.

FIG. 13 illustrates an embodiment of at least two cores and common components shared by those cores, where one of the two cores is sending an offload start request to the other core.

FIG. 14 illustrates an embodiment of at least three cores and common components shared by those cores, where one of the three cores is sending an offload start request.

FIG. 15 illustrates an embodiment including a core receiving an offload start request.

FIG. 16 illustrates embodiments of various offload start instruction variants.

FIG. 17 illustrates an example of an offload start request according to some embodiments.

FIG. 18 illustrates an embodiment of a method of processing STARTOFFLOAD* instructions.

FIG. 19 illustrates an embodiment of a method of processing STARTOFFLOAD* instructions.

FIG. 20 illustrates an embodiment of a method of handling a received offload start request.

FIG. 21 illustrates an embodiment of at least two cores and common components shared by those cores, where one of the two cores is sending an offload end indication to the other core.

FIG. 22 illustrates an embodiment of at least three cores and common components shared by those cores, where one of the three cores is sending an offload end indication to the other cores.

FIG. 23 illustrates an embodiment of a core receiving an offload end indication.

FIG. 24 illustrates embodiments of various offload end instruction variants.

FIG. 25 illustrates an embodiment of a method of processing ENDOFFLOAD* instructions.

FIG. 26 illustrates an embodiment of a method of processing ENDOFFLOAD* instructions.

FIG. 27 illustrates an embodiment of a method of processing an offload end indication.

FIG. 28 illustrates an embodiment of hardware for processing instructions such as the OFFLOADREQ*, STARTOFFLOAD*, and ENDOFFLOAD* instructions detailed herein.

FIG. 29A is a block diagram illustrating an exemplary instruction format according to embodiments of the invention.

FIG. 29B is a block diagram illustrating the fields that make up the full opcode field in an instruction format according to an embodiment of the invention.

FIG. 29C is a block diagram illustrating the fields that make up the register index field in an instruction format according to an embodiment of the invention.

FIG. 29D is a block diagram illustrating the fields that make up an extended operation field in an instruction format according to an embodiment of the invention.

FIG. 30 is a block diagram of a register architecture according to an embodiment of the invention.

FIG. 31A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline according to embodiments of the invention.

FIG. 31B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.

FIGS. 32A-32B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks in a chip (including other cores of the same type and/or different types).

FIG. 33 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.

FIG. 34 shows a block diagram of a system according to an embodiment of the invention.

FIG. 35 is a block diagram of a first more specific exemplary system according to an embodiment of the invention.

FIG. 36 is a block diagram of a second more specific exemplary system according to an embodiment of the invention.

FIG. 37 is a block diagram of an SoC according to an embodiment of the invention.

FIG. 38 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

Detailed description

Various embodiments of methods, apparatuses, systems, and non-transitory computer-readable storage media for core-to-core offload of one or more tasks are described. In particular, a task to be executed on a first core (e.g., some proper subset of the code, such as a sub-portion of a loop, a loop, etc.) is instead executed on at least a second core acting as a helper core (in other words, the task is offloaded from the first core to at least the second core).
The second core executes the task and makes the result available to the first core for use in subsequent processing.

Without core-to-core offload, there are times when a processor core is not fully compute bound, memory bound, and/or input/output bound, and therefore has spare cycles. In such a scenario, a second core that is not fully bound can take on additional work, while the first core could use some help to make its own work more efficient (in terms of time and/or energy). Additionally, some solutions push work out to a graphics processing unit (GPU) when one of the central processing unit (CPU) cores hits a bottleneck, or simply because that is a traditional type of offload. This is unlikely to be power efficient, because GPUs tend to use significantly more power than even a fully loaded CPU.

As described in the background, traditional offload involves moving shared code to an accelerator such as a GPU, or to a heterogeneous core. In either case, the operating system participates in the offload. The individual cores do not know whether they can take on work from other cores and instead rely on the operating system (OS). Having the OS participate means that any offload must bear the inefficiency of obtaining approval from the OS.

The embodiments detailed herein provide one or more mechanisms for such offloads that do not require operating system participation. As such, the cores themselves know what they can and cannot handle. Typically, the OS is not made aware of the offload. However, in some embodiments, a core may tell the operating system scheduler to hold off on scheduling new work while that core is acting as a helper core, and the OS does not tell a core that it cannot send work to another core. By not involving the OS, the core-to-core offload described herein is more efficient.

FIG. 1(A) illustrates an example of code for execution on a single core. As shown, the code includes at least three loops for execution on core 0. In this example, at least some of the loops may be executed independently of the other loops (e.g., the result of LOOP1 is not needed by LOOP2). Such independence is an indication that a loop can be treated as a task to be offloaded.

FIG. 1(B) illustrates an example of the code of FIG. 1(A), but with part of that code potentially to be executed as a task by a second core. As shown, the original code has been modified (typically a compiler performs the modification; however, it could be done by hand or through binary translation) to include "core-to-core" instructions that allow the LOOP1 task to be offloaded from core 0 to core 1. In this example, several different operations are added to the code, and one or more of these instructions are discussed in detail below. These may be instructions for programmer-visible operations (e.g., added by a compiler) or programmer-invisible operations performed autonomously by the core. Thus, while OFFLOADREQ*, STARTOFFLOAD*, XSAVEOFFLOAD, and/or ENDOFFLOAD* are described as user-visible instructions, in some implementations they are simply operations performed by the core. Note that the discussion of this figure will use "instructions."

The first new instruction or operation is "OFFLOADREQ*", which, when executed, causes a request to be sent from core 0 to core 1 asking whether core 0 can offload a task to core 1 so that core 1 can act as a helper core for core 0. Core 1 is to respond to OFFLOADREQ* with its status.

STARTOFFLOAD* indicates that core 1 is to receive an offloaded task (e.g., LOOP1) from core 0.
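The OFFLOADREQ* request/response handshake just described can be sketched at a high level. This is a purely illustrative model: the Core class, method names, and the availability rule are assumptions, not the patent's hardware:

```python
# Hypothetical sketch of the OFFLOADREQ* handshake between two cores.
class Core:
    def __init__(self, core_id, bound=None):
        self.core_id = core_id
        self.bound = bound          # e.g. "compute", "memory", "io", or None

    def offload_availability_request(self, helper, bound_type):
        """The requesting core asks a prospective helper whether it can take a task."""
        return helper.respond(requester_id=self.core_id, bound_type=bound_type)

    def respond(self, requester_id, bound_type):
        # A core that is bound in the same way as the requester cannot help.
        return {"core": self.core_id, "available": self.bound != bound_type}

core0 = Core(0, bound="compute")    # core 0 is compute bound and wants help
core1 = Core(1, bound=None)         # core 1 has spare cycles
reply = core0.offload_availability_request(core1, bound_type="compute")
assert reply == {"core": 1, "available": True}
```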
In some embodiments, STARTOFFLOAD* is directed at a particular core (here, core 1), or causes a broadcast that sends the request to start a task to all cores.

LOOP1, its operations, the loop-end determination (ENDLOOP1), a context save operation (XSAVEOFFLOAD), and ENDOFFLOAD* are all shown commented out with // in core 0's code. This indicates that they will not be executed on core 0, but will instead be executed on core 1.

On core 1, once LOOP1 has completed (through ENDLOOP1), XSAVEOFFLOAD stores the result(s) of LOOP1 in one or more memory locations accessible by core 0 (such as in a shared cache or memory). The one or more memory locations may have been provided by core 0. In some embodiments, XSAVEOFFLOAD causes an indication of where that location is to be made available to the parent, or requesting, core (here, core 0).

When ENDOFFLOAD* is executed on core 1, an indication that the task is complete and, in some cases, of where the result is waiting (if there is any result, and the result is not included in the indication itself) is sent back to core 0, thereby allowing core 0 to fold the result of core 1's execution of LOOP1 into core 0's execution of the rest of the code.

Note that if OFFLOADREQ*, STARTOFFLOAD*, XSAVEOFFLOAD, and/or ENDOFFLOAD* are not supported by a core, including them should result in a no-op, or whatever the core does with unsupported instructions. Thus, all loops of the code would run on core 0 exactly as shown in FIG. 1(A). Note that this allows the feature to be added to a core without breaking backward compatibility.

FIG. 2 illustrates an embodiment of at least two cores and common components shared by those cores, where one of the two cores is requesting an indication of offload availability from the other core. In some embodiments, these cores (core 1 203 and core 0 205) are part of a single processor 201.
In other embodiments, the cores are on different processors but can reach each other via an interconnect or fabric 231. Note that the interconnect or fabric 231 may also be internal to the processor, such as a point-to-point interconnect, crossbar, or ring between cores.

Note that the internals of core 0 205 are not shown, but they replicate those of core 1 203.

Core 1 203 and core 0 205 share the interconnect 231 and, depending on the implementation, also share a memory controller 240 and one or more levels of cache (e.g., L2, L3, and/or L4).

In this illustration, core 0 205 wants to know whether core 1 203 is available to act as a helper core and take on an offloaded task. Note that what is not shown is an intervening operating system handling the offload or the offload availability request.

As shown, core 0 205 sends an offload availability request (AR) to core 1 203 over the interconnect 231 (such as a ring interconnect, point-to-point interconnect, fabric, etc.). Core 1 203 receives the AR and determines whether it is available to become a helper core. Like all cores, core 1 203 includes a front end 3130 (detailed later), an execution engine 3150 (aspects of which are detailed later), and a memory unit (detailed later). Core 1 203 (and core 0 205) further includes core-to-core offload circuitry, or a core-to-core offload finite state machine (FSM), 221 to coordinate core-to-core offloads. When an FSM is used, it is typically code executing on some type of microcontroller.

The core-to-core offload circuitry or finite state machine 221 is coupled to a performance monitoring circuit 211, which monitors the performance of the core.
For example, the performance monitoring circuit 211 may count one or more of the following: the number of instructions of any type retired, the number of unhalted core cycles, the number of cache misses, the number of cache accesses, the number of branch instructions retired, the number of branch misses retired, and/or the number of available slots. Note that in some embodiments, what is monitored is configurable. What is monitored can be used to determine how the core is bound. For example, the memory and cache counts can be used to determine whether the core is memory bound, the instruction counts can indicate whether the core is compute bound, and so on.

The core-to-core offload circuitry or finite state machine 221 is also coupled to (or includes) an offload phase tracker 223, which tracks the state of the cores with respect to offload phases. A more detailed discussion of an exemplary offload phase tracker 223 data structure is provided with reference to FIG. 5. In some embodiments, the offload phase tracker 223 relies on the performance monitoring circuit 211 to update its offload phase tracker data structure(s). In other embodiments, the core-to-core offload circuitry or finite state machine 221 updates the offload phase tracker data structure(s) using, for example, the performance monitoring circuit 211 and/or information provided by other cores (e.g., when a core accepts an offload, that core may alert the other cores to the change in its state, or when a core accepts an offload from core 1 203, the offload phase tracker data structure(s) are updated).

The core-to-core offload circuitry or finite state machine 221 uses the information from the performance monitoring circuit 211 and/or the offload phase tracker 223 to determine whether core 1 203 can serve as a helper core.
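One way the monitored counts could feed this decision is sketched below. The thresholds, counter names, and classification rule are invented for illustration; the patent does not specify how the counts are combined:

```python
# Illustrative classification of a core's binding from performance counters.
def classify_binding(counters, miss_ratio_threshold=0.2, ipc_threshold=2.0):
    """Return 'memory', 'compute', or None based on hypothetical thresholds."""
    miss_ratio = counters["cache_misses"] / max(counters["cache_accesses"], 1)
    ipc = counters["instructions_retired"] / max(counters["unhalted_cycles"], 1)
    if miss_ratio > miss_ratio_threshold:
        return "memory"        # a high miss ratio suggests the core is memory bound
    if ipc > ipc_threshold:
        return "compute"       # sustained high IPC suggests the core is compute bound
    return None                # neither: the core likely has spare capacity to help

assert classify_binding({"cache_misses": 40, "cache_accesses": 100,
                         "instructions_retired": 100, "unhalted_cycles": 100}) == "memory"
assert classify_binding({"cache_misses": 1, "cache_accesses": 100,
                         "instructions_retired": 300, "unhalted_cycles": 100}) == "compute"
```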
For example, if core 0 205 is compute bound and core 1 203 is not compute bound (e.g., based on the performance monitoring circuit 211 values), core 1 203 may be able to help. If core 1 203 is also compute bound, it may not be able to help. When the cores are not homogeneous, the core-to-core offload circuitry or finite state machine 221 may also decline to act as a helper if the core does not support at least a proper subset of the same instruction set architecture. In most instances, however, it is the type of binding on a core that dictates whether the core can help.

Note that the other components 225 of the execution engine are detailed with reference to FIG. 31B. Although shown as part of the execution engine 3150, in some embodiments one or more of the offload phase tracker 223, the core-to-core offload circuitry or finite state machine 221, and the performance monitoring circuit 211 is located in another area of the core.

Once the core-to-core offload circuitry or finite state machine 221 determines whether core 1 203 can help, it causes a response indicating core 1's availability status (available or not) to be sent from core 1 203 via the interconnect 231 to at least core 0 205. Core 0 205 then uses that information to help determine which core(s) (assuming multiple cores) it will turn to.

FIG. 3 illustrates an embodiment of at least three cores and common components shared by those cores, where one of the three cores is requesting an indication of offload availability from the other cores. This example is similar to FIG. 2, but the AR is broadcast to multiple cores. In this example, that includes core 1 203 as in FIG. 2 and additionally core N 301.
Although a broadcast will occupy more of the interconnect's bandwidth, it may give core 0 205 more up-to-date information about all of the cores when deciding which core(s) to send one or more tasks to.

FIG. 4 illustrates embodiments of various variants of the offload availability request instruction. Note that not all instruction configurations are shown. However, each instruction has an opcode 404 to indicate whether the offload availability request (such as the AR detailed above) is individually addressed (e.g., OFFLOADREQ) or broadcast (e.g., OFFLOADREQBROADCAST).

Each instruction also has fields for identifying one or more operands, such as operand 1 403 and operand 2 405, and/or a field for an immediate 407. The purpose (content) of those operands and/or the immediate varies. Note that operand 1 403, operand 2 405, and/or operand 3 406 may be registers or memory locations. In some embodiments, each instruction uses the operands or immediate to provide an identification of the requesting core and/or, in one of the operands or the immediate, an indication of the type of binding that would not work. In other embodiments, each instruction uses the operands or immediate to provide an identification of the requesting core and/or an indication of the type of binding that is constraining the requesting core. Note that the broadcast variants do not include a destination indication in either an operand or the immediate.

The first instruction variant includes an operand 1 403 field for identifying one or more destinations (e.g., the specific core(s) to receive the offload availability request).
For example, in some embodiments, a register or memory location includes multiple data elements, where each data element corresponds to a core, such that when the data element is set, that core is to receive the offload availability request (e.g., XMM1[0]=1 indicates that core 0 is to receive the request, and XMM1[1]=0 indicates that core 1 is not to receive the request). In other embodiments, individual bits of a register or memory location are used in a similar manner (e.g., GPReg1[0]=1 indicates that core 0 is to receive the request, and GPReg1[1]=0 indicates that core 1 is not to receive the request). In some embodiments, the instruction includes the identification of the requesting core in operand 2 405. This allows the receiving core to determine who sent the request.

The second instruction variant includes an operand 1 403 field for identifying one or more destinations (e.g., the specific core(s) to receive the offload availability request). For example, in some embodiments, a register or memory location includes multiple data elements, where each data element corresponds to a core, such that when the data element is set, that core is to receive the offload availability request (e.g., XMM1[0]=1 indicates that core 0 is to receive the request, and XMM1[1]=0 indicates that core 1 is not to receive the request). In other embodiments, individual bits of a register or memory location are used in a similar manner (e.g., GPReg1[0]=1 indicates that core 0 is to receive the request, and GPReg1[1]=0 indicates that core 1 is not to receive the request).

In some embodiments, the instruction includes the identification of the requesting core in operand 2 405. In some embodiments, the instruction further includes, in either operand 2 405 or the immediate 407, an indication of the type of binding that would not work, for example compute bound, memory bound, or I/O bound.
In other embodiments, an indication of the type of bound condition that the requesting core is suffering from is included in operand 2 405 or the immediate 407.

The third instruction variant does not use a field for identifying the destination. This would be the case when there are only two cores in the system. In some embodiments, the instruction includes the identification of the requesting core in operand 2 405 (or any of the operands). The instruction further includes, in any one of operand 1 403, operand 3 406, or the immediate 407, an indication of the type of bound condition that will not work, for example, compute bound, memory bound, or I/O bound. In other embodiments, an indication of the type of bound condition that the requesting core is suffering from is included in those fields.

The fourth instruction variant uses the immediate 407 field to identify the destination, where the bits of the immediate correspond to core numbers (for example, IMM[0]=core 0). In some embodiments, the instruction includes the identification of the requesting core in operand 2 405 (or any of the operands).

The fifth instruction variant also uses the immediate 407 field to identify the destination, where the bits of the immediate correspond to core numbers (for example, IMM[0]=core 0). In some embodiments, the instruction includes the identification of the requesting core in operand 3 406 (or any of the operands). The instruction further includes an indication of the bound type in operand 1 403 or operand 2 405, for example, compute bound, memory bound, or I/O bound. In other embodiments, an indication of the type of bound condition that the requesting core is suffering from is included in those fields.
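The bit-per-core destination encoding described for these variants (a set bit or data element means the corresponding core receives the request) can be sketched as follows. This is a minimal illustrative sketch, not the hardware implementation; the function names are assumptions.

```python
# Sketch of the bit-per-core destination encoding used by the OFFLOADREQ*
# variants: bit i of the destination operand (or immediate) is set when
# core i is to receive the transfer availability request.

def encode_destinations(core_ids):
    """Build a destination mask with one bit per receiving core."""
    mask = 0
    for core in core_ids:
        mask |= 1 << core
    return mask

def decode_destinations(mask, num_cores):
    """Return the list of core IDs whose bit is set in the mask."""
    return [i for i in range(num_cores) if (mask >> i) & 1]

# XMM1[0]=1, XMM1[1]=0 in the text corresponds to mask bit 0 set, bit 1 clear.
mask = encode_destinations([0, 2, 3])
assert decode_destinations(mask, 4) == [0, 2, 3]
```

The same decode applies whether the mask lives in a vector register (one data element per core), a general-purpose register (one bit per core), or the immediate of the fourth and fifth variants.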
The sixth instruction variant is a broadcast version, and uses one of the operands (such as operand 2 405) to identify the requesting core. Another of the operands (such as operand 1 403) is used to identify the bound type (depending on the implementation, the type of bound condition that is constraining the requesting core, or the type that will not work).

The seventh instruction variant is also a broadcast version, and uses one of the operands (such as operand 1 403) to identify the requesting core. The immediate is used to identify the bound type (again, depending on the implementation, the type of bound condition that is constraining the requesting core, or the type that will not work).

Note that these examples are not exhaustive. However, each of the OFFLOADREQ* instructions includes an opcode and uses one or more operands or immediates to provide one or more of: an indication of the destination(s), an indication of the requester ID, and/or a bound-state indication. Other variants are therefore possible.

FIG. 5 illustrates an embodiment of the data structure of the transfer phase tracker. Although multiple fields are shown, depending on the implementation, not all fields are utilized, or additional fields may be included. Further, in some embodiments, the illustrated data structure is broken down into individual data structures per core, for example, with each field being a data element of a vector register.

Typically, each entry includes a field for an accessible core identifier 501, for example, one per core in the processor.

In some embodiments, the transfer task field 503 indicates whether that core is performing a transfer task.
In some instances, if a core is performing a transfer task for a different core, that core should not take on additional tasks.

In some embodiments, the transfer task operand field 505 indicates which core provided the transfer task to that core. Note that this may be encoded as a bit vector or as a multi-bit value.

In some embodiments, the bound state field 507 indicates what bound state the core is in (such as compute, memory, or I/O bound). Note that this may be encoded as a bit vector or as a multi-bit value.

In some embodiments, the given-transfer-task-to-core(s) field 509 indicates which cores a particular core has given tasks to. In this example, core 1 has given tasks to core 0 and core 2.

In some embodiments, the location field 511 for holding results indicates where the results of task execution are to be stored. This field serves multiple purposes. If the requesting core provides the address, this field tells the helper core exactly where to store the result. If the helper core provides the address, this field allows the helper core to pass that address to the requesting core and, if the requesting core does not confirm completion of the task, allows the helper core to keep a record of that address. Note that in some embodiments, the result is instead sent to the requesting core along with an indication of task completion.

In some embodiments, the instruction pointer field 513 indicates the instruction pointer from which the transferred task started. This allows the requesting core to easily incorporate the result of the transferred task: the requesting core knows exactly what was replaced. In some embodiments, the requesting core tracks this instruction pointer.

In some embodiments, the initial processor state field 515 indicates where the initial state of the requesting core can be found. When used, this field allows the helper core to load the state of the requesting core to speed up execution.

FIG. 6 illustrates an embodiment of a method of processing a transfer availability request instruction (OFFLOADREQ*). Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by a processor core when processing the instruction.

At 601, an instruction is fetched, the instruction having at least a field for an opcode indicating that a transfer availability request operation is to be performed. The instruction may also include one or more operands and/or immediate data. Examples of the instruction format can be found in FIG. 4. The instruction is fetched using a fetch circuit such as that shown in FIG. 31B.

At 603, the fetched instruction is decoded using a decode circuit such as that shown in FIG. 31B.

In some embodiments, at 605, data associated with the one or more operands is retrieved.

At 607, an execution circuit executes the decoded instruction according to the opcode. Executing the decoded instruction includes transmitting the transfer availability request to one or more cores identified in one or more operands (or broadcasting the transfer availability request, if that is what the opcode indicates), the transfer availability request including one or more of the following: the identification of the requesting core, the identification of the core that is to receive the request (if not a broadcast variant), an indication of the type of availability requested (for example, compute, memory, or I/O), and/or an indication of the type of bound condition that is constraining the requesting core. Execution may also result in receiving responses from the one or more cores to which the request was sent and updating the transfer phase tracker 223 based on the received responses.
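One entry of the transfer phase tracker described above (FIG. 5, fields 501 through 515) might be sketched as a record like the following. The field names mirror the reference numerals in the text; the concrete types are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of one transfer phase tracker entry (FIG. 5).
@dataclass
class TransferPhaseEntry:
    core_id: int                        # 501: accessible core identifier
    has_transfer_task: bool = False     # 503: running a transferred task?
    transfer_task_from: Optional[int] = None   # 505: which core gave the task
    bound_state: Optional[str] = None   # 507: "compute", "memory", or "io"
    gave_tasks_to: List[int] = field(default_factory=list)  # 509
    result_location: Optional[int] = None      # 511: where results are held
    instruction_pointer: Optional[int] = None  # 513: IP the task came from
    initial_state_location: Optional[int] = None  # 515: requester's saved state

# Example from the text: core 1 has given tasks to core 0 and core 2.
entry = TransferPhaseEntry(core_id=1, gave_tasks_to=[0, 2])
```

In the per-core variant described above, each of these fields would instead be a data element of a vector register rather than a member of a single shared table.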
The response processing and the update of the transfer phase tracker are completed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221.

As described above, aspects of the request, such as the requester ID, may be provided in one or more fields of the instruction, including one or more operands (such as registers or memory) and/or immediate data.

At 609, the result of the executed instruction is committed.

Note that when the transfer availability request operation is not executed as an instruction, there is no fetching, decoding, etc., but the actions of the execution circuit are still performed.

FIG. 7 illustrates another embodiment of a method of processing a transfer availability request instruction (OFFLOADREQ*). Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by a processor core when processing the instruction.

At 701, an instruction is fetched, the instruction having at least a field for an opcode indicating that a transfer availability request operation is to be performed. The instruction may also include one or more operands and/or immediate data. Examples of the instruction format can be found in FIG. 4. The instruction is fetched using a fetch circuit such as that shown in FIG. 31B.

At 703, the fetched instruction is decoded using a decode circuit such as that shown in FIG. 31B.

In some embodiments, at 705, data associated with the one or more operands is retrieved.

At 707, an execution circuit executes the decoded instruction according to the opcode. Executing the decoded instruction includes causing the core-to-core transfer circuit or the core-to-core transfer finite state machine to generate and transmit a transfer availability request.
The transfer availability request includes one or more of the following: the identification of the requesting core, the identification of the core that is to receive the request (if not a broadcast variant), as provided by an operand of the instruction, an indication of the type of availability requested (e.g., compute, memory, or I/O), and/or an indication of the type of bound condition that is constraining the requesting core. This information can come from the operands and/or the transfer phase tracker 223. Execution may also result in receiving responses from the one or more cores to which the request was sent and updating the transfer phase tracker 223 based on the received responses. The response processing and the update of the transfer phase tracker are completed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221.

At 709, the result of the executed instruction is committed.

Note that when the transfer availability request operation is not executed as an instruction, there is no fetching, decoding, etc., but the actions of the execution circuit are still performed.

FIG. 8 illustrates an embodiment of a method of handling a transfer availability request at a receiving core. In most embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 performs this processing.

At 801, a transfer availability request is received from a core. The transfer availability request queries the receiving core to determine whether it can handle one or more transfer tasks from the sending core. For example, using FIG. 2, core 1 203 receives the request from core 0 205.

At 803, the receiving core uses one or more of the performance monitoring circuit 211 and/or the transfer phase tracker(s) 223 to determine whether the receiving core can handle one or more transfer tasks of the second core.
For example, if the transfer availability request indicates that the requesting core is compute bound, the receiving core uses information about its own performance to decide whether it can handle the task, or whether it is also compute bound.

When the receiving core determines that it can handle the task, at 805, it sends a response to the requesting core indicating that it is available to handle one or more tasks.

When the receiving core determines that it cannot handle the task, at 807, it sends a response to the requesting core indicating that it is not available to handle one or more tasks.

In some instances, instead of, or in addition to, issuing transfer availability requests, it may be beneficial for cores to send availability announcements to other cores that may want to transfer tasks or receive transfer tasks. For example, if the interconnect 231 is not busy, it may be worthwhile to update other cores about the availability status of a particular core.

FIG. 9 illustrates an embodiment of at least three cores and common components shared by these cores, where one of the three cores is sending a transfer availability update to the other cores. When a core is running, its performance monitoring circuit 211 monitors the performance of that core. How the core is performing at a given moment affects whether it can act as a helper core. For example, if the core is continuously retiring instructions, it may be compute bound and unable to help with compute-intensive tasks. Similarly, if the core has experienced many cache misses (and therefore has many memory accesses), adding a memory-intensive task may not be a good idea either.
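The availability check at 803 can be sketched as below. The counter names and thresholds are purely illustrative assumptions standing in for whatever the performance monitoring circuit 211 exposes; the text does not specify them.

```python
# Sketch of the receiving core's decision at 803/805/807: decline when the
# core is already helping another core, or when it is bound in the same way
# as the requester. Counter names and thresholds are assumptions.

def can_accept_transfer(requester_bound_state, own_counters, already_helping):
    if already_helping:
        # A core performing a transfer task for a different core should
        # not take on additional tasks.
        return False
    if requester_bound_state == "compute":
        # Continuously retiring instructions suggests this core is itself
        # compute bound; a low IPC leaves headroom to help.
        return own_counters.get("ipc", 0.0) < 3.0
    if requester_bound_state == "memory":
        # Many cache misses suggest adding a memory-intensive task is unwise.
        return own_counters.get("cache_miss_rate", 0.0) < 0.10
    if requester_bound_state == "io":
        return own_counters.get("io_wait", 0.0) < 0.25
    return True

assert can_accept_transfer("compute", {"ipc": 1.2}, already_helping=False)
assert not can_accept_transfer("memory", {"cache_miss_rate": 0.4}, False)
```

A positive result corresponds to the response at 805; a negative one to the response at 807.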
The core-to-core transfer circuit or the core-to-core transfer finite state machine 221 looks at the data of the performance monitoring circuit 211 to make that determination.

The core-to-core transfer circuit or the core-to-core transfer finite state machine 221 may also check the transfer phase tracker 223, which may indicate that the core is already a helper core. Depending on the implementation, the acceptance and start of a transfer task may not be known to the other cores. Thus, announcing this fact can be beneficial, because it reminds the other cores that this core may not be an ideal candidate for a helper-core request.

In this illustration, core 1 203 is sending its availability as an announcement to core 0 205, core N 301, etc. The transfer availability announcement may be made according to a predetermined schedule, made when the interconnect 231 is idle, made when there is a change in the state of core 1 203, and/or made in some other predetermined manner.

FIG. 10 illustrates an example of a transfer availability announcement according to some embodiments. The transfer availability announcement includes fields for one or more of the following: the sending core ID 1001, an indication 1003 of whether the sending core already has a transfer task, and a bound state 1007. For example, the first exemplary transfer availability announcement comes from core 0, which already has a transfer task and is compute bound. Note that a transfer availability announcement can also be made by a core that is not memory bound, not I/O bound, or not compute bound (and is therefore a legitimate transfer candidate) even when it already has a transfer task. In some embodiments, if the transfer availability announcement is only intended to be sent to a proper subset of the cores, the transfer availability announcement also includes a destination ID 1009 field.
Furthermore, in some embodiments, the transfer availability announcement includes a timestamp 1011 indicating when it was sent. This allows the receiving core to discard "old" transfer availability announcements that are not the latest. Note that in some embodiments, the transfer availability announcement is simply the core's own entry in its transfer phase tracker 223, plus a timestamp.

FIG. 11 illustrates an embodiment of a method of generating a core announcement. In most embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 performs this processing.

At 1101, a determination of the transfer availability status of the core is made. As described above, this determination may be based on the data of the performance monitoring circuit 211 and/or the information of the transfer phase tracker 223.

In some embodiments, at 1103, a determination is made as to whether an update to the previous transfer availability status should be sent as a transfer availability status announcement. For example, if there has been no change in the transfer availability status, the update may not be required. If the interconnect coupled to the core is congested, this may indicate that no update should be made, at least at this moment.

When it is determined that the transfer availability status announcement should not be sent, then at 1105, it is not sent. In essence, this is a no-op in the flow.

When it is determined that the transfer availability status announcement should be sent, at 1107, the transfer availability announcement is broadcast to a predetermined set of cores. The predetermined set may be all of the cores or a proper subset of them. For example, if a core is already engaged in a task for another core, it may not need to update that core about its availability.
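The announcement of FIG. 10, including the staleness rule enabled by the timestamp 1011, might be sketched as follows. Field names mirror the reference numerals; types and the helper `is_stale` are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a transfer availability announcement (FIG. 10).
@dataclass
class AvailabilityAnnouncement:
    sender_id: int              # 1001: sending core ID
    has_transfer_task: bool     # 1003: already running a transferred task?
    bound_state: Optional[str]  # 1007: e.g. "compute", "memory", "io", or None
    destination_id: Optional[int] = None  # 1009: optional subset target
    timestamp: int = 0          # 1011: lets receivers drop stale announcements

def is_stale(incoming, last_seen_timestamp):
    """An announcement older than the last one received from that core
    is discarded rather than applied to the transfer phase tracker."""
    return incoming.timestamp < last_seen_timestamp

# First exemplary announcement from the text: core 0 already has a transfer
# task and is compute bound.
first = AvailabilityAnnouncement(sender_id=0, has_transfer_task=True,
                                 bound_state="compute", timestamp=1)
```

When the destination ID 1009 is absent, the announcement is broadcast to the predetermined set of cores; when present, only the targeted subset applies it.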
In other embodiments, when the determination at 1103 is not performed, the transfer availability status update is simply broadcast. This typically occurs in implementations where the transfer availability status update is sent periodically on a schedule. The schedule may be set by the user (such as in a model-specific register), predetermined in the core-to-core transfer circuit or the core-to-core transfer finite state machine 221, or set automatically based on historical use of the interconnect 231.

In some embodiments, at 1109, a buffering or delay occurs before a subsequent transfer availability status determination is performed at 1101.

FIG. 12 illustrates an embodiment of a method of handling reception of a transfer availability announcement at a core. In most embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 performs this processing.

At 1201, a transfer availability announcement is received from another core.

In some embodiments, at 1203, a determination is made as to whether an update to the previously recorded transfer availability status should be made. For example, if there has been no change in the transfer availability status of the core that sent the announcement, no update is required. In other words, if the content of the announcement is the same as the corresponding entry in the transfer phase tracker 223, no update is necessary.

When it is determined that no update is required, then at 1205, the corresponding entry of the transfer phase tracker 223 remains unchanged. In essence, this is a no-op in the flow.
In some instances, if the received transfer availability announcement is older than the last transfer availability announcement received from the sending core, no update is made.

When it is determined that an update should be performed, at 1207, the state of the sending core is updated in the transfer phase tracker 223.

In some embodiments, at 1209, the receiving core acknowledges receipt of the transfer availability announcement to the sending core and causes the acknowledgment to be sent.

FIG. 13 illustrates an embodiment of at least two cores and common components shared by these cores, where one of the two cores is sending a transfer start request to the other core. Note that these components have the same numbers and functions as those detailed with reference to FIG. 2 and the like.

In this illustration, core 0 205 has a task to transfer to core 1 203, and it believes that core 1 203 is available to act as a helper core and take on the transferred task. Note that not shown in the figure is an operating system intervening to handle the start of the transfer.

As shown, core 0 205 sends a transfer start request to core 1 203 via the interconnect 231 (such as a ring interconnect, point-to-point link, fabric, etc.). Core 1 203 receives the transfer start request and determines whether it can help (that is, what its availability as a helper core is). Core 1 203 then sends a confirmation back to core 0 205. Upon accepting the transfer start request, core 1 203 retrieves the task from wherever it is stored (typically, this information is included in the transfer start request), updates its own entry in the transfer phase tracker 223, and executes the task.
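The announcement-handling flow of FIG. 12 (1203 through 1207) can be sketched as follows, with tracker entries represented as plain dicts for illustration; the function and its return convention are assumptions, not the disclosed circuit.

```python
# Sketch of the FIG. 12 flow: compare the incoming announcement against the
# transfer phase tracker entry for the sending core. If nothing changed,
# leave the entry alone (1205); otherwise update it (1207).

def handle_announcement(tracker, announcement):
    core = announcement["sender_id"]
    incoming = {k: v for k, v in announcement.items() if k != "sender_id"}
    if tracker.get(core) == incoming:
        return False             # 1205: no-op, tracker unchanged
    tracker[core] = incoming     # 1207: record the sender's new state
    return True                  # caller may then send an acknowledgment (1209)

tracker = {}
ann = {"sender_id": 0, "has_transfer_task": True, "bound_state": "compute"}
assert handle_announcement(tracker, ann)        # first sight: entry updated
assert not handle_announcement(tracker, ann)    # unchanged: no-op
```

The staleness check described above (discarding announcements older than the last one received from that core) would run before this comparison.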
Typically, core 0 205 executes an instruction to generate the transfer start request.

FIG. 14 illustrates an embodiment of at least three cores and common components shared by these cores, where one of the three cores is sending a transfer start request. This example is similar to FIG. 13, but the transfer start request is broadcast to multiple cores. In this example, that includes core 1 203 as in FIG. 2 and additionally core N 301. Although broadcasting occupies more interconnect bandwidth, it may allow more cores to respond to core 0 205 to perform the task.

FIG. 15 illustrates an embodiment including a core receiving a transfer start request. In this example, core 1 203 receives the transfer start request. Several actions can be triggered by receiving this request.

The first action that can occur is that the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 determines whether the core can handle the transfer. It is possible that the availability of core 1 203 has changed since the requesting core was last informed of its status; for example, the operating system may have scheduled a large, high-priority task in the meantime. The core-to-core transfer circuit or the core-to-core transfer finite state machine 221 looks at one or more of the following: the instructions scheduled and queued for execution, the data of the performance monitoring circuit 211, and the current state as reflected by the transfer phase tracker 223. If core 1 203 cannot take on the task, it sends back an acknowledgment detailing that situation. When core 1 203 can take on the task, it sends an acknowledgment back informing core 0 205 that core 1 203 is starting.

A second action that can occur is requesting the task from memory or cache.
In some embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 directs the request directly, based on the addressing information provided in the transfer start request. In other embodiments, the core-to-core transfer circuit or core-to-core transfer finite state machine 221 uses the addressing information provided in the transfer start request to generate one or more load instructions and gives the one or more load instructions to the other components 225 to load the task. Typically, the task then starts to run. Note that if the core cannot access the task, it notifies the requesting core.

A potential third action is updating the transfer phase tracker 223 to include information from the transfer start request and to indicate that a transfer is in progress.

In some embodiments, a fourth action is loading the core state made available by the requesting core (either as part of the request or at a location given by the request). The core state may include pre-filled registers, etc.

FIG. 16 illustrates embodiments of various transfer start instruction variants. Note that not all instruction configurations are shown. However, each instruction has an opcode 1601 to indicate whether the transfer start request is to be individually addressed (e.g., STARTOFFLOAD) or broadcast (e.g., STARTOFFLOADBROADCAST).

Each instruction also has fields for identifying one or more operands (such as operand 1 1603, operand 2 1605, operand 3 1607, operand 4 1609, and operand 5 1611) and/or a field for immediate data (not shown, but the immediate can substitute for one or more of the operands, such as operand 3). The purpose of those operands and of the immediate data can vary. Note that operand 1 1603, operand 2 1605, operand 3 1607, operand 4 1609, and operand 5 1611 can be registers or memory locations.

In some embodiments, the STARTOFFLOAD* instruction includes the address of the task to be executed, which in this example is found in operand 1 1603. The address can be in main memory, cache, or disk.
This address will be provided to the helper core. In some embodiments, the address is the location of the task that is to be included in the transfer start request.

In some embodiments, the STARTOFFLOAD* instruction also includes an instruction pointer (shown here as provided by operand 2 1605). The instruction pointer tells the receiving core where the task came from in the original code and is sent as part of the transfer start request. Instead of, or in addition to, being sent to the helper core, the instruction pointer can be maintained by the requesting core.

In some embodiments, the STARTOFFLOAD* instruction includes the ID of the requesting core (shown here as provided by operand 3 1607). This allows the recipient and other parties to know who sent the request.

In some embodiments, the STARTOFFLOAD* instruction includes the helper core ID (shown here as provided by operand 4 1609). This specifies which core is to be the receiver (and prospective helper core).

In some embodiments, the STARTOFFLOAD* instruction includes the location of the requesting core's state (shown here as provided by operand 5 1611). This allows the receiving core to load the state of the requesting core.

Note that this operand information can be used to generate the transfer start request that is sent from the core to the potential helper core.

In some embodiments, the execution of STARTOFFLOAD* causes the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 to generate a transfer start request. When the instruction does not use operand registers, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 uses the transfer phase tracker 223 to generate the transfer start request.

FIG. 17 illustrates an example of a transfer start request according to some embodiments. The transfer start request includes one or more fields.
These fields can include content for one or more of the following: a core ID 1701, the task or the task address 1703, a destination ID 1705, the instruction pointer 1707 in the original code, the core state or core state address 1709, and/or a timestamp 1711. Note that the content of the transfer start request typically comes from the instruction.

FIG. 18 illustrates an embodiment of a method of processing a STARTOFFLOAD* instruction. Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by a processor core when processing the instruction.

At 1801, an instruction is fetched, the instruction having fields for an opcode and one or more operands, the opcode indicating that a transfer start operation is to be performed, and the one or more operands providing information for that operation. Examples of the instruction format can be found in FIG. 16. The instruction is fetched using a fetch circuit such as that shown in FIG. 31B.

At 1803, the fetched instruction is decoded using a decode circuit such as that shown in FIG. 31B.

In some embodiments, at 1805, data associated with the one or more operands is retrieved.

At 1807, an execution circuit executes the decoded instruction according to the opcode. Executing the decoded instruction includes causing a transfer start request to be generated and transmitted to one or more cores indicated by one or more operands, or transmitted as a broadcast, the transfer start request including one or more of the following: the identifier of the core that is requesting the transfer, the location where the helper core can find the task to be executed, the task itself, the identifier(s) of the core(s) that are to act as helper(s) for the transfer, the instruction pointer from the code, the processor state, the processor state location, and/or a timestamp.
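The transfer start request of FIG. 17 (fields 1701 through 1711) might be sketched as a record like the following. As the text notes, almost any combination of the optional fields may be present, so everything beyond the requester ID is optional in this sketch; names and types are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a transfer start request (FIG. 17). Field names mirror the
# reference numerals; the task address may be omitted when the helper
# already knows a predefined task location.
@dataclass
class TransferStartRequest:
    requester_id: int                          # 1701: core requesting the transfer
    task_address: Optional[int] = None         # 1703: task, or where to find it
    destination_id: Optional[int] = None       # 1705: intended helper core(s)
    instruction_pointer: Optional[int] = None  # 1707: IP in the original code
    state_location: Optional[int] = None       # 1709: saved requester state
    timestamp: Optional[int] = None            # 1711

# One of the exemplary combinations from the text: requester ID, task
# location, helper ID, instruction pointer, state location, and timestamp.
req = TransferStartRequest(requester_id=0, task_address=0x1000,
                           destination_id=1, instruction_pointer=0x4010,
                           state_location=0x2000, timestamp=42)
```

The sparser combinations described below (for example, requester ID, helper ID, and timestamp only) correspond to leaving the remaining fields at their defaults.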
Note that the content of the transfer start request can be aggregated from the instruction's operands.

For example, in some embodiments, a transfer start request is sent that includes: the identifier of the core that is requesting the transfer, the location where the helper core can find the task to be performed, the identifier(s) of the core(s) that are to act as helper(s) for the transfer, the instruction pointer from the code, the processor state location, and a timestamp. In other embodiments, a transfer start request is sent that includes: the identifier of the core that is requesting the transfer, the location where the helper core can find the task to be performed, the identifier(s) of the helper core(s), and a timestamp. In still other embodiments, a transfer start request is sent that includes: the identifier of the core that is requesting the transfer, the identifier(s) of the core(s) that are to act as helper(s) for the transfer, and a timestamp; in these embodiments, the helper core already knows the location of the task (as in a predefined location). These are just exemplary types of transfer start requests that can be sent using combinations of the items detailed above.

Execution may also result in receiving responses from the one or more cores identified in the one or more operands and updating the transfer phase tracker 223 based on the received responses. The response processing and the update of the transfer phase tracker are completed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221.

Note that in some embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 determines which core to transfer to, the instruction pointer, the task address, etc., and fills in that operand information before the one or more operands are retrieved.

In some embodiments, the core state is saved before the instruction, and the core-to-core transfer circuit or core-to-core transfer finite state machine 221 fills in that operand before it is retrieved.

At 1809, the result of the executed instruction is committed.

Note that when the transfer start operation is not executed as an instruction, there is no fetching, decoding, etc., but the actions of the execution circuit are still performed.

FIG. 19 illustrates another embodiment of a method of processing a STARTOFFLOAD* instruction. Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by a processor core when processing the instruction.

At 1901, an instruction is fetched, the instruction having a field for an opcode indicating that a transfer start operation is to be performed. In some embodiments, one or more operands that provide information for the operation are utilized. Examples of the instruction format can be found in FIG. 16. The instruction is fetched using a fetch circuit such as that shown in FIG. 31B.

At 1903, the fetched instruction is decoded using a decode circuit such as that shown in FIG. 31B.

In some embodiments, at 1905, data associated with the one or more operands is retrieved.

At 1907, an execution circuit executes the decoded instruction according to the opcode. Executing the decoded instruction includes causing the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 to generate a transfer start request and transmit it to one or more cores.
The transfer start request includes one or more of the following: the identifier of the core that is requesting the transfer, the location where the helper core can find the task to be executed, the task itself, the identifier(s) of the core(s) that are to act as helper(s) for the transfer, the instruction pointer from the code, the processor state, the processor state location, and/or a timestamp. This information can come from the operands and/or the transfer phase tracker 223.

For example, in some embodiments, a transfer start request is sent that includes: the identifier of the core that is requesting the transfer, the location where the helper core can find the task to be performed, the identifier(s) of the core(s) that are to act as helper(s) for the transfer, the instruction pointer from the code, the processor state location, and a timestamp. In other embodiments, a transfer start request is sent that includes: the identifier of the core that is requesting the transfer, the location where the helper core can find the task to be performed, the identifier(s) of the helper core(s), and a timestamp. In still other embodiments, a transfer start request is sent that includes: the identifier of the core that is requesting the transfer, the identifier(s) of the core(s) that are to act as helper(s) for the transfer, and a timestamp; in these embodiments, the helper core already knows the location of the task (as in a predefined location). These are just exemplary types of transfer start requests that can be sent using combinations of the items detailed above.

Execution may also result in receiving responses from the one or more cores identified in the one or more operands and updating the transfer phase tracker 223 based on the received responses.
The response processing and the update of the transfer phase tracker are completed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221. Note that in some embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 determines which core, IP, task address, etc. to transfer to, and fills in the operand information in one or more operands before those operands are retrieved. In some embodiments, the core state is saved before the instruction, and the core-to-core transfer circuit or core-to-core transfer finite state machine 221 fills in that operand before that operand is retrieved. At 1909, the result of the executed instruction is committed. Note that when the offload start operation is not executed as an instruction, there is no fetching, decoding, etc., but the actions of the execution circuit are still performed. FIG. 20 illustrates an embodiment of a method of handling a received transfer start request. Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221. At 2001, a request to start a transfer is received. The details regarding the content of such requests have been detailed above. At 2003, a determination is made as to whether the transfer can be handled.
For example, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 determines whether its core can handle the request based on the transfer phase tracker 223 and/or the performance monitoring circuit 211. When the request cannot be handled, at 2005, an acknowledgment of the transfer start request is caused to be sent. When the request can be handled, at 2007, the transfer phase tracker 223 is updated using the details of the transfer start request. At 2009, the transferred task is retrieved as detailed above. At 2011, the receiving core begins to execute the retrieved task. At 2013, an acknowledgment of the transfer start request is caused to be sent. Figure 21 illustrates an embodiment of at least two cores and common components shared by those cores, where one of the two cores is sending a transfer end indication to the other core. Note that these components have the same numbers and functions as those detailed with reference to FIG. 2 and the like. In this illustration, core 0 205 has transferred a task to core 1 203, and core 1 203 is ending that task (because the task is completed, or because core 1 203 needs to do other things). Note that not shown in the figure is an operating system intervening to handle the end of the transfer. As shown in the figure, core 1 203 sends the transfer end indication to core 0 205 over the interconnect 231 (such as a ring interconnect, point-to-point, fabric, etc.). Core 0 205 receives the transfer end indication, and can update its transfer phase tracker, determine whether the task is complete, retrieve and integrate the results, and so on. In some embodiments, core 0 205 sends an acknowledgment back to core 1 203. Typically, the transfer end indication is generated by the execution of an instruction by the other components 225.
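The end-of-transfer exchange between core 1 203 and core 0 205 described above might be sketched as follows; the class layout, method names, and tracker representation are our own illustrative assumptions, not part of the specification:

```python
# Hypothetical sketch of the FIG. 21 exchange: helper core 1 sends a transfer
# end indication to requesting core 0, which updates its transfer phase
# tracker and optionally acknowledges.
class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.phase_tracker = {}  # task_id -> phase (e.g., "offloaded", "ended", ...)

    def send_end_indication(self, requester, task_id, result_location):
        # Core 1 finishes (or abandons) the task and notifies the requester
        # over the interconnect; it also updates its own phase tracker.
        self.phase_tracker[task_id] = "ended"
        return requester.receive_end_indication(self.core_id, task_id, result_location)

    def receive_end_indication(self, helper_id, task_id, result_location):
        # Core 0 records who finished which task and where the results are.
        self.phase_tracker[task_id] = ("done", helper_id, result_location)
        return "ack"  # optional acknowledgment back to the helper

core0, core1 = Core(0), Core(1)
core0.phase_tracker[7] = "offloaded"
assert core1.send_end_indication(core0, task_id=7, result_location=0x3000) == "ack"
```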
Core 1 203 also updates its transfer phase tracker 223. Figure 22 illustrates an embodiment of at least three cores and common components shared by those cores, where one of the three cores is sending a transfer end indication to the other cores. This example is similar to Figure 21, but the transfer end indication is broadcast to multiple cores. In this example, that includes core 0 205 as in FIG. 2 and additionally includes core N 301. Although broadcasting will occupy more bandwidth of the interconnect, broadcasting may allow more cores to know that core 1 203 is free to perform tasks. FIG. 23 illustrates an embodiment of a core receiving a transfer end indication. In this example, core 0 205 receives the transfer end indication. Many actions can be triggered by receiving this transfer end indication. A potential first action is to retrieve the result of the transferred task from memory or cache as indicated by the transfer end indication. In some embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 directs the request directly, based on the addressing information provided by the transfer end indication. In other embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 uses the addressing information provided by the transfer end indication to generate one or more load instructions, and gives the one or more load instructions to the other components 225 to load the results. Typically, the result is collected as if it had been executed locally in the code.
The location can come from the instruction pointer indicating the end of the offload, or it can be stored locally in the requesting core. A potential second action is to update the transfer phase tracker 223 to include information from the transfer end indication and to indicate that the transfer is complete. In some embodiments, a third action is to load the helper core state as it becomes available (as part of the request or as a location). The core state may include filled registers, etc. Figure 24 illustrates embodiments of various offload end instruction variants. Note that not all instruction configurations are shown. However, each instruction has an opcode 2401 indicating whether the transfer end indication generated by the offload end operation is to be individually addressed (for example, ENDOFFLOAD) or broadcast (for example, ENDOFFLOADBRDCAST). Each instruction also has fields for identifying one or more operands (such as operand 1 2403, operand 2 2405, operand 3 2407, operand 4 2409, operand 5 2411, operand 6 2413) and/or a field for immediate data (not shown, but one or more of the operands can be substituted, such as operand 3). The purpose of those operands and immediate data may vary. Note that operand 1 2403, operand 2 2405, operand 3 2407, operand 4 2409, operand 5 2411, and operand 6 2413 can be registers or memory locations. In some embodiments, the ENDOFFLOAD* instruction will include the address of the result of the task, which in this example is found in operand 1 2403. The address can be in main memory, cache, or disk. Note that in other embodiments, this address is already known to the requesting core and is not included. In some embodiments, the ENDOFFLOAD* instruction will also include an instruction pointer (shown here as provided by operand 2 2405). The instruction pointer reminds the source core where the task came from in the original code, and will be sent as part of the transfer end indication.
Instead of or in addition to being sent to the helper core, the instruction pointer can be maintained by the requesting core. In some embodiments, the ENDOFFLOAD* instruction will include the core ID that made the request (shown here as provided by operand 3 2407). This should be the core receiving the transfer end indication. In some embodiments, the ENDOFFLOAD* instruction will include the helper core ID (shown here as provided by operand 4 2409). This is the core that performed the transferred task. In some embodiments, the ENDOFFLOAD* instruction will include the helper core state location (shown here as provided by operand 5 2411). This allows the requesting core to load the state of the helper core. In some embodiments, the ENDOFFLOAD* instruction will include the helper core ID. This allows the receiving core to know who sent the message. Note that this operand information can be used to generate a transfer end indication to be sent from one core to another. In some embodiments, the execution of ENDOFFLOAD* causes the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 to generate a transfer end indication. When the instruction does not use operand registers, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 uses the transfer phase tracker 223 to generate the transfer end indication. Figure 25 illustrates an embodiment of a method of processing endoffload* instructions. Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by a processor core when processing an instruction. At 2501, an instruction is fetched, the instruction having an opcode indicating that a transfer end operation is to be performed, and possibly one or more operands that provide information for that operation. An example of the instruction format can be found in Figure 24.
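As a rough illustration of how the operand fields of Figure 24 might be gathered into an end-of-transfer message, with missing operands filled in from the transfer phase tracker as the text describes: the function, key names, and tracker layout below are hypothetical, not from the specification:

```python
# Illustrative only: assemble an end-of-transfer message from ENDOFFLOAD*
# operands, falling back to a transfer-phase-tracker snapshot for any field
# the instruction's operand registers do not supply.
def build_end_indication(operands, phase_tracker):
    return {
        "result_address":   operands.get("op1", phase_tracker.get("result_address")),
        "instruction_ptr":  operands.get("op2", phase_tracker.get("instruction_ptr")),
        "requesting_core":  operands.get("op3", phase_tracker.get("requesting_core")),
        "helper_core":      operands.get("op4", phase_tracker.get("helper_core")),
        "helper_state_loc": operands.get("op5", phase_tracker.get("helper_state_loc")),
    }

tracker = {"requesting_core": 0, "helper_core": 1, "result_address": 0x3000}
# Only the instruction pointer comes from an operand here; the rest is
# recovered from the tracker, and unavailable fields stay empty (None).
msg = build_end_indication({"op2": 0x4010}, tracker)
assert msg["requesting_core"] == 0 and msg["instruction_ptr"] == 0x4010
```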
The instruction is fetched using a fetch circuit such as that shown in FIG. 31B. At 2503, the fetched instruction is decoded using a decoding circuit such as that shown in FIG. 31B. In some embodiments, at 2505, data associated with the one or more operands is retrieved. At 2507, the execution circuit executes the decoded instruction according to the opcode. Executing the decoded instruction includes: generating a transfer end indication and transmitting the transfer end indication to the core that requested the transfer, the indication including one or more of the following: the identifier of the core that requested the transfer, the location where the requesting core can find the result of the transfer, the result of the transfer itself, the instruction pointer provided by the corresponding startoffload request, the core state, and/or the core state location. Note that the content of the transfer end indication can be aggregated from the instruction's operands. For example, in some embodiments, a transfer end request is sent that includes: the identifier of the core that requested the transfer, the location where the requesting core can find the result, the identifier of the core that performed the transfer as a helper, the instruction pointer from the code, the processor state location, and a timestamp. In other embodiments, a transfer end request is sent that includes: the identifier of the core that requested the transfer, the location where the requesting core can find the result, the identifier of the core that performed the transfer as a helper, the instruction pointer from the code, and a timestamp. In still other embodiments, a transfer end request is sent that includes: the identifier of the core that requested the transfer, the identifier of the core that performed the transfer as a helper, and a timestamp.
In these embodiments, the requesting core already knows the location of the task (such as a predefined location). These are just exemplary types of transfer end requests that can be sent using a combination of the items detailed above. Execution may also cause a response from the requesting core and an update of the transfer phase tracker 223 based on the received response. The response processing and the update of the transfer phase tracker are completed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221. Note that in some embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 determines which core, IP, task address, etc. to transfer to, and fills in the operand information in one or more operands before those operands are retrieved. At 2509, the result of the executed instruction is committed. Note that when the transfer end operation is not executed as an instruction, there is no fetching, decoding, etc., but the actions of the execution circuit are still performed. Figure 26 illustrates an embodiment of a method of processing endoffload* instructions. Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by a processor core when processing an instruction. At 2601, an instruction is fetched, the instruction having a field for an opcode indicating that an offload end operation is to be performed. In some embodiments, one or more operands that provide information for that operation are included. An example of the instruction format can be found in Figure 24. The instruction is fetched using a fetch circuit such as that shown in FIG. 31B. At 2603, the fetched instruction is decoded using a decoding circuit such as that shown in FIG.
31B. In some embodiments, at 2605, data associated with the one or more operands is retrieved. At 2607, the execution circuit executes the decoded instruction according to the opcode. Executing the decoded instruction includes: causing the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 to generate a transfer end indication and transmit the transfer end indication to the core that requested the transfer, the indication including one or more of the following: the identifier of the core that requested the transfer, the location where the requesting core can find the result of the transfer, the result of the transfer itself, the instruction pointer provided by the corresponding startoffload request, the core state, and/or the core state location. This information can come from the transfer phase tracker 223 and/or the operands. For example, in some embodiments, a transfer end request is sent that includes: the identifier of the core that requested the transfer, the location where the requesting core can find the result, the identifier of the core that performed the transfer as a helper, the instruction pointer from the code, the processor state location, and a timestamp. In other embodiments, a transfer end request is sent that includes: the identifier of the core that requested the transfer, the location where the requesting core can find the result, the identifier of the core that performed the transfer as a helper, the instruction pointer from the code, and a timestamp. In still other embodiments, a transfer end request is sent that includes: the identifier of the core that requested the transfer, the identifier of the core that performed the transfer as a helper, and a timestamp. In these embodiments, the requesting core already knows the location of the task (such as a predefined location).
These are just exemplary types of transfer end requests that can be sent using a combination of the items detailed above. Execution may also cause a response from the requesting core and an update of the transfer phase tracker 223 based on the received response. The response processing and the update of the transfer phase tracker are completed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221. Note that in some embodiments, the core-to-core transfer circuit or the core-to-core transfer finite state machine 221 determines which core, IP, task address, etc. to transfer to, and fills in the operand information in one or more operands before those operands are retrieved. At 2609, the result of the executed instruction is committed. Note that when the transfer end operation is not executed as an instruction, there is no fetching, decoding, etc., but the actions of the execution circuit are still performed. FIG. 27 illustrates an embodiment of a method of processing a transfer end indication. Some or all of the operations of the method (or other processes described herein, or variants and/or combinations thereof) are performed by the core-to-core transfer circuit or the core-to-core transfer finite state machine 221. At 2701, a transfer end indication is received. The details about the content of such indications have been detailed above. At 2703, a determination is made as to whether the transfer end indication is intended for this core. If not, then in some embodiments, at 2705, no operation is performed.
In other embodiments, the receiving core still updates its transfer phase tracker 223. When the indication reaches the correct core, at 2707, the details of the transfer end indication are used to update the transfer phase tracker 223 for that core. At 2709, the transferred task's results are retrieved as detailed above. At 2711, the receiving core integrates the retrieved results. In some embodiments, at 2713, the transfer end indication is acknowledged. Figure 28 illustrates an embodiment of hardware for processing instructions such as the OFFLOADREQ* instruction, STARTOFFLOAD* instruction, and ENDOFFLOAD* instruction detailed herein. As shown in the figure, storage 2801 stores one or more of these instructions to be executed. The instruction is received by the decoding circuit 2805. For example, the decoding circuit 2805 receives the instruction from fetch logic/circuitry. The instruction 2801 includes the fields detailed above. In some embodiments, the operand(s) are registers, and in other embodiments, one or more operands are memory locations. A more detailed embodiment of at least one instruction format will be detailed later. The decoding circuit 2805 decodes the instruction into one or more operations. In some embodiments, the decoding includes generating a plurality of micro-operations to be executed by an execution circuit (such as execution circuit 2809).
The decoding circuit 2805 also decodes the instruction prefix (if used). In some embodiments, the register renaming, register allocation, and/or scheduling circuit 2807 provides functionality for one or more of the following: 1) renaming logical operand values to physical operand values (for example, a register alias table in some embodiments); 2) assigning status bits and flags to the decoded instruction; and 3) scheduling the decoded instruction for execution on the execution circuit out of an instruction pool (for example, using a reservation station in some embodiments). Registers (register file) and/or memory 2808 store data as operands of the instruction to be operated on by the execution circuit. Exemplary register types include packed data registers, general purpose registers, and floating point registers. The execution circuit 2809 executes the decoded instructions detailed above. The write-back (retirement) circuit 2811 commits the result of the execution of the decoded instruction. In some embodiments, the retirement/write-back circuit retires the instruction.
Instruction Set
An instruction set can include one or more instruction formats. A given instruction format can define various fields (for example, the number of bits, the position of bits) to specify the operation to be performed (for example, the opcode) and the operand(s) on which that operation is to be performed, and/or other data field(s) (for example, a mask). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format can be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included), and/or defined to have a given field interpreted differently.
Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, according to a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. SIMD extension sets referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme have been introduced and/or released (see, e.g., the 64 and IA-32 Architectures Software Developer's Manual, September 2014; and the Advanced Vector Extensions Programming Reference, October 2014).
Exemplary instruction format
Embodiments of the instruction(s) described herein can be embodied in different formats. In addition, exemplary systems, architectures, and pipelines are detailed below.
Embodiments of the instruction(s) can be executed on such systems, architectures, and pipelines, but are not limited to those detailed. Although embodiments of the present invention will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus, a 64-byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); alternative embodiments may support larger, smaller, and/or different vector operand sizes (for example, 256-byte vector operands) with larger, smaller, or different data element widths (for example, 128-bit (16-byte) data element widths). FIG. 29A is a block diagram illustrating an exemplary instruction format according to embodiments of the invention. FIG. 29A shows an instruction format 2900 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The instruction format 2900 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (for example, AVX).
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions.
EVEX prefix (bytes 0-3) 2902—is encoded in a four-byte form.
Format field 2982 (EVEX byte 0, bits [7:0])—the first byte (EVEX byte 0) is the format field 2982, and it contains 0x62 (the unique value used, in one embodiment of the invention, to distinguish the vector friendly instruction format).
The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.
REX field 2905 (EVEX byte 1, bits [7-5])—consists of an EVEX.R bit field (EVEX byte 1, bit [7]–R), an EVEX.X bit field (EVEX byte 1, bit [6]–X), and an EVEX.B bit field (EVEX byte 1, bit [5]–B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1's complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.
REX' field 2910—this is the EVEX.R' bit field (EVEX byte 1, bit [4]-R') that is used to encode either the upper 16 or the lower 16 of the extended 32-register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value of 11 in the MOD field in the MOD R/M field (described below); alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers.
In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.
Opcode map field 2915 (EVEX byte 1, bits [3:0]-mmmm)—its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).
Data element width field 2964 (EVEX byte 2, bit [7]-W)—represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.
EVEX.vvvv 2920 (EVEX byte 2, bits [6:3]-vvvv)—the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form, and is valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1's complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 2920 encodes the 4 low-order bits of the first source register specifier stored in inverted (1's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.
EVEX.U 2968 class field (EVEX byte 2, bit [2]-U)—if EVEX.U=0, it indicates class A (supporting merging-write masking) or EVEX.U0; if EVEX.U=1, it indicates class B (supporting zeroing and merging-write masking) or EVEX.U1.
Prefix encoding field 2925 (EVEX byte 2, bits [1:0]-pp)—provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits).
In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime they are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy format and the EVEX format of these legacy instructions without modification). Although newer instructions could use the content of the EVEX prefix encoding field directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.
α field 2953 (EVEX byte 3, bit [7]-EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated as α)—its content distinguishes which one of the different augmentation operation types is to be performed.
β field 2955 (EVEX byte 3, bits [6:4]-SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated as βββ)—distinguishes which of the operations of the specified type is to be performed.
REX' field 2910—this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX byte 3, bit [3]-V') that may be used to encode either the upper 16 or the lower 16 of the extended 32-register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.
Write mask field 2971 (EVEX byte 3, bits [2:0]-kkk)—its content specifies the index of a register in the write mask registers.
In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware). When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements being modified be consecutive. Thus, the write mask field 2971 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the invention are described in which the write mask field's 2971 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2971 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the mask write field's 2971 content to directly specify the masking to be performed. The real opcode field 2930 (byte 4) is also known as the opcode byte. A part of the opcode is specified in this field. The MOD R/M field 2940 (byte 5) includes a MOD field 2942, a register index field 2944, and an R/M field 2946.
The content of the MOD field 2942 distinguishes memory access operations from non-memory access operations. The role of the register index field 2944 can be summarized with respect to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The content of the register index field 2944 specifies, directly or through address generation, the locations of the source and destination operands, be they in registers or in memory. These fields include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (for example, may support up to two sources, where one of these sources also acts as the destination; may support up to three sources, where one of these sources also acts as the destination; or may support up to two sources and one destination).
The role of the R/M field 2946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.
Scale, Index, Base (SIB) byte (byte 6)—the content of the scale field 2950 allows for the scaling of the index field's content for memory address generation (for example, for address generation that uses 2^scale * index + base). SIB.xxx 2954 and SIB.bbb 2956—the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.
Displacement field 2963A (bytes 7-10)—when the MOD field 2942 contains 10, bytes 7-10 are the displacement field 2963A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.
This can be used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 2963B (byte 7) - when the MOD field 2942 contains 01, byte 7 is the displacement factor field 2963B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 2963B is a reinterpretation of disp8; when the displacement factor field 2963B is used, the actual displacement is determined by multiplying the content of the displacement factor field by the size (N) of the memory operand access. This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 2963B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2963B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N.
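The address arithmetic described above, including the disp8*N reinterpretation of the one-byte displacement, can be sketched as follows. This is an illustrative model only (function names are chosen for this sketch, not taken from the patent): the effective address is base + 2^scale * index + displacement, where under disp8*N the stored signed byte is multiplied by the memory operand size N before being added.

```python
def sign_extend_8(b):
    """Sign-extend an 8-bit encoded value (0..255) to a signed int (-128..127)."""
    return b - 256 if b >= 128 else b

def effective_address(base, index, scale, disp8_byte, n):
    """Compute base + 2^scale * index + disp8 * N (the disp8*N scheme)."""
    return base + (index << scale) + sign_extend_8(disp8_byte) * n

# With a 64-byte memory operand (N = 64), the single stored byte 0x01
# encodes a displacement of +64 bytes, and 0xFF encodes -64 bytes.
print(effective_address(base=0x1000, index=2, scale=3, disp8_byte=0x01, n=64))
# 0x1000 + 2*8 + 1*64 = 4176
print(effective_address(base=0x1000, index=0, scale=0, disp8_byte=0xFF, n=64))
# 0x1000 - 64 = 4032
```

This shows why the scheme works only when the effective displacement is a multiple of the access granularity N: the low-order bits that would be lost are redundant under that assumption, so a single byte covers a range N times larger than a plain disp8.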
In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

Immediate field 2972 allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.

Full opcode field

Figure 29B is a block diagram illustrating the fields of the instruction format 2900 that make up the full opcode field 2974 according to one embodiment of the invention. Specifically, the full opcode field 2974 includes the format field 2982, the base operation field 2943, and the data element width (W) field 2964. The base operation field 2943 includes the prefix encoding field 2925, the opcode map field 2915, and the real opcode field 2930.

Register index field

Figure 29C is a block diagram illustrating the fields of the instruction format 2900 that make up the register index field 2945 according to one embodiment of the invention. Specifically, the register index field 2945 includes the REX field 2905, the REX' field 2910, the MOD R/M.reg field 2944, the MOD R/M.r/m field 2946, the VVVV field 2920, the xxx field 2954, and the bbb field 2956.

Augmentation operation field

Figure 29D is a block diagram illustrating the fields of the instruction format 2900 that make up the augmentation operation field 2950 according to one embodiment of the invention. When the class (U) field 2968 contains 0, it signifies EVEX.U0 (class A 2968A); when it contains 1, it signifies EVEX.U1 (class B 2968B). When U=0 and the MOD field 2942 contains 11 (signifying a no-memory-access operation), the α field 2953 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 2953A.
When the rs field 2953A contains 1 (round 2953A.1), the β field 2955 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 2955A. The round control field 2955A includes a one-bit SAE field 2996 and a two-bit round operation field 2998. When the rs field 2953A contains 0 (data transform 2953A.2), the β field 2955 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 2955B. When U=0 and the MOD field 2942 contains 00, 01, or 10 (signifying a memory access operation), the α field 2953 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 2953B, and the β field 2955 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 2955C.

When U=1, the α field 2953 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 2953C. When U=1 and the MOD field 2942 contains 11 (signifying a no-memory-access operation), part of the β field 2955 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 2957A; when the RL field 2957A contains 1 (round 2957A.1), the rest of the β field 2955 (EVEX byte 3, bits [6:5] - S2-1) is interpreted as the round operation field 2959A, while when the RL field 2957A contains 0 (VSIZE 2957A.2), the rest of the β field 2955 (EVEX byte 3, bits [6:5] - S2-1) is interpreted as the vector length field 2959B (EVEX byte 3, bits [6:5] - L1-0). When U=1 and the MOD field 2942 contains 00, 01, or 10 (signifying a memory access operation), the β field 2955 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 2959B (EVEX byte 3, bits [6:5] - L1-0) and the broadcast field 2957B (EVEX byte 3, bit [4] - B).

Exemplary register architecture

Figure 30 is a block diagram of a register architecture 3000 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 3010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31.
The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. In other words, the vector length field 2959B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length, and instruction templates without the vector length field 2959B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the instruction format 2900 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 3015 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 3015 are 16 bits in size. In some embodiments, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 3025 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands.
These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 3045, on which is aliased the MMX packed integer flat register file 3050 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary core architectures, processors, and computer architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures.
These computer system architectures may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic or as special purpose cores, such as integrated graphics and/or scientific (throughput) logic); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary core architectures

In-order and out-of-order core block diagram

Figure 31A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 31B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 31A-31B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 31A, a processor pipeline 3100 includes a fetch stage 3102, a length decode stage 3104, a decode stage 3106, an allocation stage 3108, a renaming stage 3110, a scheduling (also known as a dispatch or issue) stage 3112, a register read/memory read stage 3114, an execute stage 3116, a write back/memory write stage 3118, an exception handling stage 3122, and a commit stage 3124. FIG.
31B shows a processor core 3190 including a front end unit 3130 coupled to an execution engine unit 3150, and both the front end unit 3130 and the execution engine unit 3150 are coupled to a memory unit 3170. The core 3190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 3190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 3130 includes a branch prediction unit 3132 coupled to an instruction cache unit 3134, which is coupled to an instruction translation lookaside buffer (TLB) 3136, which is coupled to an instruction fetch unit 3138, which is coupled to a decode unit 3140. The decode unit 3140 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 3140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 3190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 3140 or otherwise within the front end unit 3130).
The decode unit 3140 is coupled to a rename/allocator unit 3152 in the execution engine unit 3150.

The execution engine unit 3150 includes the rename/allocator unit 3152 coupled to a retirement unit 3154 and a set 3156 of one or more scheduler units. The scheduler unit(s) 3156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 3156 is coupled to the physical register file unit(s) 3158. Each of the physical register file unit(s) 3158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file unit(s) 3158 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file unit(s) 3158 is overlapped by the retirement unit 3154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 3154 and the physical register file unit(s) 3158 are coupled to the execution cluster(s) 3160. The execution cluster(s) 3160 includes a set 3162 of one or more execution units and a set 3164 of one or more memory access units.
The execution units 3162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 3156, physical register file unit(s) 3158, and execution cluster(s) 3160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit(s), and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 3164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set 3164 of memory access units is coupled to the memory unit 3170, which includes a data TLB unit 3172 coupled to a data cache unit 3174, which is coupled to a level 2 (L2) cache unit 3176. In one exemplary embodiment, the memory access units 3164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 3172 in the memory unit 3170. The instruction cache unit 3134 is further coupled to the level 2 (L2) cache unit 3176 in the memory unit 3170.
The L2 cache unit 3176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 3100 as follows: 1) the instruction fetch unit 3138 performs the fetch stage 3102 and the length decode stage 3104; 2) the decode unit 3140 performs the decode stage 3106; 3) the rename/allocator unit 3152 performs the allocation stage 3108 and the renaming stage 3110; 4) the scheduler unit(s) 3156 performs the schedule stage 3112; 5) the physical register file unit(s) 3158 and the memory unit 3170 perform the register read/memory read stage 3114; the execution cluster 3160 performs the execute stage 3116; 6) the memory unit 3170 and the physical register file unit(s) 3158 perform the write back/memory write stage 3118; 7) various units may be involved in the exception handling stage 3122; and 8) the retirement unit 3154 and the physical register file unit(s) 3158 perform the commit stage 3124.

The core 3190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 3190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

Specific exemplary in-order core architecture

Figures 32A-32B illustrate a block diagram of a more specific exemplary in-order core architecture. The core would be one of several logic blocks in a chip (including other cores of the same type and/or different types).
Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

Figure 32A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 3202 and with its local subset 3204 of the level 2 (L2) cache, according to embodiments of the invention. In one embodiment, an instruction decoder 3200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 3206 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 3208 and a vector unit 3210 use separate register sets (respectively, scalar registers 3212 and vector registers 3214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 3206, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset 3204 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 3204 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 3204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 3204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data.
The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012 bits wide per direction.

Figure 32B is an expanded view of part of the processor core in Figure 32A according to embodiments of the invention. Figure 32B includes an L1 data cache 3206A (part of the L1 cache 3206), as well as more detail regarding the vector unit 3210 and the vector registers 3214. Specifically, the vector unit 3210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 3228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 3220, numeric conversion with numeric convert units 3222A-B, and replication with replication unit 3224 on the memory input. Write mask registers 3226 allow predicating the resulting vector writes.

Figure 33 is a block diagram of a processor 3300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to embodiments of the invention. The solid lined box in FIG.
33 illustrates a processor 3300 with a single core 3302A, a system agent 3310, and a set 3316 of one or more bus controller units, while the optional addition of the dashed lined boxes illustrates an alternative processor 3300 with multiple cores 3302A-N, a set 3314 of one or more integrated memory controller units in the system agent unit 3310, and special purpose logic 3308.

Thus, different implementations of the processor 3300 may include: 1) a CPU with the special purpose logic 3308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 3302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 3302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 3302A-N being a large number of general purpose in-order cores. Thus, the processor 3300 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 3300 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set 3306 of one or more shared cache units, and external memory (not shown) coupled to the set 3314 of integrated memory controller units.
The set 3306 of shared cache units may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 3312 interconnects the integrated graphics logic 3308, the set 3306 of shared cache units, and the system agent unit 3310/integrated memory controller unit(s) 3314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 3306 and the cores 3302A-N.

In some embodiments, one or more of the cores 3302A-N are capable of multi-threading. The system agent 3310 includes those components coordinating and operating the cores 3302A-N. The system agent unit 3310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 3302A-N and the integrated graphics logic 3308. The display unit is for driving one or more externally connected displays.

The cores 3302A-N may be homogeneous or heterogeneous in terms of the architecture instruction set; that is, two or more of the cores 3302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary computer architectures

Figures 34-37 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 34, shown is a block diagram of a system 3400 in accordance with one embodiment of the present invention. The system 3400 may include one or more processors 3410, 3415, which are coupled to a controller hub 3420. In one embodiment, the controller hub 3420 includes a graphics memory controller hub (GMCH) 3490 and an Input/Output Hub (IOH) 3450 (which may be on separate chips); the GMCH 3490 includes memory and graphics controllers to which are coupled memory 3440 and a coprocessor 3445; the IOH 3450 couples input/output (I/O) devices 3460 to the GMCH 3490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 3440 and the coprocessor 3445 are coupled directly to the processor 3410, and the controller hub 3420 is in a single chip with the IOH 3450.

The optional nature of additional processors 3415 is denoted in Figure 34 with broken lines. Each processor 3410, 3415 may include one or more of the processing cores described herein and may be some version of the processor 3300.

The memory 3440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
For at least one embodiment, the controller hub 3420 communicates with the processor(s) 3410, 3415 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 3495.

In one embodiment, the coprocessor 3445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 3420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 3410, 3415 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, and power consumption characteristics.

In one embodiment, the processor 3410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 3410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 3445. Accordingly, the processor 3410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 3445. The coprocessor(s) 3445 accepts and executes the received coprocessor instructions.

Referring now to Figure 35, shown is a block diagram of a first more specific exemplary system 3500 in accordance with an embodiment of the present invention. As shown in Figure 35, the multiprocessor system 3500 is a point-to-point interconnect system and includes a first processor 3570 and a second processor 3580 coupled via a point-to-point interconnect 3550. Each of the processors 3570 and 3580 may be some version of the processor 3300.
In one embodiment of the present invention, the processors 3570 and 3580 are respectively the processors 3410 and 3415, while the coprocessor 3538 is the coprocessor 3445. In another embodiment, the processors 3570 and 3580 are respectively the processor 3410 and the coprocessor 3445.

The processors 3570 and 3580 are shown including integrated memory controller (IMC) units 3572 and 3582, respectively. The processor 3570 also includes, as part of its bus controller unit, point-to-point (P-P) interfaces 3576 and 3578; similarly, the second processor 3580 includes P-P interfaces 3586 and 3588. The processors 3570, 3580 may exchange information via a point-to-point (P-P) interface 3550 using P-P interface circuits 3578, 3588. As shown in Figure 35, the IMCs 3572 and 3582 couple the processors to respective memories, namely a memory 3532 and a memory 3534, which may be portions of main memory locally attached to the respective processors.

The processors 3570, 3580 may each exchange information with a chipset 3590 via individual P-P interfaces 3552, 3554 using point-to-point interface circuits 3576, 3594, 3586, and 3598. The chipset 3590 may optionally exchange information with the coprocessor 3538 via a high-performance interface 3539. In one embodiment, the coprocessor 3538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 3590 may be coupled to a first bus 3516 via an interface 3596.
In one embodiment, the first bus 3516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 35, various I/O devices 3514 may be coupled to the first bus 3516, along with a bus bridge 3518 that couples the first bus 3516 to a second bus 3520. In one embodiment, one or more additional processor(s) 3515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 3516. In one embodiment, the second bus 3520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 3520 including, for example, a keyboard and/or mouse 3522, communication devices 3527, and a storage unit 3528 such as a disk drive or other mass storage device which may include instructions/code and data 3530, in one embodiment. Further, an audio I/O 3524 may be coupled to the second bus 3520. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 35, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 36, shown is a block diagram of a second more specific exemplary system 3600 in accordance with an embodiment of the present invention. Like elements in Figures 35 and 36 bear like reference numerals, and certain aspects of Figure 35 have been omitted from Figure 36 in order to avoid obscuring other aspects of Figure 36.

Figure 36 illustrates that the processors 3570, 3580 may include integrated memory and I/O control logic ("CL") 3572 and 3582, respectively. Thus, the CL 3572, 3582 include integrated memory controller units and include I/O control logic.
FIG. 36 illustrates that not only are the memories 3532, 3534 coupled to the CL 3572, 3582, but also that I/O devices 3614 are coupled to the control logic 3572, 3582. Legacy I/O devices 3615 are coupled to the chipset 3590.

Referring now to FIG. 37, shown is a block diagram of an SoC 3700 in accordance with an embodiment of the present invention. Like elements in FIG. 33 bear like reference numerals. Also, dashed boxes are optional features on more advanced SoCs. In FIG. 37, an interconnect unit(s) 3702 is coupled to: an application processor 3710, which includes a set of one or more cores 3302A-N and shared cache unit(s) 3306; a system agent unit 3310; a bus controller unit(s) 3316; an integrated memory controller unit(s) 3314; a set of one or more coprocessors 3720, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3730; a direct memory access (DMA) unit 3732; and a display unit 3740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3720 includes a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 3530 illustrated in FIG. 35, may be applied to input instructions to perform the functions described herein and generate output information.
The output information may be applied to one or more output devices in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores", may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core.
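As a deliberately simplified illustration of the table-driven form such a converter can take, the following sketch rewrites each source instruction as one or more target instructions. The instruction names and the mapping table are hypothetical and are not drawn from any real instruction set:

```python
# Hypothetical sketch of a static, table-driven instruction converter.
# Each source-set instruction is rewritten as one or more target-set
# instructions; the opcodes and mappings below are illustrative only.
TRANSLATION_TABLE = {
    "PUSH": ["SUB sp, 1", "STORE [sp]"],  # one source op -> two target ops
    "POP":  ["LOAD [sp]", "ADD sp, 1"],
    "NOP":  ["NOP"],
}

def convert(source_instructions):
    """Translate a sequence of source instructions into target instructions."""
    target = []
    for insn in source_instructions:
        opcode = insn.split()[0]
        if opcode not in TRANSLATION_TABLE:
            raise ValueError(f"no translation for {opcode!r}")
        target.extend(TRANSLATION_TABLE[opcode])
    return target
```

A dynamic binary translator performs the same kind of rewriting at run time, typically caching translated blocks, but the mapping step itself has this general shape.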
The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 38 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 38 shows that a program in a high-level language 3802 may be compiled using an x86 compiler 3804 to generate x86 binary code 3806 that may be natively executed by a processor 3816 with at least one x86 instruction set core. The processor 3816 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 3804 represents a compiler that is operable to generate x86 binary code 3806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 3816 with at least one x86 instruction set core. Similarly, FIG. 38 shows that the program in the high-level language 3802 may be compiled using an alternative instruction set compiler 3808 to generate alternative instruction set binary code 3810 that may be natively executed by a processor 3814 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif. and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). An instruction converter 3812 is used to convert the x86 binary code 3806 into code that may be natively executed by the processor 3814 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 3810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 3812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 3806.

Examples of the processors, methods, etc. detailed herein include, but are not limited to:

Example 1.
A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a decoding circuit to decode an instruction, the instruction having fields for at least an opcode and one or more operands, the opcode to indicate that a transfer request availability operation is to be performed and the one or more operands to provide information for that operation; and
an execution circuit to execute the decoded instruction to:
cause a transfer availability request to be transmitted to one or more cores of the processor, the transfer availability request including at least one of an identification of the requesting core and an indication of a type of availability requested from the one or more cores of the processor, wherein a core receiving the transfer availability request is to determine whether the receiving core can act as a helper core for the first core to perform one or more tasks on behalf of the first core; and
the second core including:
a performance monitoring circuit to monitor performance of the second core.

Example 2. The processor of Example 1, wherein the indication of the type of availability requested from the one or more cores of the processor is one of compute, memory, and input/output.

Example 3. The processor of any of Examples 1-2, wherein a response to the transfer availability request from the one or more cores of the processor is generated based at least in part on status information stored in the performance monitoring circuit.

Example 4. The processor of any of Examples 1-3, wherein the first core further comprises:
a transfer phase tracker to maintain status information relating to at least the first core, any tasks transferred from the first core, and any tasks performed by the first core as a helper.

Example 5.
The processor of Example 4, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 6. The processor of any of Examples 1-5, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 7. The processor of any of Examples 1-6, further comprising:
an interconnect to couple the first core and the second core.

Example 8. The processor of any of Examples 1-7, further comprising:
a core-to-core transfer execution circuit to: receive responses to the transfer availability request from one or more cores of the processor; and update a transfer phase value for the one or more responding cores.

Example 9.
A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a decoding circuit to decode an instruction, the instruction having a field for at least an opcode, the opcode to indicate that a transfer request availability operation is to be performed; and
an execution circuit to execute the decoded instruction to generate a transfer availability request and transmit the transfer availability request to one or more cores of the processor, the transfer availability request including at least one of an identification of the requesting core and an indication of a type of availability requested from the one or more cores of the processor, wherein a core receiving the transfer availability request is to determine whether the receiving core can act as a helper core for the first core to perform one or more tasks on behalf of the first core; and
the second core including:
a performance monitoring circuit to monitor performance of the second core.

Example 10. The processor of Example 9, wherein the indication of the type of availability requested from the one or more cores of the processor is one of compute, memory, and input/output.

Example 11. The processor of any of Examples 9-10, wherein a response to the transfer availability request from the one or more cores of the processor is generated based at least in part on status information stored in the performance monitoring circuit.

Example 12. The processor of any of Examples 9-10, wherein the first core further comprises:
a transfer phase tracker to maintain status information relating to at least the first core, any tasks transferred from the first core, and any tasks performed by the first core as a helper.

Example 13. The processor of Example 12, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 14.
The processor of any of Examples 9-13, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 15. The processor of any of Examples 9-14, further comprising:
an interconnect to couple the first core and the second core.

Example 16. The processor of any of Examples 9-15, further comprising:
a core-to-core transfer execution circuit to: receive responses to the transfer availability request from one or more cores of the processor; and update a transfer phase value for the one or more responding cores.

Example 17. A method, comprising:
decoding an instruction, the instruction having a field for at least an opcode, the opcode to indicate that a transfer request availability operation is to be performed; and
executing the decoded instruction to cause a transfer availability request to be generated and transmitted to one or more cores of a processor, the transfer availability request including at least one of an identification of the requesting core and an indication of a type of availability requested from the one or more cores of the processor, wherein a core receiving the transfer availability request is to determine whether the receiving core can act as a helper core for the first core to perform one or more tasks on behalf of the first core.

Example 18. The method of Example 17, further comprising:
receiving responses to the transfer availability request from one or more cores of the processor; and updating a transfer phase value for the one or more responding cores.

Example 19. The method of Example 17, further comprising:
maintaining status information relating to at least the first core, any tasks transferred from the first core, and any tasks performed by the first core as a helper.

Example 20. A non-transitory machine-readable medium storing instructions which, when processed by a machine, are to perform any of the methods of Examples 17-19.

Example 21.
A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a performance monitoring circuit to monitor performance of the first core;
core-to-core transfer circuitry to:
determine a transfer availability status of the first core based at least in part on values stored in the performance monitoring circuit; and
transmit to the second core, based on the determined transfer availability status of the first core, an availability indication of the first core's availability to act as a helper core to perform one or more tasks on behalf of the second core; and
an execution circuit to execute decoded instructions of the one or more tasks of the second core; and
the second core including:
an execution circuit to execute decoded instructions of the one or more tasks of the second core; and
a transfer phase tracker to maintain state information relating to at least the availability of the first core to act as a helper core.

Example 22. The processor of Example 21, wherein the availability indication includes a type of availability that is available, the type of availability including one of compute, memory, and input/output.

Example 23. The processor of Example 21, wherein the availability indication is to be transmitted periodically.

Example 24. The processor of Example 21, wherein the availability indication is to be transmitted only upon a determination of a change in the availability of the first core.

Example 25. The processor of any of Examples 21-24, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 26.
The processor of any of Examples 21-25, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 27. The processor of any of Examples 21-26, wherein an instruction for the first core to act as a helper core to perform one or more tasks on behalf of the second core is not routed through an operating system.

Example 28. A method, comprising:
monitoring performance of a first core using a performance monitoring circuit;
determining a core-to-core transfer availability status of the first core based at least in part on values stored in the performance monitoring circuit; and
transmitting to a second core, based on the determined transfer availability status of the first core, an availability indication of the first core's availability to act as a helper core to perform one or more tasks on behalf of the second core.

Example 29. The method of Example 28, wherein the availability indication includes a type of availability that is available, the type of availability including one of compute, memory, and input/output.

Example 30. The method of Example 28, wherein the availability indication is to be transmitted periodically.

Example 31. The method of Example 28, wherein the availability indication is to be transmitted only upon a determination of a change in the availability of the first core.

Example 32. The method of any of Examples 28-31, wherein a transfer phase tracker is maintained by a core-to-core finite state machine.

Example 33.
The method of any of Examples 28-32, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 34. The method of any of Examples 28-33, wherein an instruction for the first core to act as a helper core to perform one or more tasks on behalf of the second core is not routed through an operating system.

Example 35. A system, comprising:
a memory to store a transferred task;
a plurality of cores, including at least a first core and a second core;
the first core including:
a performance monitoring circuit to monitor performance of the first core;
core-to-core transfer circuitry to:
determine a transfer availability status of the first core for handling the stored transferred task based at least in part on values stored in the performance monitoring circuit; and
transmit to the second core, based on the determined transfer availability status of the first core, an availability indication of the first core's availability to act as a helper core to perform one or more tasks on behalf of the second core; and
an execution circuit to execute decoded instructions of the one or more tasks of the second core; and
the second core including:
an execution circuit to execute decoded instructions of the one or more tasks of the second core; and
a transfer phase tracker to maintain state information relating to at least the availability of the first core to act as a helper core.

Example 36. The system of Example 35, wherein the availability indication includes a type of availability that is available, the type of availability including one of compute, memory, and input/output.

Example 37.
The system of Example 35, wherein the availability indication is to be transmitted periodically.

Example 38. The system of Example 35, wherein the availability indication is to be transmitted only upon a determination of a change in the availability of the first core.

Example 39. The system of Example 35, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 40. A non-transitory machine-readable medium storing instructions which, when processed by a machine, are to perform any of the methods of Examples 28-34.

Example 41. A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a performance monitoring circuit to monitor performance of the first core;
a transfer phase tracker to maintain state information relating to at least the availability of the second core to act as a helper core for the first core;
a decoding circuit to decode an instruction, the instruction having fields for at least an opcode and one or more operands, the opcode to indicate that a start task transfer operation is to be performed and the one or more operands to provide information; and
an execution circuit to execute the decoded instruction to:
transmit a transfer start request to at least the second core as indicated by the one or more operands, the transfer start request including one or more of: an identifier of the first core, a location where the second core can find the task to be performed, an identifier of the second core, an instruction pointer from the code of which the task is a proper subset, a state of the requesting core, and a location of the state of the requesting core;
receive a response from the second core; and
update the state information relating to the second core in the transfer phase tracker; and
the second core including:
a memory access circuit to retrieve the task to be performed from the location provided by the transfer start request; and
an execution circuit to execute the task to be performed.

Example 42. The processor of Example 41, wherein the second core is not compute bound, memory bound, or input/output bound.

Example 43. The processor of any of Examples 41-42, wherein the location provided by the transfer start request is in a cache shared between the first core and the second core.

Example 44. The processor of any of Examples 41-42, wherein the location provided by the transfer start request is in a memory location external to the first core and the second core.

Example 45. The processor of any of Examples 41-44, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 46. The processor of any of Examples 41-45, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 47. The processor of any of Examples 41-46, wherein the transfer start request is to be transmitted to a plurality of cores including the second core.

Example 48.
A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a performance monitoring circuit to monitor performance of the first core;
a transfer phase tracker to maintain state information relating to at least the availability of the second core to act as a helper core for the first core;
a decoding circuit to decode an instruction, the instruction having a field for at least an opcode, the opcode to indicate that a start task transfer operation is to be performed; and
an execution circuit to execute the decoded instruction to:
transmit a transfer start request to at least the second core, the transfer start request including one or more of: an identifier of the first core, a location where the second core can find the task to be performed, an identifier of the second core, an instruction pointer from the code of which the task is a proper subset, a state of the requesting core, and a location of the state of the requesting core;
receive a response from the second core; and
update the state information relating to the second core in the transfer phase tracker; and
the second core including:
a memory access circuit to retrieve the task to be performed from the location provided by the transfer start request; and
an execution circuit to execute the task to be performed.

Example 49. The processor of Example 48, wherein the second core is not compute bound, memory bound, or input/output bound.

Example 50. The processor of any of Examples 48-49, wherein the location provided by the transfer start request is in a cache shared between the first core and the second core.

Example 51.
The processor of any of Examples 48-49, wherein the location provided by the transfer start request is in a memory location external to the first core and the second core.

Example 52. The processor of any of Examples 48-51, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 53. The processor of any of Examples 48-52, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 54. The processor of any of Examples 48-53, wherein the transfer start request is to be transmitted to a plurality of cores including the second core.

Example 55. A method, comprising:
monitoring performance of a first core using a performance monitoring circuit;
maintaining state information relating to at least the availability of a second core to act as a helper core for the first core;
decoding an instruction, the instruction having a field for at least an opcode, the opcode to indicate that a start task transfer operation is to be performed; and
executing the decoded instruction to transmit a transfer start request to at least the second core, the transfer start request including one or more of: an identifier of the first core, a location where the second core can find the task to be performed, an identifier of the second core, an instruction pointer from the code of which the task is a proper subset, a state of the requesting core, and a location of the state of the requesting core.

Example 56.
The method of Example 55, wherein the second core is not compute bound, memory bound, or input/output bound.

Example 57. The method of any of Examples 55-56, wherein the location provided by the transfer start request is in a cache shared between the first core and the second core.

Example 58. The method of any of Examples 55-57, wherein the location provided by the transfer start request is in a memory location external to the first core and the second core.

Example 59. The method of any of Examples 55-58, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 60. A non-transitory machine-readable medium storing instructions which, when processed by a machine, are to perform any of the methods of Examples 55-59.

Example 61.
A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a performance monitoring circuit to monitor performance of the first core;
a transfer phase tracker to maintain state information relating to at least a task transferred from the second core to the first core, the first core acting as a helper core for the second core;
a decoding circuit to decode an instruction, the instruction having fields for at least an opcode and one or more operands, the opcode to indicate that an end task transfer operation is to be performed and the one or more operands to provide information; and
an execution circuit to execute the decoded instruction to:
transmit a transfer end indication to the second core, the indication including one or more of: an identifier of the second core, a location where the second core can find a result of the transfer, a result of execution of the transferred task, an instruction pointer into the original code of the second core, a state of the requesting core, and a location of the state of the requesting core; and
the second core including:
an execution circuit to execute a task transferred from the first core.

Example 62. The processor of Example 61, wherein the second core is not compute bound, memory bound, or input/output bound.

Example 63. The processor of any of Examples 61-62, wherein the location provided by the transfer end indication is in a cache shared between the first core and the second core.

Example 64. The processor of any of Examples 61-62, wherein the location provided by the transfer end indication is in a memory location external to the first core and the second core.

Example 65.
The processor of any of Examples 61-64, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 66. The processor of any of Examples 61-65, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 67. The processor of any of Examples 61-66, wherein the transfer end indication is to be transmitted from the first core to a plurality of cores including the second core.

Example 68. The processor of any of Examples 61-67, wherein the transfer end indication is not routed through an operating system.

Example 69. A processor, comprising:
a plurality of cores, including at least a first core and a second core;
the first core including:
a performance monitoring circuit to monitor performance of the first core;
a transfer phase tracker to maintain state information relating to at least a task transferred from the second core to the first core, the first core acting as a helper core for the second core;
a decoding circuit to decode an instruction, the instruction having a field for at least an opcode, the opcode to indicate that an end task transfer operation is to be performed; and
an execution circuit to execute the decoded instruction to:
transmit a transfer end indication to the second core, the indication including one or more of: an identifier of the second core, a location where the second core can find a result of the transfer, a result of execution of the transferred task, an instruction pointer into the original code of the second core, a state of the requesting core, and a location of the state of the requesting core; and
the second core including:
an execution circuit to execute a task transferred from the first core.

Example 70. The processor of Example 69, wherein the second core is not compute bound, memory bound, or input/output bound.

Example 71. The processor of any of Examples 69-70, wherein the location provided by the transfer end indication is in a cache shared between the first core and the second core.

Example 72. The processor of any of Examples 69-70, wherein the location provided by the transfer end indication is in a memory location external to the first core and the second core.

Example 73. The processor of any of Examples 69-72, wherein the transfer phase tracker is maintained by a core-to-core finite state machine.

Example 74. The processor of any of Examples 69-73, wherein the performance monitoring circuit is to track events including one or more of:
a number of instructions of any type retired;
a number of core cycles stalled;
a number of cache misses;
a number of cache accesses;
a number of branch instructions retired;
a number of mispredicted branches retired; and
a number of available slots.

Example 75. The processor of any of Examples 69-74, wherein the transfer end indication is to be transmitted from the first core to a plurality of cores including the second core.

Example 76. The processor of any of Examples 69-75, wherein the transfer end indication is not routed through an operating system.

Example 77.
A method that includes:Decoding the instruction, the instruction having a field for at least an operation code, and the operation code is used to indicate that an end task transfer operation is to be performed;The decoded instruction is executed so that the transfer end instruction is transmitted to the second core, and the instruction includes one or more of the following: the identifier of the second core, and the second core can find the result of the transfer Location, the result of the execution of the transferred task, the instruction pointer in the original code of the second source, the state of the core that made the request, and the state of the core that made the request.Example 78. The method of example 77, wherein the second core is not in one of the following situations: computing is bound, memory is bound, or input/output is bound.Example 79. The method of example 78, wherein the location provided by the transfer end indication is in a cache shared between the first core and the second core.Example 80. A non-transitory machine-readable medium having instructions stored thereon that, when processed by a machine, are used to perform any of the methods of Examples 77-79.
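The end-task-transfer flow of Examples 69 and 77 can be sketched in software. This is an illustrative model only, not the processor's actual microarchitecture: the class and function names are hypothetical, and the "mailbox" dictionary stands in for whatever core-to-core delivery mechanism an implementation uses (note that, per Examples 68 and 76, the indication is not routed through the operating system).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferEndIndication:
    """Fields named in Example 77; all names here are illustrative."""
    target_core_id: int                             # identifier of the (second) core to notify
    result_location: Optional[int] = None           # where the second core can find the results
    result_value: Optional[int] = None              # result of executing the transferred task
    instruction_pointer: Optional[int] = None       # instruction pointer into the original code
    requester_state: Optional[dict] = None          # state of the requesting core
    requester_state_location: Optional[int] = None  # where that state is stored

def execute_end_task_transfer(indication: TransferEndIndication, mailboxes: dict) -> None:
    """Model the decoded end-task-transfer instruction: deliver the indication
    directly to the target core's mailbox, bypassing any OS routing."""
    mailboxes.setdefault(indication.target_core_id, []).append(indication)

# Helper core 1 finishes the transferred task and notifies requesting core 0.
mailboxes: dict = {}
execute_end_task_transfer(
    TransferEndIndication(target_core_id=0, result_value=42, result_location=0x1000),
    mailboxes,
)
```

Only a subset of the fields need be populated, mirroring the "one or more of the following" language of the Examples.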
PROBLEM TO BE SOLVED: To provide an IC assembly, an IC package, and a method of manufacturing the IC assembly that allow a larger number of memory or logic devices to be incorporated on a chip, yielding increased capacity.
SOLUTION: An IC assembly 600 is based on the use of transistors with backside contacts. Such transistors allow backside power delivery to IC components, e.g., transistors, of the IC structure, which is advantageous over frontside power delivery in some implementations. Furthermore, using a glass support structure 450 on the front side of an IC structure with backside power delivery advantageously reduces parasitic effects in the IC structure compared to using a frontside silicon-based support structure.
SELECTED DRAWING: Figure 6
An integrated circuit (IC) assembly comprising: a front end of line (FEOL) layer including a plurality of FEOL devices; a backside power delivery structure including a plurality of power interconnects coupled to various ones of the plurality of FEOL devices; a back end of line (BEOL) layer including a plurality of BEOL interconnects coupled to one or more of the plurality of FEOL devices; and a glass support structure, wherein the FEOL layer is between the backside power delivery structure and the BEOL layer, and the BEOL layer is between the FEOL layer and the glass support structure.
2. The IC assembly of claim 1, wherein the plurality of BEOL interconnects includes a first BEOL interconnect and a second BEOL interconnect, and wherein the glass support structure includes a thin-film device having a first terminal coupled to the first BEOL interconnect and a second terminal coupled to the second BEOL interconnect.
3. The IC assembly of claim 2, wherein said thin-film device is a thin-film resistor.
4. The IC assembly of claim 2, wherein said thin-film device is a thin-film capacitor.
5. The IC assembly of claim 2, wherein said thin-film device is a thin-film inductor.
6. The IC assembly of any one of claims 1-5, further comprising a bonding interface between said BEOL layer and said glass support structure.
7. The IC assembly of claim 6, wherein said bonding interface comprises an oxide.
8. The IC assembly of claim 7, wherein said oxide includes a portion contacting one or more portions of said glass support structure and a portion contacting one or more portions of said BEOL layer.
9. The IC assembly of claim 6 or 7, further comprising an active layer containing a plurality of IC devices and interconnects, the active layer being between the glass support structure and the bonding interface, and the bonding interface being between the active layer and the BEOL layer, wherein at least one of said plurality of IC devices and interconnects of said active layer is coupled to one or more of said plurality of BEOL interconnects.
10. The IC assembly of claim 9, wherein said bonding interface is a hybrid bonding interface.
11. The IC assembly of claim 9 or 10, wherein the bonding interface includes a portion contacting one or more portions of the active layer and a portion contacting one or more portions of the BEOL layer.
12. The IC assembly of any one of claims 9-11, wherein a cross-section of each of at least one interconnect of the active layer and at least one of the BEOL interconnects is a trapezoid including two parallel sides, one of the two parallel sides being a short side and the other being a long side, wherein, for the trapezoid of the at least one interconnect of the active layer, the short side is closer to the glass support structure than the long side, and, for the trapezoid of the at least one of the BEOL interconnects, the long side is closer to the glass support structure than the short side.
13. The IC assembly of claim 12, wherein a cross-section of at least one of said power interconnects is a trapezoid including two parallel sides, one of said two parallel sides being a short side and the other being a long side, and wherein, for said trapezoid of said at least one of said power interconnects, said short side is closer to said glass support structure than said long side.
14. The IC assembly of any one of claims 1-13, wherein the plurality of FEOL devices comprise FEOL transistors having source and drain regions, and wherein at least one power interconnect of the plurality of power interconnects is coupled to the source region or the drain region.
15. The IC assembly of any one of claims 1-14, wherein the backside power delivery structure comprises an insulator material that encompasses at least a portion of the plurality of power interconnects.
16. The IC assembly of any one of claims 1-15, wherein the BEOL layer comprises one or more memory layers, the one or more memory layers comprising memory cells that include thin-film transistors.
17. An integrated circuit (IC) package comprising an IC assembly and a further IC component coupled to said IC assembly, the IC assembly including: a layer including a plurality of transistors comprising one or more of a fin transistor, a nanoribbon transistor, and a nanowire transistor; a back-end layer including a plurality of back-end interconnects coupled to one or more of the plurality of transistors; a backside power delivery structure including a plurality of power interconnects coupled to one or more of the plurality of transistors; and a glass support structure, wherein the layer including the plurality of transistors is between the backside power delivery structure and the back-end layer, and the back-end layer is between the layer including the plurality of transistors and the glass support structure.
18. The IC package of claim 17, wherein said further IC component comprises one of a package substrate, an interposer, or a further IC die.
19. A method of manufacturing an integrated circuit (IC) assembly, the method comprising: providing front end of line (FEOL) devices over a semiconductor support structure; providing a back end of line (BEOL) layer over the FEOL devices, wherein the BEOL layer includes a plurality of BEOL interconnects coupled to one or more of a plurality of the FEOL devices; bonding an arrangement of the BEOL layer and the FEOL devices to a non-semiconductor support structure; removing at least a portion of the semiconductor support structure to expose a portion of the FEOL devices; and providing a backside power delivery structure including a plurality of power interconnects coupled to the exposed portion of the FEOL devices.
20. The method of claim 19, wherein bonding the arrangement of the BEOL layer and the FEOL devices to the non-semiconductor support structure comprises providing one or more bonding materials on at least one of a surface of the BEOL layer to be bonded to the non-semiconductor support structure and a surface of the non-semiconductor support structure to be bonded to the BEOL layer, and attaching the two surfaces to each other.
Over the past few decades, scaling of features in integrated circuits has been the driving force behind the ever-growing semiconductor industry. Scaling to smaller features enables increased densities of functional units on the limited area of a semiconductor chip. For example, shrinking the size of transistors allows a greater number of memory or logic devices to be included on a chip, resulting in the manufacture of products with increased capacity. Efforts to continue increasing capacity, however, are not without problems; the need to optimize the performance of each device and each interconnect becomes increasingly significant.
Embodiments will be readily understood by reading the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the accompanying drawings.
FIG. 1 provides a schematic illustration of a cross-sectional view of an exemplary transistor with backside contacts, according to some embodiments of the present disclosure.
FIGS. 2A and 2B provide a perspective view (A) and a cross-sectional view (B) of an exemplary transistor with backside contacts implemented as a FinFET, according to some embodiments of the present disclosure.
FIG. 3 provides a schematic illustration of a cross-sectional view of an exemplary memory cell including a transistor with a backside contact, according to some embodiments of the present disclosure.
FIG. 4 provides a block diagram of an integrated circuit (IC) assembly with a backside power supply and a front glass support, according to some embodiments of the present disclosure.
FIGS. 5, 6, 7, and 8 provide schematic illustrations of IC assemblies with a backside power supply and a front glass support, according to various embodiments of the present disclosure.
FIGS. 9A-9D illustrate a first exemplary method of forming an IC assembly with a backside power supply and a front glass support, according to some embodiments of the present disclosure.
FIGS. 10A-10D illustrate a second exemplary method of forming an IC assembly with a backside power supply and a front glass support, according to some embodiments of the present disclosure.
FIG. 12 is a side, cross-sectional view of an IC package that may include an IC assembly with a backside power supply and a front glass support according to any of the embodiments disclosed herein.
FIG. 13 is a side, cross-sectional view of an IC device assembly that may include an IC assembly with a backside power supply and a front glass support according to any of the embodiments disclosed herein.
FIG. 14 is a block diagram of an exemplary computing device that may include an IC assembly with a backside power supply and a front glass support according to any of the embodiments disclosed herein.
SUMMARY
The systems, methods, and devices of the present disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein.
Details of one or more implementations of the subject matter described in this specification are set forth in the description below and the accompanying drawings.
For purposes of describing the IC assemblies with a backside power supply and a front glass support described herein, it may be useful to first understand phenomena that may come into play in certain IC arrangements. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained. Such information is offered for purposes of explanation only and, accordingly, should not be construed in any way to limit the broad scope of the present disclosure and its potential applications.
Monolithic ICs typically include a plurality of transistors, such as metal-oxide-semiconductor (MOS) field-effect transistors (FETs) (MOSFETs), fabricated over a planar substrate, such as a silicon wafer. Moore's Law has held in the IC industry for decades, but with current MOSFET gate dimensions at 20 nanometers and below, further lateral scaling of IC dimensions is becoming increasingly difficult. As device dimensions continue to shrink, there will come a point at which continuing standard planar scaling becomes impractical. This tipping point could arrive for economic or physical reasons, such as enormous cost or quantum-based variability. Stacking transistors in the third dimension, typically referred to as vertical scaling or three-dimensional (3D) integration, is therefore a promising avenue for increasing transistor density.
Although 3D integration can be achieved at the package level, e.g., by stacking separately manufactured chips, the monolithic 3D approach offers the highest density of interconnects between layers and allows 3D circuits, such as 3D logic circuits, to be built at the lowest level and with the tightest circuit densities.
Achieving a monolithic 3D IC architecture with favorable metrics in terms of power, performance, and footprint area is not a trivial task, and further improvements are always desirable.
Embodiments of the present disclosure are based on using transistors with backside contacts. Conventional front end of line (FEOL) transistors have both source and drain contacts on one side of the transistor, typically the side facing away from the substrate. In contrast to approaches for building logic and memory devices with such conventional FEOL transistors, various embodiments of the present disclosure provide transistors in which at least one source or drain (S/D) contact is on one side of the transistor and another S/D contact is on the other side, as well as various IC devices that incorporate such transistors (e.g., logic devices, memory cells, arrays, etc.), larger devices, and related methods. One side of a transistor may be referred to as the "front side" and the other side may be referred to as the "back side," and, generally, in the context of the present disclosure, a "side" of a transistor refers to a region or layer either above or below the layer of channel material of the transistor. Thus, the transistors described herein may have one S/D contact on the front side (referred to as a "frontside contact") and one S/D contact on the back side (referred to as a "backside contact"). In further embodiments, both S/D contacts of at least some of the transistors used in the IC assemblies described herein may be on the back side of the transistor.
In the following, transistors with one frontside S/D contact and one backside S/D contact, as well as transistors with two backside S/D contacts, may simply be referred to as "transistors with backside contacts."
The use of transistors with backside contacts provides several advantages and enables unique architectures that are not possible with conventional FEOL logic transistors having both S/D contacts on a single side. One advantage is that such transistors allow backside power delivery to the IC components (e.g., transistors, etc.) of an IC structure, i.e., power is supplied from the back side of the IC structure. In some implementations, e.g., monolithic 3D IC architectures, backside power delivery may be advantageous over frontside power delivery. Another advantage is that such transistors can be moved to the back end of line (BEOL) layers of advanced complementary metal-oxide-semiconductor (CMOS) processes. A further advantage is that implementing at least some transistors with S/D contacts on different sides allows great flexibility in making electrical connections to these transistors. As a result, at least some of the logic devices and memory cells incorporating such transistors may be provided in different layers above a support structure, thereby enabling three-dimensional integration of memory and logic devices and, in particular, enabling stacked architectures with many layers of memory and/or logic devices. Providing 3D memory and/or logic devices can significantly increase the density of these devices (e.g., the density of memory cells in a memory array) for a given footprint area (footprint area being defined as the area in the plane of the substrate, or a plane parallel to the plane of the substrate, i.e., the x-y plane) or, conversely, can significantly reduce the footprint area of a structure with a given density of memory and logic devices.
When backside power delivery is implemented, the backside power delivery structure may include, in addition to the interconnects for delivering power, various IC devices (e.g., capacitors, inductors, resistors, etc.) for reducing parasitic effects of the assembly. However, as more and more IC components are provided on the front side of the IC structure, the density of the power interconnects on the back side increases to the point where it becomes difficult to also fit, on the back side, additional IC devices for reducing the parasitic effects of the assembly.
Embodiments of the present disclosure are based on the recognition that using a glass support structure on the front side of an IC structure with backside power delivery may advantageously reduce parasitic effects in the IC structure, compared to using, e.g., a silicon-based support structure on the front side. As used herein, the term "glass support structure" refers to any support structure having a dielectric constant lower than that of silicon (Si), e.g., lower than about 11. In some embodiments, such glass support structures may include any type of glass material, such glasses having dielectric constants in the range of about 5 to 10.5. However, in some embodiments, what is described herein as a glass support structure may include materials other than glass, such as mica, provided that the material has a sufficiently low dielectric constant. Placing a support structure with a dielectric constant lower than that of Si on the front side of an IC structure can advantageously reduce various parasitic effects associated with the IC structure, because such effects are typically proportional to the dielectric constant of the surrounding medium.
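Because parasitic capacitance scales with the dielectric constant of the surrounding medium, the relative benefit of a glass support (dielectric constant of about 5 to 10.5) over silicon (about 11.7) can be estimated with a simple parallel-plate model. This is an illustrative back-of-the-envelope sketch, not taken from the present disclosure; the plate area and spacing below are hypothetical, and only the ratio between the two media matters.

```python
# Parallel-plate estimate of parasitic capacitance, C = k * eps0 * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parasitic_capacitance(k: float, area_m2: float, spacing_m: float) -> float:
    """Capacitance of a parallel-plate structure in a medium of dielectric constant k."""
    return k * EPS0 * area_m2 / spacing_m

K_SILICON = 11.7  # typical relative permittivity of Si
K_GLASS = 5.0     # low end of the ~5-10.5 range cited for glass
area, spacing = 1e-9, 1e-6  # 1000 um^2 plate, 1 um gap (illustrative values)

c_si = parasitic_capacitance(K_SILICON, area, spacing)
c_glass = parasitic_capacitance(K_GLASS, area, spacing)
print(f"glass/Si parasitic-capacitance ratio: {c_glass / c_si:.2f}")  # prints 0.43
```

In this simple model, swapping the frontside medium from silicon to low-end glass cuts the parasitic capacitance by more than half, which is the intuition behind placing the glass support structure on the front side.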
Furthermore, arranging such a support structure makes it possible to implement at least some of the additional IC devices for reducing the parasitic effects of the assembly on the front side of the IC structure, thus advantageously extending backside power delivery without congesting the real estate available for the backside power interconnects.
An exemplary IC assembly includes a FEOL layer having a plurality of FEOL devices; a backside power delivery structure having a plurality of power interconnects electrically coupled to (e.g., in conductive contact with at least a portion of) various ones of the plurality of FEOL devices; a BEOL layer having a plurality of BEOL interconnects electrically coupled to (e.g., in conductive contact with at least a portion of) one or more of the plurality of FEOL devices; and a glass support structure (e.g., a glass wafer), where the FEOL layer is between the backside power delivery structure and the BEOL layer, and the BEOL layer is between the FEOL layer and the glass support structure.
In the context of the present disclosure, the term "above" may refer to being further away from the support structure or the FEOL of the IC device, while the term "below" may refer to being closer to the support structure or the FEOL of the IC device.
In the following, to convey the general idea of a transistor with S/D contacts on different sides, some descriptions may refer to a particular side of the transistor as the front side and the other side as the back side. However, unless specified otherwise, which side of the transistor is considered the front side and which is considered the back side is not material. Therefore, provided that one S/D contact of a transistor is provided on one side of the channel layer and another S/D contact is provided on the other side, the descriptions of some exemplary embodiments of frontside and backside contacts provided herein are applicable to embodiments in which the designations of the front side and the back side may be reversed.
Additionally, some descriptions may refer to a particular S/D region or contact as either a source region/contact or a drain region/contact. However, unless specified otherwise, which region/contact of a transistor is considered a source region/contact and which is considered a drain region/contact is not material, because, as is common in the field of FETs, the designations of source and drain are often interchangeable. Accordingly, the descriptions of some exemplary embodiments of source and drain regions/contacts provided herein are applicable to embodiments in which the designations of the source and drain regions/contacts may be reversed.
While some of the descriptions provided herein may refer to top-gate transistors, embodiments of the present disclosure are not limited to only that design, and may include various other architectures, or a mix of transistors of different architectures. For example, in various embodiments, the transistors with backside S/D contacts described herein may include bottom-gate transistors, top-gate transistors, FinFETs, nanowire transistors, planar transistors, etc., all of which are within the scope of the present disclosure. Furthermore, although the descriptions of the present disclosure may refer to logic devices or memory cells provided in a given layer, each layer of the IC devices described herein may include other types of devices besides the logic devices or memory cells described herein.
For example, in some embodiments, IC devices having logic devices that incorporate transistors with backside S/D contacts may also include memory cells in any of the layers.
Moreover, in the following detailed description, various aspects of the illustrative implementations are described using terms commonly employed by those of ordinary skill in the art to convey the substance of their work to others of ordinary skill in the art.
For example, the term "interconnect" may be used to describe any element formed of an electrically conductive material for providing electrical connectivity to one or more components associated with an IC and/or between such components. In general, the "interconnect" may refer to both conductive lines/traces (also sometimes referred to as "lines," "metal lines," or "trenches") and conductive vias (also sometimes referred to as "vias" or "metal vias"). In general, the term "conductive line" may be used to describe an electrically conductive element isolated by dielectric materials, including interlayer low-k dielectrics, that is provided within the plane of an IC chip. Such conductive lines are typically arranged in several levels, or several layers, of a metallization stack. On the other hand, the term "conductive via" may be used to describe an electrically conductive element that interconnects two or more conductive lines of different levels of a metallization stack. To that end, a via may be provided substantially perpendicularly to the plane of an IC chip or a support structure over which an IC structure is provided, and may interconnect two conductive lines in adjacent levels or two conductive lines in levels that are not adjacent to one another.
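The line/via terminology above can be captured in a minimal data model: lines live in the plane of a metallization level, and vias run perpendicular to that plane and may interconnect lines on adjacent or non-adjacent levels. The class and attribute names below are hypothetical illustrations, not any real EDA library's API.

```python
from dataclasses import dataclass, field

@dataclass
class Line:
    """A conductive line/trace within the plane of one metallization level."""
    name: str
    level: int  # metallization level (M1, M2, ... represented as 1, 2, ...)

@dataclass
class Via:
    """A conductive via interconnecting two lines on different levels."""
    bottom: Line
    top: Line

    def __post_init__(self):
        # A via runs perpendicular to the chip plane, from a lower to a higher level.
        if self.bottom.level >= self.top.level:
            raise ValueError("a via must connect a lower-level line to a higher-level line")

@dataclass
class MetallizationStack:
    """A stack of interconnects (lines plus vias) providing connectivity."""
    lines: list = field(default_factory=list)
    vias: list = field(default_factory=list)

stack = MetallizationStack()
m1 = Line("sig_a", level=1)
m3 = Line("sig_a_up", level=3)  # vias may also connect non-adjacent levels
stack.lines += [m1, m3]
stack.vias.append(Via(bottom=m1, top=m3))
```

The `Via` between levels 1 and 3 illustrates the point that the interconnected lines need not sit on adjacent levels of the stack.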
The term "metallization stack" may be used to refer to a stack of one or more interconnects for providing connectivity to different circuit components of an IC chip.
In another example, the terms "package" and "IC package" are synonymous, as are the terms "die" and "IC die"; unless specified otherwise, the term "insulating" means "electrically insulating" and the term "conducting" means "electrically conducting." Although certain elements may be referred to in the singular herein, such elements may include multiple sub-elements. For example, "a conductive material" may include one or more conductive materials. When terms such as "oxide," "carbide," and "nitride" are used, they refer to compounds containing, respectively, oxygen, carbon, nitrogen, etc. The term "high-k dielectric" refers to a material having a higher dielectric constant than silicon oxide, while the term "low-k dielectric" refers to a material having a lower dielectric constant than silicon oxide. Furthermore, the term "connected" may be used to describe a direct electrical or magnetic connection between the things that are connected, without any intermediary devices, while the term "coupled" may be used to describe either a direct electrical or magnetic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices. The term "circuit" may be used to describe one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The terms "substantially," "close," "approximately," "near," and "about" generally refer to being within +/-20% of a target value, based on the context of a particular value as described herein or as known in the art. Similarly, terms indicating the orientation of various elements, such as "coplanar," "perpendicular," "orthogonal," or "parallel," or any other angle between the elements, generally refer to being within +/-5-20% of a target value, based on the context of a particular value as described herein or as known in the art.
For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). When the term "between" is used with reference to a measurement range, the values at both ends of the range are included. As used herein, the notation "A/B/C" means (A), (B), and/or (C).
The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. The present disclosure may use perspective-based descriptions such as "above," "below," "top," "bottom," and "side"; such descriptions are used to facilitate the discussion and are not intended to restrict the application of the disclosed embodiments. The accompanying drawings are not necessarily drawn to scale. Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein embodiments that may be practiced are shown by way of illustration.
It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. For convenience, if a collection of drawings designated with different letters is present (e.g., FIGS. 9A-9D), such a collection may be referred to herein without the letters, e.g., as "FIG. 9."
In the drawings, some schematic illustrations of exemplary structures of the various devices and assemblies described herein may be shown with precise right angles and straight lines, but it is to be understood that such schematic illustrations may not reflect real-life process limitations, which may cause the features to not look so "ideal" when any of the structures described herein are examined using, e.g., scanning electron microscopy (SEM) images or transmission electron microscopy (TEM) images. In such images of real structures, possible processing defects could also be visible, e.g., not-perfectly-straight edges of materials, tapered vias or other openings, inadvertent rounding of corners or variations in thicknesses of different material layers, occasional screw, edge, or combination dislocations within the crystalline region, and/or occasional dislocation defects of single atoms or clusters of atoms. There may be other defects not listed here but that are common within the field of device fabrication.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment.
Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.
Various IC assemblies with a backside power supply and a front glass support as described herein may be implemented in, or associated with, one or more components associated with an IC, and/or may be implemented between various such components. In various embodiments, components associated with an IC include, for example, transistors, diodes, power sources, resistors, capacitors, inductors, sensors, transceivers, receivers, antennas, etc. Components associated with an IC may include those that are mounted on an IC or those connected to an IC. The IC may be either analog or digital and may be used in a number of applications, such as microprocessors, optoelectronics, logic blocks, audio amplifiers, etc., depending on the components associated with the IC. The IC may be employed as part of a chipset for executing one or more related functions in a computer.
Exemplary transistor architecture
FIG. 1 provides a schematic illustration of a cross-sectional view of an exemplary transistor 100 with backside contacts, implemented as a FET, according to some embodiments of the present disclosure.
In FIG. 1 and at least some of the subsequent figures, different patterns are used to indicate different elements, with a legend at the bottom of each drawing page containing those figures showing the correspondence between the reference numerals and the patterns. For example, the legend indicates that FIG. 1 uses different patterns to show the channel material 102, the S/D regions 104, the contacts to the S/D regions 104, etc. Moreover, although a certain number of a given element may be illustrated in FIG. 1 and at least some of the subsequent figures, this is, again, simply for ease of illustration, and more, or fewer, such elements may be included in an IC device according to various embodiments of the present disclosure.
Still further, the views of the various IC devices shown in FIG. 1 and at least some of the subsequent figures are intended to show relative arrangements of the various elements therein, and the various IC devices, or portions thereof, may include other elements or components that are not illustrated (e.g., any further materials, such as spacer materials, etch-stop materials, etc., that may surround the gate stack of the transistor 100).

In general, a FET, e.g., a MOSFET, is a three-terminal device that includes source, drain, and gate terminals and uses an electric field to control current flowing through the device. A FET typically includes a channel material, a source region and a drain region provided in the channel material, and a gate stack that includes a gate electrode material, optionally including a so-called "work function" (WF) material, provided over a portion of the channel material between the source and drain regions, and, optionally, a gate dielectric material between the gate electrode material and the channel material. FIG. 1 shows the channel material 102, the S/D regions 104 (shown as a first S/D region 104-1, e.g., a source region, and a second S/D region 104-2, e.g., a drain region), the contacts 106 to the S/D regions (a first S/D contact 106-1 providing electrical contact to the first S/D region 104-1 and a second S/D contact 106-2 providing electrical contact to the second S/D region 104-2), and the gate stack 108 that includes at least a gate electrode 110 and, optionally, a gate dielectric 112.

Implementations of the present disclosure may be formed or carried out on a support structure, which may be, e.g., a substrate, a die, a wafer, or a chip. The substrate may be, e.g., the wafer 2000 of FIG. 11, discussed below, and may be, or be included in, a singulated die, e.g., the die 2002 of FIG. 11, discussed below. The substrate may be a semiconductor substrate composed of semiconductor material systems including, for example, N-type or P-type material systems.
In one implementation, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or a silicon-on-insulator (SOI) structure. In other implementations, the semiconductor substrate may be formed using alternative materials, which may or may not be combined with silicon, including, but not limited to, germanium, silicon germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, aluminum gallium arsenide, aluminum indium arsenide, aluminum indium antimonide, indium gallium arsenide, gallium nitride, indium gallium nitride, aluminum indium nitride, or gallium antimonide, or other combinations of group III-V materials (i.e., materials from groups III and V of the periodic table of elements), group II-VI materials (i.e., materials from groups II and VI of the periodic table of elements), or group IV materials (i.e., materials from group IV of the periodic table of elements). In some embodiments, the substrate may be amorphous. In some embodiments, the substrate may be a printed circuit board (PCB) substrate. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which IC assemblies with backside power supplies and front glass supports as described herein may be built falls within the spirit and scope of the present disclosure. In various embodiments, the channel material 102 may include, or be formed over, any substrate material that provides a suitable surface for forming the transistor 100.

In some embodiments, the channel material 102 may be composed of semiconductor material systems including, for example, N-type or P-type material systems. In some embodiments, the channel material 102 may include a high-mobility oxide semiconductor material, such as tin oxide, antimony oxide, indium oxide, indium tin oxide, titanium oxide, zinc oxide, zinc indium oxide, gallium oxide, titanium oxynitride, ruthenium oxide, or tungsten oxide.
In some embodiments, the channel material 102 may include an upper portion that may be used as the channel portion (e.g., the portion 114 shown in FIG. 1, which is assumed to refer to an upper portion of the channel material 102) and a lower portion, sometimes referred to as a "blocking material," which may be used between the channel portion 114 and the support structure over which the transistor 100 is provided. In some embodiments, the channel material 102 may include a single-crystalline semiconductor, such as silicon (Si) or germanium (Ge). In some embodiments, the channel material 102 may include a compound semiconductor with a first sub-lattice of at least one element from group III of the periodic table (e.g., Al, Ga, In) and a second sub-lattice of at least one element from group V of the periodic table (e.g., P, As, Sb).

For some exemplary N-type transistor embodiments (i.e., for embodiments where the transistor 100 is an N-type metal-oxide-semiconductor (NMOS) transistor), the channel portion 114 of the channel material 102 may advantageously include a III-V material having a high electron mobility, such as, but not limited to, InGaAs, InP, InSb, and InAs. For some such embodiments, the channel portion 114 of the channel material 102 may be a ternary III-V alloy, such as InGaAs, GaAsSb, InAsP, or InPSb. For some InxGa1-xAs fin embodiments, the In content (x) may be between 0.6 and 0.9, and advantageously may be at least 0.7 (e.g., In0.7Ga0.3As). In some embodiments with highest mobility, the channel portion 114 of the channel material 102 may be an intrinsic III-V material, i.e., a III-V semiconductor material that is not intentionally doped with any electrically active impurity. In alternate embodiments, a nominal impurity dopant level may be present within the channel portion 114 of the channel material 102, e.g., to further fine-tune a threshold voltage (Vt), or to provide HALO pocket implants, etc.
However, even for impurity-doped embodiments, the impurity dopant level within the channel portion 114 of the channel material 102 may be relatively low, e.g., below about 10^15 dopant atoms per cubic centimeter (cm^-3), and advantageously below about 10^13 cm^-3.

For some exemplary P-type transistor embodiments (i.e., for embodiments where the transistor 100 is a P-type metal-oxide-semiconductor (PMOS) transistor), the channel portion 114 of the channel material 102 may advantageously be a group IV material having a high hole mobility, such as, but not limited to, Ge or a Ge-rich SiGe alloy. For some exemplary embodiments, the channel portion 114 of the channel material 102 may have a Ge content between 0.6 and 0.9, and advantageously may have a Ge content of at least 0.7. In some embodiments with highest mobility, the channel portion 114 may be an intrinsic III-V (or group IV for P-type devices) material, i.e., not intentionally doped with any electrically active impurity. In alternate embodiments, one or more nominal impurity dopant levels may be present within the channel portion 114, e.g., to further set a threshold voltage (Vt), or to provide HALO pocket implants, etc. However, even for impurity-doped embodiments, the impurity dopant level within the channel portion may be relatively low, e.g., below about 10^15 cm^-3, and advantageously below about 10^13 cm^-3.

In some embodiments, the transistor 100 may be a thin-film transistor (TFT). A TFT is a special kind of FET made by depositing a thin film of an active semiconductor material, as well as a dielectric layer and metallic contacts, over a supporting layer that may be a non-conducting layer. At least a portion of the active semiconductor material forms a channel of the TFT. If the transistor 100 is a TFT, the channel material 102 may include tin oxide, antimony oxide, indium oxide, indium tin oxide, titanium oxide, zinc oxide, zinc indium oxide, indium gallium zinc oxide (IGZO), gallium oxide, titanium oxynitride
, or other high-mobility oxide semiconductor materials such as ruthenium oxide or tungsten oxide. In general, if the transistor 100 is a TFT, the channel material 102 may include one or more of tin oxide, cobalt oxide, copper oxide, antimony oxide, ruthenium oxide, tungsten oxide, zinc oxide, gallium oxide, titanium oxide, indium oxide, titanium oxynitride, indium tin oxide, indium zinc oxide, nickel oxide, niobium oxide, copper peroxide, IGZO, indium telluride, molybdenite, molybdenum diselenide, tungsten diselenide, tungsten disulfide, N-type or P-type amorphous or polycrystalline silicon, germanium, indium gallium arsenide, silicon germanium, gallium nitride, aluminum gallium nitride, indium phosphide, and black phosphorus (each of which may possibly be doped with one or more of gallium, indium, aluminum, fluorine, boron, phosphorus, arsenic, nitrogen, tantalum, tungsten, and magnesium), etc. In some embodiments, the channel material 102 may have a thickness between about 5 and 75 nanometers, including all values and ranges therein. In some embodiments, a thin-film channel material 102 may be deposited at relatively low temperatures, which allows depositing the channel material 102 within the thermal budgets imposed on back-end fabrication, avoiding damage to other components, e.g., front-end components such as logic devices.

As shown in FIG. 1, the first S/D region 104-1 and the second S/D region 104-2 (together referred to as "S/D regions 104") may be included on either side of the gate stack 108, thereby realizing a transistor. As is well known in the art, source and drain regions (also sometimes interchangeably referred to as "diffusion regions") are formed for the gate stack of a FET. In some embodiments, the S/D regions 104 of the transistor 100 may be regions of a doped semiconductor, e.g., regions of the channel material 102 (e.g., of the channel portion 114), doped with a suitable dopant to a suitable dopant concentration.
In some embodiments, the S/D regions 104 may be highly doped, e.g., with a dopant concentration of about 1·10^21 cm^-3, in order to advantageously form ohmic contacts with the respective S/D contacts 106, although these regions may also have lower dopant concentrations and may form Schottky contacts in some implementations. Irrespective of the exact doping levels, the S/D regions 104 of the transistor 100 may be regions having a dopant concentration higher than in other regions of the channel material 102, e.g., higher than in a region between the first S/D region 104-1 and the second S/D region 104-2, and may therefore be referred to as "highly doped" (HD) regions. In some embodiments, the S/D regions 104 may generally be formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the one or more semiconductor materials of the upper portion of the channel material 102 to form the S/D regions 104. An annealing process that activates the dopants and causes them to diffuse farther into the channel material 102 may follow the ion-implantation process. In the latter process, the one or more semiconductor materials of the channel material 102 may first be etched to form recesses at the locations of the future S/D regions. An epitaxial deposition process may then be carried out to fill the recesses with the material (which may include a combination of different materials) that is used to fabricate the S/D regions 104. In some implementations, the S/D regions 104 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In further embodiments, the S/D regions 104 may be formed using one or more alternative semiconductor materials such as germanium or a group III-V material or alloy. Although FIG.
1 illustrates the first and second S/D regions 104 with a single pattern, suggesting that material compositions of the first and second S/D regions 104 are the same, this may not be the case in some other embodiments of the transistor 100. Thus, in some embodiments, a material composition of the first S/D region 104-1 may be different from a material composition of the second S/D region 104-2.

As further shown in FIG. 1, S/D contacts 106-1 and 106-2 (together referred to as "S/D contacts 106"), formed of one or more electrically conductive materials, may be used for providing electrical connectivity to, respectively, the S/D regions 104-1 and 104-2. In various embodiments, one or more layers of metal and/or metal alloys may be used to form the S/D contacts 106. For example, the electrically conductive materials of the S/D contacts 106 may include one or more metals or metal alloys, with materials such as copper, ruthenium, palladium, platinum, cobalt, nickel, hafnium, zirconium, titanium, tantalum, aluminum, tantalum nitride, tungsten, doped silicon, doped germanium, or alloys and mixtures of any of these. In some embodiments, the S/D contacts 106 may include one or more electrically conductive alloys, oxides, or carbides of one or more metals. In some embodiments, the S/D contacts 106 may include a doped semiconductor, such as silicon or another semiconductor doped with an N-type dopant or a P-type dopant. Metals may provide higher conductivity, while doped semiconductors may be easier to pattern during fabrication. Although FIG. 1 illustrates the first and second S/D contacts 106 with a single pattern, suggesting that material compositions of the first and second S/D contacts 106 are the same, this may not be the case in some other embodiments of the transistor 100.
Thus, in some embodiments, a material composition of the first S/D contact 106-1 may be different from a material composition of the second S/D contact 106-2.

Turning to the gate stack 108, the gate electrode 110 may include at least one P-type WF metal or N-type WF metal, depending on whether the transistor 100 is a PMOS transistor or an NMOS transistor. For a PMOS transistor, metals that may be used for the gate electrode 110 may include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides (e.g., ruthenium oxide). For an NMOS transistor, metals that may be used for the gate electrode 110 include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide). In some embodiments, the gate electrode 110 may include a stack of two or more metal layers, where one or more metal layers are WF metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as to act as a diffusion barrier layer, as described below.

If used, the gate dielectric 112 may at least laterally surround the channel portion 114, and the gate electrode 110 may laterally surround the gate dielectric 112 such that the gate dielectric 112 is disposed between the gate electrode 110 and the channel material 102. In various embodiments, the gate dielectric 112 may include one or more high-k dielectric materials and may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric 112 include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide
, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, tantalum oxide, tantalum silicon oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric 112 during manufacture of the transistor 100 to improve the quality of the gate dielectric 112. In some embodiments, the gate dielectric 112 may have a thickness between about 0.5 and 3 nanometers, including all values and ranges therein (e.g., between about 1 and 3 nanometers, or between about 1 and 2 nanometers).

In some embodiments, the gate dielectric 112 may be a multilayer gate dielectric, e.g., it may include any of the high-k dielectric materials in one layer and a layer of IGZO. In some embodiments, the gate stack 108 may be arranged so that the IGZO is disposed between the high-k dielectric and the channel material 102. In such embodiments, the IGZO may be in contact with the channel material 102 and may provide the interface between the channel material 102 and the remainder of the multilayer gate dielectric 112. The IGZO may have a gallium-to-indium ratio of 1:1, a gallium-to-indium ratio greater than 1 (e.g., 2:1, 3:1, 4:1, 5:1, 6:1, 7:1, 8:1, 9:1, or 10:1), and/or a gallium-to-indium ratio less than 1 (e.g., 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, 1:8, 1:9, or 1:10).

In some embodiments, the gate stack 108 may be surrounded by a dielectric spacer, not specifically shown in FIG. 1. The dielectric spacer may be configured to provide separation between the gate stacks 108 of different transistors 100 that may be provided adjacent to one another (e.g., different transistors 100 provided along a single fin, if the transistors 100 are FinFETs), as well as between the gate stack 108 and the one of the S/D contacts 106 disposed on the same side as the gate stack 108. Such a dielectric spacer may include one or more low-k dielectric materials.
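As an aside on the gate-dielectric thickness range given above (about 0.5 to 3 nanometers): under a simple parallel-plate model, that thickness sets the gate capacitance per unit area, C/A = eps0 * k / t. The sketch below is illustrative only and not part of the disclosure; the dielectric constant k = 25 (a value often quoted for hafnium oxide) and the function name are assumptions.

```python
# Illustrative sketch (not from the disclosure): parallel-plate estimate of
# gate-dielectric capacitance per unit area, C/A = eps0 * k / t.
# The dielectric constant k ~ 25 for hafnium oxide is an assumed value.

EPS0 = 8.854e-12  # F/m, vacuum permittivity


def cap_per_area(k: float, thickness_nm: float) -> float:
    """Capacitance per unit area (F/m^2) for a dielectric of relative
    permittivity k and thickness given in nanometers."""
    return EPS0 * k / (thickness_nm * 1e-9)


# Thickness range from the text: about 0.5 to 3 nanometers.
for t in (0.5, 1.0, 3.0):
    print(f"t = {t} nm -> C/A = {cap_per_area(25.0, t):.3e} F/m^2")
```

Note the inverse dependence on thickness: halving the dielectric thickness roughly doubles the capacitance per unit area, which is one reason thin high-k gate dielectrics are of interest.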
Examples of low-k dielectric materials that may be used as the dielectric spacer include, but are not limited to, silicon dioxide, carbon-doped oxide, silicon nitride, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, and organosilicate glass. Other examples of low-k dielectric materials that may be used as the dielectric spacer include organic polymers such as polyimide, polynorbornene, benzocyclobutene, perfluorocyclobutane, or polytetrafluoroethylene (PTFE). Still other examples of low-k dielectric materials that may be used as the dielectric spacer include silicon-based polymeric dielectrics such as hydrogen silsesquioxane (HSQ) and methylsilsesquioxane (MSQ). Other examples of low-k materials that may be used in the dielectric spacer include various porous dielectric materials, such as porous silicon dioxide or porous carbon-doped silicon dioxide, where large voids or pores are created in the dielectric in order to reduce the overall dielectric constant of the layer, since voids can have a dielectric constant of nearly 1.

In stark contrast to conventional implementations, where both S/D contacts are typically provided on a single side of a transistor, typically the front side (e.g., the side where the gate stack 108 is provided), the two S/D contacts 106 are provided on different sides of the transistor 100. Namely, as shown in FIG. 1, the second S/D contact 106-2 is provided on the same side as the gate stack 108, which may be considered to be the front side of the transistor 100, while the first S/D contact 106-1 is provided on the opposite side of the transistor 100, which may be considered to be the back side. Thus, the first S/D contact 106-1 is a backside contact and the second S/D contact 106-2 is a front-side contact of the transistor 100. Considering layers above a support structure (not shown in FIG.
1) over which the transistor 100 is built, it may be said that the first S/D contact 106-1 is in a first layer 120-1 above the support structure, the second S/D contact 106-2 is in a second layer 120-2 above the support structure, and the portion of the channel material 102 between the first S/D region 104-1 and the second S/D region 104-2 (e.g., the channel portion 114) is in a third layer 120-3 above the support structure. As can be seen in FIG. 1, the third layer 120-3 is between the first layer 120-1 and the second layer 120-2. At least a portion of the gate stack 108, or a contact to the gate stack 108 (a gate contact, not specifically shown in FIG. 1), may be provided in the same layer as one of the S/D contacts 106, e.g., in the second layer 120-2, as shown in FIG. 1. In further embodiments of the transistor 100, the first S/D contact 106-1 may also be implemented in the second layer 120-2.

Transistors with backside S/D contacts as described herein, e.g., the transistor 100, may be implemented using any suitable transistor architecture, e.g., a planar or a non-planar architecture. FIGS. 2A and 2B illustrate one exemplary structure of an IC device 200 with a transistor having at least one backside contact implemented as a FinFET, providing a perspective view (FIG. 2A) and a cross-sectional side view (FIG. 2B), according to some embodiments of the present disclosure. The IC device 200 thus illustrates one exemplary implementation of the transistor 100. Accordingly, some of the reference numerals used in FIGS. 2A-2B are the same as those used in FIG. 1, indicating the same or analogous elements as those described with reference to FIG. 1, so that their descriptions are not repeated for FIGS. 2A-2B.

A FinFET refers to a transistor having a non-planar architecture where a fin, formed of one or more semiconductor materials, extends away from a base (where the term "base" refers to any suitable support structure over which a transistor may be built, e.g., a substrate).
A portion of the fin that is closest to the base may be enclosed by an insulator material. Such an insulator material, typically an oxide, is commonly referred to as a "shallow trench isolation" (STI), and the portion of the fin enclosed by the STI is typically referred to as a "sub-fin portion" or simply a "sub-fin." A gate stack that includes at least a layer of a gate electrode material and, optionally, a layer of a gate dielectric, may be provided over the top and sides of the remaining upper portion of the fin (i.e., the portion above and not enclosed by the STI), thereby wrapping around the uppermost portion of the fin. The portion of the fin over which the gate stack wraps around is typically referred to as a "channel portion" of the fin because this is where, during operation of the transistor, a conductive channel forms, and it is a part of an active region of the fin. A source region and a drain region are provided on the opposite sides of the gate stack, forming, respectively, a source and a drain terminal of the transistor. A FinFET may be implemented as a "tri-gate transistor," where the name "tri-gate" originates from the fact that, in use, such a transistor may form conducting channels on three "sides" of the fin. FinFETs potentially improve performance relative to single-gate transistors and double-gate transistors.

FIG. 2A provides a perspective view of the IC device/FinFET 200 with one front-side and one backside S/D contact, while FIG. 2B provides a cross-sectional side view of the FinFET 200. FIGS. 2A-2B illustrate the channel material 102, the S/D regions 104, and the gate stack 108 with the gate electrode 110 and the gate dielectric 112, described above. As shown in FIGS. 2A-2B, when the transistor 100 is implemented as a FinFET, the FinFET 200 may further include a base 202, a fin 204, and an STI material 206 enclosing a sub-fin portion of the fin 204. The S/D contacts 106 are not specifically shown in FIGS. 2A-2B in order to not clutter the drawings. The cross-sectional side view of FIG. 2B is a view in the y-z plane of the exemplary coordinate system x-y-z shown in FIG.
2A, with the cross-section taken through the fin 204 (e.g., along the plane shown as plane A-A in FIG. 2A). Another cross-section, in the x-z plane of the exemplary coordinate system shown in FIG. 2A, may be taken across the fin 204 (e.g., along the plane shown as plane B-B in FIGS. 2A and 2B).

As shown in FIGS. 2A-2B, the fin 204 may extend away from the base 202 and may be substantially perpendicular to the base 202. The fin 204 may include one or more semiconductor materials, e.g., a stack of semiconductor materials, so that an upper portion of the fin (namely, the portion of the fin 204 enclosed by the gate stack 108) may serve as the channel region of the FinFET 200. Therefore, the upper portion of the fin 204 may be formed of the channel material 102 as described above and may include the channel portion 114.

A sub-fin of the fin 204 may be formed of any of binary, ternary, or quaternary III-V compound semiconductors, i.e., alloys of two, three, or even four elements from groups III and V of the periodic table, including boron, aluminum, indium, gallium, nitrogen, arsenic, phosphorus, antimony, and bismuth. For some exemplary N-type transistor embodiments, the sub-fin portion of the fin 204 may be a III-V material having a band offset (e.g., a conduction-band offset for N-type devices) from the channel portion. Exemplary materials include, but are not limited to, GaAs, GaSb, GaAsSb, GaP, InAlAs, AlAs, AlP, AlSb, and AlGaAs. In some N-type transistor embodiments of the FinFET 200 in which the channel portion of the fin 204 (e.g., the channel portion 114) is InGaAs, the sub-fin may be GaAs, and at least a portion of the sub-fin may also be doped with impurities (e.g., P-type) to a higher impurity level than the channel portion. In alternate heterojunction embodiments, the sub-fin and the channel portion of the fin 204 may each be, or include, a group IV semiconductor (e.g., Si, Ge, or SiGe). The sub-fin of the fin 204 may be a first elemental semiconductor (e.g., Si or Ge) or a first SiGe alloy (e.g., having a wide bandgap).
For some exemplary P-type transistor embodiments, the sub-fin of the fin 204 may be a group IV material having a band offset (e.g., a valence-band offset for P-type devices) from the channel portion. Exemplary materials include, but are not limited to, Si or Si-rich SiGe. In some P-type transistor embodiments, the sub-fin of the fin 204 may be Si, and at least a portion of the sub-fin may also be doped with impurities (e.g., N-type) to a higher impurity level than the channel portion.

As further shown in FIGS. 2A-2B, the STI material 206 may enclose portions of the sides of the fin 204. The portion of the fin 204 enclosed by the STI material 206 forms the sub-fin. In various embodiments, the STI material 206 may be a low-k or high-k dielectric including, but not limited to, elements such as hafnium, silicon, oxygen, nitrogen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Further examples of dielectric materials that may be used in the STI material 206 may include, but are not limited to, silicon nitride, silicon oxide, silicon dioxide, silicon carbide, silicon nitride doped with carbon, silicon oxynitride, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, tantalum oxide, tantalum silicon oxide, lead scandium tantalum oxide, and lead zinc niobate.

The gate stack 108 may wrap around the upper portion of the fin 204 (the portion above the STI material 206), as shown in FIGS. 2A-2B, where the channel portion 114 of the fin 204 (shown shaded in FIGS. 2A-2B) corresponds to the portion of the fin 204 wrapped by the gate stack 108. In particular, the gate dielectric 112, if used, may wrap around the upper portion of the fin 204, and the gate electrode 110 may wrap around the gate dielectric 112.
The interface between the channel portion and the sub-fin portion of the fin 204 is located proximate to where the gate electrode 110 ends.

In some embodiments, the FinFET 200 may have a gate length GL (i.e., the distance between the first S/D region 104-1 and the second S/D region 104-2), a dimension measured along the fin 204 in the direction of the x-axis of the exemplary reference coordinate system x-y-z shown in FIGS. 2A-2B, that may, in some embodiments, be between about 5 and 40 nanometers, including all values and ranges therein (e.g., between about 22 and 35 nanometers, or between about 20 and 30 nanometers). The fin 204 may have a thickness, a dimension measured in the direction of the y-axis of the reference coordinate system x-y-z shown in FIGS. 2A-2B, that may, in some embodiments, be between about 5 and 30 nanometers, including all values and ranges therein (e.g., between about 7 and 20 nanometers, or between about 10 and 15 nanometers). The fin 204 may have a height, a dimension measured in the direction of the z-axis of the reference coordinate system x-y-z shown in FIGS. 2A-2B, that may, in some embodiments, be between about 30 and 350 nanometers, including all values and ranges therein (e.g., between about 30 and 200 nanometers, between about 75 and 250 nanometers, or between about 150 and 300 nanometers).

Although the fin 204 illustrated in FIGS. 2A-2B is shown as having a rectangular cross-section in the y-z plane of the reference coordinate system shown, the fin 204 may instead have a cross-section that is rounded or sloped at the "top" of the fin 204, and the gate stack 108 may conform to this rounded or sloped fin 204. In use, the FinFET 200 may form conducting channels on three "sides" of the channel portion of the fin 204, potentially improving performance relative to single-gate transistors (which may form a conducting channel on one "side" of the channel material or substrate) and double-gate transistors (which may form conducting channels on two "sides" of the channel material or substrate).

Although not specifically shown in FIG.
2A, the S/D contacts 106 may be electrically connected to the S/D regions 104 but extend in different vertical directions relative to the fin 204. For example, the first S/D contact 106-1 may be electrically connected to the first S/D region 104-1 and may extend from the first S/D region 104-1 toward the base 202, thereby forming a backside S/D contact for the FinFET 200, similar to what is described for FIG. 1. In such an implementation, the second S/D contact 106-2 may be electrically connected to the second S/D region 104-2 and may extend from the second S/D region 104-2 away from the base 202, thereby forming a front-side S/D contact for the FinFET 200, again similar to what is described for FIG. 1.

Although FIGS. 2A-2B illustrate a single FinFET 200, in some embodiments, a plurality of FinFETs may be arranged next to one another (with some spacing in between) along the fin 204. Moreover, in various further embodiments, the transistor 100 with one front-side S/D contact and one backside S/D contact may be implemented in many other transistor architectures besides the FinFET 200, such as planar FETs, nanowire FETs, or nanoribbon FETs.

An exemplary memory implementation

Embedded memory is critical to the performance of modern system-on-chip (SoC) technologies, in particular to enable 3D monolithic integration. An IC assembly with backside power supplies and front glass supports may include embedded memory, and, therefore, some memory considerations are described here.

Some memory devices may be considered "standalone" devices in that they are included in a chip that does not also include computing logic (where, as used herein, the terms "computing logic device," or simply "computing logic" or "logic device," refer to a device, e.g., a transistor, for performing computing/processing operations). Other memory devices may be included in a chip along with computing logic and may be referred to as "embedded" memory devices.
Supporting computing logic with embedded memory may improve performance by bringing the memory and the computing logic closer together and eliminating interfaces that increase latency. Various embodiments of the present disclosure relate to embedded memory arrays, as well as corresponding methods and devices.

Some embodiments of the present disclosure may refer to dynamic random access memory (DRAM) and, in particular, embedded DRAM (eDRAM), because this type of memory has been introduced in the past to address the limitations of density and standby power of large caches based on static random access memory (SRAM). However, embodiments of the present disclosure are equally applicable to other technologies in which memory cells may be implemented. Thus, in general, the memory cells described herein may be implemented as eDRAM cells, spin-transfer torque random access memory (STTRAM) cells, resistive random access memory (RRAM) cells, or any other non-volatile memory cells.

A memory cell, e.g., an eDRAM cell, may include a capacitor for storing a bit value, or a memory state (e.g., a logical "1" or "0"), of the cell, and an access transistor controlling access to the cell (e.g., access to write information to the cell or access to read information from the cell). Such a memory cell may be referred to as a "1T-1C memory cell," highlighting the fact that it uses one transistor (i.e., the "1T" in the term "1T-1C memory cell") and one capacitor (i.e., the "1C" in the term "1T-1C memory cell"). The capacitor of a 1T-1C memory cell may be coupled to one source/drain (S/D) terminal of the access transistor (e.g., to the source terminal of the access transistor), while the other S/D terminal of the access transistor may be coupled to a bit line (BL), and the gate terminal of the transistor may be coupled to a word line (WL).
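The 1T-1C arrangement just described (capacitor on one S/D terminal, bit line on the other S/D terminal, word line on the gate) can be summarized with a minimal behavioral sketch. This is a conceptual model only, not part of the disclosure; the class and method names are illustrative assumptions, and real DRAM behavior (charge leakage, destructive reads, sense amplification, refresh) is deliberately omitted.

```python
# Conceptual sketch only: behavioral model of a 1T-1C memory cell.
# Names are illustrative assumptions; physical effects are simplified away.

class OneT1CCell:
    """Minimal behavioral model of a 1T-1C memory cell."""

    def __init__(self):
        self.stored_bit = 0      # state held on the capacitor
        self.word_line = False   # gate of the access transistor

    def write(self, bit_line_value: int) -> None:
        """Assert the word line, then transfer the bit-line value to the capacitor."""
        self.word_line = True            # access transistor conducts
        self.stored_bit = bit_line_value  # bit line charges/discharges capacitor
        self.word_line = False           # capacitor isolated again

    def read(self) -> int:
        """Assert the word line and sense the capacitor state onto the bit line."""
        self.word_line = True
        value = self.stored_bit
        self.word_line = False
        return value


cell = OneT1CCell()
cell.write(1)
print(cell.read())  # 1
```

The model captures only the access discipline: the capacitor state is reachable solely while the word line is asserted, which is the property that lets a single transistor gate both reads and writes.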
Such memory cells can be fabricated with only a single access transistor, thus providing higher density and lower standby power than SRAM in the same process technology.

Various 1T-1C memory cells have traditionally been implemented with access transistors that are FEOL, logic-process-based transistors implemented in the uppermost layer of a semiconductor substrate. The inventors of the present disclosure have recognized that the use of conventional logic transistors presents several challenges when such transistors are used to form three-dimensional memory and logic devices.

One challenge relates to the location of the capacitors in such memory cells. Namely, it may be desirable to provide the capacitors in metal layers close to their corresponding access transistors. Because logic transistors are implemented as FEOL transistors located directly on the semiconductor substrate, the corresponding capacitors of the 1T-1C memory cells must then be embedded in the lower metal layers in order to be sufficiently close to the logic access transistors. Since the pitch of the lower metal layers scales aggressively at advanced technology nodes, embedding capacitors in the lower metal layers poses significant challenges to scaling 1T-1C based memories and to building 3D memory devices.

Another challenge is that, given the available surface area of a substrate, the number of FEOL transistors that can be formed in that area is limited, placing significant limitations on the density of memory cells or logic devices incorporating such transistors.

Implementing memory cell transistors (e.g., memory cell access transistors) as transistors with backside contacts may alleviate at least some of the challenges described above. For example, moving the access transistors of memory cells to the BEOL layers (enabled by the backside contact architecture) means that their corresponding capacitors can be provided in upper metal layers having a correspondingly thicker interlayer dielectric (ILD), to achieve higher capacitance, and a larger metal pitch, meaning that the integration challenges posed by incorporating the capacitors can be eased.

FIG. 3 provides a schematic illustration of a cross-sectional view of an exemplary memory cell 300 including a transistor with a backside contact, according to some embodiments of the present disclosure. FIG. 3 illustrates how transistor 100 may be used to form a 1T-1C memory cell. In particular, memory cell 300 illustrates all of the components of transistor 100 of FIG. 1, and further illustrates, schematically, that a capacitor 302 may be coupled to S/D region 104-1. Capacitor 302 may be any suitable capacitor for storing a bit value, or memory state, of memory cell 300 (e.g., a logical "1" or "0"), e.g., a metal-insulator-metal (MIM) capacitor, while transistor 100 may serve as an access transistor controlling access to memory cell 300 (e.g., access to write information to the cell or access to read information from the cell). By being coupled to S/D region 104-1, capacitor 302 is configured to store the memory state of memory cell 300. In some embodiments, capacitor 302 may be coupled to S/D region 104-1 via a storage node (not specifically shown in FIG. 3) coupled to S/D region 104-1. In some embodiments, S/D contact 106-1 may be considered the storage node.

Although not specifically shown in FIG. 3, memory cell 300 may further include a bit line, coupled to the one of S/D regions 104 to which capacitor 302 is not coupled (e.g., to S/D region 104-2 in the illustration of FIG. 3), for transferring the memory state. Such a bit line may be connected to sense amplifiers and bit line drivers, which may be provided, e.g., in memory peripheral circuitry associated with a memory array in which memory cell 300 may be included. Additionally, and also not specifically shown in FIG. 3, memory cell 300 may further include a word line coupled to the gate terminal of transistor 100, e.g., to gate stack 108, to provide a gate signal.
Transistor 100 may be configured to control the transfer of the memory state of memory cell 300 between the bit line and the storage node, or capacitor 302, in response to the gate signal.

Exemplary IC assembly with glass support on top

Transistors with backside contacts can enable three-dimensional integration of IC assemblies with backside power delivery and front glass supports. An exemplary IC assembly is shown in FIG. 4, which provides a block diagram of an IC assembly 400 with backside power delivery and front glass support, according to some embodiments of the present disclosure.

As shown in FIG. 4, IC assembly 400 may include a FEOL layer 420 and a BEOL layer 430 above FEOL layer 420. FEOL layer 420 may include a plurality of FEOL devices, e.g., FEOL transistors implemented as transistors with backside contacts. BEOL layer 430 may include at least a plurality of interconnects electrically coupled to (e.g., in conductive contact with at least portions of) one or more of the plurality of FEOL devices of FEOL layer 420. In some embodiments, BEOL layer 430 may further include BEOL devices, such as back-end transistors, at least some of which may be implemented as transistors with backside contacts.

In various embodiments, the FEOL transistors with backside contacts implemented in FEOL layer 420 may be part of computational logic and/or part of a memory array. For example, in some embodiments, some of the FEOL transistors of FEOL layer 420 may be access transistors of memory cells of a memory array, such as the 1T-1C memory cells described above. In such embodiments, the capacitors of such memory cells may then be implemented in BEOL layer 430. In other embodiments, some of the FEOL transistors of FEOL layer 420 may be access transistors of memory cell types other than 1T-1C.
In such embodiments, other portions of the memory cells (e.g., storage transistors) may be implemented in BEOL layer 430.

In another example, some of the FEOL transistors of FEOL layer 420 may be part of the computational logic of IC assembly 400. For example, such transistors may perform computational logic functions related to read/write operations on data stored in memory cells that may be implemented in BEOL layer 430. To that end, some of the FEOL transistors of FEOL layer 420 may be part of one or more input/output (I/O) ICs (e.g., memory peripheral circuits) configured to control (e.g., access (read/write), store, refresh) the memory cells implemented in BEOL layer 430. In some embodiments, some of the FEOL transistors of FEOL layer 420 may perform various operations on data stored in the memory cells implemented in IC assembly 400 (e.g., arithmetic and logic operations on data from one or more of the memory arrays implemented in IC assembly 400, and possibly also on data from external device chips).

The transistors with backside contacts described herein, either as standalone transistors (e.g., transistor 100) or included as parts of memory cells (e.g., memory cell 300), may be included in various regions/locations of IC assembly 400. For example, transistor 100 may be used as a logic transistor, e.g., in computational logic (e.g., included in FEOL layer 420). In another example, transistor 100 may be used as an access transistor, e.g., in one or more memory layers of BEOL layer 430. Providing backside contacts to transistors can ease the integration challenges posed by incorporating the storage nodes (e.g., storage capacitors) of memory cells, and makes it feasible to build 3D memory and logic devices with stacked architectures having many layers of memory and/or computational logic.

The description of FIG.
4 is intended to provide the general orientation and arrangement of the various layers with respect to one another, and, unless specified otherwise in the present disclosure, elements described for one of the layers shown in FIG. 4 may extend into, or reside within, one or more of the other layers. For example, although not specifically shown in FIG. 4, power and signal interconnects for the various IC components of IC assembly 400 may reside in any of the layers shown in FIG. 4. Further, although a single BEOL layer 430 is shown in FIG. 4, in various embodiments, BEOL layer 430 of IC assembly 400 may include multiple BEOL layers.

In some embodiments, BEOL layer 430 may include one or more memory layers that may form one or more memory arrays. Such memory arrays may include access transistors (e.g., transistors 100), storage nodes (e.g., storage capacitors or storage transistors), and word lines (e.g., row selectors) and bit lines (e.g., column selectors) making up the memory cells. In some embodiments, the memory layers of BEOL layer 430 may include TFT-type memory cells. FEOL layer 420, on the other hand, may include various logic layers, circuits, and devices (e.g., logic transistors) to drive and control a logic IC. For example, the logic devices of FEOL layer 420 may form memory peripheral circuits for controlling (e.g., accessing (reading/writing), storing, refreshing) the memory cells of BEOL layer 430. In some embodiments of IC assembly 400, the computational logic may be provided in FEOL layer 420 and in one or more lowest metal layers of BEOL layer 430, while one or more memory arrays may be provided in one or more upper layers of BEOL layer 430.
In other embodiments of IC assembly 400, the computational logic described with reference to FEOL layer 420 may be provided above FEOL layer 420 (e.g., in BEOL layer 430), between the memory layers of BEOL layer 430, or coupled to the memory layers of BEOL layer 430.

The various BEOL layers of BEOL layer 430 may include the metal layers of the metallization stack of IC assembly 400. The various metal layers of the BEOL may be used to interconnect the various inputs and outputs of the logic devices of the computational logic of FEOL layer 420 and/or of the memory cells of the memory layers of BEOL layer 430. Generally speaking, each of the metal layers of BEOL layer 430 may include a via portion and a trench/interconnect portion. The trench portion of a metal layer is configured to transfer signals and power along conductive (e.g., metal) lines (sometimes also referred to as "trenches") extending in the x-y plane (e.g., in the x or y direction), while the via portion of the metal layer is configured to transfer signals and power through conductive vias extending in the z direction, e.g., to either an upper or a lower adjacent metal layer. Accordingly, vias connect metal structures (e.g., metal lines or vias) of one metal layer to metal structures (e.g., metal lines or vias) of an adjacent metal layer. Although referred to as "metal" layers, the various layers of BEOL layer 430 may include only certain patterns of conductive metals, e.g., copper (Cu), aluminum (Al), tungsten (W), or cobalt (Co), or metal alloys, or, more generally, patterns of conductive material, formed in an insulating medium such as an ILD.
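The via/trench organization of a metallization stack described above can be sketched as a small data model (a hypothetical illustration; the layer names and coordinate convention are assumptions, not terms from the disclosure): each metal layer carries lines routing in the x-y plane, while vias cross each layer boundary in the z direction.

```python
from dataclasses import dataclass, field

@dataclass
class MetalLayer:
    """One metal layer of a (hypothetical) BEOL metallization stack:
    a trench portion (lines in the x-y plane) and a via portion
    (connections downward to the adjacent layer)."""
    name: str
    lines: list = field(default_factory=list)  # (x0, y0, x1, y1) trenches
    vias: list = field(default_factory=list)   # (x, y) vias to the layer below

# A three-layer stack, lowest metal layer first.
stack = [MetalLayer("M1"), MetalLayer("M2"), MetalLayer("M3")]
stack[1].lines.append((0, 0, 10, 0))  # an M2 trench running in x
stack[1].vias.append((0, 0))          # a via from M2 down to M1

def vias_needed(lower_index, upper_index):
    # A signal moving between non-adjacent metal layers must cross one
    # via portion per layer boundary traversed.
    return upper_index - lower_index

assert vias_needed(0, 2) == 2  # M1 to M3 crosses two via portions
```

The point of the model is only to make concrete the statement that trenches route laterally while vias provide the vertical hops between adjacent layers.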
The insulating medium may include any suitable ILD material, such as silicon oxide, carbon-doped silicon oxide, silicon carbide, silicon nitride, aluminum oxide, and/or silicon oxynitride.

FEOL layer 420 may originally be provided over a semiconductor support structure, e.g., a substrate, a die, a wafer, or a chip, which may include any of the materials, or combinations of materials, described with reference to the support structures of the embodiments of FIGS. 1-3. However, such a semiconductor support structure may later be removed to expose the backside of the FEOL devices of FEOL layer 420 so that a backside power delivery structure 410 may be provided on the backside of FEOL layer 420 (thus, BEOL layer 430 is provided on the frontside of FEOL layer 420, while backside power delivery structure 410 is provided on the backside of FEOL layer 420).

As also shown in FIG. 4, IC assembly 400 may further include a bonding interface 440 and a glass support structure 450, where bonding interface 440 may be the interface at which the top surface of BEOL layer 430 is bonded to a surface of glass support structure 450. Thus, in IC assembly 400, FEOL layer 420 is between backside power delivery structure 410 and BEOL layer 430, while BEOL layer 430 is between FEOL layer 420 and glass support structure 450.

FIGS. 5-8 provide schematic illustrations of exemplary implementations of IC assembly 400, according to various embodiments of the present disclosure.

FIG. 5 provides a schematic illustration of an IC assembly 500 with backside power delivery and front glass support, according to some embodiments of the present disclosure. The portions of IC assembly 400 shown in FIG. 4, such as backside power delivery structure 410 and FEOL layer 420, are labeled in IC assembly 500 of FIG. 5, with IC assembly 500 further illustrating an exemplary implementation of each of these portions.

As shown in FIG.
5, backside power delivery structure 410 may include a plurality of power interconnects 512 arranged in one or more layers (three such layers, separated by horizontal lines, are shown in FIG. 5, although such separation may not be present in other embodiments, or other embodiments may include a different number of layers and/or different arrangements of power interconnects 512 than shown in FIG. 5). Power interconnects 512 may include any suitable combination of vias 512-1 and lines 512-2, some of which are labeled in FIG. 5 as examples, while others are not labeled in order not to clutter the drawing. Power interconnects 512 may include any suitable conductive material, e.g., any of the conductive metals or metal alloys described above. Portions of the various power interconnects 512 may be enclosed by an insulator material 514, which may include any of the ILD materials described above.

As further shown in FIG. 5, FEOL layer 420 may include a plurality of FEOL devices 526. One or more of FEOL devices 526 may be transistors with backside contacts as described above, e.g., transistors 100. In various embodiments, FEOL devices 526 may include one or more of fin transistors, nanoribbon transistors, and nanowire transistors with one or more backside contacts, as known in the art and as described herein. One or more of power interconnects 512 may then be coupled to one or more S/D regions of such transistors with backside contacts (i.e., one or more of power interconnects 512 may form backside contacts to one or more S/D regions of the transistors of FEOL devices 526).

Also as shown in FIG. 5, BEOL layer 430 may include a plurality of BEOL interconnects 532, which may include any suitable conductive material, e.g., any of the conductive metals or metal alloys described above. BEOL interconnects 532 may include any suitable combination of vias 532-1 and lines 532-2, some of which are labeled in FIG. 5 as examples, while others are not labeled in order not to clutter the drawing.
One or more of BEOL interconnects 532 may be electrically coupled to (e.g., in conductive contact with at least portions of) one or more of FEOL devices 526. At least portions of BEOL interconnects 532 may be enclosed by an insulator material 534, which may include any of the ILD materials described above. In some embodiments, an insulator material such as insulator material 534 may also at least partially enclose portions of FEOL devices 526. FIG. 5 also schematically illustrates that BEOL layer 430 may include a layer of memory cells 536. Memory cells 536 may be any of the memory cells described above, e.g., TFT-type memory cells, e.g., memory cells 300. In further embodiments, IC assembly 500 may include multiple layers of memory cells 536.

In some embodiments, side cross-sectional views of BEOL interconnects 532 and power interconnects 512 may exhibit different characteristics due to the fact that BEOL interconnects 532 and power interconnects 512 are formed on different sides of FEOL layer 420. In particular, in such embodiments, at least some of BEOL interconnects 532 and at least some of power interconnects 512 may have trapezoidal cross-sections in a plane perpendicular to FEOL layer 420. Such a trapezoid may include two parallel sides, one short side and one long side (i.e., the length of the long side is greater than the length of the short side). Characteristic of the fact that BEOL interconnects 532 and power interconnects 512 are formed on different sides of FEOL layer 420, in the trapezoids of BEOL interconnects 532 the long side is closer to glass support structure 450 than the short side and the short side is closer to FEOL layer 420 than the long side, while in the trapezoids of power interconnects 512 the short side is closer to both glass support structure 450 and FEOL layer 420 than the long side.

FIG.
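The trapezoid-orientation distinction just described effectively serves as a fingerprint for which side of the FEOL layer an interconnect was patterned from. A minimal sketch of that inference (a hypothetical helper; the function name and z-coordinate convention are assumptions, not terms from the disclosure):

```python
def formed_side(short_side_z, long_side_z, glass_z):
    """Infer which side of the FEOL layer an interconnect was patterned
    from, based on which parallel side of its trapezoidal cross-section
    faces the glass support structure. All z coordinates are taken to
    increase toward the glass support."""
    if abs(long_side_z - glass_z) < abs(short_side_z - glass_z):
        # Long side faces the glass: patterned from the frontside.
        return "frontside (BEOL interconnect)"
    # Short side faces the glass: patterned from the backside.
    return "backside (power interconnect)"

# BEOL interconnect: long side nearer the glass, short side nearer FEOL.
assert formed_side(short_side_z=1.0, long_side_z=2.0, glass_z=5.0) == \
    "frontside (BEOL interconnect)"
# Power interconnect: orientation flipped relative to the BEOL case.
assert formed_side(short_side_z=2.0, long_side_z=1.0, glass_z=5.0) == \
    "backside (power interconnect)"
```

This mirrors how the cross-sectional trapezoid orientations can reveal, in a finished assembly, that the two interconnect populations were fabricated from opposite sides of the FEOL layer.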
5 further illustrates a bonding interface material 540 that may be used to implement bonding interface 440, bonding the top surface of BEOL layer 430 to a non-semiconductor support structure 550 that may be used to implement glass support structure 450 as described above. In some embodiments, bonding interface material 540 may include an oxide, e.g., silicon oxide. As shown in FIG. 5, in some embodiments, some portions (e.g., one face) of bonding interface material 540 may be in contact with one or more portions of glass support structure 450, while other portions (e.g., the opposite face) of bonding interface material 540 may be in contact with one or more portions of BEOL layer 430. In some embodiments, bonding interface material 540 may have a thickness between about 1 and 100 nanometers, e.g., about 1-50 nanometers, or about 1-20 nanometers.

In some embodiments, non-semiconductor support structure 550 may include a glass material. Examples of glass materials may include silicon oxide materials, optionally doped with elements or compounds such as boron, carbon, aluminum, hafnium oxide, etc., with doping concentrations of, e.g., about 0.01% to 10%. In other embodiments, non-semiconductor support structure 550 may include other solid materials having dielectric constants lower than that of Si, e.g., lower than about 10.5. In some embodiments, non-semiconductor support structure 550 may include mica. The thickness of glass support structure 450 may be any value that provides mechanical stability for IC assembly 400 and, possibly, for various devices that may be provided thereon (some such devices are shown in FIG. 6 and described below). In some embodiments, glass support structure 450 may have a thickness of about 0.2 micrometers (microns) to 100 microns, e.g., about 0.5-5 microns, or about 1-3 microns.

FIG.
6 provides a schematic illustration of an IC assembly 600 with backside power delivery and a front glass support with thin-film devices, according to some embodiments of the present disclosure. The portions of IC assembly 400 shown in FIG. 4, such as backside power delivery structure 410 and FEOL layer 420, are labeled in IC assembly 600 of FIG. 6, with IC assembly 600 further illustrating an exemplary implementation of each of these portions. In particular, IC assembly 600 may be implemented as IC assembly 500 described above (in FIG. 6, IC assembly 600 includes some of the same elements included in IC assembly 500 shown in FIG. 5, shown using the same patterns), except that IC assembly 600 also includes one or more thin-film devices 556 provided on glass support structure 450. In the interest of brevity, the detailed description of IC assembly 500 is not repeated for IC assembly 600, and only the differences are described. Furthermore, in order not to clutter the drawing of FIG. 6, power interconnects 512-1 and 512-2 and BEOL interconnects 532-1 and 532-2 are not specifically labeled in FIG. 6 as they are labeled in FIG. 5.

In various embodiments, thin-film devices 556 may be two-terminal devices, such as thin-film resistors, thin-film capacitors, and thin-film inductors, configured to reduce parasitic effects within IC assembly 600. A first terminal of such a two-terminal thin-film device 556 may be electrically coupled to (e.g., in conductive contact with) a first BEOL interconnect of the plurality of BEOL interconnects 532, while a second terminal may be electrically coupled to (e.g., in conductive contact with) a second BEOL interconnect of the plurality of BEOL interconnects 532. An example of such two-terminal coupling is labeled in FIG. 6 for one of the thin-film devices 556 (although three different thin-film devices 556 are shown in the example of FIG. 6):
The first terminal of the thin-film device 556 shown on the right side of IC assembly 600 is coupled to a first BEOL interconnect 612-1 of the plurality of BEOL interconnects 532 (the coupling shown within dashed contour 652-1 in FIG. 6), while the second terminal of the thin-film device 556 shown on the right side of IC assembly 600 is coupled to a second BEOL interconnect 612-2 of the plurality of BEOL interconnects 532 (the coupling shown within dashed contour 652-2 in FIG. 6). As shown in FIG. 6, in some embodiments, portions of thin-film devices 556 may extend through bonding interface 440 to make electrical contact with respective portions of BEOL interconnects 532.

FIG. 7 provides a schematic illustration of an IC assembly 700 with backside power delivery and a front glass support with an active layer, according to some embodiments of the present disclosure. The portions of IC assembly 400 shown in FIG. 4, such as backside power delivery structure 410 and FEOL layer 420, are labeled in IC assembly 700 of FIG. 7, with IC assembly 700 further illustrating an exemplary implementation of each of these portions. In particular, IC assembly 700 may be implemented as IC assembly 500 described above (in FIG. 7, IC assembly 700 includes some of the same elements included in IC assembly 500 shown in FIG. 5, shown using the same patterns), except that IC assembly 700 also includes an active layer 750 between glass support structure 450 and BEOL layer 430. In the interest of brevity, the detailed description of IC assembly 500 is not repeated for IC assembly 700, and only the differences are described. Furthermore, in order not to clutter the drawing of FIG. 7, power interconnects 512-1 and 512-2 and BEOL interconnects 532-1 and 532-2 are not specifically labeled in FIG. 7 as they are labeled in FIG. 5.

As shown in FIG. 7, active layer 750 may be provided between glass support structure 450 and bonding interface 440, which, in turn, may be provided between active layer 750 and BEOL layer 430.
In some embodiments, some portions of bonding interface 440 may be in contact with one or more portions of active layer 750, while other portions of bonding interface 440 may be in contact with one or more portions of BEOL layer 430. In such embodiments, bonding interface 440 may be, e.g., a hybrid bonding interface, as described below with reference to FIG. 10.

As shown in FIG. 7, active layer 750 may include a plurality of interconnects 752 arranged in one or more layers (two such layers, separated by a horizontal line, are shown in FIG. 7, although such separation may not be present in other embodiments, or other embodiments may include a different number of layers and/or different arrangements of interconnects 752 than shown in FIG. 7). Interconnects 752 may include any suitable combination of vias 752-1 and lines 752-2, some of which are labeled in FIG. 7 as examples, while others are not labeled in order not to clutter the drawing. Interconnects 752 may include any suitable conductive material, e.g., any of the conductive metals or metal alloys described above. Portions of the various interconnects 752 may be enclosed by an insulator material 754, which may include any of the ILD materials described above. One or more of interconnects 752 of active layer 750 may be electrically coupled to (e.g., in conductive contact with at least portions of) one or more of the plurality of BEOL interconnects 532.

As further shown in FIG. 7, IC assembly 700 may further include a plurality of devices 756, such as transistors or memory cells. Although FIG. 7 illustrates devices 756 as being part of glass support structure 450, in other embodiments of IC assembly 700, devices 756 may be part of active layer 750. In some embodiments, one or more of devices 756 may be transistors described above, e.g., transistors 100. In some embodiments, one or more of devices 756 may be memory cells, e.g., memory cells 300, described above, or any other embedded memory cells.
One or more of interconnects 752 may, in turn, be coupled to one or more portions of devices 756 and to one or more of BEOL interconnects 532.

In some embodiments, side cross-sectional views of interconnects 752 may exhibit different characteristics due to the fact that interconnects 752 and BEOL interconnects 532 are formed on different sides of bonding interface 440. In particular, in such embodiments, the cross-sections of at least some of interconnects 752 in a plane perpendicular to FEOL layer 420 may be trapezoids with one short side and one long side. Such a trapezoid may include two parallel sides, one short side and one long side (i.e., the length of the long side is greater than the length of the short side). Characteristic of the fact that interconnects 752 and BEOL interconnects 532 are formed on different sides of bonding interface 440, in the trapezoids of interconnects 752 the short side is closer to glass support structure 450 than the long side and the long side is closer to bonding interface 440 and FEOL layer 420 than the short side, while in the trapezoids of BEOL interconnects 532 the long side is closer to both glass support structure 450 and bonding interface 440 than the short side.

FIG. 8 provides a schematic illustration of an IC assembly 800 with backside power delivery and a front glass support with thin-film devices and an active layer, according to some embodiments of the present disclosure. The portions of IC assembly 400 shown in FIG. 4, such as backside power delivery structure 410 and FEOL layer 420, are labeled in IC assembly 800 of FIG. 8, with IC assembly 800 further illustrating an exemplary implementation of each of these portions. In particular, IC assembly 800 may be implemented as IC assembly 600 described above, having one or more thin-film devices 556 provided on glass support structure 450, and further including the active layer 750 of IC assembly 700 described above. This is shown in FIG.
8, in which IC assembly 800 includes some of the same elements included in IC assembly 600 shown in FIG. 6 and in IC assembly 700 shown in FIG. 7, shown using the same patterns. The descriptions of an IC assembly having one or more thin-film devices 556 provided on glass support structure 450 and of an IC assembly having active layer 750, provided with reference to FIGS. 6 and 7, are applicable to IC assembly 800 of FIG. 8 and, therefore, in the interest of brevity, are not repeated.

Exemplary manufacturing methods

The IC assemblies with backside power delivery and front glass support described herein may be manufactured using any suitable technique, e.g., subtractive, additive, damascene, dual damascene, etc. Some such techniques may include suitable deposition and patterning techniques. As used herein, "patterning" may refer to forming a pattern in one or more materials using any suitable technique (e.g., applying a resist, patterning the resist using lithography, and then etching the one or more materials using dry etching, wet etching, or any suitable etching technique).

FIGS. 9A-9D illustrate a first exemplary method of forming an IC assembly with backside power delivery and front glass support, according to some embodiments of the present disclosure. FIGS. 10A-10D illustrate a second exemplary method of forming an IC assembly with backside power delivery and front glass support, according to some embodiments of the present disclosure. The IC structures shown in FIGS. 9 and 10 include some of the same elements included in the IC assemblies shown in FIGS. 5-8, shown using the same patterns. In the interest of brevity, the detailed descriptions of those elements are applicable to the IC structures shown in FIGS. 9 and 10 and are not repeated.

FIG. 9A shows an IC structure 900A of a first fabrication method, which involves a semiconductor support structure 902 that may include any of the support structures described with reference to FIGS. 1-3.
The first fabrication method may begin by forming a plurality of FEOL devices 526 over semiconductor support structure 902, then forming BEOL layer 430 over FEOL layer 420 with FEOL devices 526, and then providing a layer of bonding interface material 540 over the top surface of BEOL layer 430, as shown in FIG. 9A.

FIG. 9B shows an IC structure 900B, illustrating that the first fabrication method may then flip IC structure 900A of FIG. 9A and bring bonding interface material 540 into contact with glass support structure 450, thereby performing bonding between IC structure 900A and glass support structure 450. In general, the bonding described herein may be insulator-to-insulator bonding, e.g., oxide-to-oxide bonding, in which the bonding interface materials of the structures being bonded are brought into contact and the structures are pressed together while the assembly is heated to a suitable temperature (e.g., a moderately elevated temperature of, e.g., about 50-200°C), optionally while applying a suitable pressure, for a period of time. In some embodiments, bonding interface material 540 may be an adhesive material that ensures attachment of IC structure 900A and glass support structure 450 to one another, as shown in FIGS. 9B and 9C. In some embodiments, bonding interface material 540 may be an etch-stop material. In some embodiments, bonding interface material 540 may both be an etch-stop material and have suitable adhesive properties to ensure attachment of the IC structures described herein to one another. In some embodiments, a deliberately added adhesive bonding material may not be used, in which case the layers labeled "540" or "440" in these drawings represent the bonding interface resulting from bonding the respective IC structures together. Even when the particular insulator materials of the IC structures bonded together are the same (in which case the bonding interface may appear as a seam or a thin layer within what would otherwise appear as a bulk insulator (e.g., bulk oxide) layer), the bonding interface may still be recognizable as a seam or a thin layer in the IC assemblies described herein, e.g., using selected area electron diffraction (SED).
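The ordering constraints of the fabrication sequence above can be made explicit with a small checked process flow (a hypothetical summary only; the step labels are illustrative, not terms from the disclosure). In particular, the glass support must carry the assembly before the original semiconductor support is ground away, and the FEOL backside must be exposed before backside power delivery can be provided.

```python
# Illustrative labels for the first fabrication method's steps.
flow = [
    "form_feol_devices",         # FEOL devices 526 over support structure 902
    "form_beol",                 # BEOL layer 430 over FEOL layer 420
    "deposit_bonding_material",  # bonding interface material 540 on BEOL top
    "flip_and_bond_to_glass",    # bond to glass support structure 450
    "remove_support_structure",  # polish/grind away structure 902
    "add_backside_power",        # backside power delivery structure 410
]

def ordered(flow, earlier, later):
    # Check that one step appears before another in the flow.
    return flow.index(earlier) < flow.index(later)

# Bonding must precede substrate removal; removal must precede
# providing the backside power delivery structure.
assert ordered(flow, "flip_and_bond_to_glass", "remove_support_structure")
assert ordered(flow, "remove_support_structure", "add_backside_power")
```

Expressing the flow this way highlights why the glass support is attached first: it becomes the mechanical carrier once the semiconductor support structure is removed.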
As used herein, references to "bonding interface material 540" or "bonding interface 440" are, unless specified otherwise, also applicable to embodiments in which no deliberately added adhesive material is used to bond the IC structures described herein together. FIG. 9C shows an IC structure 900C, illustrating that, after the bonding of IC structure 900A and glass support structure 450 has been performed, the first fabrication method may proceed with removing semiconductor support structure 902 (e.g., using a suitable polishing or grinding process) to expose the backside of FEOL devices 526 of FEOL layer 420. FIG. 9D shows an IC structure 900D, illustrating that, after the backside of FEOL devices 526 of FEOL layer 420 has been exposed, the first fabrication method may proceed with providing backside power delivery structure 410 as described above.

FIG. 10A shows an IC structure 1000A, illustrating that the second fabrication method may begin by forming a plurality of FEOL devices 526 over a semiconductor support structure 902, which may include any of the support structures described with reference to FIGS. 1-3, and then forming BEOL layer 430 over FEOL layer 420 with FEOL devices 526. FIG. 10B shows an IC structure 1000B, illustrating that the second fabrication method may then flip IC structure 1000A of FIG. 10A and bring the top surface of IC structure 1000A into contact with the top surface of an active layer 750 provided over glass support structure 450, thereby performing hybrid bonding between IC structure 1000A and glass support structure 450. The descriptions of bonding provided for FIGS. 9A-9D are applicable to the bonding of IC structure 1000A to glass support structure 450 and, therefore, in the interest of brevity, are not repeated. FIG. 10C shows an IC structure 1000C, illustrating that, after the bonding between BEOL layer 430 of IC structure 1000A and active layer 750 has been performed, a bonding interface 440 may be formed between active layer 750 and BEOL layer 430. FIG.
10D later shows that the second fabrication method proceeds to remove semiconductor support structure 902 (eg, using a suitable polishing or grinding process) to expose the back side of FEOL device 526 of FEOL 420; It is then shown that a backside power supply structure 410 as described above can be provided. exemplary electronic deviceThe IC assembly with backside power supply and front glass support disclosed herein can be included in any suitable electronic device. 11-13 illustrate various examples of devices and components that may include one or more IC assemblies with backside power supplies and front glass supports as disclosed herein.FIG. 11 is a side cross-sectional view of an exemplary IC package 2200 that can include one or more IC assemblies with back side power supplies and front side glass supports according to any of the embodiments disclosed herein. In some embodiments, IC package 2200 may be a system-in-package (SiP).The package substrate 2252 can be formed of a dielectric material (eg, ceramic, build-up film, epoxy film with filler particles, etc.) and can be formed between surfaces 2272 and 2274, or between different locations on surface 2272. , and/or may have conductive paths extending through the dielectric material between different locations on surface 2274 .Package substrate 2252 may include conductive contacts 2263 coupled to conductive paths 2262 through package substrate 2252 so that circuitry within die 2256 and/or interposer 2257 may be connected to various conductive contacts 2264 (or not shown). (to other devices included in the package substrate 2252).IC package 2200 may include interposer 2257 coupled to package substrate 2252 via conductive contacts 2261 of interposer 2257 , first level interconnect 2265 , and conductive contacts 2263 of package substrate 2252 . The first level interconnects 2265 shown in FIG. 13 are solder bumps, but any suitable first level interconnect 2265 can be used. 
In some embodiments, interposer 2257 may not be included in IC package 2200; rather, die 2256 may be coupled directly to conductive contacts 2263 on surface 2272 by first level interconnects 2265. IC package 2200 may include one or more dies 2256 coupled to interposer 2257 via conductive contacts 2254 of dies 2256, first level interconnects 2258, and conductive contacts 2260 of interposer 2257. Conductive contacts 2260 may be coupled to conductive paths (not shown) through interposer 2257, allowing circuitry within dies 2256 to be electrically connected to various ones of conductive contacts 2261 (or to other devices included in interposer 2257, not shown). The first level interconnects 2258 shown in FIG. 11 are solder bumps, but any suitable first level interconnects 2258 may be used. As used herein, a "conductive contact" may refer to a portion of conductive material (e.g., metal) serving as an interface between different components; conductive contacts may be recessed into a surface of a component, may be coplanar with the surface, or may extend away from the surface, and may take any suitable form (e.g., conductive pads or sockets). In some embodiments, an underfill material 2266 may be disposed between package substrate 2252 and interposer 2257 around first level interconnects 2265, and a mold compound 2268 may be disposed around dies 2256 and interposer 2257 and in contact with package substrate 2252. In some embodiments, underfill material 2266 may be the same as mold compound 2268. Exemplary materials that may be used for underfill material 2266 and mold compound 2268 include suitable epoxy molding compounds. Second level interconnects 2270 may be coupled to conductive contacts 2264. The second level interconnects 2270 shown in FIG. 11 are solder balls (e.g., for a ball grid array arrangement), but any suitable second level interconnects 2270 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). Second level interconnects 2270 may be used to couple IC package 2200 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another IC package, as known in the art and as described below with reference to FIG. 12. Dies 2256 may take the form of any of the embodiments of IC assemblies with backside power delivery and front glass support described herein. In embodiments in which IC package 2200 includes multiple dies 2256, IC package 2200 may be referred to as a multi-chip package (MCP). Dies 2256 may include circuitry to perform any desired functionality. For example, one or more of dies 2256 may be logic dies (e.g., silicon-based dies), and one or more of dies 2256 may be memory dies (e.g., high bandwidth memory) including embedded logic and memory devices as described herein. In some embodiments, any of dies 2256 may include one or more IC assemblies with backside power delivery and front glass support, e.g., as described above; in some embodiments, at least some of dies 2256 may not include any IC assemblies with backside power delivery and front glass support. IC package 2200 illustrated in FIG. 11 may be a flip-chip package, although other package architectures may be used. For example, IC package 2200 may be a ball grid array (BGA) package, such as an embedded wafer level ball grid array (eWLB) package. In another example, IC package 2200 may be a wafer level chip scale package (WLCSP) or a panel fan-out (FO) package. Although two dies 2256 are illustrated in IC package 2200 of FIG. 11, IC package 2200 may include any desired number of dies 2256. IC package 2200 may include additional passive components, such as surface-mount resistors, capacitors, and inductors, disposed on first face 2272 or second face 2274 of package substrate 2252, or on either face of interposer 2257.
More generally, IC package 2200 may include any other active or passive components known in the art.

FIG. 12 is a side cross-sectional view of an IC device assembly 2300 that may include one or more components having one or more IC assemblies with backside power delivery and front glass support according to any of the embodiments disclosed herein. IC device assembly 2300 may include a number of components disposed on a circuit board 2302 (which may be, e.g., a motherboard). IC device assembly 2300 includes components disposed on a first face 2340 of circuit board 2302 and an opposing second face 2342 of circuit board 2302; in general, components may be disposed on one or both faces 2340 and 2342. In particular, any suitable ones of the components of IC device assembly 2300 may include any of the one or more IC assemblies with backside power delivery and front glass support in accordance with any of the embodiments disclosed herein. For example, any of the IC packages described below with reference to IC device assembly 2300 may take the form of IC package 2200 described above with reference to FIG. 11 (e.g., may include one or more IC assemblies with backside power delivery and front glass support). In some embodiments, circuit board 2302 may be a PCB including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to circuit board 2302. In other embodiments, circuit board 2302 may be a non-PCB substrate. IC device assembly 2300 illustrated in FIG. 12 includes a package-on-interposer structure 2336 coupled to first face 2340 of circuit board 2302 by coupling components 2316. Coupling components 2316 may electrically and mechanically couple package-on-interposer structure 2336 to circuit board 2302, and may include solder balls (e.g., as shown in FIG. 12), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure. Package-on-interposer structure 2336 may include an IC package 2320 coupled to interposer 2304 by coupling components 2318. Coupling components 2318 may take any suitable form for the application, such as the forms described above with reference to coupling components 2316. IC package 2320 includes one or more IC assemblies with backside power delivery and front glass support as described herein. Although a single IC package 2320 is shown in FIG. 12, multiple IC packages may be coupled to interposer 2304; indeed, additional interposers may be coupled to interposer 2304. Interposer 2304 may provide an intervening substrate used to bridge circuit board 2302 and IC package 2320. Generally, interposer 2304 may spread connections to a wider pitch or reroute connections to different connections. For example, interposer 2304 may couple IC package 2320 (e.g., a die) to a BGA of coupling components 2316 for coupling to circuit board 2302. In the embodiment illustrated in FIG. 12, IC package 2320 and circuit board 2302 are attached to opposing sides of interposer 2304; in other embodiments, IC package 2320 and circuit board 2302 may be attached to the same side of interposer 2304. In some embodiments, three or more components may be interconnected by way of interposer 2304. Interposer 2304 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some implementations, interposer 2304 may be formed of alternative rigid or flexible materials, which may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. Interposer 2304 may include metal interconnects 2308 and vias 2310, including but not limited to through-silicon vias (TSVs) 2306.
Interposer 2304 may further include embedded devices 2314, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) protection devices, and memory devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on interposer 2304. Package-on-interposer structure 2336 may take the form of any of the package-on-interposer structures known in the art. IC device assembly 2300 may include an IC package 2324 coupled to first face 2340 of circuit board 2302 by coupling components 2322. Coupling components 2322 may take the form of any of the embodiments described above with reference to coupling components 2316, and IC package 2324 may take the form of any of the embodiments described above with reference to IC package 2320. IC device assembly 2300 illustrated in FIG. 12 includes a package-on-package structure 2334 coupled to second face 2342 of circuit board 2302 by coupling components 2328. Package-on-package structure 2334 may include an IC package 2326 and an IC package 2332 coupled together by coupling components 2330 such that IC package 2326 is disposed between circuit board 2302 and IC package 2332. Coupling components 2328 and 2330 may take the form of any of the embodiments of coupling components 2316 described above, and IC packages 2326 and 2332 may take the form of any of the embodiments of IC package 2320 described above. Package-on-package structure 2334 may be configured in accordance with any of the package-on-package structures known in the art.

FIG. 13 is a block diagram of an exemplary computing device 2400 that may include one or more components having one or more IC assemblies with backside power delivery and front glass support in accordance with any of the embodiments disclosed herein. Any of the components of computing device 2400 may include IC package 2200 described with reference to FIG. 11. Any of the components of computing device 2400 may include IC device assembly 2300 described with reference to FIG. 12. Although a number of components are illustrated in FIG. 13 as being included in computing device 2400, any one or more of these components may be omitted or duplicated as suitable for the application. In some embodiments, some or all of the components included in computing device 2400 may be attached to one or more motherboards. In some embodiments, some or all of these components may be fabricated on a single system-on-chip (SoC) die. Additionally, in various embodiments, computing device 2400 may not include one or more of the components illustrated in FIG. 13, but may include interface circuitry for coupling to the one or more components. For example, computing device 2400 may not include a display device 2406, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which display device 2406 may be coupled. In another set of examples, computing device 2400 may not include an audio input device 2418 or an audio output device 2408, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which audio input device 2418 or audio output device 2408 may be coupled. Computing device 2400 may include a processing device 2402 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. Processing device 2402 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices. Computing device 2400 may include a memory 2404, which may itself include one or more memory devices such as volatile memory (e.g., DRAM), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid-state memory, and/or a hard drive. In some embodiments, memory 2404 may include memory that shares a die with processing device 2402. This memory may be used as cache memory and may include one or more IC assemblies with backside power delivery and front glass support as described herein. In some embodiments, computing device 2400 may include a communication chip 2412 (e.g., one or more communication chips). For example, communication chip 2412 may be configured for managing wireless communications for the transfer of data to and from computing device 2400. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. Communication chip 2412 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., the IEEE 802.16-2005 amendment), and the Long-Term Evolution (LTE) project, along with any amendments, updates, and/or revisions (e.g., the advanced LTE project, the ultra-mobile broadband (UMB) project (also referred to as "3GPP2"), etc.).
Broadband Wireless Access (BWA) networks compatible with the IEEE 802.16 standards are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. Communication chip 2412 may operate in accordance with a Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. Communication chip 2412 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). Communication chip 2412 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In other embodiments, communication chip 2412 may operate in accordance with other wireless protocols. Computing device 2400 may include an antenna 2422 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions). In some embodiments, communication chip 2412 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., Ethernet). As noted above, communication chip 2412 may include multiple communication chips.
For example, a first communication chip 2412 may be dedicated to shorter-range wireless communications such as Wi-Fi® or Bluetooth®, and a second communication chip 2412 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX®, LTE, EV-DO, and others. In some embodiments, a first communication chip 2412 may be dedicated to wireless communications, and a second communication chip 2412 may be dedicated to wired communications. Computing device 2400 may include battery/power circuitry 2414. Battery/power circuitry 2414 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of computing device 2400 to an energy source separate from computing device 2400 (e.g., AC line power). Computing device 2400 may include a display device 2406 (or corresponding interface circuitry, as described above). Display device 2406 may include any visual indicator, such as, for example, a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display. Computing device 2400 may include an audio output device 2408 (or corresponding interface circuitry, as described above). Audio output device 2408 may include any device that generates an audible indicator, such as, for example, speakers, headsets, or earbuds. Computing device 2400 may include an audio input device 2418 (or corresponding interface circuitry, as described above). Audio input device 2418 may include any device that generates a signal representative of a sound, such as a microphone, a microphone array, or a digital instrument (e.g., an instrument having a musical instrument digital interface (MIDI) output). Computing device 2400 may include a GPS device 2416 (or corresponding interface circuitry, as described above).
GPS device 2416 may be in communication with a satellite-based system and may receive a location of computing device 2400, as known in the art. Computing device 2400 may include other output devices 2410 (or corresponding interface circuitry, as described above). Examples of other output devices 2410 may include audio codecs, video codecs, printers, wired or wireless transmitters for providing information to other devices, or additional storage devices. Computing device 2400 may include other input devices 2420 (or corresponding interface circuitry, as described above). Examples of other input devices 2420 may include accelerometers, gyroscopes, compasses, image capture devices, keyboards, cursor control devices such as mice, styluses, touchpads, bar code readers, quick response (QR) code readers, any sensors, or radio frequency identification (RFID) readers. Computing device 2400 may have any desired form factor, such as a handheld or mobile computing device (e.g., a cell phone, a smartphone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra-mobile personal computer, etc.), a desktop computing device, a server device or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computing device. In some embodiments, computing device 2400 may be any other electronic device that processes data.

Select examples

The following paragraphs provide various examples of the embodiments disclosed herein.

Example 1 provides an IC assembly that includes a FEOL layer having a plurality of FEOL devices; a backside power delivery structure having a plurality of power interconnects electrically coupled to (e.g., in conductive contact with at least portions of) various ones of the plurality of FEOL devices; a BEOL layer having a plurality of BEOL interconnects electrically coupled to (e.g., in conductive contact with at least portions of) one or more of the plurality of FEOL devices; and a glass support structure (e.g., at least a portion of a glass wafer), where the FEOL layer is between the backside power delivery structure and the BEOL layer, and the BEOL layer is between the FEOL layer and the glass support structure.

Example 2 provides an IC assembly according to Example 1, where the plurality of BEOL interconnects includes a first BEOL interconnect and a second BEOL interconnect (e.g., first and second metal lines), and the glass support structure includes a two-terminal thin-film device having a first terminal electrically coupled to (e.g., in conductive contact with) the first BEOL interconnect and a second terminal electrically coupled to (e.g., in conductive contact with) the second BEOL interconnect.

Example 3 provides an IC assembly according to Example 2, where the thin-film device is a thin-film resistor.

Example 4 provides an IC assembly according to Example 2, where the thin-film device is a thin-film capacitor.

Example 5 provides an IC assembly according to Example 2, where the thin-film device is a thin-film inductor.

Example 6 provides an IC assembly according to any one of the preceding examples, further including a bonding interface between the BEOL layer and the glass support structure.

Example 7 provides an IC assembly according to Example 6, where the bonding interface includes an oxide.

Example 8 provides an IC assembly according to Example 7, where the oxide includes a portion in contact with one or more portions of the glass support structure and a portion in contact with one or more portions of the BEOL layer.

Example 9 provides an IC assembly according to any one of Examples 1-7, further including an active layer that includes a plurality of IC devices and interconnects, where the active layer is between the glass support structure and the bonding interface, the bonding interface is between the active layer and the BEOL layer, and at least one of the plurality of IC devices and interconnects of the active layer is electrically coupled to (e.g., in conductive contact with at least a portion of) one or more of the plurality of BEOL interconnects.

Example 10 provides an IC assembly according to Example 9, where the bonding interface is a hybrid bonding interface.

Example 11 provides an IC assembly according to Examples 9 or 10, where the bonding interface includes a portion in contact with one or more portions of the active layer and a portion in contact with one or more portions of the BEOL layer.

Example 12 provides an IC assembly according to any one of Examples 9-11, where a cross-section of each of at least one interconnect of the active layer and at least one of the BEOL interconnects is a trapezoid with two parallel sides, one of the two parallel sides being a short side and another being a long side, where, for the trapezoid of the at least one interconnect of the active layer, the short side is closer to the glass support structure than the long side, and, for the trapezoid of the at least one of the BEOL interconnects, the long side is closer to the glass support structure than the short side.

Example 13 provides an IC assembly according to Example 12, where a cross-section of at least one of the power interconnects is a trapezoid with two parallel sides, one of the two parallel sides being a short side and another being a long side, where, for the trapezoid of the at least one of the power interconnects, the short side is closer to the glass support structure than the long side.

Example 14 provides an IC assembly according to any one of the preceding examples, where the plurality of FEOL devices includes a FEOL transistor having source and drain regions, and at least one power interconnect of the plurality of power interconnects is electrically coupled to (e.g., in conductive contact with) the source region or the drain region.

Example 15 provides an IC assembly according to any one of the preceding examples, where the backside power delivery structure includes an insulator material enclosing at least portions of the plurality of power interconnects.

Example 16 provides an IC assembly according to any one of the preceding examples, where the BEOL layer includes one or more memory layers, and the one or more memory layers include memory cells that include thin-film transistors.

Example 17 provides an IC assembly according to any one of the preceding examples, where the glass support structure is replaced with a support structure of a material having a dielectric constant lower than 10, which can be, but is not limited to, glass.
For example, the support structure material can be mica.

Example 18 provides an IC package that includes an IC assembly according to any one of the preceding examples and a further IC component coupled to the IC assembly.

Example 19 provides an IC package according to Example 18, where the further IC component includes one of a package substrate, an interposer, or a further IC die.

Example 20 provides an IC package according to Examples 18 or 19, where the IC assembly includes, or is a part of, at least one of a memory device, a computing device, a wearable device, a handheld electronic device, and a wireless communication device.

Example 21 provides an electronic device that includes a carrier substrate and one or more of an IC assembly according to any one of the preceding examples and an IC package according to any one of the preceding examples, coupled to the carrier substrate.

Example 22 provides an electronic device according to Example 21, where the carrier substrate is a motherboard.

Example 23 provides an electronic device according to Example 21, where the carrier substrate is a PCB.

Example 24 provides an electronic device according to any one of Examples 21-23, where the electronic device is a wearable electronic device (e.g., a smartwatch) or a handheld electronic device (e.g., a mobile phone).

Example 25 provides an electronic device according to any one of Examples 21-24, where the electronic device further includes one or more communication chips and an antenna.

Example 26 provides an electronic device according to any one of Examples 21-25, where the electronic device is an RF transceiver.

Example 27 provides an electronic device according to any one of Examples 21-25, where the electronic device is a component of an RF communication device, such as a switch, a power amplifier, a low-noise amplifier, a filter, a filter bank, a duplexer, an upconverter, or a downconverter of an RF transceiver.

Example 28 provides an electronic device according to any one of Examples 21-25, where the electronic device is a computing device.

Example 29 provides an electronic device according to any one of Examples 21-28, where the electronic device is included in a base station of a wireless communication system.

Example 30 provides an electronic device according to any one of Examples 21-28, where the electronic device is included in a user equipment device (i.e., a mobile device) of a wireless communication system.

Example 31 provides a method of manufacturing an IC assembly, the method including providing a FEOL device over a semiconductor support structure; providing a BEOL layer over the FEOL device, the BEOL layer including a plurality of BEOL interconnects electrically coupled to (e.g., in conductive contact with at least portions of) one or more of a plurality of FEOL devices; bonding an arrangement of the BEOL layer and the FEOL device to a non-semiconductor support structure; performing a backside exposure by removing at least a portion of the semiconductor support structure to expose a portion of the FEOL device; and providing a backside power delivery structure including a plurality of power interconnects electrically coupled to (e.g., in conductive contact with at least a portion of) the exposed portion of the FEOL device.

Example 32 provides a method according to Example 31, where bonding the arrangement of the BEOL layer and the FEOL device to the non-semiconductor support structure includes providing at least one of a face of the BEOL layer to be bonded to the non-semiconductor support structure and a face of the non-semiconductor support structure to be bonded to the BEOL layer with one or more bonding materials, and attaching the face of the BEOL layer to be bonded to the non-semiconductor support structure to the face of the non-semiconductor support structure to be bonded to the BEOL layer.

Example 33 provides a method according to Example 32, where the one or more bonding materials include an oxide.

Example 34 provides a method according to any one of Examples 31-33, where removing at least the portion of the semiconductor support structure to expose the portion of the FEOL device includes polishing or grinding the semiconductor support structure until the portion of the FEOL device is exposed.

Example 35 provides a method according to any one of Examples 31-34, where at least the portion of the semiconductor support structure is removed after bonding the arrangement of the BEOL layer and the FEOL device to the non-semiconductor support structure.

Example 36 provides a method according to any one of Examples 31-35, where the backside power delivery structure includes an insulator material enclosing at least portions of the plurality of power interconnects.

Example 37 provides a method according to any one of Examples 31-36, where the non-semiconductor support structure includes glass.

Example 38 provides a method according to any one of Examples 31-37, where the non-semiconductor support structure includes mica.

Example 39 provides a method according to any one of Examples 31-38, where the non-semiconductor support structure includes an active layer that includes a plurality of IC devices and interconnects, and bonding the arrangement of the BEOL layer and the FEOL device to the non-semiconductor support structure includes bonding the arrangement of the BEOL layer and the FEOL device to the active layer and electrically coupling at least one of the plurality of IC devices and interconnects of the active layer to one or more of the plurality of BEOL interconnects.

Example 40 provides a method according to any one of Examples 31-39, further including processes for forming an IC assembly according to any one of the preceding examples (e.g., for forming an IC assembly according to any one of Examples 1-17).

The above description of illustrated implementations of the disclosure, including what is described in the abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed.
Specific implementations and examples of the disclosure are described herein for purposes of illustration, and various equivalent variations are possible within the scope of the disclosure, as those skilled in the art will appreciate. These modifications can be made to this disclosure in light of the above detailed description. OTHER POINTS TO BE CONSIDERED (Item 1) An integrated circuit (IC) assembly comprising a substrate end of line (FEOL) layer containing a plurality of FEOL devices and a plurality of power interconnects coupling to various said plurality of FEOL devices. a backside powering structure comprising a backside powering structure comprising: a backside powering structure (BEOL) layer comprising a plurality of BEOL interconnects coupled to one or more of the plurality of FEOL devices; and a glass support structure, the FEOL layer being coupled to the backside powering structure. an IC assembly between said BEOL layer, said BEOL layer between said FEOL layer and said glass support structure. (Item 2) The plurality of BEOL interconnects includes a first BEOL interconnect and a second BEOL interconnect, the glass support structure having a first terminal coupled to the first BEOL interconnect and a second terminal coupled to the second BEOL interconnect. An IC assembly according to item 1, comprising a thin film device having terminals. 3. The IC assembly of claim 2, wherein the thin film device is a thin film resistor. (Item 4) The IC assembly according to item 2, wherein the thin film device is a thin film capacitor. (Item 5) The IC assembly according to item 2, wherein the thin film device is a thin film inductor. 6. The IC assembly of claim 1, further comprising a bonding interface between the BEOL layer and the glass support structure. 7. The IC assembly of claim 6, wherein the bonding interface comprises oxide. 8. 
The IC assembly according to item 7, wherein the oxide includes portions contacting one or more portions of the glass support structure and portions contacting one or more portions of the BEOL layer.

(Item 9) The IC assembly further comprising an active layer including a plurality of IC devices and interconnects, wherein the active layer is between the glass support structure and the bonding interface, the bonding interface is between the active layer and the BEOL layer, and at least one of the plurality of IC devices and interconnects of the active layer couples to one or more of the plurality of BEOL interconnects.

(Item 10) The IC assembly according to item 9, wherein the bonding interface is a hybrid bonding interface.

(Item 11) The IC assembly according to item 9, wherein the bonding interface includes a portion contacting one or more portions of the active layer and a portion contacting one or more portions of the BEOL layer.

(Item 12) The IC assembly according to item 9, wherein a cross section of each of at least one interconnect of the active layer and at least one of the BEOL interconnects is a trapezoid including two parallel sides, one of the two parallel sides being a short side and the other being a long side, and wherein, for the trapezoid of the at least one interconnect of the active layer, the short side is closer to the glass support structure than the long side, and, for the trapezoid of the at least one BEOL interconnect, the long side is closer to the glass support structure than the short side.

(Item 13) The IC assembly according to item 12, wherein a cross section of at least one of the power interconnects is a trapezoid including two parallel sides, one of the two parallel sides being a short side and the other being a long side, and wherein, for the trapezoid of the at least one of the power interconnects, the short side is closer to the glass support structure than the long side.

(Item 14)
The IC assembly according to item 1, wherein the plurality of FEOL devices includes FEOL transistors having source and drain regions, and at least one power interconnect of the plurality of power interconnects is coupled to the source region or the drain region.

(Item 15) The IC assembly according to item 1, wherein the backside power delivery structure comprises an insulator material that encompasses at least a portion of the plurality of power interconnects.

(Item 16) The IC assembly according to item 1, wherein the BEOL layer includes one or more memory layers, and wherein the one or more memory layers include memory cells including thin film transistors.

(Item 17) An integrated circuit (IC) package comprising an IC assembly and a further IC component coupled to the IC assembly, the IC assembly comprising: a layer including a plurality of transistors, each being one of a fin transistor, a nanoribbon transistor, or a nanowire transistor; a back-end layer including a plurality of back-end interconnects coupled to one or more of the plurality of transistors; a backside power delivery structure including a plurality of power interconnects; and a glass support structure, wherein the layer including the plurality of transistors is between the backside power delivery structure and the back-end layer, and the back-end layer is between the layer including the plurality of transistors and the glass support structure.

(Item 18) The IC package according to item 17, wherein the further IC component comprises one of a package substrate, an interposer, or a further IC die.

(Item 19) A method of manufacturing an integrated circuit (IC) assembly, comprising the steps of: providing front end of line (FEOL) devices over a semiconductor support structure; providing a back end of line (BEOL) layer over the FEOL devices,
wherein the BEOL layer includes a plurality of BEOL interconnects coupled to one or more of the plurality of FEOL devices; bonding a configuration of the BEOL layer and the FEOL devices to a non-semiconductor support structure; removing at least a portion of the semiconductor support structure to expose portions of the FEOL devices; and providing a backside power delivery structure including a plurality of power interconnects coupled to the exposed portions of the FEOL devices.

(Item 20) The method according to item 19, wherein bonding the configuration of the BEOL layer and the FEOL devices to the non-semiconductor support structure comprises: providing one or more bonding materials on at least one of a surface of the BEOL layer that is bonded to the non-semiconductor support structure and a surface of the non-semiconductor support structure that is bonded to the BEOL layer; and attaching the surface of the BEOL layer to the surface of the non-semiconductor support structure.
A bit-and-one-half analog to digital converter comprises a switched capacitor circuit, including an opamp, that receives an analog input voltage and generates a residual analog output voltage. The switched capacitor circuit samples the analog input voltage during a sampling phase and generates the residual analog output voltage during an integration phase. A comparator generates a digital output based on the analog output voltage generated by the switched capacitor circuit. A current source communicates with the opamp and is operable to supply a first bias current to the opamp during the sampling phase and a second bias current that is greater than the first bias current to the opamp during the integration phase.
1. A bit-and-one-half analog to digital converter, comprising:a switched capacitor circuit, including an opamp, that receives an analog input voltage and generates a residual analog output voltage, wherein said switched capacitor circuit samples said analog input voltage during a sampling phase and generates said residual analog output voltage during an integration phase;a comparator that generates a digital output based on said analog output voltage generated by said switched capacitor circuit; anda current source that communicates with said opamp and is operable to supply a first bias current to said opamp during said sampling phase and a second bias current that is greater than said first bias current to said opamp during said integration phase.2. The bit-and-one-half analog to digital converter of claim 1 wherein said switched capacitor circuit includes a capacitor that stores said sampled analog input voltage during said sampling phase.3. The bit-and-one-half analog to digital converter of claim 2 wherein said sampled analog input voltage stored by said capacitor is integrated by said opamp during said integration phase to generate said residual analog output voltage.4. The bit-and-one-half analog to digital converter of claim 1 wherein said first bias current is a fractional portion of said second bias current.5. The bit-and-one-half analog to digital converter of claim 1 wherein said first bias current is zero.6. 
A multi-stage pipelined analog to digital converter, comprising:a plurality of bit-and-one-half converter stages arranged in series, each converter stage receiving an analog input voltage and generating a residual analog output voltage, wherein each converter stage further comprises:a switched capacitor circuit, including an opamp, that receives said analog input voltage and generates said residual analog output voltage, wherein said switched capacitor circuit samples said analog input voltage during a sampling phase and generates said residual analog output voltage during an integration phase;a comparator that generates a digital stage output based on said residual analog output voltage generated by said switched capacitor circuit;a current source that communicates with said opamp and is operable to supply a first bias current to said opamp during said sampling phase and a second bias current that is greater than said first bias current to said opamp during said integration phase; anda correction circuit that accepts said digital stage output from each of said converter stages and generates a corresponding digital output.7. The multi-stage pipelined analog to digital converter of claim 6 wherein said switched capacitor circuit includes a capacitor that stores said sampled analog input voltage during said sampling phase.8. The multi-stage pipelined analog to digital converter of claim 7 wherein said sampled analog input voltage stored by said capacitor is integrated by said opamp during said integration phase to generate said residual analog output voltage.9. The multi-stage pipelined analog to digital converter of claim 6 wherein said first bias current is a fractional portion of said second bias current.10. The multi-stage pipelined analog to digital converter of claim 6 wherein said first bias current is zero.11. 
A bit-and-one-half analog to digital converter, comprising:switched capacitor means, including integrating means for integrating signals input thereto, for receiving an analog input voltage and for generating a residual analog output voltage, wherein said switched capacitor means samples said analog input voltage during a sampling phase and generates said residual analog output voltage during an integration phase;comparing means for generating a digital output based on said analog output voltage generated by said switched capacitor means; andcurrent means that communicates with said integrating means for supplying a first bias current to said integrating means during said sampling phase and a second bias current that is greater than said first bias current to said integrating means during said integration phase.12. The bit-and-one-half analog to digital converter of claim 11 wherein said switched capacitor means includes a capacitor that stores said sampled analog input voltage during said sampling phase.13. The bit-and-one-half analog to digital converter of claim 12 wherein said sampled analog input voltage stored by said capacitor is integrated by said integrating means during said integration phase to generate said residual analog output voltage.14. The bit-and-one-half analog to digital converter of claim 11 wherein said first bias current is a fractional portion of said second bias current.15. The bit-and-one-half analog to digital converter of claim 11 wherein said first bias current is zero.16. 
A multi-stage pipelined analog to digital converter, comprising:a plurality of bit-and-one-half converter stages arranged in series, each converter stage receiving an analog input voltage and generating a residual analog output voltage, wherein each converter stage further comprises:switched capacitor means, including an integrating means for integrating signals input thereto, for receiving said analog input voltage and for generating said residual analog output voltage, wherein said switched capacitor means samples said analog input voltage during a sampling phase and generates said residual analog output voltage during an integration phase;comparing means for generating a digital stage output based on said residual analog output voltage generated by said switched capacitor means;current means that communicates with said integrating means for supplying a first bias current to said integrating means during said sampling phase and a second bias current that is greater than said first bias current to said integrating means during said integration phase; andcorrection means for accepting said digital stage output from each of said converter stages and for generating a corresponding digital output.17. The multi-stage pipelined analog to digital converter of claim 16 wherein said switched capacitor means includes a capacitor that stores said sampled analog input voltage during said sampling phase.18. The multi-stage pipelined analog to digital converter of claim 17 wherein said sampled analog input voltage stored by said capacitor is integrated by said integrating means during said integration phase to generate said residual analog output voltage.19. The multi-stage pipelined analog to digital converter of claim 16 wherein said first bias current is a fractional portion of said second bias current.20. The multi-stage pipelined analog to digital converter of claim 16 wherein said first bias current is zero.21. 
A method of operating a bit-and-one-half analog to digital converter, comprising:sampling an analog input voltage during a sampling phase;generating a residual analog output voltage during an integration phase using an opamp;generating a digital output based on said residual analog output voltage; andsupplying a first bias current to said opamp during said sampling phase and a second bias current that is greater than said first bias current to said opamp during said integration phase.22. The method of claim 21 further comprising using a capacitor to store said sampled analog input voltage during said sampling phase.23. The method of claim 22 further comprising integrating said sampled analog input voltage stored by said capacitor using said opamp during said integration phase to generate said residual analog output voltage.24. The method of claim 21 wherein said first bias current is a fractional portion of said second bias current.25. The method of claim 21 wherein said first bias current is zero.
CROSS-REFERENCE TO RELATED APPLICATIONSThis application is a divisional of U.S. patent application Ser. No. 10/313,369 now U.S. Pat. No. 6,839,015, filed on Dec. 6, 2002. The disclosure of the above application is incorporated herein by reference.FIELD OF THE INVENTIONThe present invention relates to a multi-stage pipelined analog to digital converter, and more particularly to a method for reducing power consumption of each stage of an analog to digital converter.BACKGROUND OF THE INVENTIONMulti-stage pipelined analog to digital converters (ADC) provide efficient high speed conversion of analog signals to digital equivalents. A representative multi-stage pipelined ADC 10 is shown in FIG. 1. The ADC 10 generally includes a plurality of converter stages, such as stages 11, 12 and 13, arranged in series relative to each other. Each converter stage operates by comparing an analog input voltage to thresholds provided by reference signals Vrefp and Vrefn. As a result, each converter stage provides one or more bits of digital data to a digital correction circuit 15. The digital correction circuit 15, in turn, resolves the digital output from each stage into a digital output 16 that corresponds to an analog input 17.FIG. 2 is a generalized block diagram of each converter stage. In operation, each stage accepts an analog input voltage and generates a residual analog voltage and a digital stage output. In particular, each stage applies the analog input voltage to a multiplying digital to analog converter (MDAC) 19 to generate the residual analog voltage. The residual analog voltage is then provided to a comparator 18, which generates the digital stage output. The residual analog voltage also serves as input to subsequent converter stages. This arrangement is also referred to herein as a bit-and-one-half analog to digital converter.Each converter stage may include a switched capacitor circuit as shown in FIG. 3. 
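To make the stage behavior described above concrete, the following Python sketch models a chain of bit-and-one-half stages. The ±Vref/4 comparator thresholds and the gain-of-two residue equation are the standard 1.5-bit formulation; the patent text does not spell out these equations, so the values here are illustrative only, not the patent's specification.

```python
# Behavioral model of a bit-and-one-half (1.5-bit) pipelined ADC stage.
# Standard formulation (illustrative, not taken from the patent text):
# each stage resolves a code d in {-1, 0, +1} by comparing its input
# against +/-Vref/4, and its MDAC produces the residue 2*Vin - d*Vref.

def stage_1p5bit(vin, vref=1.0):
    """Return (digital code d, residue voltage) for one converter stage."""
    if vin > vref / 4:
        d = +1
    elif vin < -vref / 4:
        d = -1
    else:
        d = 0
    residue = 2 * vin - d * vref  # gain-of-2 MDAC output
    return d, residue

def pipeline(vin, stages=7, vref=1.0):
    """Pass the input through a series of stages, as in FIG. 1."""
    codes = []
    v = vin
    for _ in range(stages):
        d, v = stage_1p5bit(v, vref)
        codes.append(d)
    return codes

# The per-stage codes would be resolved into a final digital word by
# the digital correction circuit 15.
print(pipeline(0.3))
```

A nine-bit converter built from seven such stages, as mentioned later in the description, would combine the seven overlapping stage codes in the correction circuit.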
The switched capacitor circuit operates in accordance with a two cycle clock with phases designated as [Phi]1 and [Phi]2. During a sampling phase, input capacitors C1 and C2 are charged by an input voltage Vin. In this phase, an operational amplifier 21 does not perform a function. During a subsequent integration phase, the switched capacitor circuit generates a residual output voltage. More specifically, the charge stored by the input capacitors is integrated by the operational amplifier 21 to generate an output voltage Vout. In other words, the operational amplifier 21 is active every other clock cycle. The same bias current is continuously supplied to the operational amplifier 21.SUMMARY OF THE INVENTIONAn analog to digital converter includes a first charging circuit that is selectively charged by an input voltage. A first opamp has one input that selectively communicates with the first charging circuit. A first current source selectively generates a first bias current for the first opamp during a first phase and a second bias current that is not equal to the first bias current for the first opamp during a second phase.In other features, the first bias current is less than the second bias current. The first bias current is zero and the second bias current is greater than zero. The first opamp is operated in an integrating mode during the second phase.In still other features, a first switching circuit communicates with the one input of the first opamp, the first charging circuit and the first current source. The first switching circuit charges the first charging circuit using the input voltage and operates the first current source to supply the first bias current to the first opamp during the first phase. 
The first switching circuit isolates the first charging circuit from the input voltage, operates the first opamp in an integrating mode and operates the first current source to supply the second bias current to the first opamp during the second phase.In still other features, the first current source is a variable current source that selectively provides the first and second bias currents during the first and second phases, respectively. Alternately, the first current source includes two current sources. Only one of the two current sources is connected to the first opamp during the second phase.In yet other features, a second charging circuit is selectively charged by an output of the first opamp. A second opamp has one input that selectively communicates with the second charging circuit. A second current source selectively generates the second bias current for the second opamp during the first phase and the first bias current for the second opamp during the second phase.In still other features, a second switching circuit communicates with the one input of the second opamp, the second charging circuit and the second current source. The second switching circuit charges the second charging circuit using the output of the second opamp, operates the second opamp in an integrating mode and operates the second current source to supply the second bias current to the second opamp during the first phase. The second switching circuit isolates the second charging circuit from the output of the first opamp and operates the second current source to supply the first bias current to the second opamp during the second phase. The first charging circuit includes at least one capacitor.Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. 
It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:FIG. 1 is a block diagram depicting a conventional multi-stage pipelined analog to digital converter (ADC) according to the prior art;FIG. 2 is a block diagram of an exemplary converter stage residing in the multi-stage ADC according to the prior art;FIG. 3 is a block diagram of a conventional switched capacitor circuit which may be implemented in a converter stage of the multi-stage ADC according to the prior art;FIG. 4A is a block diagram of a first switched capacitor circuit with reduced power consumption in accordance with the present invention;FIG. 4B is a flowchart illustrating the operation of the circuit in FIG. 4A;FIG. 5 illustrates a current mirror according to the prior art;FIG. 6A is a diagram of a second switched capacitor circuit with reduced power consumption in accordance with the present invention;FIG. 6B is a flowchart illustrating the operation of the circuit in FIG. 6A;FIG. 7A is a diagram of a third switched capacitor circuit with reduced power consumption in accordance with the present invention;FIG. 7B is a flowchart illustrating the operation of the circuit in FIG. 7A;FIG. 8A illustrates a circuit having regular and/or irregular periodic active and inactive phases during operation and a power supply that supplies first and second bias signals during the regular and/or irregular periodic active and inactive phases, respectively;FIG. 8B illustrates exemplary regular periodic active and inactive phases for the circuit in FIG. 8A and exemplary first and second bias signals during the regular periodic active and inactive phases, respectively;FIG. 
8C illustrates exemplary irregular periodic active and inactive phases for the circuit in FIG. 8A and exemplary first and second bias signals during the irregular periodic active and inactive phases, respectively;FIG. 9A illustrates multiple circuits having regular and/or irregular periodic active and inactive phases during operation and one or more power supplies that supply first and second bias signals during the regular and/or irregular periodic active and inactive phases, respectively;FIG. 9B illustrates exemplary regular and/or irregular periodic active and inactive phases for each of the circuits in FIG. 9A and exemplary first and second bias signals during the regular and/or irregular periodic active and inactive phases, respectively;FIG. 10A illustrates a square-waveform bias signal and a zero bias signal for the active and inactive phases, respectively;FIG. 10B illustrates the square-waveform bias signal of FIG. 10A and a non-zero bias signal for the active and inactive phases, respectively;FIG. 11A illustrates a stepped bias signal and a zero bias signal for the active and inactive phases, respectively;FIG. 11B illustrates the stepped bias signal of FIG. 11A and a non-zero bias signal for the active and inactive phases, respectively;FIG. 12A illustrates a linearly changing bias signal and a zero bias signal for the active and inactive phases, respectively;FIG. 12B illustrates the linearly changing bias signal of FIG. 12A and a non-zero bias signal for the active and inactive phases, respectively;FIG. 13A illustrates an exponentially changing bias signal and a zero bias signal for the active and inactive phases, respectively;FIG. 13B illustrates the exponentially changing bias signal of FIG. 13A and a non-zero bias signal for the active and inactive phases, respectively;FIG. 14A illustrates a stair-stepped bias signal and a zero bias signal for the active and inactive phases, respectively; andFIG. 14B illustrates the stair-stepped bias signal of FIG. 
14A and a non-zero bias signal for the active and inactive phases, respectively.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTSThe following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements.Referring now to FIG. 4A, a partial schematic of two exemplary converter stages 41, 42 of a multi-stage pipelined analog to digital converter is shown. For simplicity only two stages are shown; however, it is readily understood that an analog to digital converter may employ more converter stages. For instance, a nine bit analog to digital converter employs seven such converter stages. Moreover, it is readily understood that the operational schemes of the present invention are applicable to such converters.Each converter stage includes a switched capacitor circuit and a comparator. As noted above, each switched capacitor circuit operates in accordance with a two cycle clock. During a first clock cycle, switches designated [Phi]1 are closed and switches designated [Phi]2 are open; whereas, during a second clock cycle, switches [Phi]1 are open and switches [Phi]2 are closed.During the first clock cycle, input capacitors C11, C12 of the first stage 41 are charged by an input voltage Vin. This process is also referred to herein as the sampling phase. Concurrently, the charge stored (from a previous clock cycle) in the input capacitors C21, C22 of the second stage 42 is integrated by the operational amplifier OP2 of the second stage 42 to generate a residual output voltage Vout2. This residual output voltage Vout2 is based on reference voltages as well as digital output from the comparator. This process is also referred to herein as the integration phase. 
It should be noted that the operational amplifier OP1 of the first stage 41 is not active during this clock cycle.Conversely, during the second clock cycle, input capacitors C21, C22 of the second stage 42 are charged by an input voltage Vout1; whereas, the charge stored in the input capacitors C11, C12 of the first stage 41 is integrated by the operational amplifier OP1 of the first stage 41 to generate a residual output voltage Vout1. The residual output voltage Vout1 from the first stage serves as the input voltage to the second stage as shown in FIG. 4A.In accordance with the present invention, each operational amplifier is only biased during the integration phase to reduce power consumption. Referring to FIGS. 4A and 4B, a current source 44 may be electrically connected to each of the operational amplifiers OP1, OP2. In addition, switching elements 46, 47 may be located between the current source 44 and each of the operational amplifiers OP1, OP2. During the first clock cycle or phase (as determined in step 50), a bias current is supplied to the operational amplifier OP2 of the second stage 42, but not to the operational amplifier OP1 of the first stage 41 (as shown in step 52). Conversely, during the second clock cycle, a bias current is supplied to the operational amplifier OP1 of the first stage 41, but not to the operational amplifier OP2 of the second stage 42 (as shown in step 52). In other words, each operational amplifier is biased only during its active phase, thereby reducing the power consumption of the circuit.An exemplary circuit for biasing the operational amplifiers is depicted in FIG. 5. In particular, a biasing circuit 60 employs a current mirror configuration as is well known in the art. In operation, transistors 62, 64 serve as switching elements, which control when a bias current is applied to a given operational amplifier. 
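The phase-alternating bias scheme of FIGS. 4A and 4B can be summarized in a short behavioral sketch. The current value below is hypothetical (the patent gives no numeric bias currents); the point is that each opamp draws its full bias current in only one of the two clock phases.

```python
# Behavioral sketch of the FIG. 4A/4B bias scheme: the opamp of each
# stage receives a bias current only during its integration phase.
# I_FULL is an illustrative value; the patent specifies no currents.

I_FULL = 1.0e-3  # full bias current in amps (hypothetical)

def bias_currents(phase):
    """Return (OP1 bias, OP2 bias) for clock phase 1 or 2.

    Phase 1: stage 1 samples (OP1 unbiased), stage 2 integrates (OP2 biased).
    Phase 2: stage 1 integrates (OP1 biased), stage 2 samples (OP2 unbiased).
    """
    return (0.0, I_FULL) if phase == 1 else (I_FULL, 0.0)

# Average OP1 current over one full two-phase cycle:
avg_op1 = (bias_currents(1)[0] + bias_currents(2)[0]) / 2
# Each opamp is biased for half of every cycle, so its average bias
# current is half that of a continuously biased amplifier.
print(avg_op1)
```

The same bookkeeping applied to OP2 gives the identical average, since the two stages simply swap roles between phases.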
However, it is readily understood that other circuit configurations for biasing the operational amplifiers are within the broader aspects of the present invention.In alternate embodiments depicted in FIGS. 6A and 7A, a fractional portion of the bias current may be supplied to each of the operational amplifiers during the sampling phase. In other words, each operational amplifier is supplied with a full bias current during its integration phase and with a fractional portion of the full bias current during its sampling phase. Although not limited thereto, the fractional biasing current can be 25% of the full bias current. By supplying a fractional portion of bias current, the operational amplifiers are able to maintain a common mode state during the sampling phase. The present invention reduces power consumption of the circuit while maintaining the response of the operational amplifier residing therein.Referring to FIGS. 6A and 6B, variable current sources 70 may be electrically connected to each of the operational amplifiers OP1, OP2. During the first clock cycle or phase (as determined in step 74), the variable current source 70 of the second stage provides a bias current having a high level to the operational amplifier OP2 of the second stage 42 (as shown in step 76). The variable current source of the first stage provides a low bias current to the operational amplifier OP1 of the first stage 41 (as shown in step 76).During the second clock cycle or phase (as determined in step 74), the variable current source 70 provides a bias current having a high level to the operational amplifier OP1 of the first stage 41 (as shown in step 78). The variable current sources 70 provide a low bias current to the operational amplifier OP2 of the second stage 42 (as shown in step 78). 
In other words, one operational amplifier is biased by a high current level during its active phase and the other operational amplifier is biased by a low current level during its inactive phase, and vice-versa, to reduce the power consumption of the circuit.The variable current sources 70 may receive clock information such as a clock signal, [Phi]1 and/or [Phi]2 as an input. In FIG. 6A, the variable current source 70 of the first stage receives [Phi]2 and the variable current source of the second stage receives [Phi]1. As can be appreciated, the variable current sources may receive other signals that will allow the variable current source to determine when the associated stage is active or inactive.Referring to FIGS. 7A and 7B, two current sources I1 and I2 may be selectively connected to each of the operational amplifiers OP1, OP2 depending upon the active/inactive phase of the stage. One of the two current sources is associated with a switch that closes during the active stage and opens during the inactive stage. During the first clock cycle or phase (as determined in step 84), the operational amplifier OP2 of the second stage 42 (as shown in step 86) is biased by both current sources I1 and I2. The operational amplifier OP1 of the first stage 41 (as shown in step 86) is biased by one current source I2.During the second clock cycle or phase (as determined in step 84), the operational amplifier OP2 of the second stage 42 (as shown in step 76) is biased by one current source I2. The operational amplifier OP1 of the first stage 41 (as shown in step 76) is biased by both current sources I1 and I2. In other words, one operational amplifier is biased by a high current level during its active phase and the other operational amplifier is biased by a low current level during its inactive phase, and vice-versa, to reduce the power consumption of the circuit.Referring now to FIG. 8A, a circuit 100 having active and inactive phases during operation is shown. 
The active and inactive phases can be regularly periodic, or in other words, alternating at regular intervals. Alternately, the active and inactive phases can be irregularly periodic, or in other words, alternating between active and inactive phases at different intervals. A power supply 110 supplies first and second bias signals during the regular and/or irregular periodic active and inactive phases of the circuit 100, respectively. The second bias signal is lower than the first bias signal to reduce power consumption. The second bias signal can be lower during the inactive phase because the inactive phase occurs after an active phase. The circuit has already settled during the active state and is operating in steady state. When the circuit transitions to the inactive state, the circuit needs less power to operate.The circuit 100 may provide phase feedback information to the power supply 110 if needed. The power supply 110 can be a current source such as those described above, a voltage source or any other suitable power supply. The power supply 110 can include two power supplies that are switched in a manner similar to the current sources shown above in FIG. 7A or a variable or multiple output power supply similar to the variable current sources shown in FIG. 6A. Example circuits 100 include switched capacitor filters such as those described in U.S. Pat. No. 6,400,214, filed Aug. 28, 2000 to Aram et al., which is hereby incorporated by reference in its entirety, and analog to digital converters such as those described above.Referring now to FIG. 8B, exemplary regular periodic active and inactive phases for the circuit in FIG. 8A are shown. The power supply 110 generates first and second bias signals during the regular periodic active and inactive phases, respectively. Referring now to FIG. 8C, exemplary irregular periodic active and inactive phases for the circuit in FIG. 8A are shown. 
The power supply 110 provides first and second bias signals during the irregular periodic active and inactive phases, respectively.

FIG. 9A illustrates a circuit 120 including multiple sub-circuits 122-1, 122-2, . . . , and 122-n having active and inactive phases during operation. The active and inactive phases of the sub-circuits 122-1, 122-2, . . . , and 122-n may be in-phase and/or out-of-phase with respect to one another. The active and inactive phases may be regular and/or irregular periodic. One or more power supplies 126-1, 126-2, . . . , and 126-n supply the first and second bias signals during the active and inactive phases, respectively. A single power supply 128 with multiple outputs may be used to provide outputs to each stage of the circuit 120. The circuit 120 and/or the sub-circuits 122-1, 122-2, . . . , and 122-n may provide phase feedback signals to the power supplies 126-1, 126-2, . . . , and 126-n if needed. Interconnections between the sub-circuits 122 may be varied from those shown. The circuit 120 may or may not be pipelined.

Referring now to FIGS. 10A-14B, exemplary first and second bias signals are shown that can be used to bias the circuits shown in FIGS. 1-9. Generally, the first bias signals occur during the active phase and have a signal level that is higher than that of the second bias signals, which occur during the inactive phase. The first and second bias signals can be regular or irregular periodic. The first and/or second signals can also be square-waveform or constant signals, stepped signals, linearly changing signals, and/or non-linearly changing signals.

Referring now to FIGS. 10A and 10B, exemplary constant signals are shown. FIG. 10A illustrates a square-waveform bias signal and a zero bias signal for the active and inactive phases, respectively. FIG. 10B illustrates the square-waveform bias signal of FIG. 10A and a non-zero bias signal for the active and inactive phases, respectively.

Referring now to FIGS.
11A and 11B, exemplary stepped signals are shown. FIG. 11A illustrates a stepped bias signal and a zero bias signal for the active and inactive phases, respectively. The stepped bias signal may include a high startup level followed by a lower steady-state level. FIG. 11B illustrates the stepped bias signal of FIG. 11A and a non-zero bias signal for the active and inactive phases, respectively.

Referring now to FIGS. 12A and 12B, exemplary linearly changing signals are shown. FIG. 12A illustrates a linearly changing bias signal and a zero bias signal for the active and inactive phases, respectively. FIG. 12B illustrates the linearly changing bias signal of FIG. 12A and a non-zero bias signal for the active and inactive phases, respectively.

Referring now to FIGS. 13A and 13B, exemplary non-linearly changing signals are shown. FIG. 13A illustrates an exponential bias signal and a zero bias signal for the active and inactive phases, respectively. FIG. 13B illustrates the exponential bias signal of FIG. 13A and a non-zero bias signal for the active and inactive phases, respectively.

Referring now to FIGS. 14A and 14B, other exemplary non-linearly changing signals are shown. FIG. 14A illustrates a stair-stepped bias signal and a zero bias signal for the active and inactive phases, respectively. FIG. 14B illustrates the stair-stepped bias signal of FIG. 14A and a non-zero bias signal for the active and inactive phases, respectively.

As can be appreciated by skilled artisans, the present invention significantly reduces power consumption for devices having active and inactive periods. In addition, skilled artisans will appreciate that other bias waveforms can be used for the active and inactive phases in addition to the examples shown in FIGS. 10A-14B. Furthermore, the active and inactive phases need not have the same periods, for example as shown in FIGS. 14A and 14B.
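As a rough illustration, the stepped, linear, exponential, and stair-stepped active-phase bias waveforms of FIGS. 11A-14A can be sketched as generator functions. All levels and time constants below are hypothetical, since the patent specifies only the shapes:

```python
import math

def stepped_bias(t, t_step=0.2, high=2.0, low=1.0):
    """FIG. 11A: high startup level followed by a lower steady-state level."""
    return high if t < t_step else low

def linear_bias(t, start=1.0, slope=2.0):
    """FIG. 12A: linearly changing bias signal."""
    return start + slope * t

def exponential_bias(t, start=1.0, final=2.0, tau=0.3):
    """FIG. 13A: exponentially settling (non-linearly changing) bias signal."""
    return final - (final - start) * math.exp(-t / tau)

def stair_stepped_bias(t, start=1.0, step=0.25, t_step=0.2):
    """FIG. 14A: stair-stepped bias signal."""
    return start + step * int(t / t_step)

# During the inactive phase the second bias signal is simply a lower
# (possibly zero or non-zero constant) level, as in FIGS. 10A and 10B.
```

Each function gives the first bias signal as a function of time within one active phase; the inactive-phase signal is a constant chosen below the active level.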
The duration of the active and/or inactive period may also vary from one active phase and/or inactive phase to another. While the first bias waveforms in FIGS. 11A-14B are increasing waveforms, decreasing waveforms can also be used.

Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the present invention can be implemented in a variety of forms. Therefore, while this invention has been described in connection with particular examples thereof, the true scope of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification and the following claims.
A method and system to facilitate a configurable input/output (I/O) termination voltage reference in a transmitter or receiver. In one embodiment of the invention, the transmitter and the receiver each have a termination circuit to select a suitable termination reference voltage based on the desired coupling type. In one embodiment of the invention, the transmitter has a termination circuit coupled with a transmission driver, and the transmitter selects only one of a supply voltage, a ground voltage, and a half supply voltage as a termination voltage reference of the transmission driver. The receiver has a termination circuit to select either a supply voltage or a ground voltage as a termination voltage reference of the receiver.
1. An apparatus comprising:
a transmission driver; and
a termination circuit coupled with the transmission driver to select only one of a power supply voltage, a ground voltage, and a half supply voltage as a termination voltage reference of the transmission driver.

2. The apparatus of claim 1, wherein the termination circuit has three control signals, and wherein each control signal enables or disables a corresponding one of the power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver.

3. The apparatus of claim 2, wherein to select only one of the power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver, the termination circuit asserts only one of the three control signals to enable the corresponding one of the power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver.

4. The apparatus of claim 1, wherein when the transmission driver is to be set in an alternating current (AC) coupling mode, the termination circuit that selects only one of the power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver selects only the half supply voltage as the termination voltage reference of the transmission driver.

5. The apparatus of claim 1, wherein when the transmission driver is to be set in a direct current (DC) coupling mode, the termination circuit that selects only one of the power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver selects only the power supply voltage or only the ground voltage as the termination voltage reference of the transmission driver.

6. The apparatus of claim 1, wherein the apparatus includes a register having one or more bits for programming the
three control signals.

7. The apparatus of claim 3, wherein the transmission driver drives a pair of differential output signals, and wherein the termination circuit includes:
a resistor coupled with one of the differential output signals and a node;
another resistor coupled with the other of the differential output signals and the node;
a capacitor coupled with the node and a ground node; and
an operational amplifier having a non-inverting input, an inverting input, an enable input, and an output, wherein the non-inverting input is coupled with the half supply voltage, wherein the inverting input and the output are coupled with the node, and wherein only a first control signal of the three control signals is coupled with the enable input to set the node to the half supply voltage in response to assertion of the first control signal.

8. The apparatus of claim 7, wherein the termination circuit further comprises:
switch logic having a switch input, a switch output, and another enable input, wherein the switch input is coupled with the power supply voltage, wherein the switch output is coupled with the node, and wherein only a second control signal of the three control signals is coupled with the other enable input to set the node to the power supply voltage in response to assertion of the second control signal.

9. The apparatus of claim 8, wherein the enable input is a first enable input, wherein the other enable input is a second enable input, and wherein the termination circuit further comprises:
another switch logic having another switch input, another switch output, and a third enable input, wherein the other switch input is coupled with the ground node, wherein the other switch output is coupled with the node, and wherein only a third control signal of the three control signals is coupled with the third enable input to set the node to the ground voltage in response to assertion of the third control signal.

10. The apparatus
of claim 1, wherein the transmission driver is capable of operating in accordance with one of a direct media interface (DMI) protocol, a peripheral component interconnect (PCI) express interface protocol, and a display port interface protocol.

11. A system comprising:
a memory controller with a receiver having a termination circuit to select a power supply voltage or a ground voltage as a termination voltage reference of the receiver; and
a controller having a transmitter to communicate with the receiver.

12. The system of claim 11, wherein the termination circuit is a receiver termination circuit, wherein the termination voltage reference is a receiver termination voltage reference, wherein the power supply voltage is a receiver power supply voltage, and wherein the transmitter includes:
a transmission driver; and
a transmitter termination circuit coupled with the transmission driver to select only one of a transmitter power supply voltage, a ground voltage, and a half supply voltage as a transmitter termination voltage reference of the transmission driver.

13. The system of claim 11, wherein the memory controller is a memory controller hub (MCH) and wherein the controller is one of a platform controller hub (PCH) and an input/output (I/O) controller hub (ICH).

14. The system of claim 13, further comprising a processor, wherein the MCH is part of the processor.

15. The system of claim 11, wherein the receiver includes a pair of differential input signals coupled with the termination circuit, wherein the termination circuit comprises:
a resistor coupled with one of the differential input signals and a node; and
another resistor coupled with the other of the differential input signals and the node, and wherein the receiver also couples the node with a selected voltage.

16. The system of claim 12, wherein the transmitter termination circuit has three control signals, and wherein each control signal enables or disables a corresponding one of the transmitter
power supply voltage, the ground voltage, and the half supply voltage as the transmitter termination voltage reference of the transmission driver.

17. The system of claim 12, wherein the transmitter termination circuit that selects only one of the transmitter power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver asserts only one of the three control signals to enable the corresponding one of the transmitter power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver.

18. The system of claim 17, wherein the transmission driver drives a pair of differential output signals, and wherein the termination circuit includes:
a resistor coupled with one of the differential output signals and a node;
another resistor coupled with the other of the differential output signals and the node;
a capacitor coupled with the node and a ground node; and
an operational amplifier having a non-inverting input, an inverting input, an enable input, and an output, wherein the non-inverting input is coupled with the half supply voltage, wherein the inverting input and the output are coupled with the node, and wherein only a first control signal of the three control signals is coupled with the enable input to set the node to the half supply voltage in response to assertion of the first control signal.

19. The system of claim 18, wherein the transmitter termination circuit further comprises:
switch logic having a switch input, a switch output, and another enable input, wherein the switch input is coupled with the transmitter power supply voltage, wherein the switch output is coupled with the node, and wherein only a second control signal of the three control signals is coupled with the other enable input to set the node to the transmitter power supply voltage in response to assertion of the second control
signal.

20. The system of claim 19, wherein the enable input is a first enable input, wherein the other enable input is a second enable input, and wherein the transmitter termination circuit further comprises:
another switch logic having another switch input, another switch output, and a third enable input, wherein the other switch input is coupled with the ground node, wherein the other switch output is coupled with the node, and wherein only a third control signal of the three control signals is coupled with the third enable input to set the node to the ground voltage in response to assertion of the third control signal.

21. The system of claim 12, wherein the transmission driver is capable of operating in accordance with one of a direct media interface (DMI) protocol, a peripheral component interconnect (PCI) express interface protocol, and a display port interface protocol.

22. A method comprising:
selecting only one of a power supply voltage, a ground voltage, and a half supply voltage as a termination voltage reference of a transmitter.

23. The method of claim 22, further comprising selecting another power supply voltage or another ground voltage as a termination voltage reference of a receiver communicatively coupled with the transmitter.

24. The method of claim 23, wherein the transmitter and the receiver are communicatively coupled in accordance with one of a direct media interface (DMI) protocol, a peripheral component interconnect (PCI) express interface protocol, and a display port interface protocol.
Method and system for enabling a configurable input/output (I/O) termination voltage reference

Technical Field

The present invention relates to termination circuits, and more specifically, but not exclusively, to configurable input/output (I/O) termination voltage references.

Background

A typical computer system has several main components. The main components include a processor, a memory controller hub commonly referred to as the "north bridge," an I/O controller hub commonly referred to as the "south bridge," memory modules, and mass storage devices.

The termination voltage reference of the interface between the south bridge and the north bridge is usually fixed. For example, when direct current (DC) coupling between the south bridge and the north bridge is to be used, a power-terminated south bridge can only be connected to a power-terminated north bridge. The power-terminated south bridge cannot be connected to a ground-terminated north bridge. Therefore, the fixed termination voltage references of the south bridge and the north bridge do not allow interoperability of south bridges and north bridges.

Brief Description of the Drawings

The features and advantages of the embodiments of the present invention will become apparent from the following detailed description of the subject matter, in which:

FIG. 1 shows a system according to an embodiment of the invention;
FIG. 2 shows a circuit diagram of a transmitter according to an embodiment of the present invention; and
FIG. 3 shows a circuit diagram of a receiver according to an embodiment of the invention.

Detailed Description

The embodiments of the invention described herein are shown by way of example, and not by way of limitation, in the drawings. For simplicity and clarity of illustration, elements shown in the drawings are not necessarily drawn to scale. For example, the sizes of some elements may be exaggerated relative to other elements for clarity.
Furthermore, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements. Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the appearances of the phrase "in one embodiment" in various places throughout the specification do not necessarily all refer to the same embodiment.

Embodiments of the present invention provide a method and system that facilitate a configurable input/output (I/O) termination voltage reference in a transmitter or receiver. In one embodiment of the invention, each of the transmitter and the receiver has a termination circuit that selects an appropriate termination reference voltage based on the desired coupling type. For example, in one embodiment of the invention, the transmitter has a termination circuit coupled with the transmission driver, and the transmitter selects only one of the power supply voltage, the ground voltage, and the half supply voltage as the termination voltage reference of the transmission driver. In another embodiment of the invention, the receiver has a termination circuit that selects the power supply voltage or the ground voltage as the termination voltage reference of the receiver.

For example, in one embodiment, when AC coupling between the transmitter and the receiver is to be used, the transmitter selects the half supply voltage as the termination voltage reference of the transmission driver and the receiver selects the ground voltage as the termination voltage reference of the receiver. By facilitating a configurable I/O termination voltage reference in the transmitter or receiver, transmitter and/or receiver interoperability is allowed.
In addition, there is no need to have multiple designs of transmitters with different termination voltage references. A single transmitter design with a configurable I/O termination voltage reference can be used, saving the cost of maintaining multiple production lines for transmitters with different termination voltage references.

FIG. 1 shows a block diagram 100 of a system 110 according to one embodiment of the invention. The system 110 includes, but is not limited to, a desktop computer, laptop computer, netbook, notebook computer, personal digital assistant (PDA), server, workstation, cellular phone, mobile computing device, Internet appliance, or any other type of computing device. In another embodiment, the system 110 used to implement the methods disclosed herein may be a system on chip (SOC).

The processor 120 has a processing core to execute instructions of the system 110. The processing core includes, but is not limited to, prefetch logic to fetch instructions, decode logic to decode instructions, execution logic to execute instructions, and so on. The processor 120 has a cache memory to cache instructions and/or data of the system 110. In another embodiment of the present invention, the cache memory includes, but is not limited to, level one, level two, and level three cache memories, or any other configuration of cache memory within the processor 120.

The processor 120 has a memory controller 122 coupled with the I/O controller 140 through interfaces 124 and 142. The memory controller 122 performs functions that enable the processor 120 to access and communicate with the memory 130 through interfaces 126 and 132; the memory 130 includes volatile memory and/or non-volatile memory.
Volatile memory includes, but is not limited to, synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access storage device. Non-volatile memory includes, but is not limited to, NAND flash memory, phase-change memory (PCM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or any other type of non-volatile storage device.

The memory 130 stores instructions and information to be executed by the processor 120. The memory 130 may also store temporary variables or other intermediate information while the processor 120 executes instructions. In another embodiment of the present invention, the memory controller 122 is separate from the processor 120 and resides in another block or module.

The I/O controller 140 includes, but is not limited to, an I/O controller hub (ICH), a platform controller hub (PCH), a chipset, and so on. The I/O controller 140 enables the processor 120 to connect with other modules in the system 110. In one embodiment of the invention, the interfaces 124 and 142 operate in accordance with, but not limited to, a point-to-point communication protocol such as Quick Path Interconnect (QPI) or Direct Media Interface (DMI). The I/O controller 140 is connected with the display device 150 through interfaces 144 and 152. The display device 150 includes, but is not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT) display, or any other form of visual display device. In one embodiment of the invention, the interfaces 144 and 152 operate in accordance with, but not limited to, the digital video interface (DVI) protocol, the display port protocol, the high-definition multimedia interface (HDMI), and so on.

In one embodiment of the invention, the I/O controller 140 has a configurable transmitter in the interface 142.
The I/O controller 140 can select the desired termination voltage reference in the interface 142 to communicate with the interface 124 in the memory controller 122. In one embodiment of the present invention, if the I/O controller 140 and the memory controller 122 are communicating in accordance with DMI, the I/O controller 140 selects only one of the power supply voltage of the system 110, the ground voltage of the system 110, and the half supply voltage as the termination voltage reference. In one embodiment of the invention, the memory controller 122 has a receiver that selects the desired termination voltage reference that matches the type of coupling of the transmitter in the interface 142.

In one embodiment of the invention, the configurability of the transmitter in the interface 142 of the I/O controller 140 and of the receiver in the interface 124 of the memory controller 122 facilitates interoperability of the I/O controller 140 and the memory controller 122. The I/O controller 140 has interface(s) 146 coupled with peripherals, including but not limited to non-volatile memory, storage media, a keyboard/mouse, and network interfaces. Storage media include, but are not limited to, solid state drives, hard drives, universal serial bus flash drives, or any other form of computer data storage media.

The network interface is implemented using any type of well-known network interface standard, including but not limited to an Ethernet interface, a universal serial bus (USB) interface, a peripheral component interconnect (PCI) express interface, a wireless interface, and/or any other suitable type of interface. The wireless interface operates in accordance with, but not limited to, the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of wireless standards, HomePlug AV (HPAV), Ultra Wideband (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.

Although the modules shown in FIG.
1 are depicted as separate blocks in the system 110, some of the functions performed by these blocks may be integrated into a single semiconductor circuit or may be implemented using two or more separate integrated circuits. In another embodiment of the invention, the system 110 may include more than one processor/processing core. In addition, there are other functional blocks, or more instances of each block, that can be connected in the system 110 that are not shown.

FIG. 2 shows a circuit diagram 200 of the transmitter 210 according to an embodiment of the present invention. In one embodiment of the invention, the transmitter 210 is part of the interface 142 in the I/O controller 140. In another embodiment of the invention, the transmitter 210 is part of the interface 144 in the I/O controller 140. Those of ordinary skill in the relevant art will readily understand that the transmitter 210 can be used in any interface of the system 110 without affecting the workings of the present invention.

The transmitter 210 has a driver 220 coupled with the termination circuit 240. In one embodiment of the invention, the transmitter transmits information through a pair of differential links D+ 290 and D- 292. The driver 220 has six transistors 221, 223, 225, 227, 229, and 231 controlled by control signals 222, 224, 226, 228, 230, and 232, respectively. The power supply voltage 235 supplies power to the driver 220; a person of ordinary skill in the relevant art will readily understand the operating principle of the driver, and it is not described here. The circuit diagram of the driver 220 shown is not intended to be limiting, and those of ordinary skill in the relevant art will readily understand that other implementations of the driver 220 are possible without affecting the workings of the present invention.

In one embodiment of the invention, the termination circuit 240 has three possible settings for the termination voltage reference.
Those of ordinary skill in the relevant art will understand that in other embodiments of the present invention, more than three or fewer than three possible settings of the termination voltage reference may be implemented.

The termination circuit 240 has two resistors 242 and 244 connected in series across the pair of differential outputs D+ 290 and D- 292 of the driver 220. In one embodiment of the invention, the resistors 242 and 244 have substantially equal values. In another embodiment of the invention, each of the resistors 242 and 244 has a value of 50 ohms.

The first setting of the termination voltage reference of the transmitter 210 is facilitated by an operational amplifier (op-amp) 246. In one embodiment of the invention, the output and the inverting input of the op-amp 246 are connected with a node 243. The voltage at the node 243 is the termination voltage reference of the transmitter 210. The non-inverting input of the op-amp 246 is connected with the half terminal voltage (Vterm/2) 250, which is half of the terminal voltage of the driver 220. In one embodiment of the invention, the terminal voltage is connected with the power supply voltage of the transmitter 210.

In one embodiment of the present invention, the op-amp 246 has an enable1 signal 248 to control when the voltage of the node 243 is set to half of the terminal voltage. For example, in one embodiment of the present invention, if the enable1 signal 248 is asserted, i.e., activated or turned on, the inverting input of the op-amp 246 sees the same voltage as the non-inverting input of the op-amp 246. Since the non-inverting input of the op-amp 246 is set to (Vterm/2) 250, the inverting input of the op-amp 246 is set to (Vterm/2) 250. In one embodiment of the invention, the node 243 is set to (Vterm/2) 250 by enabling the op-amp 246.
When the enable1 signal 248 is asserted, the termination voltage reference of the transmitter 210 is set to (Vterm/2) 250.

The termination circuit 240 has a capacitor 280 connected with the node 243 and the ground node. In one embodiment of the present invention, when the transmitter 210 is to be placed in an alternating current (AC) coupling mode, the termination voltage reference is set to half of the terminal voltage of the driver 220. At high frequencies, the capacitor 280 acts as a short circuit to the ground node.

In one embodiment of the invention, the second setting of the termination voltage reference is facilitated by switching logic. The switching logic includes, but is not limited to, transistors, relays, and the like. In one embodiment of the invention, the termination circuit 240 has a transistor 260 controlled by an enable2 signal 262. When the enable2 signal 262 is asserted, the transistor 260 is turned on and allows the node 243 to be set to the terminal voltage (Vterm) 270. Therefore, the termination voltage reference of the transmitter 210 is set to the terminal voltage (Vterm) 270.

In one embodiment of the invention, the third setting of the termination voltage reference is facilitated by another switching logic. In one embodiment of the invention, the termination circuit 240 has a transistor 265 controlled by an enable3 signal 267. When the enable3 signal 267 is asserted, the transistor 265 is turned on and allows the node 243 to be set to the ground voltage. Therefore, the termination voltage reference of the transmitter 210 is set to the ground voltage.

In one embodiment of the present invention, when the transmitter 210 is to be set in a direct current (DC) coupling mode, the transmitter selects only the power supply voltage or only the ground voltage as the termination voltage reference of the transmitter 210.
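The three mutually exclusive settings can be captured in a small behavioral model. The function and signal names below are ours, the terminal voltage value is illustrative, and only the mapping from coupling mode to setting follows the embodiments described above:

```python
VTERM = 1.0  # illustrative terminal voltage in volts; not specified by the patent

def select_termination(coupling: str, dc_reference: str = "supply"):
    """Return ((enable1, enable2, enable3), node-243 voltage).

    Exactly one enable is asserted: enable1 drives op-amp 246 (Vterm/2,
    AC coupling); enable2 turns on transistor 260 (Vterm); enable3 turns
    on transistor 265 (ground). The latter two are the DC-coupling settings.
    """
    if coupling == "ac":
        enables, voltage = (1, 0, 0), VTERM / 2
    elif coupling == "dc" and dc_reference == "supply":
        enables, voltage = (0, 1, 0), VTERM
    elif coupling == "dc" and dc_reference == "ground":
        enables, voltage = (0, 0, 1), 0.0
    else:
        raise ValueError("unsupported coupling configuration")
    assert sum(enables) == 1  # only one enable signal is active at a time
    return enables, voltage

# AC coupling terminates at half the terminal voltage.
assert select_termination("ac") == ((1, 0, 0), 0.5)
```

The internal assertion encodes the one-hot constraint on the enable signals; any configuration that would assert two settings at once is rejected rather than shorting two references together.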
In one embodiment of the invention, during operation of the transmitter 210, only one of the three enable signals 248, 262, and 267 is asserted to select the desired termination voltage reference.

In one embodiment of the present invention, the I/O controller 140 has registers that control the setting of the three enable signals 248, 262, and 267. For example, in one embodiment of the invention, the I/O controller 140 has two bits that control the termination voltage reference of the transmitter 210. Based on the values of these two bits in the register, appropriate control signals are sent to the three enable signals 248, 262, and 267. For example, when the values of these two bits are set to "00" to indicate that the ground voltage is set as the termination voltage reference of the transmitter 210, a control signal is sent to assert the enable3 signal 267, and control signals are sent to de-assert the enable1 signal 248 and the enable2 signal 262 to deactivate the op-amp 246 and the transistor 260, respectively.

In one embodiment of the invention, the register is part of the transmitter 210. Those of ordinary skill in the relevant art will readily understand other methods of controlling the enable signals 248, 262, and 267, and these other methods may also be used without affecting the workings of the present invention. For example, in another embodiment of the present invention, each of the enable signals 248, 262, and 267 is connected with a strap pin on the system board to allow the termination voltage reference of the I/O controller 140 to be configured. The user can, for example, use a jumper to connect the desired strap pin to the power supply voltage or the ground voltage.

FIG. 3 shows a circuit diagram 300 of the receiver 310 according to an embodiment of the present invention. In one embodiment of the invention, the receiver 310 is part of the interface 124 in the memory controller 122.
In other embodiments of the present invention, the receiver 310 may also be implemented in other interfaces of the system 110. The receiver 310 has a pair of differential input signals D+ 350 and D- 352. Resistors 302 and 304 are connected in series across the pair of differential input signals D+ 350 and D- 352. In one embodiment of the invention, the resistors 302 and 304 have substantially equal values. In another embodiment of the present invention, when the receiver 310 operates in accordance with the DMI protocol, the resistors 302 and 304 have a value of 50 ohms.

A person of ordinary skill in the relevant art will readily understand the operating principle of the circuit in the receiver 310, and it is not described here. In one embodiment of the invention, the node 354 may be provided as a pin or ball on the package of the receiver 310. In one embodiment of the invention, the node 354 may be connected with a pin on the system board. This allows the termination voltage reference of the receiver 310 to be controlled by connecting the power supply voltage or the ground voltage via the strap pin. In another embodiment of the present invention, the node 354 may be controlled or configured through registers. For example, in one embodiment of the invention, one bit of a register can be used to control the voltage of the node 354.

Using the configurable or controllable termination voltage reference of the transmitter 210 and/or the receiver 310, interoperability of the transmitter 210 and/or the receiver 310 can be achieved.

Although examples of embodiments of the disclosed subject matter have been described, those of ordinary skill in the relevant art will readily appreciate that many other methods of implementing the disclosed subject matter can be used instead. In the foregoing description, various aspects of the disclosed subject matter have been described.
For purposes of explanation, specific numbers, systems, and configurations were stated in order to provide a thorough understanding of the subject matter. However, it will be apparent to those skilled in the relevant art having the benefit of this disclosure that the subject matter can be practiced without the specific details. In other cases, well-known features, components, or modules were omitted, simplified, combined, or split so as not to obscure the disclosed subject matter.

The term "operable" as used herein means that the device, system, protocol, etc. is able to operate, or is adapted to operate, for its desired functionality when the device or system is in an off-powered state. The expression "substantially equal" as used herein means that the values differ by no more than 10%. For example, a resistor may have a 5% tolerance level, and two resistors with equal published resistance values may differ in actual measured resistance because of that tolerance. In one embodiment of the invention, a difference within 10% is acceptable.

Various embodiments of the disclosed subject matter can be implemented in hardware, firmware, software, or a combination thereof, and can be described by reference to, or in conjunction with, program code such as instructions, functions, procedures, data structures, logic, application programs, or design representations or formats for simulation, emulation, and fabrication of a design, which when accessed by a machine causes the machine to perform tasks, define abstract data types or low-level hardware contexts, or produce a result.

The techniques shown in the drawings can be implemented using code and data stored and executed on one or more computing devices, such as general-purpose computers or computing devices.
Such computing devices store and communicate (internally, and over a network with other computing devices) code and data using machine-readable media, such as machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read-only memory; flash storage devices; phase-change memory) and machine-readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, and digital signals).

Although the disclosed subject matter has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the subject matter, that are apparent to persons skilled in the art to which the disclosed subject matter pertains are deemed to lie within the scope of the disclosed subject matter.
A network storage system includes a virtual file system ("VFS") that manages the files of the network storage system, and a storage center that stores the files. The VFS and the storage center are separated, such that a client accesses the VFS to conduct file system operations and the client accesses the storage center to upload/download files. The client accesses the network storage system through one or more storage ports. The storage center includes a plurality of distributed object storage managers (DOSMs) and a storage cluster that includes a plurality of intelligent storage nodes. The network storage system includes additional storage centers at geographically disparate locations. The network storage system uses a multi-cast protocol to maintain file information at the DOSMs regarding files stored in the intelligent storage nodes, including files stored in disparate storage centers.
What is claimed is:

1. A system comprising:
a remote storage center for storing a plurality of files in a local file system;
a local computer for utilizing at least one file stored at said remote storage center;
a first local device, coupled to said local computer and to said remote storage center, for operating as an active storage port, said active storage port for receiving file system operation requests on said file from said local computer, and for translating the file system operation requests to local file system requests including a file identifier associated with contents of the file to uniquely identify the file stored at the remote storage center, said local computer and said first local device being coupled through a communications mechanism, said active storage port further comprising a network interface, to couple said active storage port to said passive storage port, and processes to monitor the health of said active storage port and to enter a failover condition if said health falls below a pre-determined threshold;
a second local device, coupled to said local computer and to said remote storage center, for operating as a passive storage port, said passive storage port for switching to said active storage port during said failover condition, said local computer and said second local device being coupled through said communications mechanism utilized by said first local device, said passive storage port further comprising a network interface, to couple said passive storage port to said active storage port, and processes to query said active storage port to obtain a status of said health of said active storage port; and
additional local devices to support a 2N failover configuration of storage ports, wherein "N" represents any integer value.

2.
The system as set forth in claim 1, wherein said communications mechanism comprises:
a network;
said first local device comprises a network interface that communicates on said network using a network address; and
said second local device comprises a network interface that communicates on said network after the failover condition using said network address of said first local device.

3. The system as set forth in claim 1, wherein said communications mechanism further comprises a network file system, such that said local computer exports said local file system using said network address to conduct said file system operations.

4. The system as set forth in claim 1, wherein said active storage port comprises a data cache, for storing said file, and a directory cache for storing file system information on said file.

5. The system as set forth in claim 4, wherein said first local device and said second local device further comprise network interfaces for communicating to said remote storage center.

6. The system of claim 1, wherein the first local device and the second local device each further comprise:
a network interface capable of being coupled to the remote storage center;
a client interface capable of being coupled to the local computer; and
a cache capable of storing information at the storage port accessed from the remote storage center.

7.
An apparatus comprising:
a first local device for operating as an active storage port for a remote storage center that stores at least one file in a local file system, said active storage port for receiving file system operation requests for said file, for generating information for said file system operation, and for translating the file system operation requests to local file system requests including a file identifier associated with contents of the file to uniquely identify the file stored at the remote storage center, said first local device being accessed through a communications mechanism, said first local device further comprising a network interface, to couple said first local device to said second local device, and processes to monitor health of said first local device and to enter a failover condition if said health falls below a pre-determined threshold;
a second local device for operating as a passive storage port, said passive storage port for switching to said active storage port during said failover condition, said second local device being accessed through said communications mechanism utilized by said first local device, said second local device further comprising a network interface, to couple said second local device to said first local device, and processes to query said first local device to obtain a status of said health of said first local device; and
additional local devices to support a 2N failover configuration of storage ports, wherein "N" represents any integer value.

8. The apparatus as set forth in claim 7, wherein said communications mechanism comprises:
a network;
said first local device comprises a network interface that communicates on said network using a network address; and
said second local device comprises a network interface that communicates on said network after a failover condition using said network address of said first local device.

9.
The apparatus as set forth in claim 7, wherein said first local device and said second local device comprise a data cache, for storing said file, and a directory cache for storing file system information on said file.

10. The apparatus as set forth in claim 7, wherein said first local device and said second local device further comprise network interfaces for communicating to said remote storage center.

11. A method for configuring a storage system for failover operation, said method comprising:
storing a plurality of files in a local file system in a remote storage center;
utilizing at least one file stored at said remote storage center in a local computer;
coupling a first local device to said local computer and to said remote storage center;
operating said first local device as an active storage port by receiving file system operation requests for said file from said local computer, by transferring information for said file system operations, and by translating the file system operation requests to local file system requests including a file identifier associated with contents of the file to uniquely identify the file stored at the remote storage center;
transferring information between said local computer and said first local device via a communication mechanism;
coupling a second local device to said local computer and to said remote storage center;
operating said second local device as a passive storage port by switching said second device to an active storage port upon a failover condition in said first local device;
transferring information between said local computer and said second local device via said communication mechanism;
coupling additional local devices to support a 2N failover configuration of storage ports, wherein "N" represents any integer value;
coupling said first local device to said second local device;
monitoring the health of said active storage port;
submitting a query from said passive storage port to said active storage port to
obtain a status of said health of said active storage port; and
entering said failover condition if said health falls below a pre-determined threshold.

12. The method as set forth in claim 11, wherein:
transferring information between said local computer and said first local device via a communication mechanism comprises transferring information over a network using a network address; and
transferring information between said local computer and said second local device via a communication mechanism comprises transferring information over said network using said network address of said first local device.

13. The method as set forth in claim 11, wherein:
transferring information between said local computer and said first local device via a communication mechanism comprises transferring information over a network using a network address;
transferring information between said local computer and said second local device via a communication mechanism comprises transferring information over said network using said network address of said first local device; and
exporting a network file system of said local computer over said network using said network address.

14. The method as set forth in claim 11, further comprising caching, in said active storage port, data for said file and file system information on said file.

15. The method as set forth in claim 11, further comprising coupling said active storage port to said remote storage center to receive data for said file and file system information on said file.
CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims the benefit of U.S. patent application Ser. No. 09/695,499, filed Oct. 23, 2000, entitled "A Network Storage System", and of U.S. Provisional Patent Applications Nos. 60/186,693 and 60/186,774, filed Mar. 3, 2000, entitled "Method and Apparatus for Implementing A Network-Based Storage Service" and "Method and Apparatus for Establishing Control and Data Lines To A Storage Facility, And API For Supporting Such Lines", respectively.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention is directed toward the field of storage, and more particularly toward accessing remote storage through use of a local device.

2. Art Background

With the rapid digitization of music, film and photographs, customer demand is driving the Internet to become the most preferred transport mechanism for all forms of digital media. Using the Internet, users have instantaneous worldwide access to their favorite movies, songs, or personal memorabilia. As the producers and owners of media content increasingly use the Internet as a primary method for worldwide distribution, the aggregate amount of rich media content available over the Internet is increasing at an extremely rapid rate.

Not only is the number of rich media objects available over the Internet growing exponentially, but the size of the media, generally referred to herein as objects, is also dramatically increasing. A median Web object is 5 kilobytes (KB) in size, while the size of a rich media object may be 100 to 1 million times larger. For example, high-resolution digital photographs average 500 KB per picture. Digital music runs 3 to 5 megabytes ("MB") per song, and digital movies may reach up to 4 gigabytes ("GB") in size.

As the number of personal computers, digital camcorders, digital cameras, and personal digital audio players grows, demand for Internet bandwidth to store, share and retrieve media files across the Internet will also grow.
As high-bandwidth digital subscriber lines ("DSL"), cable modems, and digital broadcast satellite networks gain in popularity, supporting the growth of the Internet backbone, demand for using the Internet as a primary delivery channel for rich media objects also grows. This development creates a virtuous cycle, in which the installation of broadband networks drives the use of rich media devices, which in turn creates demand for further improvements in network bandwidth, and so on.

The distribution of rich media objects across the Internet creates the need for increased storage capacity to store these rich media objects. As the number of personal media devices grows, and network bandwidth expands, the amount of storage media required to store the various MP3 files, photographs, films, and video clips will also grow. Also, as more storage becomes readily available, more people will use the Internet to catalog, store, and access their rich media objects (e.g., digital photographs of family members).

To date, only traditional storage solutions from established enterprise vendors have been available to a Web site developer implementing rich media repositories. One challenge with adopting today's existing storage technology for use with the Internet is meeting current and future scalability requirements. Today, large scale storage systems only scale to a few dozen terabytes. This amount of storage space is inadequate for storing substantial amounts of rich media objects. For example, if just 10 percent of America Online ("AOL") users placed two 15-minute videos on a personal home page, then one petabyte (i.e., 1,000 terabytes) of storage would be required. Today's enterprise storage system architectures cannot support this level of storage capacity.

In the Internet world, in addition to providing mass storage, it is also critically important to provide universal access to that storage across the wide area network.
The content provider, regardless of the location of their content servers, cache servers, or stream servers, would ideally like to provide ubiquitous access to an entire store of rich media objects. Current technology, including storage area network and network attached storage technologies, does not provide direct access to the wide area network. Only servers located within the same metropolitan area can directly access these types of storage systems.

Since Internet users are measured in the tens of thousands or even millions, instead of hundreds, another challenge in mass storage is the ability to scale delivery of media as demand increases. A true Internet-based storage system must be able to handle peak loads of millions of simultaneous requests from all around the world. Traditional storage architectures are designed to support a few hundred simultaneous requests at the fastest possible response time, to match the speed of the server CPU. For the Internet, storage systems must be able to manage literally millions of simultaneous downloads at the speed of the wide area network. Thus, these traditional storage architectures are not "impedance matched" with the wide area network: the storage devices handle far too few simultaneous transactions, and they are tuned for latency requirements far stricter than the wide area network demands. In addition, these traditional storage architectures are typically implemented with expensive disks and expensive connection technologies.

Another issue regarding storage of rich media objects is the time to market, often a crucial requirement for new rich media Web sites. Growth rates are measured in terabytes per month. Quickly bringing new capacity online becomes a strategic advantage in fast-moving markets. Typically, with traditional storage solutions, it takes a customer two to six months to integrate a fully operational multi-terabyte storage unit with the content provider's site.
This start-up time is too slow to meet rapidly increasing business demands. Pre-building large amounts of excess capacity in anticipation of this demand is one tactic to deal with unpredictable demand spikes, but this approach is prohibitively expensive.

Traditional storage architectures have been optimized for database and file server applications. The Internet introduces a whole new set of demands on storage devices, including scalability, global access, user accounts, and rapid deployment. With the explosive growth in rich media served over the Internet expected over the next several years, these demands are coming to a head. The coming tidal wave of rich content will surpass the capabilities of even the most robust enterprise storage architectures. Accordingly, there is a demand for new paradigms and new ways of designing Internet-ready rich media storage systems.

SUMMARY OF THE INVENTION

Local devices, operating as storage ports, interface a local computer, such as a web or application server, to one or more remote storage centers. The storage centers store files for use by the local computer. One or more local devices (i.e., a first local device) operate as an active storage port. The local computer issues file system requests to the active storage port. In response, the active storage port generates information for the file system request. In one embodiment, the file system requests include requests for file data as well as requests for directory operations. The active storage port processes the file system requests, and accesses, as necessary, the remote storage center.

The local devices are coupled in a 2N failover configuration. In addition to the active storage port, a second local device, designated as a passive storage port, is coupled to the local computer and the storage center. If a failover condition in the active storage port occurs, the second local device, initially operating as the passive storage port, is switched to the active storage port.
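The active/passive failover behavior summarized above, in which the passive port queries the active port's health and assumes its role and network address when the health falls below a threshold, might be sketched as follows. The class and method names, the numeric health metric, and the 0.5 threshold are assumptions for illustration; only the query/threshold/address-takeover behavior comes from the text.

```python
# Minimal sketch of the 2N active/passive storage-port failover.
# Names and the health metric are illustrative assumptions.

HEALTH_THRESHOLD = 0.5  # assumed pre-determined threshold

class StoragePort:
    def __init__(self, network_address: str, role: str):
        self.network_address = network_address
        self.role = role          # "active" or "passive"
        self.health = 1.0         # 1.0 = fully healthy

    def monitor_health(self, healthy_disks: int, total_disks: int) -> None:
        # The active port monitors its own components (e.g., hard disk drives).
        self.health = healthy_disks / total_disks

def failover_check(active: StoragePort, passive: StoragePort) -> StoragePort:
    """Passive port queries the active port's health; takes over if unhealthy."""
    if active.health < HEALTH_THRESHOLD:
        passive.role = "active"
        # Assume the failed port's network address so the local computer
        # sees no interruption.
        passive.network_address = active.network_address
        active.role = "failed"
    return passive if passive.role == "active" else active

active = StoragePort("10.0.0.5", "active")
passive = StoragePort("10.0.0.6", "passive")
active.monitor_health(healthy_disks=2, total_disks=8)   # many drives failed
current = failover_check(active, passive)
print(current.role, current.network_address)            # → active 10.0.0.5
```

Because the surviving port inherits the failed port's network address, the local computer continues to reach "the" storage port at the same address, which is what makes the switch transparent.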
The failover architecture may be extended to include additional active and passive storage port pairs.

The local computer and the first local device communicate via a communication mechanism, and the local computer communicates with the second local device via the same communication mechanism. In one embodiment, the active storage port communicates with the local computer over a network. For this embodiment, the local computer accesses the active storage port via a network address (e.g., an IP address). If a failover condition occurs, switching the active storage port to the passive storage port, the passive storage port assumes the network address of the active storage port. Thus, a failover condition, which results in switching from the active storage port to the passive storage port, occurs without any interruption to the local computer. In one embodiment, to gain access to the files and file system information through the storage port, the local computer mounts the storage port as a storage device through functionality provided in a network file system (e.g., NFS, CIFS, etc.).

In one embodiment, the active storage port is coupled to the passive storage port. The active storage port monitors its health (e.g., CPUs, hard disk drives, etc.). The passive storage port issues queries to the active storage port to determine the health of the active storage port. The passive storage port assumes the role of the active storage port if the health falls below a pre-determined threshold. For example, if a predetermined number of hard disk drives fail in the active storage port, a failover condition may be triggered.

In one embodiment, the storage port further includes a data cache and a directory cache. The data cache stores at least a subset of the files of the storage center, and the directory cache stores at least a subset of the directory information for the files of the storage center. In operation, the active storage port receives requests for files.
In response, the active storage port determines whether the file resides in its data cache, and delivers the file if the file is cached. If the file is not cached, then the active storage port accesses the remote storage center to obtain the file. The active storage port also receives requests for directory operations, and determines whether the directory information resides in the storage port directory cache. If the directory information is cached, then the storage port delivers it to the local computer. If the directory information is not cached, the storage port obtains the directory information from a remote virtual file system.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating one embodiment for the storage system of the present invention.
FIG. 2 illustrates one embodiment for use of the network storage system as a media storage service.
FIG. 3 is a block diagram illustrating one embodiment for the storage cluster.
FIG. 4 is a flow diagram illustrating one embodiment for the download operation in the storage cluster.
FIG. 5 is a flowchart illustrating one embodiment for authentication in the network storage system.
FIG. 6 illustrates one embodiment of a distributed object storage manager ("DOSM").
FIG. 7 is a block diagram illustrating one embodiment for an intelligent storage node.
FIG. 8 is a flow diagram illustrating one embodiment for processing upload requests in the storage cluster.
FIG. 9 is a flow diagram illustrating one embodiment for generating unique fingerprints of object files.
FIG. 10 is a block diagram illustrating one embodiment for caching data in the storage cluster.
FIG. 11 is a block diagram illustrating one embodiment for implementing a VFS for use with a network storage system.
FIG. 12 illustrates example database tables for implementing the file system with a database.
FIGS. 13A and 13B are flow diagrams illustrating one embodiment for performing directory operations in the VFS.
FIG.
14 is a flow diagram illustrating one embodiment for the delete file operation for the network storage system.
FIG. 15 illustrates geographical replications of storage centers.
FIG. 16 is a block diagram illustrating one embodiment for replicating the storage centers.
FIG. 17 illustrates one embodiment for use of the storage center in a content delivery network.
FIG. 18 is a flow diagram illustrating one embodiment for use of the storage center with a content delivery network.
FIG. 19 illustrates one embodiment for use of the storage port in the network storage system.
FIG. 20 is a flow diagram illustrating one embodiment for use of a storage port to deliver content.
FIG. 21a illustrates one hardware configuration for a storage port device.
FIG. 21b illustrates embodiments for implementing the storage port in software.
FIG. 22 is a block diagram illustrating one embodiment for a storage port.
FIG. 23 is a block diagram illustrating one embodiment for file system translation in the storage port.
FIG. 24 is a flow diagram illustrating one embodiment for translating a file system operation from a local file system to the network storage file system.
FIG. 25 is a block diagram illustrating one embodiment for using the storage port to directly download object files to the end-user.
FIG. 26 is a flow diagram illustrating one embodiment for directly downloading object files to an end-user.
FIG. 27 is a block diagram illustrating one embodiment to interface a storage center to a client's private file directory system.
FIG. 28 is a flow diagram illustrating one embodiment for accessing object files in a storage center using a client's private file system.
FIG. 29 is a block diagram illustrating one embodiment for a storage port fail over configuration.
FIG. 30 is a flow diagram illustrating one embodiment for a storage port fail over process.
FIG.
31 is a flow diagram illustrating one embodiment for using the multicast protocol after a storage node fail over condition.

DETAILED DESCRIPTION

The disclosures of U.S. Provisional Patent Applications Nos. 60/186,693 and 60/186,774, filed Mar. 3, 2000, entitled "Method and Apparatus for Implementing A Network-Based Storage Service" and "Method and Apparatus for Establishing Control and Data Lines To A Storage Facility, And API For Supporting Such Lines", respectively, are hereby incorporated by reference.

Network Storage System Overview:

The network storage system is designed to meet the storage requirements of rich media content owners. Rich media objects typically represent up to 90 percent of the storage required for a film, music collection, or photo album associated with a web site. The network storage system uses distributed systems technology to provide scalability to support petabytes of storage and millions of users. Users gain access to their media objects within the network storage system only through a highly secure "shared secret" authentication certificate technology. The network storage system also provides immediate expandability for any user that desires to increase storage capacity. Also, the network storage system is extremely cost-effective because, in one embodiment, it consists of standard off-the-shelf CPUs with the latest high-density disk technology.

For purposes of nomenclature, the term "client", as used herein, refers to an entity that uses the storage system to store object files. For example, a client may be a web site owner that desires to deliver, outside their web server, rich media objects associated with content on their web site. Also for purposes of nomenclature, the term "end-user", as used herein, refers to a recipient of the object. For example, the end-user may be a computer user that downloads objects from a web site across the Internet using a web browser.
Also, under this definition, the end-user may itself be a client.

FIG. 1 is a block diagram illustrating one embodiment for the storage system of the present invention. For the embodiment of FIG. 1, the storage system consists of a control path and a data path. The control path consists of a virtual file system ("VFS") 50, and the data path consists of a distributed storage cluster 70. The control path is used to conduct all directory operations. The VFS includes, in part, client-assigned filenames and network storage system assigned unique file identifiers for each rich media object. The unique file identifiers are embedded into storage resource locators ("SRLs").

The distributed storage cluster 70 is used to store the object files for the system (i.e., all client data). As shown in FIG. 1, the VFS 50 and the storage cluster 70 are coupled to communicate information so as to coordinate file system information with the physical storage of the object files.

As shown in FIG. 1, file system control 60 issues directory operation requests to the VFS 50. As is described more fully below, file system control 60 may comprise software that uses a library to essentially "translate" file system requests from the client's local file system into file system requests compatible with the network storage system. In other embodiments, file system control 60 consists of a storage port coupled to the client's system (e.g., the client's application or web server). In general, the storage port, implemented in either hardware or software, translates file system commands from the client's local file system (e.g., NFS or CIFS) into file system requests compatible with the network storage system. In one embodiment, to interface the client's file system to the network storage system, a client need only mount the storage port as a network drive. The storage port then provides complete access to the network storage system. A detailed discussion of the storage port is set forth below.

As shown in FIG.
1, object recipient 80 receives, in response to object requests, objects downloaded from storage cluster 70. The object recipient 80 may comprise the client, or the object recipient 80 may consist of one or more end-users. Embodiments for transferring objects from the storage cluster 70 to object recipients, including both end-users and clients, are described more fully below.

The network storage system has applications for use as an Internet-based media storage service. For this application, the network storage system is an integral part of the Internet infrastructure used by rich media content owners and delivery networks. FIG. 2 illustrates one embodiment for use of the network storage system as a media storage service. In general, the storage service 130 provides a single, consistent, worldwide image of a client's (e.g., a company operating a web site) entire directory of rich objects. For this embodiment, an end-user 100 is coupled to both the content origin server 120 and the storage service 130 through a network. For example, the end-user 100 may be coupled to the content origin server 120 and storage service 130 via the Internet. The storage service 130 includes processing and networking facilities, such as a server 140, and a data store 150. The storage service 130 and content origin server 120 communicate to conduct file directory operations and object file operations. The data store 150, part of the storage service 130, stores large data files, such as rich media data files, illustrated as multimedia files 160, 170 and 180 in FIG. 2. In one embodiment, the data store 150 consists of a cluster of intelligent storage nodes.

In one embodiment, the storage service communicates with web servers (e.g., content origin server 120) and browsers (e.g., Microsoft Explorer or Netscape Navigator) operating on end-user computer 100 via the standard Internet hypertext transfer protocol ("HTTP") and universal resource locators ("URLs").
Although the use of HTTP is described herein, any transport protocol may be used without deviating from the spirit or scope of the invention. For the configuration of FIG. 2, the end-user, through end-user computer 100, generates HTTP requests to the content origin server 120 to obtain hypertext mark-up language ("HTML") files. In addition, to obtain large data objects associated with those text files, the end-user, through end-user computer 100, generates HTTP requests to the storage service 130. For example, the end-user may download from the content origin server 120 a few kilobytes of textual data describing a rich object, such as text describing an upcoming film. When the user "clicks" on a URL to download a film snippet from the upcoming film, an HTTP request is generated to the storage service 130, and the storage service 130 downloads the film snippet to the end-user computer 100. The network configuration of FIG. 2 permits off-loading the storage of rich objects from the content origin server 120 to the storage service 130. This configuration greatly reduces the size and complexity of the content origin servers needed to store, manage and serve rich objects to end-users.

Distributed Storage Cluster:

In one embodiment, the storage cluster utilizes distributed systems technology that harnesses the throughput of hundreds of CPUs and the storage of thousands of disk drives. FIG. 3 is a block diagram illustrating one embodiment for the storage cluster. The storage cluster 300 receives upload, download, and delete operations that include a storage resource locator ("SRL"). The SRL is used to uniquely identify a client file. As shown in FIG. 3, the storage cluster consists of distributed object storage managers ("DOSMs") 320 and intelligent storage nodes 340. There are "n" distributed object storage managers 320, wherein "n" is any integer value greater than one.
Similarly, there are "n" intelligent storage nodes in the intelligent storage nodes 340 component (i.e., "n" is also any integer value greater than one).

As shown in FIG. 3, file upload and download operations are input to a load balancing fabric 310. In one embodiment, the load balancing fabric 310 is a layer four ("L4") switch. In general, L4 switches are capable of effectively prioritizing TCP and UDP traffic. In addition, L4 switches, which incorporate load balancing capabilities, distribute requests for HTTP sessions among a number of resources, such as servers. For this embodiment, the load balancing fabric 310 distributes upload and download requests to one of a plurality of DOSMs based on DOSM availability. The load balancing capability in an L4 switch is currently commercially available.

Each DOSM independently handles hundreds of simultaneous download transactions. In one embodiment described below, each DOSM has a local high-speed disk cache to store frequently accessed file objects. Each DOSM has a map, dynamically generated, of the storage system. The map identifies a correspondence between an intelligent storage node address and an object fingerprint. In one embodiment, the DOSMs record all usage and performance data gathered by a separate accounting system and monitoring system.

The DOSMs 320 communicate with the intelligent storage nodes 340 via an interconnect fabric 330. The interconnect fabric 330 consists of a high-speed, high bandwidth fabric to ensure that all the DOSMs 320 communicate with every intelligent storage node at all times. In one embodiment, the DOSMs 320 communicate with the intelligent storage nodes over the interconnect fabric via a protocol, entitled the distributed object storage protocol ("DOSP"). Effectively, the DOSP links hundreds of intelligent storage nodes into one large storage cluster.
As described more fully below, the DOSP consists of a multi-cast protocol as well as a point-to-point protocol.

In general, the intelligent storage nodes 340 provide the persistent store for the objects or files. The intelligent storage nodes contain thousands of high-density disk drives. The intelligent storage nodes are described more fully below in conjunction with the discussion of FIG. 7.

In one embodiment, the network storage system uses the storage resource locators ("SRLs") to process requests. In one embodiment, the network storage system uses the following format for the SRL:

http://<storage-cluster>/<encoded-request>/<digital-signature>/<arbitrary-customer-uri>

wherein:

the "storage-cluster" field includes the name or IP address of a storage center DOSM pool;

the "encoded-request" field comprises a base64 encoded op code and arguments;

the "digital-signature" field consists of a certificate derived from the following expression: md5(shared-secret+md5(shared-secret+encoded-request)); and

the "arbitrary-customer-uri" field contains arbitrary information added to the SRL by the network storage system clients. For example, the arbitrary-customer-uri field may include the filename and extension of the file being downloaded to enable browsers to send the content to an appropriate plug-in.

In one embodiment, the "encoded request" field is encoded using base64 encoding. As shown in Table 1, the encoded request consists of a URL type field, a version field, and a type/version specific payload field.

TABLE 1
Field     Datatype   Comment
Type      Numeric    Type of the URL, i.e. Standard, CDN, etc.
Version   Numeric    Version of the URL
Payload   NA         Payload specific to the Type/Version of the URL.

In one embodiment, the type/version specific payload field consists of a series of '/' delimited fields that contain accounting information, an op code, and an op code dependent argument list.
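As an illustration, the SRL assembly and the nested-MD5 digital signature defined above can be sketched in Python. The cluster name, shared secret, request payload, and customer URI below are hypothetical sample values; only the field layout and the signature expression come from the format above.

```python
import base64
import hashlib

def md5_hex(data: str) -> str:
    # Hexadecimal MD5 digest of a string.
    return hashlib.md5(data.encode()).hexdigest()

def make_srl(storage_cluster: str, request: str, shared_secret: str,
             customer_uri: str = "") -> str:
    # "encoded-request": base64 encoded op code and arguments.
    encoded_request = base64.urlsafe_b64encode(request.encode()).decode()
    # "digital-signature": md5(shared-secret + md5(shared-secret + encoded-request)).
    signature = md5_hex(shared_secret + md5_hex(shared_secret + encoded_request))
    return f"http://{storage_cluster}/{encoded_request}/{signature}/{customer_uri}"

def verify_srl(srl: str, shared_secret: str) -> bool:
    # Recompute the certificate from the encoded request and compare.
    _, encoded_request, signature, _ = srl.split("://", 1)[1].split("/", 3)
    return signature == md5_hex(shared_secret + md5_hex(shared_secret + encoded_request))

srl = make_srl("storage.example.com", "0/1/0x0001/3/0x0020/abc123",
               "my-secret", "movie.mov")
```

A request authenticates only when the recomputed certificate matches the one carried in the SRL; tampering with the encoded request invalidates the signature.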
Table 2 shows one embodiment for the type/version specific payload field.

TABLE 2
Field          Datatype   Comment
Expires        Numeric    Number of seconds since the epoch that the link expires. If 0, the link has an infinite duration and will not be checked for expiration.
Access method  Numeric    The access method associated with the SRL, i.e. Storage Port, end user SRL, CDN, etc.
Client Id      Numeric    The client id of the client performing the operation.
Op Code        Numeric    The opcode of the operation to be performed.
Arguments      NA         An opcode specific argument list.

Table 3 includes two access method types for the access method field.

TABLE 3
Access method  Encoding   Comment
SRL            0x0001     End user SRL request.
Storage Port   0x0002     Internal Storage Port request.

Table 4 includes operational codes for the op code field.

TABLE 4
Operation    Encoding   Arguments
NO_OP        0x0000     None
STORE        0x0010     Pfid - numeric Parent folder id to upload the file to. Other arguments are mime encoded.
FETCH        0x0020     Md5 - alphanumeric Hexadecimal representation of the md5 hash of the file to be downloaded.
FETCH_AUTH   0x0021     Md5 - alphanumeric Hexadecimal representation of the md5 hash of the file to be downloaded. Authentication Callback URI - alphanumeric URL encoded callback URI.
DELETE       0x0050     Md5 - alphanumeric Hexadecimal representation of the md5 hash of the file to be deleted.
CONTROL      0x1000     ControlTicket - alphanumeric Hexadecimal representation of the digital signature of the XML control document.

The object files, stored in one or more storage clusters, are not associated with a "central authority" that specifies a physical location for the object files. The VFS, in part, stores an object fingerprint for a file, but does not indicate a location for the file. Because of this, the network storage system may be referred to as a "stateless" or a "soft state" system. Instead of using a central authority to locate files, the physical address for the files is identified in the storage cluster through a dynamically generated reference. However, the reference does not necessarily identify the location for all the object files (i.e., the reference, at any one time, potentially identifies only a subset of the object files in the system). Since the network storage system does not use a central authority, object files may be added, updated or stored in multiple locations in the storage system, and the location of the object files in the intelligent storage nodes may be discovered in response to a specific request.

FIG. 4 is a flow diagram illustrating one embodiment for the download operation in the storage cluster. For purposes of nomenclature, the "recipient" in a download operation is the destination of the file for the download operation.
The storage cluster receives a download request, including the unique file identifier (e.g., SRL) (block 400, FIG. 4). When the storage cluster receives a download request, the load balancing fabric 310 (FIG. 3), such as an L4 switch, selects an available DOSM (block 410, FIG. 4). The DOSM parses the SRL to extract the certificate and the encoded request (block 415, FIG. 4). From the encoded request, a certificate is calculated, and the calculated certificate is compared to the SRL certificate. If the SRL does not authenticate, then an error message is sent to the recipient (blocks 420 and 425, FIG. 4). Alternatively, if the SRL does authenticate, then the DOSM determines whether the object identified by the SRL resides in the corresponding DOSM's data cache (blocks 420 and 430, FIG. 4). If the data object is cached, then the object is transmitted from the storage cluster to the recipient (e.g., via the Internet using HTTP protocol) (blocks 430 and 495, FIG. 4). If the object is not cached at the DOSM, then the DOSM attempts to identify the location of the object in one of the intelligent storage nodes (blocks 430 and 440, FIG. 4).

If the DOSM knows the location of the object (e.g., the object file is an entry in the DOSM look-up table) and the storage node is readable, then the DOSM obtains a connection with the storage node that stores the object, and transmits the object from the storage cluster to the recipient (blocks 442, 435 and 495, FIG. 4). In one embodiment, to determine whether the storage node is readable, the DOSM queries the storage node for the object file a predetermined number of times. Alternatively, if the DOSM does not know the storage location of the object in the intelligent storage nodes, then the DOSM broadcasts a request to the intelligent storage nodes to locate the object (blocks 440 and 450, FIG. 4). Each intelligent storage node determines whether the object is stored on one of its disk drives (block 460, FIG. 4).
If the object file is located in one of the intelligent storage nodes, then the intelligent storage node, which stores the requested object, broadcasts identification information to all of the distributed object storage managers (blocks 462 and 470, FIG. 4). For example, if intelligent storage node "1" of intelligent storage nodes 340 stores the requested object in disk "3", then intelligent storage node "1" broadcasts to all "n" DOSMs that the object file is located in disk "3" of intelligent storage node "1." All DOSMs snoop the packets on the network to obtain file identification information. In response to the intelligent storage node's broadcast, each DOSM updates its reference (e.g., lookup table or file system directory) with the proper file identification information.

If the DOSM broadcasts a request to the intelligent storage nodes to locate the object and the object is not located from the request, then the DOSM establishes a point-to-point connection with an intelligent storage node to individually query the storage node for the object (blocks 462 and 464, FIG. 4). This process is repeated until all intelligent storage nodes have been queried or the object has been located. If the object is located in one of the intelligent storage nodes, then the intelligent storage node, which stores the requested object, broadcasts identification information to all of the distributed object storage managers (blocks 466 and 470, FIG. 4). Alternatively, if the object is not located in one of the intelligent storage nodes, then a failover procedure is executed to locate the object in a different storage center (blocks 466 and 468, FIG. 4).

When the intelligent storage node is located, the DOSM obtains a connection with the intelligent storage node, and opens the file with the requested object.
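The object-location order of FIG. 4 (data cache, lookup table, broadcast to the storage nodes, then failover) can be sketched as a simplified in-memory model. This is not the DOSP wire protocol: the broadcast and the subsequent node-by-node queries are collapsed into one scan here, and the data shapes are assumptions.

```python
def locate_object(md5_id, data_cache, lookup_table, storage_nodes):
    """Return (how, location) for an object, following the FIG. 4 order:
    DOSM data cache, DOSM lookup table, query of all storage nodes,
    and finally failover to another storage center."""
    if md5_id in data_cache:
        return ("cache", None)                    # serve directly from the cache
    if md5_id in lookup_table:
        return ("lookup", lookup_table[md5_id])   # known (node, disk) mapping
    # Simulated broadcast: each node checks its own disks and answers.
    for node_ip, disks in storage_nodes.items():
        for disk_id, files in disks.items():
            if md5_id in files:
                # Every DOSM snooping the reply updates its reference.
                lookup_table[md5_id] = (node_ip, disk_id)
                return ("broadcast", (node_ip, disk_id))
    return ("failover", None)                     # try a different storage center
```

Note that a broadcast hit also updates the caller's lookup table, mirroring how every DOSM snoops the storage node's reply.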
If the storage node is readable (i.e., the DOSM successfully reads the file from the storage node), then the object is transmitted from the intelligent storage node to the recipient via a network (e.g., using HTTP protocol over the Internet). If the object file is not readable, then a failover procedure is executed to obtain the object in a different storage node and/or storage center, and the DOSM obtains a connection with the new storage node (blocks 442, 468 and 435, FIG. 4). Thereafter, the object is transmitted from the storage cluster to the recipient (block 495, FIG. 4).

In one embodiment, accesses to the network storage system require a valid authentication certificate. In one embodiment utilizing CDNs, the certificate is based on the object file's unique user filename and a secure key assigned to each client account. In other embodiments, the network storage system supports full HTTPS and SSL protocols for secure communications between clients/end-users and the network storage system.

FIG. 5 is a flowchart illustrating one embodiment for authentication in the network storage system. To authenticate a request, the network storage system decodes the SRL to extract the client identification, the SRL certificate and the client filename or object fingerprint (block 500, FIG. 5). The network storage system (i.e., virtual file system or storage cluster) extracts a "secret" or secure key corresponding to the client identified with the request. In general, the "secret" or secure key is a password supplied by the client to authenticate operations in the network storage system. Using the secure key and object fingerprint, the network storage system generates a calculated certificate (block 520, FIG. 5).
In one embodiment, the network storage system generates a calculated certificate for the request in accordance with the following expression:

MD5 Hash(Secure Key + MD5 Hash(Secure Key + Encoded SRL))

As shown above, a first MD5 hash calculation is performed on the secure key and the encoded SRL to obtain a first result, and a second MD5 hash calculation is performed on the secure key and the first result to obtain the calculated certificate. The network storage system compares the calculated certificate with the SRL certificate (i.e., the certificate transmitted with the SRL request) (block 530, FIG. 5). If the certificates match, then the SRL is authenticated, and the request is performed (blocks 540 and 560, FIG. 5). Alternatively, if the calculated certificate does not match the SRL certificate, then the network storage system generates an error message to the requester (blocks 540 and 550, FIG. 5).

FIG. 6 illustrates one embodiment of a distributed object storage manager ("DOSM"). For this embodiment, the processes and functions of each DOSM (i.e., also referred to herein as a "control node") are implemented in software for execution on a computer, such as a server 600. In other embodiments, the distributed object storage managers 320 may be implemented in a combination of hardware and software on one or more computers. Each DOSM maintains a file lookup table to identify the location of object files stored in the intelligent storage nodes 340. Table 610 of FIG. 6 illustrates one embodiment for a DOSM file lookup table. For this embodiment, each entry of the table identifies a corresponding object file stored in an intelligent storage node. Specifically, each entry includes a file identification, an IP address, and a disk identification. The file identification, also referred to herein as the object fingerprint, is derived by performing an MD5 hash calculation on the contents of the object file. The result of this MD5 hash calculation is a 128 bit string.
For this embodiment, the DOSM file lookup table stores, in the file identification column, the 128 bit string, with the file designation "MD5." The second column of the DOSM file lookup table stores the IP address of the intelligent storage node that stores the object file (e.g., "10.3.100.1"). The third column, labeled disk ID, stores an integer value that identifies the specific disk drive on the intelligent storage node that stores the object file. In one embodiment, when the look-up table is at full capacity, the DOSM uses a least recently used ("LRU") caching algorithm to replace existing entries in the DOSM lookup table with new entries received.

As shown in FIG. 6, the DOSM also includes a data cache 620. In general, the data cache 620 stores objects (i.e., client data) to permit the DOSM to stream data directly to the recipient in response to a download request. During a download request, in the event of a cache miss, when the object is transferred from the intelligent storage node to the recipient, the object is also stored in the data cache 620. Similar to the DOSM file lookup table, the data cache 620 uses a least recently used ("LRU") caching algorithm to replace existing entries with new data objects when the data cache is full.

The DOSM also maintains a state table 630. In general, the state table 630 provides the state of the system by storing information on the overall capacity and health of the intelligent storage nodes 340. In one embodiment, the state tables are built using the multicast protocol to obtain, from the intelligent storage nodes, information about the corresponding intelligent storage node. The state information indicates whether disks on the intelligent storage nodes are healthy, how much space is on the disks, etc. In one embodiment, as shown in FIG.
6, state table 630 stores: read-write state of the storage nodes; health of the storage nodes (including an identification of failed nodes); and the current load of the storage nodes, including available storage capacity and the number of input/output ("I/O") operations per second. The DOSM uses state information to select, in an upload operation, the appropriate intelligent storage node for storage of a new object file. For example, the DOSM uses information on the number of input/output ("I/O") operations per second to load balance the storage nodes. The DOSM also uses information on available storage capacity to select an intelligent storage node to store a new object file.

FIG. 7 is a block diagram illustrating one embodiment for an intelligent storage node. For this embodiment, the intelligent storage node is implemented on a computer, including software to perform the functions described herein. An intelligent storage node 700 includes a processing core 710 that consists of one or more central processing units ("CPUs"). In one embodiment, the processing core 710 comprises two CPUs. The intelligent storage node 700 also includes volatile memory, labeled 730 in FIG. 7. The memory 730 is used to store instructions executed by the processing core 710, as well as data used by the intelligent storage node. The intelligent storage node 700 further includes a network interface 720 to interface the intelligent storage node to the plurality of distributed object storage managers 320 via the interconnect fabric 330. The elements of the intelligent storage node 700 communicate via a computer transport mechanism 750 (e.g., a peripheral component interconnect ("PCI") bus, processor bus, etc.).
The computer transport mechanism 750 is intended to represent a broad category of one or more computer busses, such as the peripheral component interconnect ("PCI") bus or the Industry Standard Architecture ("ISA") bus.

The intelligent storage node 700 further includes a plurality of disk drives 740 to store the object files. As shown in FIG. 7, the number of disks in an intelligent storage node is represented as "n", such that "n" is an integer value greater than one. In one embodiment, the processing core 710 communicates with the disk drives 740 using the ISA protocol. However, any protocol used to access disk drives, including the small computer system interface ("SCSI") protocol, may be used without deviating from the spirit or scope of the invention.

The intelligent storage node contains information to identify object files that it stores. In one embodiment, the information to identify object files is stored in the file system directory of the intelligent storage node. In other embodiments, the information to identify object files is cached. Table 5 illustrates example entries to cache the identification of object files in an intelligent storage node.

TABLE 5
FILE ID     DISK ID
File1.MD5   1
File6.MD5   2
File4.MD5   2
File5.MD5   "n"

Table 5 includes a file identifier and a disk identifier. The file identifier, or file ID, stores the unique file handle corresponding to the object file. In one embodiment, the unique file handle is the object fingerprint obtained from performing an MD5 hash function on the contents of the object file. For the first example entry in Table 5, the unique file handle is represented as "file1.MD5." The second column, labeled disk ID, identifies the specific disk drive on the intelligent storage node that stores the object file. For the second example entry in Table 5, the object file, "file6.MD5", is stored on the second disk drive on that intelligent storage node.
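The Table 5 style identification cache amounts to a map from file handle to disk, built by scanning each disk's contents. A minimal sketch, with dictionary shapes assumed for illustration:

```python
def build_file_table(disks):
    # disks maps a disk id to the file handles stored on that disk,
    # e.g. {1: ["file1.MD5"], 2: ["file6.MD5", "file4.MD5"]}.
    table = {}
    for disk_id, file_handles in disks.items():
        for file_handle in file_handles:
            table[file_handle] = disk_id   # Table 5 style: FILE ID -> DISK ID
    return table

table = build_file_table({1: ["file1.MD5"],
                          2: ["file6.MD5", "file4.MD5"],
                          3: ["file5.MD5"]})
```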
On initial start-up of the intelligent storage node, the intelligent storage node builds the file identification table.

The storage cluster also processes upload requests. FIG. 8 is a flow diagram illustrating one embodiment for processing upload requests in the storage cluster. For purposes of nomenclature, the "source", as used herein, refers to the source of the object file for the upload operation. If the storage cluster receives an upload request, then the load balancing fabric 310 (FIG. 3) selects an available DOSM to process the upload request (blocks 805 and 810, FIG. 8). The VFS creates a file identification (e.g., storage system node) and the appropriate directory for the new object file (block 805, FIG. 8). The selected DOSM parses the upload request to extract the certificate, the object file, as well as client and directory information (block 820, FIG. 8). If the upload request does not authenticate, then the DOSM transmits an error message to the source (block 835, FIG. 8). Alternatively, if the upload request does authenticate, then the DOSM selects at least one intelligent storage node to store the object file (block 840, FIG. 8). In one embodiment, the upload operation stores the object file in two storage nodes. The "mirroring" of the object files ensures accessibility to the object in the event a failure occurs in an intelligent storage node. In one embodiment for "mirroring" the object files, the network storage system stores the object file at different geographic locations (e.g., different storage centers). If access to the geographically disparate storage center is unavailable at the time the object file is uploaded, then an additional copy of the file is stored at the local storage center.

In one embodiment, the DOSM uses a state table (FIG. 6) to select the intelligent storage nodes most appropriate to store the new object.
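One plausible selection policy is sketched below. The ranking heuristic is an assumption: the text only says that the state table's health, capacity, and I/O-rate information guides the choice. Here, healthy writable nodes are preferred, ranked by most free capacity and then lightest I/O load, and two are picked for mirroring.

```python
def select_destinations(state_table, count=2):
    # state_table maps a node IP to its state, e.g.
    # {"healthy": True, "writable": True, "free_bytes": 900, "io_per_sec": 40}.
    candidates = [(node, state) for node, state in state_table.items()
                  if state["healthy"] and state["writable"]]
    # Rank by most free capacity, then by lightest I/O load.
    candidates.sort(key=lambda item: (-item[1]["free_bytes"],
                                      item[1]["io_per_sec"]))
    return [node for node, _ in candidates[:count]]
```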
For purposes of discussion, the selected intelligent storage nodes are referred to herein as the "destination intelligent storage nodes." The DOSM establishes a connection with the destination intelligent storage node (block 850, FIG. 8). In one embodiment, the DOSM establishes a DOSP point-to-point connection with the destination storage node. The object file is then transferred to the destination intelligent storage node (block 860, FIG. 8). In addition, after transferring the file to the intelligent storage node, the DOSM receives a status message as part of the DOSP point-to-point protocol. The status message indicates whether the transfer operation was successful.

In one embodiment, the destination intelligent storage node generates a unique fingerprint for the object file (block 870, FIG. 8). Specifically, the destination intelligent storage node computes an MD5 hash of the contents of the object file. The intelligent storage node also verifies the object file. After receiving the successful status at the DOSM, the DOSM establishes a connection to the virtual file system ("VFS"). The DOSM communicates file information (e.g., the 128 bit .MD5 unique object fingerprint, file size, etc.), directory information (e.g., folder ID, parent folder ID, etc.), as well as client information and metadata (block 880, FIG. 8). The VFS attempts to verify the upload. If the VFS does not verify the upload, then an error message is sent to the source of the upload request (blocks 890 and 835, FIG. 8). If the VFS does verify the upload, then the verification is transmitted to the DOSM. In turn, the DOSM verifies the upload to the source (block 895, FIG. 8). Also, the storage system returns, to the source, a file handle that uniquely identifies the file to the network storage system.

If the source of the upload request is an end-user, then the DOSM re-directs the end-user to the client. For example, the DOSM may redirect the end-user to a predetermined URL at the client's web site.
In other embodiments, if the source was a storage port, then the DOSM transmits a storage system node (i.e., handle used only for the storage system) and the unique object file fingerprint.

As discussed above, as part of the upload operation, the network storage system generates a unique fingerprint of the object file. FIG. 9 is a flow diagram illustrating one embodiment for generating unique fingerprints of object files. First, the destination intelligent storage node creates a temporary file with the contents of the object file (block 900, FIG. 9). An MD5 hash calculation is performed on the contents of the temporary file (block 910, FIG. 9). The DOSM determines whether the unique fingerprint, generated from the MD5 hash operation, currently exists in the network storage system (block 920, FIG. 9). If the fingerprint currently exists, the temporary file, which holds the contents of the object, is deleted (blocks 930 and 940, FIG. 9). Also, a reference count associated with the existing fingerprint file is incremented (block 950, FIG. 9). The use of reference counts is described more fully below in conjunction with a discussion of the delete operation. If the fingerprint generated from the temporary file does not exist, then the temporary file is converted to a permanent file, and the unique fingerprint is used to identify the file in the storage cluster (block 960, FIG. 9).

Virtual File System:

In one embodiment, directory operations are performed in the virtual file system ("VFS"). FIG. 11 is a block diagram illustrating one embodiment for implementing a VFS for use with a network storage system. In general, the VFS is the control path for maintaining the network storage system. The VFS maintains, for each object file, the customer file directory including the customer assigned filenames and the unique network storage system file identifiers.
In one embodiment discussed above, the unique network storage system file identifiers consist of a 128 bit digital fingerprint obtained from performing an MD5 hash calculation on the contents of the object file. As shown in FIG. 11, the VFS consists of distributed directory managers ("DDMs") 1110 and distributed directories 1120. There are "n" DDMs and "n" distributed directories, wherein "n" represents any integer one or greater. In one embodiment, each client is mapped to a distributed directory.

The DDMs support common directory operations, such as "open file", "move file", "delete file", "open folder", "move folder", and "create folder." The arrows of FIG. 11 depict multi-directory requests and operations. The requests may originate from the end-user or the client, via a storage port or a web store. In one implementation, the requests to the VFS are transported using HTTP requests and encoded using the extensible markup language ("XML"). Although the VFS is described using the HTTP protocol with XML encoded requests, any network protocol with any type of request format may be used without deviating from the spirit or scope of the invention.

In one embodiment, the VFS employs a database to implement the file system. For the database implementation, each directory operation request is converted into a database operation. Alternatively, the VFS may implement the file system using a local file system (i.e., a file system local to the VFS). For the file system embodiment, files are generated to store the information stored in the database implementation. Also, the DDMs include a lookup table to locate the files in the distributed directories. The files or database tables are replicated in a remote storage center.

The network storage file system consists of files arranged in directories or folders (hereafter referred to as folders). Similar to most file systems, the network storage file system is a hierarchical file system.
In a hierarchical file system, directories or folders are arranged in levels, starting with a root or base folder. Additional folders or sub folders are then arranged under the root folder. The file system may comprise any number of levels, such that additional layers of sub folders fall beneath other sub folders. For purposes of nomenclature used herein, a parent folder to a folder is the folder arranged above the folder in the hierarchy of folders or directories.

FIG. 12 illustrates example database tables for implementing the file system with a database. For the database embodiment, the VFS maintains a customer table 1200, folder table 1210 and file table 1220. The customer table 1200 includes fields for "customer ID", "customer name", and "customer reserved fields." The customer ID is a network storage system identifier used to uniquely identify the client. The customer name is the real name associated with a customer. For the first example entry in the customer table 1200, "customer A" has a customer ID of "1." The customer reserved fields provide storage reserved for use by the client.

The folder table 1210 includes fields for "customer ID", "folder ID", "folder parent ID", and "metadata." For this embodiment, each entry in the folder table corresponds to a folder in the network storage file system. The customer ID, the same customer ID stored in the customer table, uniquely identifies the client. For the example entries in folder table 1210, the customer ID of "3" identifies that the folders have been assigned to "customer C." The folder ID identifies the specific folder for that entry. For example, the first entry in folder table 1210 is for a folder identified by the identification of "2." The third column, labeled "folder parent ID", identifies the parent folder for the folder corresponding to the database entry or row.
For example, the second entry in folder table 1210 is a sub folder to the first entry of table 1210 (i.e., folder "100" is in the next hierarchical level beneath folder "2"). Note that the first entry in folder table 1210 does not have a value for the folder parent ID. This signifies that folder "2" is a root folder.

The file table contains an entry for each object file stored in a network storage file system. The example file table 1220 includes columns or fields for "customer ID", "file handle", "folder ID", "folder parent ID", and "metadata." Again, the customer ID identifies the customer that owns the file. The entries shown in file table 1220 are for files stored by customer C. The file handle field stores the fingerprint that the network file system uses to uniquely identify the file. Although the network file system stores 32 byte hexadecimal character sequences to identify files, for purposes of illustration, file handle entries for file table 1220 are shown as "52.MD5", "55.MD5", "99.MD5", and "67.MD5." The folder ID field identifies the folder that contains the file. For example, the first entry in file table 1220, corresponding to file "55.MD5", is organized or stored in folder 100. The folder parent ID identifies the parent folder to the folder that stores the file. The folder 100, which contains "52.MD5", has a parent folder of "2."

FIGS. 13A and 13B are flow diagrams illustrating one embodiment for performing directory operations in the VFS. When a DDM receives a directory operation request, the DDM parses the request to extract the certificate, an operational code, as well as arguments corresponding to the operational code (blocks 1300 and 1310, FIG. 13A). The operational code specifies the directory operation requested. The DDM, using the certificate and the information contained in the request, validates the request. If the request does not validate, the DDM sends an error message to the requester (blocks 1320 and 1330, FIG. 13A).
Alternatively, if the request does validate, the DDM parses the operational code and extracts the arguments, including the folder on which to perform the operation (blocks 1320 and 1330, FIG. 13A).

In general, if the operation is an "open folder" operation, then the DDM returns all sub folders and files contained in the folder identified by the argument. Specifically, the DDM extracts, from the appropriate distributed directory, all the files and sub folders that correspond to the folder identified as an argument in the "open folder" request (blocks 1340 and 1345, FIG. 13A). Using the example of FIG. 12, if the "open folder" request included the arguments "folder ID=2" and "customer ID=3", then the DDM extracts, from the folder table in the distributed directory, folder IDs 100 and 251 (i.e., folders 100 and 251 are sub folders of the root folder 2). If the "open folder" request included the argument "folder ID=100", then the DDM extracts from the file table file handles "52.MD5" and "55.MD5."

If the operational code in a directory request is for an "open file" operation, subsequent to an "open folder" request, then file information is obtained from the file table (i.e., file handle) and the client table (i.e., client identification) to construct an authentication certificate and an SRL for the file. For the above example, if the argument with the "open file" operation specified the file "52.MD5", then file and client information are obtained to construct the SRL for the "52.MD5" file.

If the operational code in a directory request is for a "move folder" operation, then a database operation is performed to revise the entries in the file and folder tables to reflect the new location of the folder. The "move folder" operation includes, as an argument, the new destination for the folder. Using the example of FIG.
12, if the "move folder" operation specified moving folder ID 166 from a sub folder of folder ID 251 to directly beneath the root folder 2, then the folder parent ID of the fourth entry of folder table 1210 is modified from "251" to "2." Also, for file table 1220, the folder parent IDs for the third and fourth entries are modified from "251" to "2."

If the directory operation is a "create folder" operation, then a new entry or row is generated for the folder table (blocks 1360 and 1365, FIG. 13A). The "create folder" operation includes a parent folder as an argument. As described below, the client's folder name is converted to the network storage system's folder identification. Using the example of FIG. 12, if the requester desires to create a new folder under the sub folder 166, then the DDM assigns a new folder identification for the folder, and enters a new row or entry in folder table 1210 with a folder parent ID of 166.

If the directory operation is a "move file" operation, then a database operation is performed to revise an entry in the file table to reflect the new location of the file (blocks 1370 and 1375, FIG. 13A). The "move file" operation includes a new destination for the file as an argument in the directory request. For the example database tables in FIG. 12, if the "move file" operation specified moving file "52.MD5" from folder 100 to folder 166, then the folder ID and folder parent ID fields for the first entry of file table 1220 are revised to "166" and "251", respectively.

As shown in block 1390 of FIG. 13A, the arguments extracted from the database tables are returned to the requester. In one embodiment, the response from a DDM includes XML encoded documents with the list of files (i.e., in the form of an SRL) and/or directories. For example, in response to the "open folder" request, the VFS returns file and folder IDs for the files and subfolders arranged under the subject folder.

FIG. 13B is a continuation of the flow diagram of FIG.
13A illustrating additional file system operations in the VFS. If the operational code is a "delete folder" operation, then the corresponding folder entry is deleted from the folder table (blocks 1372 and 1374, FIG. 13B). If the operational code designates a "delete file" operation, then the file entry, identified in the operation, is deleted from its file table (blocks 1376 and 1378, FIG. 13B). For a "create file" operation, the VFS adds an entry for a new file in the file table (blocks 1386 and 1388, FIG. 13B). If the operational code specifies an "update folder" operation, then the client metadata in the corresponding folder table for the folder entry is updated (blocks 1386 and 1388, FIG. 13B). For an "update file" operation, the VFS updates client metadata in the table for the corresponding file entry (blocks 1392 and 1394, FIG. 13B). After executing the appropriate database operation, the arguments for the operation are returned to the requester (block 1396, FIG. 13B).

In one embodiment, the network storage system uses a reference count to manage uploading and deleting existing files. In general, when a new file is uploaded to the network storage system or a file request is received to upload an existing file, the reference count is incremented by one. Conversely, when a file request is received to delete a file, the reference count is decremented by one. The network storage system deletes an object file when its reference count reaches zero. For example, a client may transmit a first request to upload an object file, entitled "my file." After the upload operation is complete, the reference count for "my file" is one. Thereafter, a client may transmit a second request to upload "my file." Instead of storing a second copy of "my file", the network storage system increments the reference count of "my file" to "2." For this example, the client may then transmit a first request to delete "my file."
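The reference-count rules just described can be sketched as follows. This is a minimal in-memory illustration; the class and method names are assumptions, not the system's actual interfaces:

```python
# Minimal sketch of reference counting for object files, assuming a simple
# in-memory store keyed by file handle. Names here are illustrative only.

class ObjectStore:
    def __init__(self):
        self.ref_counts = {}   # file handle -> reference count
        self.objects = {}      # file handle -> object data

    def upload(self, handle, data=None):
        if handle in self.ref_counts:
            # A copy already exists: do not store a second copy,
            # just increment the reference count.
            self.ref_counts[handle] += 1
        else:
            self.objects[handle] = data
            self.ref_counts[handle] = 1

    def delete(self, handle):
        # Decrement the count; physically delete only when it reaches zero.
        self.ref_counts[handle] -= 1
        if self.ref_counts[handle] == 0:
            del self.ref_counts[handle]
            del self.objects[handle]

store = ObjectStore()
store.upload("my file")   # reference count is 1
store.upload("my file")   # second upload: count becomes 2, no second copy stored
store.delete("my file")   # count drops to 1; the object is retained
store.delete("my file")   # count drops to 0; the object is deleted
```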
In response to this request, the network storage system does not delete "my file." Instead, the network storage system decrements the reference count to "1." Thereafter, if the client transmits a second request to delete "my file", the reference count is decremented to "0", and the network storage system deletes "my file."

FIG. 14 is a flow diagram illustrating one embodiment for the delete file operation for the network storage system. If the VFS receives a delete request, then a DDM performs a validation check (blocks 1400 and 1405, FIG. 14). If the delete request is not valid, then an error message is transmitted to the requester (blocks 1410 and 1415, FIG. 14). If the request is validated, then the DDM extracts a file handle (i.e., MD5 file handle) from the file table in the database (block 1420, FIG. 14). The DDM deletes the file identification from the file table in the database (block 1450, FIG. 14). In addition, the DDM constructs a delete SRL, and transmits the delete SRL to the storage cluster (block 1460, FIG. 14). In response to the delete operation, the storage cluster extracts the reference count for the corresponding file. If the reference count is greater than one, the storage cluster decrements the reference count by one (blocks 1430 and 1440, FIG. 14). Alternatively, if the reference count is one, the storage cluster decrements the reference count to zero, and deletes the file, identified by the SRL, from the appropriate intelligent storage node (block 1470, FIG. 14).

Dynamic Data Caching:

FIG. 10 is a block diagram illustrating one embodiment for caching data in the storage cluster. As shown in FIG. 10, there are "n" DOSMs. Each DOSM (i.e., DOSM 1, DOSM 2, DOSM 3 . . . DOSM "n") contains a corresponding data cache (i.e., data cache 1, data cache 2, data cache 3 . . . data cache "n"). The network storage system file upload and download operations are received by the load balancing fabric 310 (also see FIG. 3).
A switch, such as an L4 switch, with load balancing capabilities, allocates requests among a pool of resources. For the network storage system, the load balancing fabric 310 efficiently allocates requests among the "n" DOSMs. In one embodiment, when a DOSM transfers an object from the intelligent storage node to a destination, the object is cached in the data cache of the corresponding DOSM. To make room for more recently requested objects, objects are deleted from the data cache under a least recently used ("LRU") caching policy.

Load balancing the DOSMs in the network storage system permits an "automatic" caching of objects in high demand. In prior art systems, elaborate mechanisms are employed to identify data in high demand. Based on these decision mechanisms, data is cached in an attempt to meet the high demand. For example, an object may be in high demand when a movie studio offers, over its web site, a video preview of a newly released or upcoming film. For this example, the movie studio uses the network storage system to deliver the media rich object, "New Film Preview." The "New Film Preview" may be available to the end-user if the end-user "clicks" on a URL in the movie studio's web site. For this example, if the movie is very popular, when the movie studio client offers the "New Film Preview" through its web site, many end-users may attempt to download the rich object, "New Film Preview."

For an initial request to download the object "New Film Preview", the load balancing fabric 310 selects a DOSM to manage the request. For this example, the load balancing fabric 310 selects DOSM 1 to fulfill the request. Assuming DOSM 1 does not currently store the object in its data cache, DOSM 1 acquires the object from the appropriate intelligent storage node. As the object is delivered to satisfy the initial request, the object is stored in the DOSM 1 data cache 1.
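The per-DOSM cache behavior described above, fetch from a storage node on a miss and evict the least recently used object to make room, can be sketched with a small LRU cache. The capacity, class name, and fetch callback are assumptions for illustration:

```python
from collections import OrderedDict

# Simplified sketch of a DOSM data cache with an LRU eviction policy.
# The capacity and the fetch callback are illustrative assumptions.

class DOSMDataCache:
    def __init__(self, capacity, fetch_from_storage_node):
        self.capacity = capacity
        self.fetch = fetch_from_storage_node  # called on a cache miss
        self.cache = OrderedDict()            # object name -> object data

    def get(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)      # mark as most recently used
            return self.cache[name]
        data = self.fetch(name)               # miss: go to the storage node
        self.cache[name] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

cache = DOSMDataCache(capacity=2, fetch_from_storage_node=lambda n: "<%s>" % n)
cache.get("New Film Preview")   # miss: fetched and cached
cache.get("Photos Y")           # miss: fetched and cached
cache.get("New Film Preview")   # hit: served from the data cache
cache.get("BLOB X")             # miss: evicts "Photos Y" (least recently used)
```

Because the load balancing fabric spreads repeated requests for a hot object across different DOSMs, each DOSM running logic like this ends up holding its own copy, which is the "automatic" caching effect described below.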
For this example, the storage cluster receives a second request for the "New Film Preview" object, and the load balancing fabric 310, based on availability, selects DOSM 3 to process the request. Again, assuming DOSM 3 does not currently store the object in its data cache, DOSM 3 obtains the object from the appropriate intelligent storage node, transfers the object to the requester, and stores the object in data cache 3. Similarly, for this example, additional requests are made to the storage cluster to download the "New Film Preview" object. Based on available resources, the load balancing fabric 310 selects, for two separate requests, DOSM 2 and DOSM "n" to handle the two requests. Again, assuming DOSM 2 and DOSM "n" do not currently store the object in their data caches, both DOSMs acquire the "New Film Preview" object from the appropriate intelligent storage node, transfer the "New Film Preview" to the requester, and store the object in their respective data caches (i.e., data cache 2 and data cache "n"). As illustrated by the previous example, if an object is in high demand, the storage cluster, using a load balancing fabric that selects among the different DOSMs, fetches, for storage in each of the DOSM data caches, a copy of the high demand object. Thus, the distribution of DOSM resources results in fast access to a highly requested object.

The dynamic caching of object files in the DOSMs also occurs for object files retrieved from different storage centers. For example, an object file, "New Film Preview", may be stored in an intelligent storage node at storage center 1. In storage center 2, DOSMs receive requests for the object file, "New Film Preview." For this example, the DOSMs in storage center 2 retrieve the object file, "New Film Preview", from storage center 1. Similar to the example provided above, the DOSMs in storage center 2 cache the object file, "New Film Preview."
Thus, object files in high demand are cached in DOSMs globally, as required by demand.

As shown in the example of FIG. 10, each data cache stores potentially different objects depending upon the requests processed by the respective DOSMs. For example, in addition to the "New Film Preview" object, data cache 1 stores "Photos Y" and "BLOB X"; data cache 2 stores "Ad 5" and "Video Snippet 8"; data cache 3 stores "Photos Z" and "Advertisement 10"; and data cache "n" stores "BLOB A" and "Video Snippet 2."

Geographic Replication of Storage Centers:

The network storage system is optimized to support a massive number of simultaneous download transactions. The network storage system relies upon a single virtual directory of all file objects. From any location on the Internet, clients see the exact same view of their private file system. Thus, the network storage system supports simultaneous downloads of a single object that appears identical to users worldwide. In one implementation, the network storage system spans multiple continents with storage repositories or storage centers. Automatic geographic load balancing between storage centers ensures that all requests are directed to the nearest storage center. In addition, to provide fail over and enhanced performance, the storage center, including the storage cluster and VFS, is replicated. The physical replication across multiple locations includes a traffic management service. The traffic management service provides geographic load balancing of user transactions among geographic locations.

FIG. 15 illustrates geographical replication of storage centers. For this example, there is a North American storage center 1510, an Asian storage center 1530, and a European storage center 1520. As shown in the example of FIG. 15, clients and end-users in North America have optimal access to the storage center through the North American storage center 1510.
Also, clients and end-users in Europe have optimal access through the European storage center 1520. Similarly, clients and end-users in Asia have optimal access through the Asian storage center 1530. In this configuration, each storage center is coupled to a wide area network to provide the maximum bandwidth for the delivery of objects. If a particular storage center becomes overloaded with requests, new requests are automatically diverted to the next closest storage center. All objects are geographically mirrored to provide one hundred percent disaster protection. Also, if access to the geographically disparate storage center is unavailable at the time a file is stored, then an additional copy of the file is stored at the local storage center (i.e., the object file is mirrored locally).

The components within the network storage system are fully redundant with automatic recovery. Thus, the system supports an extremely high level of service availability.

Download requests to each geographic storage center are continuously distributed across the DOSMs to deliver the fastest possible response time. In addition, in one embodiment, a global load balancing system ensures that the worldwide load across all storage centers is evenly spread to eliminate any "hot spots" and alleviate transitory demand spikes. The storage system operates far more quickly than the network itself, and thus introduces negligible delay to the overall file transit time. Thus, the worst case elapsed time for an individual object download is primarily determined by the speed of the wide area network used to transfer the object.

All components within the network storage system are replicated and redundant to provide complete recoverability in the event of a failure. In one embodiment, each storage center attaches to multiple network backbone providers to ensure continuous network access.
All files and the control path directory structure are geographically replicated at the time of upload to prevent any possible loss of data. As is described more fully below, the system maintains coherency among disparate storage centers through use of the distributed object storage protocol ("DOSP").

FIG. 16 is a block diagram illustrating one embodiment for replicating the storage centers. For this example, two storage centers, labeled 1510 and 1520, are shown. However, based on the distributed architecture of the network storage system, any number of storage centers may be replicated. Storage centers 1510 and 1520 both include, for the storage cluster, the load balancing fabric 310, distributed object storage managers ("DOSMs") 320, interconnect fabric 330, and intelligent storage nodes 340. Storage center 1510 stores the same object files as storage center 1520. For example, if "object file 1" is stored in storage node "1" of storage center 1510, then "object file 1" is stored in storage node "1" of storage center 1520. For the control path, the storage centers 1510 and 1520 include the virtual file system ("VFS") 50. Similarly, the VFS in storage center 1510 stores the same directory information as the VFS in storage center 1520. Accordingly, the storage centers are replicated. Although the VFS and the storage clusters are shown in the same geographic "storage center", the VFS and storage cluster may be located at geographically disparate locations.

For this example, intelligent storage nodes in storage center 1510 (i.e., storage node 1, storage node 2, . . . storage node "n") are accessed via Internet protocol ("IP") addresses IP addr1, IP addr2, and IP addrn, respectively. Thus, when a DOSM communicates with an intelligent storage node in storage center 1510, the DOSM uses these IP addresses to access the specific intelligent storage node. Storage center 1520 includes storage nodes (i.e., storage node 1, storage node 2, . . .
storage node "n") addressed by IP addresses IP addr1', IP addr2', and IP addrn', respectively. Thus, in storage center 1520, when a DOSM communicates with a storage node, the DOSM uses the corresponding IP address across the interconnect fabric 330. Although the replication of storage centers is described using a TCP/IP network protocol, any network protocol and corresponding addressing scheme may be used to replicate the storage centers.

As shown in FIG. 16, the distributed object storage managers of storage center 1510 are coupled to the interconnect fabric of storage center 1520. Similarly, the distributed object storage managers of storage center 1520 are coupled to the interconnect fabric of storage center 1510. Based on this configuration, the distributed object storage managers of storage center 1510 have access to the intelligent storage nodes of storage center 1520. Likewise, the distributed object storage managers of storage center 1520 have access to the intelligent storage nodes of storage center 1510. As discussed above, each DOSM maintains a lookup table that correlates a file to an IP address (see FIG. 6). For example, if a file specified in a download request resides on storage node 1 in storage center 1510, then an entry of the DOSM lookup table specifies IP addr1. Similarly, in storage center 1520, if a file resides in storage node 1, an entry for the DOSM lookup table specifies IP addr1'.

The storage center architecture supports a "dynamic" fail over. If a storage node, or a disk drive on a storage node, renders a file inaccessible, then the DOSM may obtain the file from the replicated storage center. In one embodiment, to perform "dynamic" fail over, a mapping is stored between intelligent storage nodes in storage center 1510 and intelligent storage nodes in storage center 1520. Table 6 below shows a mapping for the example configuration of FIG.
16.

TABLE 6

  IP Address    IP Address'
  IP Addr1      IP Addr1'
  IP Addr2      IP Addr2'
  . . .         . . .
  IP Addrn      IP Addrn'

For this example, IP addr1 maps to IP addr1'. If there is a failure in storage node 1 in storage center 1510, then DOSMs of storage center 1510 access storage node 1 of storage center 1520 using IP addr1'. In one embodiment, the IP mapping between storage centers is implemented by modifying only the subnet address portion between the two IP addresses mapped. For example, if IP addr1 is 10.3.100.1, then IP addr1' is derived by changing, as appropriate, the subnet portion of the address (e.g., 10.10.100.1).

The directory information stored in the VFS is replicated between storage centers 1510 and 1520 in a similar manner. Thus, if a failure occurs in a distributed directory of storage center 1510, then the distributed directory manager in storage center 1510, using an IP address mapping, accesses the replicated distributed directory in storage center 1520.

In one embodiment, to further implement geographic replication for a fail over mode, if one disk fails, then a DOSM attempts to identify the file in the same node at a different storage center. If a storage node is rendered inoperable, then the DOSM clears the entry in the DOSM file lookup table, and attempts to locate the file at a remote storage center. For example, if disk "2" of storage node "1" in storage center 1510 fails, a DOSM 320 attempts to locate the file in storage node "1", disk "2", in storage center 1520. If the file is not located in storage node "1", disk "2", of storage center 1520, the DOSM, using the multicast protocol, attempts to locate the file locally (i.e., in storage center 1510).
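The subnet-swap mapping of Table 6 can be sketched as follows. Treating the second octet as the subnet portion is an assumption here, patterned after the 10.3.100.1 / 10.10.100.1 example above:

```python
# Sketch of deriving a remote storage center address (IP addr') from a local
# one by replacing only the subnet portion, as in the 10.3.100.1 example.
# Treating the second octet as the subnet portion is an assumption.

def map_to_remote(ip_addr, remote_subnet):
    octets = ip_addr.split(".")
    octets[1] = str(remote_subnet)   # swap the subnet portion only
    return ".".join(octets)

# Failover lookup: on a storage node failure, address the mirrored node.
local_addr = "10.3.100.1"            # IP addr1 (storage center 1510)
remote_addr = map_to_remote(local_addr, 10)
print(remote_addr)                   # "10.10.100.1" (IP addr1', storage center 1520)
```

Deriving the remote address arithmetically, rather than storing a second full lookup table per DOSM, keeps the failover mapping stateless; the table form of Table 6 conveys the same correspondence.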
If the file is not located locally, the DOSM, using the multicast protocol, attempts to locate the file at a remote storage center (e.g., storage center 1520).

Accessing the Network Storage System:

The network storage system has application for use in content delivery networks. In general, content owners and providers often employ the services of a content delivery network. Content delivery networks attempt to optimize the delivery of commonly accessed rich media objects. In order to maximize the delivery of rich media objects, content delivery networks employ local caches at the edges of the wide area network.

The network storage system complements content delivery networks by providing the underlying content for the content origin web site. In one embodiment, each cache in the content delivery network directly accesses the geographically closest storage center to locate the desired object, eliminating the need for the content delivery network to access the content owner's/provider's web site.

FIG. 17 illustrates one embodiment for use of the storage center in a content delivery network. For the example of FIG. 17, the content delivery network 1700 includes an end-user computer 1740 coupled over a network (e.g., Internet) to a content origin web server 1720. The content origin web server 1720 implements or hosts a web site. The web site permits the end-user to select content, such as rich media objects. The content delivery network includes a content delivery network ("CDN") server 1730. The CDN server 1730 delivers content published on the web site by the content origin web server 1720. Specifically, the end-user computer 1740 is coupled to the CDN server 1730 to maximize the delivery of content, including rich media objects associated with the web site, to the end-user.
The CDN server 1730 caches, at the CDN, a portion of the content associated with the web site.

For purposes of illustration, a wide area network 1750 is shown as including satellite communication networks 1760, wireless communication networks 1770, and fiber-optic networks 1780. As illustrated in FIG. 17, the CDN server 1730 is located close to the edges of the wide area network 1750. The location of the CDN server 1730 close to the wide area network 1750 optimizes the delivery of objects cached at the CDN server 1730. For this embodiment, one or more storage center(s) 1710 are coupled to the CDN server 1730. In the event of a cache miss at the CDN server 1730, the CDN server 1730 obtains the content (e.g., object file) from the storage center(s) 1710. This configuration allows the CDN server 1730 to bypass the slower content origin web server 1720 in the event that content, requested by end-user computer 1740, is not located at the CDN server 1730. Accordingly, the storage center(s) 1710 optimize routing of content through the Internet back to the CDN when the desired content is not located in the local cache.

FIG. 18 is a flow diagram illustrating one embodiment for use of the storage center with a content delivery network. The end-user, through the end-user computer, generates an HTTP request to the content origin web server (block 1800, FIG. 18). In response to the user request, the content origin server returns to the end-user computer HTML with embedded file URLs (block 1810, FIG. 18). The embedded file URLs identify the rich media objects stored at the CDN server. To obtain the rich media objects, the end-user computer generates HTTP file requests to the content delivery network (e.g., CDN server 1730) (block 1820, FIG. 18). If the file identified by the URL is located in a cache at the CDN server site, then the CDN server delivers the file to the end-user computer (blocks 1825 and 1850, FIG. 18).
Alternatively, if the file is not cached at the CDN server site, the CDN server generates an HTTP file request to the storage center (blocks 1825 and 1830, FIG. 18). In one embodiment, the HTTP file request includes the network storage system's SRL, to uniquely identify the file. In response to the CDN server's request, the storage center downloads the file to the CDN cache (block 1840, FIG. 18). The CDN server then delivers the file to the end-user computer (block 1850, FIG. 18).

Accessing the Network Storage System Using a Storage Port:

There are multiple ways to access the network storage system. In one embodiment, the client uses a "storage port." The storage port provides access to the network storage system through a standard file system interface (e.g., network file system ("NFS") or Microsoft NT CIFS). The storage port may be configured by the client in various ways for different applications to optimize the delivery of rich media objects. In one embodiment, the storage port is configured at the client site to provide seamless integration from the client site to the network storage system. In another embodiment, to further offload rich media object traffic from a web site, the storage port may be used as a file system manager that downloads files to the end-user directly from the network storage system. In other embodiments, the network storage system may be directly interfaced with a private file structure.

The storage port device provides a transparent gateway connection into the network storage system. In one application, the storage port device is installed at the client site, and interfaces to local web servers via standard NFS or CIFS protocols over a local area network ("LAN") connection. Specifically, in one embodiment, the user mounts the storage port as a storage device on the client network.
In this configuration, the storage port effectively provides the user with a virtual NFS or CIFS file system with storage capacity at the storage center (i.e., provides the user with hundreds of terabytes in storage capacity). In one embodiment, the storage port device occupies only approximately 1.75 inches of rack height. As described more fully below, multiple storage ports may be installed at a single client site to increase aggregate throughput.

FIG. 19 illustrates one embodiment for use of the storage port in the network storage system. An end-user 1900 communicates with a client site 1910 over a wide area network 1920. The end-user computer 1900 generates requests (e.g., HTTP requests) for files accessed through the client's web site. A content web server 1925, located at the client site 1910, processes requests to the client web site, including requests to download rich media objects. Content web server 1925 is intended to represent a broad category of computers and software used to implement a web site, such as multiple web servers and/or application servers, and any hardware/software configuration may be used without deviating from the spirit or scope of the invention.

The content web server 1925 is coupled to the storage port 1930 over a network, such as a local area network at the client site 1910. Specifically, the content web server 1925 generates file and directory operation requests in accordance with the format of the "local" file system. As used herein, a "local" file system connotes one or more file systems or file structures used at the client site. For example, the content web server 1925 may generate NFS or Microsoft NT CIFS requests for files and directory operations. To interface the storage port 1930 with the content web server 1925, the storage port 1930 is mounted as a storage device. In one embodiment, one directory is mounted for object files and a second directory is mounted for SRLs. As shown in FIG.
19, the storage port 1930 communicates with the storage center 1950 to conduct file and directory operations.

FIG. 20 is a flow diagram illustrating one embodiment for use of a storage port to deliver content. The client site receives a URL file request from an end-user computer (block 2010, FIG. 20). The URL identifies an object file associated with the client's web site. In response to the end-user's URL file request, the client site (e.g., content web server) generates a local file system request for the object file (block 2020, FIG. 20). The local file system request is received by the storage port. The storage port includes a cache to store both object files and directory information. If the object file is stored locally in the storage port, then the storage port retrieves the object file from the data cache, and returns the object file to the content web server in response to the local file system request (blocks 2030, 2040, and 2070, FIG. 20). Alternatively, if the storage port does not store a copy of the object file in its data cache, then the storage port requests the object file from the storage center (blocks 2030 and 2050, FIG. 20). In response to the storage port's request, the storage center downloads the object file to the storage port, and the object file is returned to the content web server (blocks 2060 and 2070, FIG. 20). Thereafter, the content web server delivers the object file to the end-user in response to the URL file request (block 2080, FIG. 20).

The storage port may be implemented in either hardware or software. FIG. 21a illustrates one hardware configuration for a storage port device. As shown in FIG. 21a, the content web server 2100 communicates with the storage port 2110 over a communications link 2120, such as a local area network. The storage port 2110 conducts file and directory operations with storage center 2130.

FIG. 21b illustrates embodiments for implementing the storage port in software.
In one embodiment, the network storage system is accessed through library calls or through application program interface ("API") calls. For these embodiments, the software provides translation between the client's local file system and the network storage file system. As discussed above, the storage center 2160 includes software running on computers for performing the functions of the VFS and intelligent storage clusters. This software includes entry points (i.e., APIs) to permit interfacing of external software. In part, the APIs on the storage center software permit the client to conduct file and directory operations as described herein. As shown in FIG. 21b, content web server 2140 runs, in addition to software to operate the client site, software to call APIs in the network storage center. Thus, for this embodiment, the content web server 2140 executes network storage system file and directory operations over the wide area network 2180 through remote program calls.

In another embodiment, shown as storage system library calls 2155, a customized network storage system library includes a collection of file system operations. For example, one library function may permit software operating at the client (e.g., on content web server 2140) to request an object file download from the storage center through use of the library function. For this example, to perform the file download operation, the client software calls the file download function and passes the SRL as an argument to the function call. A library of functions provides an additional means to interface client software to directly access the network storage system.

FIG. 22 is a block diagram illustrating one embodiment for a storage port. As shown in FIG. 22, a storage port 2200 includes a processing core 2210, memory 2230, storage port data store 2240, and network interface(s) 2220. These components are coupled via a bus transport 2250 that may include one or more busses (e.g., ISA, PCI, or microprocessor buses).
Processing core 2210 includes one or more central processing units ("CPUs"). In one embodiment, the storage port includes two CPUs. Memory 2230 is used to store, during operation of the device, software to perform the functions of the storage port described herein. The storage port data store 2240 contains one or more hard disk drives (i.e., "n" hard disk drives, wherein "n" is any number one or greater), used, in part, to cache file system information (i.e., directory cache) and object files (i.e., data cache). The network interface(s) 2220, which include "n" network interface cards, couple the storage port 2200 to client devices (e.g., content web server). In addition, to support a fail over architecture, the network interface cards are used to connect one or more storage ports together. In one embodiment, the storage port includes three network interface cards.

FIG. 23 is a block diagram illustrating one embodiment for file system translation in the storage port. The network storage system issues "file handles" unique to the network storage system. In one embodiment, a network storage system file handle identifies, for a corresponding file: a) client identification; b) parent directory; c) metadata; and d) the unique digital fingerprint (i.e., the 128 bit MD5 identification). In general, the file system translation software 2300 converts local file system operations to network storage system file system operations. In one embodiment, to perform this function, the software includes file system translator 2320 and storage system access processes 2330. The file system translator 2320 includes local file system interception 2340 and storage system kernel processes 2350.

In operation, local client file system 2310, which may include operating system software running at the client's site, issues local file system operations. For example, the client software may issue requests, in accordance with UNIX or Microsoft NT, to open a file.
The file open operation includes a file descriptor that identifies the file in the local file system. Typically, file system calls are processed by the operating system kernel (labeled 2360 in FIG. 23). The operating system kernel software maintains a mapping between file descriptors and directories to "inodes." The inodes provide the system a physical pointer to the file data in the system (e.g., a pointer to the file stored on a hard disk drive).

For the embodiment of FIG. 23, when the local client file system 2310 issues a file system operation, local file system interception 2340 "traps" or intercepts the call, and passes the thread of execution to the storage system kernel processes 2350. In one embodiment, the local file system interception 2340 comprises CODA software, developed at Carnegie Mellon University. In general, CODA is a type of distributed file system. A portion of the functionality provided by the CODA software exports an underlying file system. Specifically, CODA exports file system operations, typically executed at the kernel level, to application programs accessible in the user portion of memory. Although file system translation is described using CODA to intercept local file system operations, any software that intercepts file system calls may be used without deviating from the spirit or scope of the invention.

In general, the storage system kernel processes 2350 obtains network storage system file handles (referred to herein as "storage handles") for storage in operating system kernel 2360 to provide a mapping between local file system descriptors and storage handles.
Thus, the file descriptors provide a handle to identify files and directories in the local file system, and the storage handles provide a handle to identify files and directories in the network storage system.

To maintain the mapping between local file system descriptors and storage handles, the storage system kernel processes 2350 obtains network storage file system information from storage system access processes 2330. Specifically, storage system kernel processes 2350 obtains from storage system access processes 2330 storage handles and directory information. As shown in FIG. 23, storage system access processes 2330 obtain directory and storage handle information from directory cache 2370. Alternatively, if directory and storage handle information is not cached at the storage port, storage system access processes 2330 query the network storage system (i.e., VFS) to obtain directory information and storage handles. Accordingly, the translation system 2300 provides a mapping between the client's local file system and the network storage file system.

FIG. 24 is a flow diagram illustrating one embodiment for translating a file system operation from a local file system to the network storage file system. The process is initiated by the client issuing a local file system request (block 2400, FIG. 24). The local file system request is received by the operating system kernel, and dispatched to the file system translator (FIG. 23). For example, if the file system operation is an open file operation for the file "foo.txt", then the operating system kernel dispatches the open file operation with the file name "foo.txt" as an argument to the file system translator.
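The descriptor-to-handle mapping described above can be sketched as a small table kept in kernel state. This is an illustrative sketch only; the class and field names are assumptions, not the patent's actual data structures:

```python
# Sketch of the mapping between local file descriptors and network
# storage handles maintained by the storage system kernel processes.
# All names are illustrative assumptions.
class HandleTable:
    def __init__(self):
        self._next_fd = 3          # descriptors 0-2 reserved for stdio
        self._fd_to_handle = {}

    def register(self, storage_handle):
        """Assign a local file descriptor for a storage handle."""
        fd = self._next_fd
        self._next_fd += 1
        self._fd_to_handle[fd] = storage_handle
        return fd

    def resolve(self, fd):
        """Translate a local descriptor back to its storage handle."""
        return self._fd_to_handle[fd]
```

The local file system sees only the descriptors; the storage system access processes see only the storage handles.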
If the file system operation is an "Open Folder" operation for the folder "dir1", then the operating system kernel dispatches the open folder operation with the folder name "dir1" as an argument.

The process determines whether there is sufficient directory information in the storage port directory cache (block 2430, FIG. 24). For the "Open Folder" example above, if the storage handles for all subfolders and files are not stored in the directory cache, then additional directory information is required to fulfill the request. For the "Open File" example, if the storage port has been recently initialized and thus does not contain information on the file, then additional directory information on the file (e.g., "foo.txt") is required to open the file.

If there is sufficient directory information in the directory cache, and the file system operation does not require retrieving data (i.e., the file system operation is not an "open file" operation) or updating directory information, then the appropriate directory information from the directory cache is retrieved and returned in response to the local file system operation (blocks 2435 and 2437, FIG. 24). For the "Open Folder" example above, storage handles for all subfolders and files in the subject folder are retrieved from the directory cache, the storage handles and corresponding file identifiers are stored in the operating system kernel, and the file identifiers are returned to the local file system.

If additional directory information is required (i.e., the information is not in the storage port directory cache), then a request is generated to the VFS for the additional directory information (block 2070, FIG. 24). In one embodiment, the storage port generates an XML encoded request.
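The directory-cache-first lookup described above can be sketched as follows: serve the request from the storage port's directory cache when the cached information is sufficient, and otherwise fall back to a VFS query and cache the result. The function and parameter names are assumptions for illustration:

```python
# Sketch of the cache-first directory lookup: directory_cache maps a
# folder name to its list of storage handles; query_vfs stands in for
# the XML-encoded request to the VFS. Names are assumptions.
def open_folder(folder, directory_cache, query_vfs):
    """Return storage handles for a folder's files and subfolders."""
    if folder in directory_cache:
        return directory_cache[folder]      # sufficient cached info
    entries = query_vfs(folder)             # XML request to the VFS
    directory_cache[folder] = entries       # store for later requests
    return entries
```

On a repeat "Open Folder" request the VFS is not contacted again; the cached handles are returned directly.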
For the "Open Folder" example, if the storage handles and corresponding file identifiers are not stored in the directory cache, then the storage port generates an XML encoded "Open Folder" request to extract file and folder information for files and subfolders within the subject folders (i.e., the folder that is the subject of the "Open Folder" request). In one embodiment, in response to a request for folder information, the VFS returns name, folder identification, client metadata, upload SRL, and parent folder identification. In response to a request for file information, the VFS returns name, file identification, client metadata, download SRL, and parent folder identification. In one embodiment, the client metadata fields are used to track and maintain state information used in the local file system (e.g., information for UNIX, Microsoft Windows or NT, etc.). In addition to obtaining additional directory information, if the client local file system command is a directory operation (i.e., "move folder", "delete folder", etc.), then an XML request to the VFS is generated to perform the directory operation in the VFS. The directory information is received and stored in the directory cache (block 2480, FIG. 24).

If the file system operation requires file data (e.g., open file, read file, etc.), then the storage port determines whether the file is located in the data cache (block 2440, FIG. 24). If the file is stored in the data cache, then the file, or appropriate portion, is transferred from the storage port to the client requestor (block 2090, FIG. 24). Alternatively, if the file is not in the data cache, then the storage port generates a file download request to the storage cluster (block 2050, FIG. 24). In response to the storage cluster request, the storage port receives and subsequently caches the object file in the data cache (block 2060, FIG. 24). The object is then transferred from the storage port to the client requestor (block 2090, FIG. 24).

End User Network Storage System Access Method:

In another embodiment, the storage port supports file downloads directly to the end-user or through a CDN partner. In one embodiment, the SRLs are directly embedded into the Web page HTML, and are sent to the end-user. This results in transferring objects directly from the storage center to the end-user browser.

FIG. 25 is a block diagram illustrating one embodiment for using the storage port to directly download object files to the end-user. For this configuration, an end-user computer 2610 communicates with a client site 2620 and the storage center 2650. The client site 2620 maintains a web site. For this embodiment, the client site 2620 maintains a web site through a content web server 2630. However, any configuration of servers, including remote web site hosting, may be used without deviating from the spirit or scope of the invention.

The content web server 2630 communicates with the storage port 2640, and in turn, the storage port 2640 communicates with the storage center 2650. As illustrated in FIG. 25, the end-user, through end-user computer 2610, generates URL requests to the client site 2620, and receives, in return, HTML with one or more embedded SRLs. Using the embedded SRLs, the end-user computer 2610 generates SRL requests directly to the storage center 2650 over a wide area network 2660. In response, the storage center 2650 serves object files directly to the end-user computer 2610.

FIG. 26 is a flow diagram illustrating one embodiment for directly downloading object files to an end-user. The client site (e.g., content web server) generates local file system requests for SRL(s) corresponding to file(s) (block 2700, FIG. 26). The file(s) contain content that the client desires to embed in the web page. In one embodiment, the storage port dynamically generates the SRL(s) in response to the request from the content web server (block 2710, FIG. 26).
In one embodiment, a time-out parameter is added to the SRL(s) (block 2720, FIG. 26). The time-out parameter permits a client to specify a period of time that the SRL is valid (i.e., a period of time that the end-user may access the file). In one implementation, the time-out parameter specifies a period of time with a granularity in seconds.

The SRL(s) are embedded in the HTML of the client's web page (block 2730, FIG. 26). The end-user issues web page requests to the client site (block 2740, FIG. 26). The content web server then downloads the requested HTML with the embedded SRL(s) (block 2745, FIG. 26). With the embedded SRL, the end-user generates HTTP requests to the storage center (block 2750, FIG. 26). If the SRL(s) do not authenticate at the storage center, then the storage center transmits an error message to the end-user (block 2755, FIG. 26). If the SRL(s) do authenticate, then the time-out parameter is checked to determine whether the file access is valid (block 2760, FIG. 26). If the SRL is not valid (i.e., the time-out parameter is out of range), then the operation ceases (block 2760, FIG. 26). If the SRL is within the specified time range, then the storage center downloads the object file to the end-user (block 2770, FIG. 26).

The storage port 2640 acts as a file system cache. For this embodiment, the storage port contains the client's SRL files stored in a standard NFS or CIFS directory format. Each NFS or CIFS file contains the corresponding SRLs, and the SRLs contain the unique file identifier and the SRL authentication certificate.

In one embodiment, to deliver the SRLs to the end-user, the network file system utilizes a second directory, in addition to the directory for the object files, that shadows the object file directory. The client uses the second directory to obtain shadow files. A shadow file contains an SRL to identify an object file of the network storage system.
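The time-out check described above can be sketched as a simple expiry comparison. The SRL layout (a dictionary with an `expires_at` field) is an assumption for illustration; the patent specifies only that the granularity is in seconds:

```python
# Sketch of the SRL time-out check applied before serving a file.
# The SRL field name "expires_at" is an illustrative assumption.
import time

def srl_is_valid(srl, now=None):
    """An SRL is served only while its expiry timestamp lies ahead."""
    now = time.time() if now is None else now
    return now <= srl["expires_at"]
```

If the check fails, the storage center ceases the operation instead of downloading the object file.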
In one embodiment, to embed the SRL into the web page HTML, the client reads the contents of the shadow file for the corresponding object file. In one embodiment, the shadow file is generated during an upload operation. The client may access a shadow file by mounting the second directory. For example, a client may specify, for the file "foo.text", the following directory-filename: storagefilesystem:/export/dir/foo.text. The client uses this directory and filename to access the contents of the object file, "foo.text." To obtain the SRL for the example file "foo.text", a client mounts a different directory, such as the following example directory: storagefilesystem:/SRL/dir/foo.text, wherein the SRL file contains a unique file identifier and the SRL authentication certificate for the file, "foo.text." To deliver the SRL to the end-user, the client reads the contents of a shadow file for the corresponding object file, and publishes the SRL to the user.

Client Private File System Directory:

The network storage system of the present invention also supports using an existing private file directory to access the storage system. For this embodiment, the network storage system customer (e.g., client) may desire to use their own file structure in conjunction with the network storage system's file system. In other embodiments, a client of the network storage system may wish to develop a file system to track additional information beyond that information tracked using NFS or CIFS.

FIG. 27 is a block diagram illustrating one embodiment to interface a storage center to a client's private file directory system. In one embodiment, the storage port at the client site 2820 is replaced with a private file manager 2840. For this embodiment, the private file manager 2840 generates SRLs for object files using a unique file identification assigned to the user file at the time of upload, as well as using a shared secret to authenticate file system operations. As shown in FIG.
27, the content web server 2830, operating at the client site 2820, generates file system requests to the private file manager 2840. In turn, the private file manager 2840 issues SRLs corresponding to the object files that are the subject of the request. In one embodiment, the client supplies their own unique ID at the time the client uploads files to the storage center. In another embodiment, the client utilizes, in requests to download files, the object fingerprint returned by the storage center.

As shown in FIG. 27, the end-user, through end-user computer 2810, generates URL requests to the client's web site. In turn, the client site 2820 returns HTML with embedded SRLs. With the embedded SRLs, the end-user computer 2810 generates SRL requests, over a wide area network 2860, to the storage center 2850. In turn, the storage center 2850 serves object files identified by the SRL.

FIG. 28 is a flow diagram illustrating one embodiment for accessing object files in a storage center using a client's private file system. The end-user issues the URL requests to the client web site (block 2900, FIG. 28). In response, the client (e.g., content web server) generates file location requests to a file manager (block 2910, FIG. 28). In general, the file manager services requests to issue SRLs corresponding to files in the client's private file system. A client may use any type of file system in conjunction with the network storage system. All that is required is that the client's private file system issues SRLs for files managed by the client's private file system. The file manager retrieves the SRL for the file associated with the HTML, and delivers the SRL to the content web server (block 2920, FIG. 28). The content web server then transmits to the end-user HTML with the embedded SRL (block 2930, FIG. 28). Thereafter, the end-user generates HTTP requests to the storage center with the SRL (block 2940, FIG. 28).
If the SRL does not authenticate, then the storage center issues an error message to the user. Alternatively, if the SRL authenticates, then the storage center generates an MD5 hash on the client supplied unique file ID to identify the file (block 2947, FIG. 28). The storage center thereafter downloads the object file to the end-user (block 2950, FIG. 28).

For the client's private file system access method, the client maintains a mapping between unique filenames and SRLs. In one embodiment, the unique filename is not obtained from an MD5 hash operation, but is a unique filename. Thus, the network storage system utilizes a technique to differentiate between MD5 file names, derived from the contents of the object file, and client unique file names. In one embodiment, to differentiate between these two types of file names, the network storage system assigns different storage fingerprint identifiers. For a filename generated by an MD5 hash operation on the contents of the object file, the file is designated "128 bits.MD5." To identify a customer unique filename, the file is designated as "MD5.UFID" (i.e., where "MD5" is the MD5 hash of the client's unique file name). This convention permits the network storage system to differentiate between the two types of file identifiers, and allows the customer to interface with the network storage system by only designating unique file names.

Failover Architecture:

In one embodiment, the storage port supports failover or failsafe architectures. FIG. 29 is a block diagram illustrating one embodiment for a storage port fail over configuration. For purposes of explanation, FIG. 29 illustrates a fail over configuration with two storage ports. However, the storage port fail over configuration may be extended to any "2N" fail over configuration. For this embodiment, the fail over configuration includes an active storage port 3010 and a passive storage port 3020. Each storage port includes a plurality of network interface cards.
Both the active storage port 3010 and passive storage port 3020 communicate to storage center(s) over wide area network 3065, through network interface cards 3045 and 3025, respectively. The active storage port 3010 and passive storage port 3020 also communicate to the client site network via network interface cards 3050 and 3035, respectively. As shown in FIG. 29, the client accesses the active storage port 3010 over client site network 3060 using IP Addr.

For the embodiment of FIG. 29, a third network interface card is contained on both the active storage port 3010 (3055) and passive storage port 3020 (3030) to communicate between the devices for fail over monitoring. The active storage port 3010 operates as the current storage port at the client site. The passive storage port 3020 monitors the health of the active storage port 3010. Specifically, active storage port 3010 includes health monitoring 3070 that continually executes a process to ascertain the health of the active storage port 3010 (e.g., health of the CPUs, hard disk drives, etc.). For this embodiment, the passive storage port 3020 queries the active storage port 3010 for health status. If a condition occurs in the active storage port 3010 that warrants a fail over condition, then the passive storage port 3020 becomes the active storage port (i.e., the passive storage port is used to interface the client site to storage center(s)).

In one embodiment, to support fail over, one IP address is used for the NFS/CIFS export. For this embodiment, a standard IP switch over scheme may be utilized. Specifically, when a fail over condition occurs, the passive storage port 3020 assumes the IP address of the active storage port 3010. The health monitoring 3070 and 3080 include both active and passive processes, so that if a fail over condition occurs, the passive storage port may execute the active storage port process.

FIG. 30 is a flow diagram illustrating one embodiment for a storage port fail over process.
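The passive port's health poll and IP takeover described above can be sketched as follows. The `query_health` and `assume_ip` callables are illustrative assumptions standing in for the health-status query and the standard IP switch-over:

```python
# Sketch of the passive storage port's monitoring step: poll the
# active port's health status and, on failure, assume its IP address.
# Both callables and the status strings are illustrative assumptions.
def monitor_active(query_health, assume_ip, active_ip):
    """Run one poll cycle; return this port's resulting role."""
    status = query_health()
    if status != "healthy":
        assume_ip(active_ip)   # standard IP switch-over scheme
        return "active"        # passive port becomes the active port
    return "passive"
```

Because a single IP address is used for the NFS/CIFS export, clients continue addressing the same IP after the takeover.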
When a storage port fail over occurs, the new storage port does not contain any directory information in its directory cache or any objects in its data cache. Thus, after a fail over operation, if a file is open and the storage port receives a read file request, the new storage port must execute a file open operation (blocks 3130 and 3140, FIG. 30). After the storage port receives the file identification information (e.g., SRL), the storage port generates a request to the storage center to obtain the object file, in order to transmit a block of object data in response to the read file request.

After a fail over condition, when a file is requested (block 3120, FIG. 30) or an open file operation is necessary, the storage port generates an XML request to the VFS to obtain file identification information (block 3150, FIG. 30). In response, the VFS returns file identification information (block 3160, FIG. 30). With the file identification information, the storage port updates its directory cache (block 3170, FIG. 30). With the file identification information (e.g., SRL), the storage port generates a request to the storage center for the object file (block 3180, FIG. 30). In response, the storage center delivers the object file, and the storage port updates its data cache (block 3190, FIG. 30). If the storage center download operation was in response to a read request to the storage port, the storage port delivers data as specified in the read request.

Network Storage System Dynamic Failover:

In one embodiment, storage nodes monitor the health of their respective nodes (e.g., monitor hard disk drives, processor, network access, etc.). If the health of a storage node requires that the storage node should cease operation, then the storage cluster executes a fail over operation. In one embodiment, in a fail over operation, the storage node reports the failed status to the DOSMs, and the DOSMs update their state table.
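The DOSM state-table update described above can be sketched as follows; the table layout (node id mapped to an entry with an availability flag) is an assumption for illustration:

```python
# Sketch of a DOSM state table marking a failed storage node out of
# use, so later requests are directed only to surviving nodes.
# The table layout is an illustrative assumption.
def mark_node_failed(state_table, node_id):
    """Record that a storage node is no longer in use."""
    state_table[node_id]["available"] = False

def usable_nodes(state_table):
    """Nodes still eligible to serve replicated files."""
    return [n for n, e in sorted(state_table.items()) if e["available"]]
```

After the update, a download request for a file previously stored on the failed node is resolved against the remaining nodes.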
If this occurs, the DOSMs attempt to locate the replicated file at a different storage node (i.e., either locally or remotely).

FIG. 31 is a flow diagram illustrating one embodiment for using the multicast protocol after a storage node fail over condition. If a storage node fails, then the DOSMs update their state tables to indicate that the storage node is no longer in use (blocks 3210 and 3220, FIG. 31). If the DOSM receives a file request for a file previously stored on the failed storage node, then the DOSM, which received the download request, issues a multicast protocol request to the storage nodes (blocks 3225 and 3230, FIG. 31). In one embodiment, the DOSM may issue the multicast protocol request to local storage nodes (i.e., storage nodes located at its storage center).

Each storage node that receives the multicast request determines whether it contains the requested object file (block 3240, FIG. 31). If none of the storage nodes contain the object file, then the DOSM may issue another multicast protocol request at a remote storage location (blocks 3245 and 3247, FIG. 31). Again, at the remote storage center, each storage node determines whether it contains the requested object file (block 3240, FIG. 31). In another embodiment, if the DOSM does not locate the file using the multicast protocol, the DOSM may query each individual storage node using the DOSP point-to-point protocol.

When a storage node locates the requested object file, the storage node broadcasts the file identification information using the multicast protocol (block 3250, FIG. 31). Each DOSM snoops, using the multicast protocol, to receive the file identification information (block 3260, FIG. 31). As illustrated in the process embodiment of FIG.
31, the multicast protocol may be used to synchronize file location information in the DOSMs in the event of a fail over condition.

Multi-Cast Protocol:

The multi-cast protocol of the present invention supports the maintenance of file information in a distributed storage system. Since the network storage system consists of a plurality of storage nodes, the multicast protocol is used to track file information and synchronize file information throughout the network storage system. The tracking and maintaining of file and directory information includes maintaining information throughout geographically disparate storage centers. In one embodiment, the multi-cast protocol synchronizes cache information in the DOSMs. For example, if a new object file is loaded, the multi-cast protocol provides a means for all DOSMs in the network storage system to obtain information necessary to access the new object file. In addition, some file operations, including delete file or update file operations, require updating the DOSM lookup tables. Also, if a storage node fails, and a fail over condition is executed, the multi-cast protocol provides a means for the DOSMs to locate the file at the storage node to which the file has been replicated.

The Distributed Object Storage Protocol (DOSP):

In one embodiment, the DOSP includes daemon/master services and multicast-based monitoring communications. Communication between the daemon and master components is accomplished through a set of "request packets" and "response packets." The request packets consist of three major subcomponents: an opcode that specifies the type of request; a header implemented via a C++ specific structure that provides information about the data that follows; and the data transmitted, if any.

Each operation has an associated operation code and a pair of structures: one for issuance of the request, and a second separate structure for return values.
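The opcode plus In/Out structure pairing described above can be sketched as follows. The opcode values, field names, and handler are all illustrative assumptions (the patent implements the structures in C++; Python dataclasses stand in here for consistency with the other sketches):

```python
# Sketch of a DOSP operation: an opcode, an "In" structure issued with
# the request, and a separate "Out" structure carrying return values.
# Opcode assignments and field names are illustrative assumptions.
from dataclasses import dataclass

OP_NULL, OP_STORE_FILE, OP_RETRIEVE_FILE = 0, 1, 2   # assumed opcodes

@dataclass
class StoreFileIn:      # issued with the request
    file_id: str
    size: int

@dataclass
class StoreFileOut:     # returned with the response
    status: str         # e.g. "SUCCESS" or "FAILURE"

def handle_store(in_struct):
    # A dosd service handler consumes the In structure and answers
    # with the matching Out structure indicating request status.
    ok = in_struct.size >= 0
    return StoreFileOut(status="SUCCESS" if ok else "FAILURE")
```

Adding a new DOSP operation then amounts to a new In/Out pair, a new opcode, and a handler on each side.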
Once the receiver has received and processed the request (sent data, deleted file, etc.), it then sends a response consisting of the appropriate "Out Structure" indicating the status of the request (SUCCESS, FAILURE, etc.) and any required return values. Currently, there are six service operations supported by the DOSP: null, store file, retrieve file, retrieve file range, delete file, and get contents.

The null operation provides a framework to develop future modifications of the protocol and to test basic functionality of the master/daemon request/response interaction.

When a file is ready for storing, the DOSM client sends a request id, followed by a request header. It then sends the data to the dosd in a series of chunks, each of which is preceded by a DosdStoreHeader which gives the size of the next chunk to be read, and a field indicating whether this is the last packet to be sent.

When a file is being retrieved from the Storage Cluster, the DOSM client sends a request id, followed by a request structure. The dosd responds by first sending the size of the data, the data requested, and finally an Out structure with the return value of the operation.

The get contents operation is used to acquire the contents of the storage node as a character based stream. After the "In Structure" is passed to the dosd, the dosd first returns the length of the stream of md5 hash/node&disk associations, followed by the stream of data, with the "Out structure" coming last.

The DOSP provides an extensible framework for any new services or additional functionality. There are essentially three steps to adding new functionality: defining a new pair of In/Out structures and assigning a new opcode; implementing a handler in the DOSM client; and adding a service handler for the dosd.

To facilitate gathering of information about the system, the DOSP provides several multicast-based services.
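The chunked store framing described above (each chunk preceded by a header giving the size of the next chunk and a last-packet flag) can be sketched as follows. The wire layout chosen here, a 4-byte size and 1-byte flag in network byte order, is an assumption; the patent does not specify the DosdStoreHeader's binary layout:

```python
# Sketch of DosdStoreHeader-style chunk framing for a store-file
# transfer: each frame is header (chunk size + last-packet flag)
# followed by the chunk bytes. The binary layout is an assumption.
import struct

HEADER = struct.Struct("!IB")   # 4-byte chunk size, 1-byte last flag

def frame_chunks(data, chunk_size):
    """Split data into header-prefixed frames for transmission."""
    frames = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        last = 1 if off + chunk_size >= len(data) else 0
        frames.append(HEADER.pack(len(chunk), last) + chunk)
    return frames
```

The receiver reads one header, then exactly that many bytes, repeating until the last-packet flag is set.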
In one embodiment, these services work in a manner very similar to the non-multicast aspect of the protocol. Specifically, requests consist of three parts: an opcode; a request In structure; and any additional data.

Responses consist of a response structure containing a RETURN value and any other return values required to satisfy the request. If data is streamed, a size field precedes the data, followed by the data, and then followed by the Out structure.

Since multicast traffic occurs on a completely separate port from point-to-point dosm/dosd traffic, the multicast In/Out structures are not multicast-specific. This makes it possible for the DOSM to query the entire dosd storage cluster or to query an individual machine with the same request/response structures and their associated operational sequencing.

One of the jobs of the DOSM is to monitor the current state of nodes in the cluster. There are several tools to facilitate this task. Primarily, the various dos daemons multicast heartbeats on a specific multicast port and group. The DOSM contains an option to query a specific disk, or all of the disks on a given storage node. A "get disk state" function returns a value, and an array of disk state values (online, offline, down) with one entry per disk. A "get disk status" function contains an option to query a specific disk, or all of the disks on a given node. The "get disk status" contains a RETURN value, and an array of disk statistics; one array per statistic (bytes free, bytes available, inodes used, inodes available, number of outstanding ops), with one entry per disk. The DOSP includes a load balancing function. The DOSP includes a heartbeat function.
This allows querying specific machines for a heartbeat in addition to providing system-wide tracking functionality via multicast methods.

Although the present invention has been described in terms of specific exemplary embodiments, it will be appreciated that various modifications and alterations might be made by those skilled in the art without departing from the spirit and scope of the invention.
The present invention provides, in one embodiment, a method of forming a metal layer over a semiconductor wafer. The method includes the chemical reduction of copper oxide (105) over the deposited copper seed layer (110) by exposure to a substantially copper-free reducing agent solution (120), such that the copper oxide (105) is substantially converted to elemental copper, followed by electrochemical deposition of a second copper layer (125) over the copper seed layer (110). Such methods and resulting conductive structures thereof may be advantageously used in methods to make integrated circuits comprising interconnection metal lines.
What is claimed is:

1. A method of forming a metal layer over a semiconductor wafer, comprising: exposing a copper oxide on a copper seed layer located over a semiconductor substrate to a substantially copper-free reducing agent solution, such that said copper oxide is substantially converted to elemental copper, wherein said substantially copper-free reducing agent solution comprises a reducing agent selected from the group consisting of formate, formaldehyde, dimethylamine borane, ammonium hypophosphite, and hydrazine sulfate; and electrochemically depositing a second copper layer over said copper seed layer.

2. The method as recited in claim 1 wherein said exposing and said electrochemically depositing are conducted in a same deposition tool.

3. The method as recited in claim 1, further includes drying said copper seed layer after said exposing said copper seed layer to a substantially copper-free reducing agent solution, wherein said drying and said exposing are performed in the same tool.

4. The method as recited in claim 1, further includes a time window between said exposing and said electrochemically depositing of less than about 4 hours.

5. The method as recited in claim 1, wherein said reducing agent has a reduction potential that is less than a reduction potential of a copper-containing solution having essentially the same composition as the copper-free reducing agent solution.

6. The method as recited in claim 1, wherein said substantially copper-free reducing agent is dissolved in an aqueous solution.

7. The method as recited in claim 1, wherein said substantially copper-free reducing agent has a concentration less than about 1 molar.

8. The method as recited in claim 1, wherein a thickness of said copper seed layer after said exposing to said substantially copper-free reducing agent is substantially the same as compared to a deposited copper seed layer.

9.
The method as recited in claim 1, wherein said thickness of said copper seed layer after said exposing to said substantially copper-free reducing agent is at least about 20 Angstroms.

10. A method of making an integrated circuit comprising: forming active devices over or in a semiconductor substrate; forming interconnect metal lines on a dielectric layer located over said active devices including: exposing a copper oxide on a copper seed layer located over said semiconductor substrate to a substantially copper-free reducing agent solution, such that said copper oxide is substantially converted to elemental copper, wherein said substantially copper-free reducing agent solution comprises a reducing agent selected from the group consisting of formate, formaldehyde, dimethylamine borane, ammonium hypophosphite, and hydrazine sulfate; depositing, using an electrochemical deposition tool, a second copper layer over said seed layer; and connecting said interconnects with said active devices to form an operative integrated circuit.

11. The method as recited in claim 10 wherein said exposing is conducted in said electrochemical deposition tool.

12. The method as recited in claim 10, further includes drying said copper seed layer after said exposing said copper seed layer to a substantially copper-free reducing agent solution, wherein said drying and said exposing are performed in the same tool.

13. The method as recited in claim 10, further includes a time window between said exposing and said electrochemically depositing of less than about 4 hours.

14. The method as recited in claim 10, wherein said reducing agent has a reduction potential that is less than a reduction potential of a copper-containing solution having essentially the same composition as the copper-free reducing agent solution.

15. The method as recited in claim 10, wherein said substantially copper-free reducing agent is dissolved in an aqueous solution.

16.
The method as recited in claim 10, wherein said substantially copper-free reducing agent has a concentration of less than about 1 molar.17. The method as recited in claim 10, wherein a thickness of said copper seed layer after said exposing is substantially the same as compared to a deposited copper seed layer.18. The method as recited in claim 10, wherein said thickness of said copper seed layer after said exposing to said substantially copper-free reducing agent is at least about 20 Angstroms.
TECHNICAL FIELD OF THE INVENTION

The present invention is directed, in general, to the manufacture of integrated circuits and, more specifically, to a method for forming an improved conductive interconnect structure.

BACKGROUND OF THE INVENTION

The push to sub-0.18 micron multilevel metallized interconnections, such as lines, vias, and trenches, and the desire to produce faster semiconductor devices, has resulted in a shift toward the use of copper for making electrical interconnections in ultra-large scale integration circuits. The deposition of copper interconnects is not without difficulties, however. For example, when copper is etched, it tends to be redeposited elsewhere on the semiconductor device, or in the processing chamber. Copper atoms also readily diffuse into silicon-containing dielectric layers. Contamination by copper in unwanted locations can degrade or destroy the performance of active devices in integrated circuits.

One approach to reducing the problems with copper etching and diffusion is the deposition of an underlying barrier layer to block the migration of copper atoms into other components of the semiconductor. To facilitate the adhesion of copper to the diffusion barrier, a seed layer of copper is deposited over the diffusion barrier, followed by the deposition of a second, thicker copper conducting layer over the copper seed layer.

Typically, the copper seed layer is deposited on a semiconductor wafer by a vacuum process, such as physical vapor deposition (PVD) or chemical vapor deposition (CVD). The thick copper conducting layer is deposited by a wet process, such as electrochemical deposition (ECD) or electroless chemical deposition. Because the deposition of the seed layer and the thick conducting layer involves two distinct processes and tools, the wafer has to be removed from the seed layer deposition tool, exposed to the atmosphere for a period, and then placed in the tool for depositing the thick layer.
Backlogs and mismatches in the machine times for seed layer and thick layer deposition can extend the time window where the wafer is exposed to the atmosphere to several hours. During this time window, the surface of the seed layer oxidizes. In addition, organic contaminants may form on the seed layer. The presence of an oxide layer on the copper seed layer can result in thinning or dissolution of the copper seed layer when placed in the acidic electroplating solutions used for ECD. The resulting discontinuities in the seed layer exacerbate the formation of voids in the thick conducting layer during electroplating, thereby negatively impacting device performance and reliability. In addition, the oxide layer may not be fully removed during ECD. The continued presence of an oxide layer between the seed layer and the thick conducting layer weakens adhesion between these layers, making the interconnection more prone to mechanical failure. The current practice is therefore to minimize copper oxidation and organic compound contamination by restricting the period between depositing the seed layer and depositing the thick conducting layer by ECD. This approach, however, may still result in unacceptably high oxidation, and it increases cycle times and therefore costs.

Previous approaches to mitigating copper oxidation and organic compound contamination are flawed, leading to degraded device performance. One approach, for example, is to produce thicker seed layers so that during electroplating, dissolution is not complete and at least a portion of the copper seed layer remains. The problem with this approach is that for small openings, the thick seed layer can pinch off the trench opening, resulting in center voids in the trench feature during the subsequent deposition of the thick conducting layer.

Another approach has been to deposit two seed layers in order to produce a thicker layer with better step coverage inside the trench or via feature.
Typically, the second seed layer covers the first seed layer and a native oxide layer that forms on the first seed layer. This particular approach, however, has the same problems as described above.

A third approach has been to chemically reduce the copper oxide layer back to elemental copper in a hydrogen gas plasma environment. But because the reduction is performed in a separate tool, oxide formation on the seed layer surface can reoccur during the period when the wafer, after being taken out of the reduction tool, is waiting for thick copper layer deposition by ECD. Moreover, this reduction step adds cost and time.

A fourth approach has been to electrochemically reduce the copper oxide layer back to elemental copper in the presence of an electrical current. Although this can be combined with the ECD process, it still requires an additional electrodeposition chamber for performing the electrochemical reduction, thereby resulting in additional processing steps and costs.

Accordingly, what is needed in the art is a method of making copper interconnections that does not exhibit the limitations of the prior art.

SUMMARY OF THE INVENTION

To address the above-discussed deficiencies of the prior art, the present invention provides a method of forming a metal layer over a semiconductor wafer. The method includes exposing a copper oxide on a copper seed layer located over a semiconductor substrate to a substantially copper-free reducing agent solution. The exposure is such that the copper oxide is substantially converted to elemental copper. The method further includes electrochemically depositing a second copper layer over the copper seed layer.

In another embodiment, the present invention provides a method of making an integrated circuit. The method includes forming active devices on a semiconductor substrate and forming interconnect metal lines on a dielectric layer located over the active devices. Forming interconnects on the interconnect metal lines includes exposing a copper oxide on a copper seed layer located over the semiconductor substrate to a substantially copper-free reducing agent solution, as discussed above. Forming interconnects on the interconnect metal lines further includes depositing, by using an electrochemical deposition tool, a second copper layer over the seed layer.

The foregoing has outlined preferred and alternative features of the present invention so that those of ordinary skill in the art may better understand the detailed description of the invention that follows. Additional features of the invention will be described hereinafter that form the subject of the claims of the invention. Those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiment as a basis for designing or modifying other structures for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the scope of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is best understood from the following detailed description when read with the accompanying FIGUREs. It is emphasized that, in accordance with standard practice in the semiconductor industry, various features may not be drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion. Reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

FIGS. 1A to 1C illustrate sectional views of selected steps in a method of forming an exemplary metal layer over a semiconductor wafer according to the principles of the present invention;

FIGS. 2A to 2B illustrate an exemplary configuration of tools used to carry out method steps according to the principles of the present invention; and

FIGS.
3A to 3C illustrate sectional views of selected steps in a method of making an exemplary integrated circuit according to the principles of the present invention.

DETAILED DESCRIPTION

It has been determined that prior art methods for eliminating or reducing void formation in copper interconnects are problematic because they remove portions of the desired copper seed layer along with the undesired copper oxide layer, or they simply cover up the copper oxide layer, or, by virtue of requiring a different tool, they present a time window during which additional oxide formation may occur. The present invention recognizes, for the first time, the advantages of converting the copper oxide layer to elemental copper by exposing the copper seed layer to a substantially copper-free reducing agent solution. Although discussed in the context of forming copper interconnections in, for example, vias, trenches or lines, the present invention could be equally applied to any interconnection where it is desirable to remove an oxide layer from a seed layer prior to depositing a conductive layer over the seed layer.

FIGS. 1A to 1C illustrate sectional views of selected steps in a method 100 of forming an exemplary metal layer over a semiconductor substrate according to the principles of the present invention. Turning first to FIG. 1A, illustrated is a step of exposing an oxide 105, such as a copper oxide layer, on a copper seed layer 110 located over a semiconductor substrate 115, to a reducing agent solution 120 substantially free of copper. A substantially copper-free reducing agent solution is one where the copper concentration is no greater than 2 gm per liter within the solution; in a preferred embodiment, the concentration of copper is less than about 0.5 gm per liter, and more preferably less than about 0.2 gm per liter. The exposure is such that the copper oxide is substantially converted to elemental copper.
The so-converted elemental copper is thus integrated into the copper seed layer 110 (FIG. 1B). The method further includes ECD of a second copper layer 125 over the copper seed layer 110, as illustrated in FIG. 1C.

For the purposes of the present invention, the term substantially converted to elemental copper means that at least about 90 percent, more preferably 99 percent, and even more preferably 99.9 percent, of the copper oxide 105 on the copper seed layer 110 is converted to elemental copper. The term semiconductor substrate 115, as used herein, refers to any substrate located on or over a semiconductor wafer, including the semiconductor wafer itself. For example, the semiconductor substrate 115 may include a conductive layer 130 and a dielectric layer 135 formed over the semiconductor substrate 115. The dielectric layer 135 may comprise silicon dioxide and, more desirably, a silicon oxide-based low-k dielectric material, such as one doped with fluorine or carbon. These and other structures discussed herein may be formed using conventional deposition and photolithographic techniques well known to those skilled in the art.

The exposure of the copper seed layer 110 to the substantially copper-free reducing agent solution 120 need only be for a period sufficient to substantially convert the copper oxide 105 to elemental copper. In certain desirable embodiments, for example, the period of exposure is at least about 1 minute. In certain embodiments, the semiconductor substrate 115 is washed with de-ionized water before being relocated to the next ECD chamber.
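The numeric thresholds above can be summarized in a small sketch. This is purely illustrative and not part of the patent; the function names and tier labels are hypothetical.

```python
# Illustrative check of the thresholds defined in the text: a bath is
# "substantially copper-free" at no more than 2 gm/L copper (preferably
# < 0.5 gm/L, more preferably < 0.2 gm/L), and the oxide is "substantially
# converted" when at least about 90 percent becomes elemental copper.

def copper_free_tier(cu_g_per_liter):
    """Classify a reducing-agent bath by its copper content, per the text."""
    if cu_g_per_liter < 0.2:
        return "more preferred"
    if cu_g_per_liter < 0.5:
        return "preferred"
    if cu_g_per_liter <= 2.0:
        return "substantially copper-free"
    return "not copper-free"

def substantially_converted(fraction_reduced):
    """True if at least 90% of the copper oxide has been reduced."""
    return fraction_reduced >= 0.90
```

For example, a bath at 0.1 gm/L copper falls in the most preferred tier, while one at 3 gm/L would not qualify as substantially copper-free.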
Preferably, a time window between exposure to the reducing agent solution 120 and electrochemical deposition is less than about 4 hours, more preferably less than about 2 hours, even more preferably less than about 5 minutes, and still more preferably less than about 1 minute.

The copper-free reducing agent solution 120 may comprise any compound or group of compounds capable of converting the copper oxide 105 to elemental copper within the desired time window and other processing constraints. Preferably, the solution 120 includes a reducing agent having a reduction potential that is less than, and more preferably at least about 0.1 Volts lower than, the reduction potential of a copper-containing solution having substantially the same composition and temperature as the solution 120 containing the reducing agent. For example, the reduction potential of a standard 1.0 Molar copper ion solution at room temperature is about -0.23 Volts. Modification of the temperature, pH, ionic strength and concentration of compounds in the solution 120, or in a comparable copper-containing solution, will alter the reduction potential in a predictable manner well known to those skilled in the art. Certain preferred embodiments of the reducing agent include compounds selected from the group consisting of formate, formaldehyde, dimethylamine borane, ammonium hypophosphite, and hydrazine sulfate.

To adjust its concentration, the reducing agent may be dissolved in any liquid compatible with the semiconductor substrate 115 and its overlying structures. More preferably, the reducing agent is dissolved in an aqueous solution. The concentration may be adjusted so as to increase or decrease the rate or extent of conversion of the copper oxide to elemental copper.
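One "predictable manner" in which concentration and temperature shift a reduction potential is the Nernst equation. The sketch below is an illustration only (it is not a procedure from the patent); it takes the -0.23 V figure quoted above as the 1.0 Molar reference point and shows that dilution lowers the potential.

```python
import math

# Nernst-equation sketch: E = E0 - (RT/nF) * ln(1/a) for a reduction
# reaction Ox + n e- -> Red, where a is the ion activity. The -0.23 V
# reference value is the figure quoted in the text for a standard 1.0 M
# copper ion solution at room temperature.

R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant

def nernst_potential(e0_volts, n_electrons, activity, temp_kelvin=298.15):
    """Shift a standard reduction potential with ion activity and temperature."""
    return e0_volts - (R * temp_kelvin / (n_electrons * F)) * math.log(1.0 / activity)

# Diluting the copper ion tenfold lowers the reduction potential by ~30 mV:
e_1M = nernst_potential(-0.23, 2, 1.0)    # 1.0 M: the quoted reference value
e_01M = nernst_potential(-0.23, 2, 0.1)   # 0.1 M: slightly more negative
```

This illustrates why a reducing agent whose potential sits at least ~0.1 V below that of the comparable copper-containing solution remains effective across modest concentration changes.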
In certain preferred embodiments, the substantially copper-free reducing agent has a concentration of less than 1 Molar, and preferably less than about 0.1 Molar.

An important benefit of the present invention is that a thickness 140 of the copper seed layer after exposure to the reducing agent solution 120 is substantially the same as that of the as-deposited copper seed layer, measured, for example, within 5 minutes of the substrate 115's removal from a copper seed layer deposition tool. Maintaining the copper seed layer at a constant thickness 140 can facilitate the deposition of a second, thicker copper layer 125. This, in turn, improves the structural integrity of the metal interconnection, as well as the speed and reliability of transmitting electrical signals through the interconnection.

As well understood by those skilled in the art, the deposition of the copper seed layer 110 results in step coverage, where the thickness on the side walls 142 of an opening 145, such as a via or trench, is substantially less, for example about 5 to about 20 percent, compared to the thickness 147 at the top of the substrate 115. In certain preferred embodiments, the thickness 140 of the copper seed layer 110 on the side walls of an opening 145, such as a via or trench opening, is at least about 20 Angstroms. The preferred side wall thickness 140 of the copper seed layer 110 will depend upon the dimensions of the opening 145. For example, for a via opening length 148 of about 0.18 microns, the side wall thickness 140 is preferably between about 50 and about 100 Angstroms.

Another important benefit of the present invention is that the reduction of the copper oxide to elemental copper can be achieved without carrying the semiconductor substrate to a different processing tool.
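The step-coverage numbers above can be sketched as a small calculator. The helper names and the treatment of the 0.18 micron via as a special case are assumptions for illustration; only the percentages and Angstrom windows come from the text.

```python
# Step-coverage sketch: sidewall seed thickness is roughly 5-20% of the
# field (top) thickness, with a general floor of ~20 Angstroms; for a
# ~0.18 micron via the preferred sidewall window is 50-100 Angstroms.

def sidewall_thickness_range(top_thickness_angstroms):
    """(min, max) expected sidewall thickness from the ~5-20% step coverage."""
    return (0.05 * top_thickness_angstroms, 0.20 * top_thickness_angstroms)

def sidewall_ok(sidewall_angstroms, via_length_microns):
    """Check the sidewall thickness against the windows given in the text."""
    if abs(via_length_microns - 0.18) < 1e-9:
        # The only specific window the text provides: 50-100 A for 0.18 um.
        return 50.0 <= sidewall_angstroms <= 100.0
    return sidewall_angstroms >= 20.0  # general minimum from the text
```

For a 500 Angstrom field thickness, for instance, the expected sidewall coverage is 25-100 Angstroms; a 75 Angstrom sidewall would satisfy the 0.18 micron via window while 40 Angstroms would not.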
Carrying out these steps in the same tool minimizes the time window in which further oxidation of the copper seed layer can occur, saves clean room floor space, and reduces fabrication cycle times.

An exemplary configuration of such tools is illustrated in FIGS. 2A and 2B, wherein each tool is schematically represented by the box in which each of the sectional views is located. Using analogous numbers to illustrate analogous features discussed in FIG. 1, FIG. 2A illustrates that the copper seed layer 210 is preferably formed by conformally depositing copper within an opening 245, such as a via or trench opening, located in the dielectric layer 235 over the semiconductor substrate 215. As seen from this view, the opening 245 exposes an area of the conductive layer 230 that is located under the dielectric layer 235. This deposition is preferably achieved by using a seed layer deposition tool 250. The tool 250 may include instruments for chemical vapor deposition (CVD) or, more preferably, physical vapor deposition (PVD).

Electrochemical deposition of the second copper layer is performed using a separate multi-chambered conventional ECD tool 255, as illustrated in FIG. 2B, wherein each chamber of the ECD tool 255 is schematically represented by the inset boxes in which the sectional views are located. During transfer of the semiconductor substrate 215 from the deposition tool 250 to the ECD tool 255, the copper oxide 205 forms as a result of the oxidation of the copper seed layer 210. Preferably, the ECD tool 255 includes a drying chamber 260 and an electrochemical depositing or plating chamber 265. It is advantageous to expose the copper seed layer 210 to the substantially copper-free reducing agent solution 220 in the same chamber 260 as is used for drying the semiconductor substrate 215.
For example, the copper-free reducing agent solution 220 may be placed on the seed layer while the semiconductor substrate 215 is in a spin rinse drying (SRD) chamber 260 that is part of the same tool 255 used for electrochemical deposition.

In certain preferred embodiments, the exposure to the copper-free reducing agent solution 220 continues until immediately before drying and then moving the semiconductor substrate 215 into the electrochemical depositing chamber 265. It is preferred that the exposure of the copper oxide 205 to the copper-free reducing agent solution 220 and the electrochemical deposition of the second copper layer 225 be carried out in the same tool, to minimize any further oxidation of the copper seed layer 210. In such embodiments, the copper seed layer 210's exposure to an oxidizing atmosphere is preferably for less than about 1 minute.

FIGS. 3A-3C illustrate another aspect of the present invention, a method 300 of making an integrated circuit, at different stages of fabrication. FIG. 3A illustrates forming active devices 370 on a semiconductor substrate 315. The active device 370 may include conventional MOS integrated circuit components, such as a doped region 375 or source/drain regions found in conventional CMOS devices, located between field oxide structures 374 and below a gate structure 376. Such structures and their method of fabrication are more fully discussed, for example, in U.S. Pat. No. 6,245,672 to Hong et al., which is incorporated by reference herein. FIG. 3A also shows forming interconnect metal lines 380 in or on one or more dielectric layers 382, 384 located over the active devices 370.

Forming the interconnect 380 includes exposing a copper oxide 305 on a copper seed layer 310 located over the semiconductor substrate 315 to a substantially copper-free reducing agent 320, in accordance with the processes discussed above (FIG. 3B).
Forming the interconnect 380 also includes electrochemically depositing a second copper layer 325 using an electrochemical deposition tool and the processes discussed above (FIG. 3C). Any of the above described embodiments, including the copper seed layer and substantially copper-free reducing agent solution, may be applied to the method of making the integrated circuit 300. One of ordinary skill would understand that the method may further be extended to form any number of additional interconnects located over the interconnect metal line 380 and would understand how to connect those interconnects with the active devices to form an operative integrated circuit.Although the present invention has been described in detail, one of ordinary skill in the art should understand that they can make various changes, substitutions and alterations herein without departing from the scope of the invention.
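The time-window and exposure constraints running through the description above can be collected into one small check. The tier labels and function names below are hypothetical; the thresholds (under about 4 hours between reduction and plating, with shorter windows preferred, and under about 1 minute of oxidizing-atmosphere exposure) are taken from the text.

```python
# Illustrative timing check for the reduce-then-plate flow described above.

def window_tier(minutes_between_reduction_and_ecd):
    """Classify the reduction-to-ECD window against the preferences in the text."""
    if minutes_between_reduction_and_ecd < 1:
        return "still more preferred"
    if minutes_between_reduction_and_ecd < 5:
        return "even more preferred"
    if minutes_between_reduction_and_ecd < 120:
        return "more preferred"
    if minutes_between_reduction_and_ecd < 240:
        return "acceptable"
    return "too long"

def flow_ok(window_minutes, air_exposure_seconds):
    """A flow passes if the window is within ~4 hours and post-reduction
    exposure to an oxidizing atmosphere is under ~1 minute."""
    return window_tier(window_minutes) != "too long" and air_exposure_seconds < 60
```

A same-tool flow (window of a few minutes, seconds of air exposure) passes easily; a five-hour queue between the reduction step and plating does not.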
PROBLEM TO BE SOLVED: To provide a photovoltaic device with increased efficiency in conversion from optical energy to electrical energy.SOLUTION: Interferometrically tuned or interferometric photovoltaic devices (iPV) increase the absorption of optical energy in an active region 2101 of an interferometric photovoltaic cell 2100 and thereby increase the efficiency of the devices. In addition, one or more optical resonant cavities 2110 and/or optical resonant layers are included in the photovoltaic device 2100 to increase the electric field concentration and the absorption in the active region 2101.
1. A photovoltaic device comprising: a first conductive layer; a second conductive layer; an active layer configured to generate an electrical signal between the first conductive layer and the second conductive layer as a result of light being absorbed by the active layer; a reflector layer disposed to reflect light transmitted through the active layer; an optical resonant cavity between the active layer and the reflector layer; and at least one via configured to provide an electrical connection between the active layer and the first conductive layer; wherein the optical resonant cavity includes an air gap or a dielectric material, the presence of the optical resonant cavity increases the electric field strength of light in the active layer, the active layer is disposed on a first side of the optical resonant cavity, and the first conductive layer is disposed on a second side of the optical resonant cavity.

2. The photovoltaic device of claim 1, wherein the active layer comprises a semiconductor.

3. The photovoltaic device of claim 2, wherein the active layer comprises a PN junction or a PIN junction.

4. The photovoltaic device of claim 1, wherein the reflector layer has a reflectivity greater than 80%.

5. The photovoltaic device of claim 1, wherein the reflector layer comprises a metal.

6. The photovoltaic device of claim 5, wherein the reflector layer comprises a metal selected from the group consisting of aluminum, molybdenum, silver and gold.

7. The photovoltaic device of claim 1, wherein the reflector layer is partially reflective such that the photovoltaic device is partially light transmissive to several visible wavelengths.

8. The photovoltaic device of claim 1, wherein the reflector layer is partially reflective such that the photovoltaic device is partially light transmissive to some infrared or ultraviolet wavelengths.

9. The photovoltaic device of claim 1, wherein the optical resonant cavity comprises a plurality of layers.

10. The photovoltaic device of claim 1, wherein the optical resonant cavity comprises an air gap.

11. The photovoltaic device of claim 1, wherein the optical resonant cavity comprises a dielectric material.

12. The photovoltaic device of claim 1, wherein the thickness of the optical resonant cavity is optimized to increase light absorption in the active layer.

13. The photovoltaic device of claim 1, further comprising an antireflective layer disposed on the active layer.

14. The photovoltaic device of claim 1, wherein the optical resonant cavity has a thickness of less than about 2000 nm.

15. The photovoltaic device of claim 1, wherein the active layer has an absorption efficiency for wavelengths in the solar spectrum, and the absorption efficiency integrated over the wavelengths in the solar spectrum is increased by at least about 5% in the presence of the optical resonant cavity.

16. The photovoltaic device of claim 1, having an overall conversion efficiency for wavelengths in the solar spectrum, wherein the overall conversion efficiency integrated over the wavelengths in the solar spectrum is increased by at least about 15% in the presence of the optical resonant cavity.

17. The photovoltaic device of claim 16, wherein the overall conversion efficiency integrated over the wavelengths in the solar spectrum is increased by at least about 20% in the presence of the optical resonant cavity.

18. The photovoltaic device of claim 16, wherein the overall conversion efficiency integrated over the wavelengths in the solar spectrum is increased by at least about 25% in the presence of the optical resonant cavity.

19. The photovoltaic device of claim 1, wherein the presence of the optical resonant cavity increases the average field strength in the active layer by at least 20% when the photovoltaic device is exposed to the solar spectrum.

20. The photovoltaic device of claim 1, wherein the presence of the optical resonant cavity increases the average field strength in the active layer by at least 25% when the photovoltaic device is exposed to the solar spectrum.

21. The photovoltaic device of claim 1, wherein the presence of the optical resonant cavity increases the average field strength in the active layer by at least 30% when the photovoltaic device is exposed to the solar spectrum.

22. The photovoltaic device of claim 1, wherein the optical resonant cavity has a thickness such that the photovoltaic device has a conversion efficiency, integrated over the solar spectrum, greater than 0.7.

23. The photovoltaic device of claim 1, wherein the optical resonant cavity has a thickness such that the photovoltaic device has a conversion efficiency, integrated over the solar spectrum, greater than 0.8.

24. The photovoltaic device of claim 1, wherein the optical resonant cavity has a thickness such that the photovoltaic device has a conversion efficiency, integrated over the solar spectrum, greater than 0.9.

25. The photovoltaic device of claim 1, wherein the optical resonant cavity has a thickness such that the photovoltaic device has a conversion efficiency, integrated over the solar spectrum, greater than 0.95.

26. The photovoltaic device of claim 1, further comprising at least one additional active layer and at least one inactive layer separating the active layers from one another.

27. The photovoltaic device of claim 26, wherein the at least one inactive layer comprises at least one optical resonant layer.

28. The photovoltaic device of claim 1, further comprising at least one additional optical resonant layer.

29. The photovoltaic device of claim 28, wherein the at least one additional optical resonant layer comprises at least one optical resonant cavity between the active layer and the reflector layer.

30. The photovoltaic device of claim 1, wherein the presence of the optical resonant cavity reduces the amount of light absorbed in one or more layers surrounding the active layer.

31. The photovoltaic device of claim 1, wherein the presence of the optical resonant cavity causes an electric field resonance in the active layer.

32. The photovoltaic device of claim 1, wherein the increase of the average field strength integrated over the solar spectrum for the active layer, brought about by the presence of the optical resonant cavity, is large compared to the increase of the average field strength integrated over the solar spectrum for the other layers in the photovoltaic device.

33. The photovoltaic device of claim 1, wherein the presence of the optical resonant cavity reduces the amount of light absorbed in one or more layers in front of the reflector layer and surrounding the active layer.
Interference solar cell

The present invention relates generally to the field of optoelectronic converters, such as, for example, solar cells, which convert light energy into electrical energy.

For over a century, fossil fuels such as coal, oil and natural gas have been the major energy sources in the United States. The need for alternative energy sources is ever increasing. Fossil fuels are non-renewable energy sources that are being depleted rapidly. Large-scale industrialization in developing countries such as India and China places a significant burden on available fossil fuels. In addition, geopolitical issues can immediately affect the supply of such fuels. Global warming has also become a major issue in recent years. There are many factors involved in global warming, but the widespread use of fossil fuels is presumed to be the main cause. There is an urgent need to find economically sustainable and environmentally safe renewable energy sources.

Solar energy is an environmentally safe renewable energy source that can be converted to other energy forms such as heat and electricity. Photovoltaic (PV) cells convert light energy into electrical energy and can therefore be used to convert solar energy into electrical power. Photovoltaic cells can be very thin and can be formed as modules. The size of a PV cell can range from a few millimeters to a few tens of centimeters. The electrical power obtained from one PV cell can range from a few milliwatts to a few watts. By electrically connecting and packaging several PV cells, a sufficient amount of electricity can be generated. PV cells can be used in a variety of applications, such as supplying power to satellites and other space vehicles, supplying electricity to residential and commercial facilities, and charging automotive batteries.
However, low efficiency in converting light energy into electricity is a barrier to using solar energy as an economically competitive renewable energy source.

"Light-Trapping in a-Si Solar Cells: A Summary of the Results from PV Optics", B. L. Sopori et al., National Center for Photovoltaics Program Review Meeting, Denver, Colorado, September 8-11, 1988
Miro Zeman, "Thin Film Solar Cells, Fabrication, Characterization & Applications", edited by J. Poortmans & V. Arkhipov, John Wiley and Sons, 2006, p. 205
Krc et al., "Optical and Electrical Modeling of Cu(In,Ga)Se2 Solar Cells", Optical and Quantum Electronics (2006) 38:1115-1123

Therefore, what is needed is a photovoltaic device and method with increased efficiency in converting light energy into electrical energy.

Some embodiments of the present invention comprise interferometrically tuned solar cells in which reflections from the interfaces of the layered PV device are coherently combined to generate a high electric field within the active area of the solar cell, where light energy is converted to electrical energy. Such an interferometrically tuned or interferometric photovoltaic device (iPV) enhances the absorption of light energy in the active area of the interferometric solar cell, thereby improving the efficiency of the device. In various embodiments, one or more optical resonant cavities and/or optical resonant layers are included in the photovoltaic device to enhance electric field concentration and absorption in the active region. The optical resonant cavities and/or layers can include transparent nonconductive materials, transparent conductive materials, air gaps, and combinations thereof. Other embodiments can also be used.

In one embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer.
A reflector layer is disposed to reflect light transmitted through the active layer, and an optical resonant cavity is disposed between the active layer and the reflector layer. The presence of the optical resonant cavity can increase the amount of light absorbed by the active layer. In some embodiments, the optical resonant cavity can include a dielectric. In some implementations, the optical resonant cavity can include an air gap. In some implementations, the optical resonant cavity can include multiple layers.

In another embodiment, the photovoltaic device comprises at least one active layer configured to generate an electrical signal as a result of light absorbed by the active layer. The photovoltaic device also comprises at least one optical resonant layer, wherein the at least one active layer has an absorption efficiency for wavelengths in the solar spectrum, and the absorption efficiency integrated over wavelengths in the solar spectrum increases by at least about 20% in the presence of the at least one optical resonant layer.

In one embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer. The photovoltaic device also comprises at least one optical resonant layer, wherein the photovoltaic device has an overall conversion efficiency for wavelengths in the solar spectrum, and the overall conversion efficiency integrated over the wavelengths in the solar spectrum is increased by at least about 15% by the presence of the at least one optical resonant layer.

In another embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer.
The photovoltaic device further comprises an optical resonant layer having a thickness such that the photovoltaic device has an overall conversion efficiency, integrated over the solar spectrum, greater than 0.7.

In one embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer. The photovoltaic device further comprises at least one optical resonant layer that increases the average electric field strength in the active layer. The active layer has an average electric field strength, for wavelengths in the solar spectrum, within the layer when the photovoltaic device is exposed to sunlight. The increase in the average electric field strength integrated over the solar spectrum for the active layer, brought about by the presence of the at least one optical resonant layer, is large compared to the increase in the average electric field strength integrated over the solar spectrum for the other layers in the photovoltaic device.

In one embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer. The active layer has an average electric field strength and an absorbed light power, for wavelengths in the solar spectrum, within the layer when the photovoltaic device is exposed to sunlight.
The photovoltaic device further comprises at least one optical resonant layer that increases the average electric field strength and the absorbed light power in the active layer. The increase in the absorbed light power integrated over the solar spectrum for the active layer, brought about by the presence of the at least one optical resonant layer, is large compared to the increase in the absorbed light power integrated over the solar spectrum for the other layers in the photovoltaic device.

In one embodiment, a photovoltaic device comprises a substrate, an optical stack disposed on the substrate, and a reflector layer disposed on the optical stack. The optical stack comprises at least one active layer and one or more additional layers, wherein the at least one active layer has an absorption efficiency greater than 0.7 for light of about 400 nm.

In one embodiment, a method of enhancing light absorption in an active layer of a photovoltaic device using interference principles comprises providing at least one active layer that absorbs light and converts the absorbed light into electrical energy, and positioning at least one optical resonant layer with respect to the active layer such that interference of electromagnetic radiation enhances the absorption of solar energy in the at least one active layer by at least 5%, this absorption being integrated over wavelengths in the solar spectrum.

In some embodiments, a photovoltaic device comprises at least one active layer for absorbing electromagnetic radiation and converting the electromagnetic radiation into electrical energy.
The photovoltaic device further comprises at least one optical resonant layer arranged with respect to the active layer such that, as a result of optical interference, the optical resonant layer enhances the absorption of solar energy in the at least one active layer by at least 5%, this absorption being integrated over the solar spectrum.

In one embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer. A reflector layer is arranged to reflect light transmitted through the active layer, the reflector layer being partially light-transmissive so that the photovoltaic device is partially transparent for some wavelengths. The photovoltaic device further comprises at least one optical resonant layer disposed between the active layer and the reflector layer, and the amount of light absorbed by the active layer increases in the presence of the at least one optical resonant layer.

In one embodiment, the photovoltaic device comprises an active layer configured to generate an electrical signal as a result of light absorbed by the active layer.
The photovoltaic device further comprises at least one optical resonant layer. The presence of the at least one optical resonant layer increases the amount of light absorbed by the active layer, and the thickness of the at least one optical resonant layer can be adjusted by applying a control signal that controls the thickness.

In one embodiment, a method of optimizing the absorption efficiency of a solar cell comprises forming a solar cell comprising a stack of layers, at least one of which comprises an active layer. Forming the solar cell includes using interference principles to optimize the absorption efficiency of the at least one active layer in the solar cell at multiple wavelengths.

In one embodiment, a solar cell comprises a transparent substrate, an optical stack disposed on the transparent substrate, and a reflector disposed on the substrate. The optical stack comprises one or more thin film layers and an active layer optimized to absorb light of selected wavelengths based on the thicknesses of the one or more thin film layers, the absorption being optimized using an analysis of the coherent summation of reflections from multiple interfaces.

In one embodiment, the photovoltaic device comprises first and second active layers configured to generate an electrical signal as a result of light being absorbed by the active layers. The photovoltaic device further comprises a first optical resonant layer between the first active layer and the second active layer, and the presence of the optical resonant layer increases the amount of light absorbed by at least one of the first active layer and the second active layer.

In one embodiment, the photovoltaic device comprises means for absorbing light. The light absorbing means is configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means. Means for reflecting light is arranged to reflect light transmitted through the at least one light absorbing means.
Means for generating an optical resonance is disposed between the light absorbing means and the light reflecting means. The optical resonance generating means is configured to increase the amount of light absorbed by the at least one light absorbing means, the optical resonance generating means comprising means for electrically isolating.

In another embodiment, a method of manufacturing a photovoltaic device includes forming an active layer configured to generate an electrical signal as a result of light being absorbed by the active layer. The method further comprises disposing a reflector layer to reflect light transmitted through the active layer, and disposing an optical resonant cavity between the active layer and the reflector layer. In one embodiment, the optical resonant cavity comprises a dielectric. In another embodiment, the optical resonant cavity includes an air gap.

In one embodiment, the photovoltaic device comprises means for absorbing light. The light absorbing means is configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means. The photovoltaic device further comprises means for reflecting light, arranged to reflect light transmitted through the light absorbing means, and means for generating an optical resonance between the light absorbing means and the light reflecting means. The optical resonance generating means is configured to increase the amount of light absorbed by the at least one light absorbing means, the optical resonance generating means comprising a plurality of means for propagating light therethrough.

In another embodiment, a method of manufacturing a photovoltaic device includes forming an active layer configured to generate an electrical signal as a result of light being absorbed by the active layer.
The method further includes disposing a reflector layer to reflect light transmitted through the at least one active layer, and forming an optical resonant cavity between the active layer and the reflector layer, the optical resonant cavity including a plurality of layers.

In an alternative embodiment, means for converting light energy into electrical energy comprises means for absorbing light, the light absorbing means being configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means. The means for converting light energy into electrical energy further comprises means for reflecting light, arranged to reflect light transmitted through the at least one light absorbing means, and means for generating an optical resonance disposed between the light absorbing means and the light reflecting means. The light absorbing means has an absorption efficiency for wavelengths in the solar spectrum, and the absorption efficiency integrated over wavelengths in the solar spectrum is increased by at least about 20% in the presence of the optical resonance generating means.

In one embodiment, a method of manufacturing a photovoltaic device includes forming at least one active layer configured to generate an electrical signal as a result of light being absorbed by the active layer. The method further comprises disposing a reflector layer to reflect light transmitted through the at least one active layer, and disposing at least one optical resonant layer between the active layer and the reflector layer. The at least one active layer has an absorption efficiency for wavelengths in the solar spectrum, and the absorption efficiency integrated over wavelengths in the solar spectrum is increased by at least about 20% in the presence of the at least one optical resonant layer.

In one embodiment, means for converting light energy into electrical energy comprises means for absorbing light, the light absorbing means being configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means. The means for converting light energy into electrical energy further comprises means for reflecting light, arranged to reflect light transmitted through the at least one light absorbing means, and means for generating an optical resonance disposed between the light absorbing means and the light reflecting means. The means for converting light energy into electrical energy has an overall conversion efficiency for wavelengths in the solar spectrum, and the overall conversion efficiency integrated over wavelengths in the solar spectrum is increased by at least about 15% in the presence of the optical resonance generating means.

In one embodiment, a method of manufacturing a photovoltaic device includes forming an active layer configured to generate an electrical signal as a result of light absorbed by the active layer. The method further comprises disposing a reflector layer to reflect light transmitted through the at least one active layer, and disposing at least one optical resonant layer between the at least one active layer and the reflector layer. The photovoltaic device has an overall conversion efficiency for wavelengths in the solar spectrum, and the overall conversion efficiency integrated over wavelengths in the solar spectrum is increased by at least about 15% in the presence of the at least one optical resonant layer.

In one embodiment, means for converting light energy into electrical energy comprises means for absorbing light, the light absorbing means being configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means.
The means for converting light energy into electrical energy further comprises means for generating an optical resonance, the optical resonance generating means increasing the average electric field strength in the light absorbing means. The light absorbing means has an average electric field strength, for wavelengths in the solar spectrum, within the means when the means for converting light energy into electrical energy is exposed to sunlight. The increase in the average electric field strength integrated over the solar spectrum for the light absorbing means, brought about by the presence of the optical resonance generating means, is large compared to the increase in the average electric field strength integrated over the solar spectrum for the other layers in the means for converting light energy into electrical energy.

In one embodiment, a method of manufacturing a photovoltaic device includes forming an active layer configured to generate an electrical signal as a result of light absorbed by the active layer. The method further comprises forming at least one optical resonant layer, the optical resonant layer increasing the average electric field strength in the active layer.
The active layer has an average electric field strength, for wavelengths in the solar spectrum, within the layer when the photovoltaic device is exposed to sunlight, and the increase in the average electric field strength integrated over the solar spectrum for the active layer, brought about by the presence of the at least one optical resonant layer, is large compared to the increase in the average electric field strength integrated over the solar spectrum for the other layers in the photovoltaic device.

In another embodiment, means for converting light energy into electrical energy comprises means for absorbing light configured to generate an electrical signal as a result of light being absorbed by the light absorbing means. The light absorbing means has an average electric field strength and an absorbed light power, for wavelengths in the solar spectrum, within the means when the means for converting light energy into electrical energy is exposed to sunlight. The means for converting light energy into electrical energy further comprises means for generating an optical resonance that increases the average electric field strength and the absorbed light power in the light absorbing means. The increase in the absorbed light power integrated over the solar spectrum for the light absorbing means, brought about by the presence of the optical resonance generating means, is large compared to the increase in the absorbed light power integrated over the solar spectrum for the other layers in the means for converting light energy into electrical energy.

In one embodiment, a method of manufacturing a photovoltaic device comprises forming an active layer configured to generate an electrical signal as a result of light absorbed by the active layer, the active layer having, when the photovoltaic device is exposed to sunlight, an average electric field strength and an absorbed light power for wavelengths in the solar spectrum within that layer.
The method further comprises forming at least one optical resonant layer, the optical resonant layer increasing the average electric field strength and the absorbed light power in the active layer. The increase in the absorbed light power integrated over the solar spectrum for the active layer, brought about by the presence of the at least one optical resonant layer, is large compared to the increase in the absorbed light power integrated over the solar spectrum for the other layers in the photovoltaic device.

In one embodiment, the photovoltaic device comprises means for supporting. The photovoltaic device further comprises means for interacting with light disposed on the supporting means, the light interaction means comprising at least one means for absorbing light and one or more means for propagating light. The photovoltaic device also comprises means for reflecting light disposed on the light interaction means, wherein the at least one light absorbing means has an absorption efficiency greater than 0.7 for light of about 400 nm.

In one embodiment, a method of manufacturing a photovoltaic device comprises forming a substrate. The method also comprises disposing an optical stack comprising at least one active layer and one or more additional layers on the substrate, and disposing a reflector layer on the optical stack, wherein the at least one active layer has an absorption efficiency greater than 0.7 for light of about 400 nm.

In some embodiments, the photovoltaic device comprises means for absorbing light, the light absorbing means being configured to absorb light and convert the absorbed light into electrical energy.
The photovoltaic device further comprises means for generating an optical resonance, such that interference of electromagnetic radiation increases the absorption of solar energy in the light absorbing means by at least 5%, this absorption being integrated over wavelengths in the solar spectrum.

In some embodiments, the photovoltaic device comprises means for absorbing light configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means. The photovoltaic device further comprises means for reflecting light, arranged to reflect light transmitted through the at least one light absorbing means, and means for generating an optical resonance between the light absorbing means and the light reflecting means. The presence of the optical resonance generating means increases the amount of light absorbed by the light absorbing means, and the reflecting means is partially light-transmissive so that the means for converting light energy into electrical energy is partially transparent for some wavelengths.

In one embodiment, a method of manufacturing a photovoltaic device comprises forming an active layer configured to generate an electrical signal as a result of light absorbed by the active layer, forming a reflector layer arranged to reflect light transmitted through the at least one active layer, and forming at least one optical resonant layer between the active layer and the reflector layer. The presence of the at least one optical resonant layer increases the amount of light absorbed by the active layer, and the reflector layer is partially light-transmissive such that the photovoltaic device is partially transmissive for some wavelengths.

In some embodiments, the photovoltaic device comprises means for absorbing light configured to generate an electrical signal as a result of the light being absorbed by the light absorbing means.
The photovoltaic device further comprises means for reflecting light, arranged to reflect light transmitted through the at least one light absorbing means, and means for generating an optical resonance disposed between the light absorbing means and the light reflecting means. The presence of the optical resonance generating means increases the amount of light absorbed by the light absorbing means, and the thickness of the optical resonance generating means is adjustable by applying a control signal for controlling the thickness.

In one embodiment, a method of manufacturing a photovoltaic device includes forming at least one active layer configured to generate an electrical signal as a result of light being absorbed by the active layer. The method further comprises forming a reflector layer arranged to reflect light transmitted through the at least one active layer, and forming at least one optical resonant layer between the at least one active layer and the reflector layer. The presence of the at least one optical resonant layer increases the amount of light absorbed by the active layer, and the thickness of the at least one optical resonant layer is adjustable by applying a control signal for controlling the thickness.

In one embodiment, the photovoltaic device comprises first and second means for absorbing light, configured to generate an electrical signal as a result of the light being absorbed by the first and second light absorbing means. The photovoltaic device further comprises a first means for generating an optical resonance, and the presence of the first optical resonance generating means increases the amount of light absorbed by the first and second light absorbing means.

In one embodiment, a method of manufacturing a photovoltaic device comprises forming first and second active layers configured to generate an electrical signal as a result of light absorbed by the first and second active layers, and forming a first optical resonant layer, where the presence of the first optical resonant layer increases the amount of light absorbed by the first and second active layers.

Exemplary embodiments disclosed herein are shown in the accompanying schematic drawings, which are for illustrative purposes only.

FIG. 1 is a schematic diagram showing an optical interference cavity.
FIG. 7 is a schematic diagram illustrating an optical interference cavity that enhances reflected light.
FIG. 5 is a block diagram of an interferometric modulator ("IMOD") stack comprising multiple layers, including an absorber layer, an optical cavity, and a reflector.
FIG. 4 is a schematic diagram showing a portion of the reflections generated by a light beam incident on the "IMOD" of FIG. Only a portion of the reflections is shown for illustrative purposes. However, for a given layer, the incident rays and the rays reflected from the various interfaces in the IMOD can be coherently summed to determine the electric field strength in that layer.
FIG. 7 shows an IMOD in an "open" state.
FIG. 7 shows the IMOD in a "closed" state.
FIG. 7 illustrates the resulting spectral sensitivity characteristics, e.g., reflection and absorption, of an interferometric light modulator in the "open" state for normally incident and reflected light.
FIG. 7 illustrates the resulting spectral sensitivity characteristics, e.g., reflection and absorption, of an interferometric light modulator in the "open" state for normally incident and reflected light.
FIG. 7 illustrates the resulting spectral sensitivity characteristics, e.g., reflection and absorption, of an interferometric light modulator in the "open" state for normally incident and reflected light.
FIG. 7 illustrates the resulting spectral sensitivity characteristics, e.g., reflection and absorption, of an interferometric light modulator in the "open" state for normally incident and reflected light.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state for normally incident and reflected light.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state for normally incident and reflected light.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state for normally incident and reflected light.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state for normally incident and reflected light.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "open" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "open" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "open" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 6 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "open" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 7 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 7 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 7 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state when the angle of incidence or viewing angle is approximately 30 degrees.
FIG. 7 illustrates the spectral sensitivity characteristics of an interferometric light modulator in the "closed" state when the angle of incidence or viewing angle is approximately 30 degrees.
A schematic diagram showing a solar cell having a pn junction.
FIG. 1 is a block diagram schematically illustrating a photocell having a pin junction that includes amorphous silicon.
FIG. 5 is a schematic diagram showing another conventional PV cell.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 6 is a schematic diagram showing an embodiment comprising a PV cell that uses the principle of interferometric modulation to increase absorption in the active region of the PV cell, thereby increasing efficiency.
FIG. 5 is a schematic diagram showing an embodiment comprising a PV cell having an optical cavity with an electrostatically variable thickness.
FIG. 5 is a schematic diagram showing an embodiment comprising a PV cell having an optical cavity with an electrostatically variable thickness.
FIG. 7 is a schematic diagram illustrating the terms used in calculating the field strength in the various layers of a PV cell.
FIG. 5 is a flow chart showing a method of fabricating a PV cell that increases absorption in the active area of the PV cell using the principles of the IMOD.
FIG. 6 is a graph showing modeled absorption in a Cu(In,Ga)Se2 (CIGS) active layer for various PV cell designs.
FIG. 15A shows an example of a conventional PV cell comprising a pin junction of α-Si:H surrounded by first and second indium tin oxide (ITO) layers and an aluminum (Al) reflector. The absorption and reflection spectra for the PV cell as shown in FIG. 15A, having a first ITO layer 900 nm thick, an α-Si active layer 330 nm thick, and a second ITO layer 80 nm thick, are shown in the following figures.
FIG. 15B is a graph showing total absorption versus wavelength for the PV cell of FIG. 15A.
FIG. 15B is a graph showing total reflection versus wavelength for the PV cell of FIG. 15A.
FIG. 15B is a graph showing absorption versus wavelength in the active layer for the PV cell of FIG. 15A.
FIG. 15B is a graph showing absorption versus wavelength in the first ITO layer for the PV cell of FIG. 15A.
FIG. 15B is a graph showing absorption versus wavelength in the ITO and reflector layers for the PV cell of FIG. 15A.
FIG. 15B is a graph showing absorption versus wavelength in the ITO and reflector layers for the PV cell of FIG. 15A.
FIG. 16 is a contour plot showing integrated absorption in the active layer of the photovoltaic device of FIG. 15A as a function of the first and second electrode thicknesses.
Integrated absorption includes absorption integrated over the solar spectrum.Absorb the active layer of the optimized version of the PV cell of FIG. 15A with a first ITO layer (54 nm thick), an α-Si active layer (330 nm thick), and a second ITO layer (91 nm thick) FIG.Graph showing the total absorption of the optimized version of the PV cell of FIG. 15A with a first ITO layer (54 nm thick), an α-Si active layer (330 nm thick), and a second ITO layer (91 nm thick) It is.Fig. 3 is a schematic diagram showing a photovoltaic device disclosed by Krc et al. Comprising an active region comprising Cu (In, Ga) Se2 ("CIGS"), p-type and CdS, n-type layers, but In, Ga) Se2 ("CIGS"), p-type layers and CdS, n-type layers are not optimized for maximum absorption efficiency.FIG. 18 is a graph of modeled absorbance versus wavelength for the photovoltaic device of FIG. 17 including CIGS, p-type layers and CdS, n-type layers.FIG. 18 is a graph of modeled absorbance versus wavelength for the photovoltaic device of FIG. 17 including CIGS, p-type layers and CdS, n-type layers.FIG. 18 is a graph of modeled absorbance versus wavelength for the photovoltaic device of FIG. 17 including CIGS, p-type layers and CdS, n-type layers.FIG. 18 is a view of the photovoltaic device as shown in FIG. 17 after adding an optical cavity between the active region and the reflector layer.FIG. 18 is a view of the photovoltaic device as shown in FIG. 17 after adding an optical cavity between the active region and the reflector layer.Device shown in FIG. 19A comprising an active region including an CIGS, p-type layer and CdS, n-type layer, and an optical cavity resonator showing increased absorption in the active region as compared to the device of FIG. FIG. 16 shows a graph of modeled absorbance versus wavelength for.Device shown in FIG. 
A graph of modeled absorbance versus wavelength for the device shown in FIG. 19A, which comprises an active region including a CIGS p-type layer and a CdS n-type layer and an optical cavity resonator, showing increased absorption in the active region as compared to the device of FIG. 16. A schematic diagram showing a photovoltaic device having an active region surrounded on top and bottom by conductive layers (ITO and metal layers) and having vias for electrical connection with the regions, the device further comprising an optical cavity designed to increase absorption in the active region with interference. A schematic diagram showing a photovoltaic device having an active region surrounded on top and bottom by an optical resonant layer and a metal layer and having vias for electrical connection, the device further comprising an optical cavity designed to increase absorption in the active region with interference. A schematic diagram showing another photovoltaic device having an optical cavity disposed between the active region and the metal layer and having vias for electrical connection, the photovoltaic device being designed to increase absorption in the active region with interference. A graph of the modeled absorption of the CIGS p-type layer of the photovoltaic device of FIG. 23 over a wavelength band of about 400 nm to about 1100 nm, showing an average absorption of about 90% within the active region over 500 nm to 750 nm. FIG. 25A is a schematic illustrating one embodiment of a photocell wherein the active layer of the photocell is disposed between the optical cavity and the optical resonant layer. FIG. 25B is a schematic showing another embodiment similar to the photocell shown in FIG. 25A, except that the resonant layer overlying the active layer comprises a dielectric and the cavity resonator underlying the active layer comprises an air gap or a dielectric, with vias providing electrical conduction through the air gap or dielectric. A schematic view showing another embodiment in which an ITO layer is disposed between the active layer and the cavity resonator. A schematic diagram showing another embodiment of a simplified photocell having an optical cavity between the active layer of the photocell and the reflector, but without a layer shown on the active layer. FIG. 27 is a schematic diagram illustrating a conventional multijunction photovoltaic device. FIG. 28A is a schematic showing one embodiment of the multi-junction photovoltaic device shown in FIG. 27 further comprising an optical resonant layer and an optical resonant cavity designed to increase absorption in the active region with interference. FIG. 28B is a schematic showing another embodiment similar to the multijunction photocell shown in FIG. 28A, except that the cavity includes an air gap or dielectric and vias provide electrical conduction through the air gap or dielectric. FIG. 29A is a schematic diagram showing the multi-junction photovoltaic device shown in FIG. 27 further comprising a plurality of optical resonant layers and an optical resonant cavity designed to increase absorption in the active region with interference. FIG. 29B is a schematic showing another embodiment similar to the multijunction photocell shown in FIG. 29A, except that the cavity includes an air gap or dielectric and vias provide electrical conduction through the air gap or dielectric. A schematic diagram showing a conventional translucent PV cell. A schematic showing a PV cell with a reduced-thickness reflector for increased transparency. FIG. 32A is a schematic diagram showing a semi-transparent multi-junction PV cell with an optical resonant layer but without an optical cavity. FIG. 32B is a schematic view showing a translucent multi-junction PV cell similar to that shown in FIG. 32A, with vias providing electrical connections.

The following detailed description is directed to certain specific embodiments of the present invention. However, the invention can be embodied in many different ways. In this description, reference is made to the drawings, in which like parts are designated by like numerals throughout. As is apparent from the following description, the embodiments can be implemented in devices that include photovoltaic materials. A MEMS device can be coupled to a photovoltaic device, as described herein below.

A light-transmitting dielectric thin film or layer, as shown in FIG. 1, is an example of an optical cavity resonator. The dielectric film or layer may comprise a dielectric material such as glass, plastic, or other transparent material. One example of such an optical cavity is a soap film, which can form a bubble and produce a spectrum of reflected colors. The optical cavity shown in FIG. 1 has two surfaces 101 and 102. The two surfaces 101 and 102 may be opposing surfaces of the same layer. For example, these two surfaces 101 and 102 may comprise the surfaces of a glass or plastic plate, sheet, or thin film. Air or other media may surround the sheet or film. The light beam 103 incident on the surface 101 of the optical cavity is partially reflected (e.g., by Fresnel reflection) as shown by optical path 104 and partially transmitted along optical path 105. The transmitted light may be partially reflected along optical path 107 (e.g., again by Fresnel reflection) and partially transmitted out of the cavity along optical path 106.
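The split between the reflected path 104 and the transmitted path 105 at each surface can be estimated with the Fresnel equations. The following is a minimal sketch at normal incidence, using assumed example indices (air n = 1.0, glass n = 1.5), not values from the disclosure:

```python
def fresnel_reflectance(n1, n2):
    """Fraction of intensity reflected at a single interface at normal
    incidence: R = ((n1 - n2) / (n1 + n2))**2."""
    r = (n1 - n2) / (n1 + n2)  # amplitude reflection coefficient
    return r ** 2

# Air-to-glass interface (assumed indices): about 4% of the intensity is
# reflected along path 104; the remainder continues along path 105.
R = fresnel_reflectance(1.0, 1.5)
```

The same function applies at the second surface 102, where the light exits into the surrounding medium, producing the second reflected ray 107.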
The amount of light transmitted and reflected may depend on the refractive indices of the material comprising the optical cavity and of the surrounding medium. For the purposes described herein, the total intensity of the light reflected from the optical cavity is a coherent superposition of the two reflected rays 104 and 107. In such a coherent superposition, both the amplitudes and the phases of the two reflected rays contribute to the total intensity. This coherent superposition is called interference. In general, the two reflected rays 104 and 107 can have a phase difference with respect to each other. In some embodiments, the phase difference between the two waves is 180 degrees, and the waves may cancel one another. When the phases and amplitudes of the two rays 104 and 107 are configured so as to reduce the intensity, the two rays are said to interfere destructively. On the other hand, if the phases and amplitudes of the two rays 104 and 107 are configured so as to increase the intensity, the two rays are said to interfere constructively. The phase difference depends on the optical path difference of the two paths, which depends on both the thickness and the refractive index of the optical cavity, and thus on the material between the two surfaces 101 and 102. If the wavelength of the incident ray 103 is different, the phase difference is also different. Thus, in some embodiments, the optical cavity can reflect a particular set of wavelengths of the incident light 103 and transmit other wavelengths of the incident light 103. There are therefore wavelengths that interfere constructively and other wavelengths that interfere destructively. In general, then, the color and total intensity reflected and transmitted by the optical cavity depend on the thickness and the material comprising the optical cavity. The reflected and transmitted wavelengths also depend on the angle, with different wavelengths being reflected and transmitted at different angles. In FIG.
2, the top reflector layer 201 is deposited on the top surface 101 of the optical cavity and the bottom reflector layer 202 is deposited on the bottom surface 102 of the optical cavity. The thicknesses of the top and bottom reflector layers 201, 202 may be substantially different from one another. For example, in some embodiments, the top reflector layer 201 may be thinner than the bottom reflector layer 202. The reflector layers 201, 202 can include metal. As shown in FIG. 2, a ray 203 incident on the top reflector layer 201 of the optical cavity interferometer is partially reflected from the optical cavity interferometer along each of the paths 204 and 207. The illumination field seen by an observer comprises a superposition of the two reflected rays 204 and 207. The amount of light absorbed by the device, or transmitted out of the device through the bottom reflector 202, can be significantly increased or decreased by changing the thickness and/or composition of the reflector layers 201, 202. In the illustrated embodiment, as the thickness of the bottom reflector 202 is increased, the reflection from the optical cavity is increased.

In some embodiments, the dielectric (e.g., glass, plastic, etc.) between the top reflector layer 201 and the bottom reflector layer 202 can be replaced by an air gap. The optical cavity interferometer may reflect one or more specific colors of incident light. The one or more colors reflected by the optical cavity interferometer may depend on the thickness of the air gap, and can be changed by changing that thickness.

In some embodiments, the gap between the top reflector 201 and the bottom reflector 202 can be changed, for example, by a micro-electro-mechanical system (MEMS). MEMS devices comprise micromechanical elements, actuators, and electronics.
The micromechanical elements can be fabricated using deposition, etching, and/or other micromachining processes that etch away portions of substrates and/or deposited material layers, or that add layers, to form electrical and electromechanical devices. Such MEMS devices include interferometric modulators ("IMODs") having electrically tunable optical cavity resonators. As used herein, the term interferometric modulator or interferometric light modulator refers to a device that selectively absorbs and/or reflects light using the principles of optical interference, regardless of whether the device is tunable or has movable parts (e.g., a static IMOD). In some embodiments, the interferometric modulator can comprise a pair of conductive plates, one of the pair being partially reflective and partially transmissive, and the other being partially or totally reflective. The conductive plates are capable of relative motion upon application of an appropriate electrical signal. In one particular embodiment, one plate may comprise a stationary layer deposited on a substrate and the other plate may comprise a metal membrane separated from the stationary layer by an air gap. As described in more detail herein, the position of one plate relative to the other may change the optical interference of light incident on the interferometric modulator. In this way, the color of the light output by the interferometric modulator can be changed.

Such an optical cavity interferometer makes it possible to produce at least two states. For example, in one embodiment, in a first state an optical cavity interferometer of a particular size is provided, whereby light of a selected color (based on the size of the cavity) constructively interferes and is reflected out of the cavity.
The second state comprises a visibly dark state generated by constructive and/or destructive interference of light, whereby the visible wavelengths are substantially absorbed.

FIG. 3 is a diagram of an interferometric modulator stack 300. As illustrated, the IMOD stack 300 comprises a glass substrate 301, an electrode layer 302, and an absorber layer 303 thereon. The IMOD stack 300 also comprises an Al reflector 305, such that an optical cavity 304 is formed between the absorber layer 303 and the Al reflector 305. For example, the Al reflector 305 can have a thickness of about 300 nm in some embodiments, and the optical cavity 304 can include an air gap. In some embodiments, the optical cavity can comprise one or more partially transparent conductors or partially transparent nonconductors. For example, in some embodiments, the optical cavity interferometer can include a transparent conductive layer such as an ITO layer, a nonconductive material such as a SiO2 layer, or both. In various embodiments, the optical cavity may include an air gap, a transparent conductive material such as a transparent conductive oxide, a transparent nonconductive material such as a transparent nonconductive oxide, or a combination thereof, or it can have a composite structure comprising multiple layers.

In the embodiment shown in FIG. 3, light enters the IMOD stack 300 by first passing through the glass substrate 301 and the electrode layer 302 and into the absorber layer 303. Light not absorbed in the absorber layer 303 passes into the optical cavity 304, where it is reflected by the Al reflector 305 back through the optical cavity 304 and into the absorber layer 303. Within the IMOD, the thickness of the air gap can be selected to produce a "bright" state for a given wavelength or wavelength band, or a "dark" state for a given wavelength or wavelength band.
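The selection of gap thickness can be illustrated with the simple round-trip condition 2·n·d = m·λ for constructive interference. This is a hedged sketch only: it ignores the phase shifts introduced at the absorber and the metal reflector, which a full model must include, and the gap value in the example is an assumed one:

```python
def bright_wavelengths(gap_nm, n=1.0, lam_min=400.0, lam_max=800.0):
    """Visible wavelengths (nm) satisfying 2*n*d = m*lambda for integer m,
    i.e. candidate 'bright'-state wavelengths of an air-gap cavity of
    thickness gap_nm. Simplified: interface phase shifts are ignored."""
    waves = []
    m = 1
    while 2.0 * n * gap_nm / m >= lam_min:
        lam = 2.0 * n * gap_nm / m
        if lam <= lam_max:
            waves.append(lam)
        m += 1
    return waves

# For an assumed 550 nm air gap, only the m = 2 order (550 nm) falls in
# the 400-800 nm band, so the cavity would favor that wavelength.
visible = bright_wavelengths(550.0)
```

Changing `gap_nm` shifts which orders land in the visible band, which is the mechanism by which the gap thickness selects the bright or dark state for a given wavelength.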
In some embodiments, in the "bright" state, the thickness of the optical cavity 304 is such that light exhibits a first interference at the absorber layer 303. In the "dark" state, the thickness of the optical cavity 304 is such that light exhibits a second interference at the absorber layer 303. In some embodiments, the second interference is more constructive than the first interference (e.g., for visible wavelengths). The more constructive the interference in the absorber layer, the stronger the electric field and the greater the absorption in the absorber layer 303.

To illustrate how the IMOD can produce a dark output, FIG. 4A shows a ray incident on the IMOD shown in FIG. 3 and the various reflections of that incident ray from different interfaces within the IMOD. These reflections comprise only a portion of the reflections resulting from such an incident ray. For example, rays reflected from the various interfaces may again be reflected from other interfaces, resulting in multiple back-and-forth reflections. For the sake of simplicity, however, only a portion of the reflected and transmitted rays is shown.

For example, in FIG. 4A, ray 401 is the ray incident on the IMOD structure. The incident ray 401 may have an intensity E1 and a phase Φ1. After striking layer 301 of the IMOD, the incident ray 401 may be partially reflected as shown by ray 402 and partially transmitted as shown by ray 403. Reflected ray 402 may have an intensity E1ar and a phase Φ1ar. The transmitted ray 403 may have an intensity E2 and a phase Φ2. Transmitted ray 403 may be further partially reflected as indicated by ray 403a and partially transmitted as indicated by ray 404 at the surface of layer 302. The reflected ray 403a may have an intensity E2ar and a phase Φ2ar. The transmitted ray 404 may have an intensity E3 and a phase Φ3.
Similarly, transmitted ray 404 may be further partially reflected as indicated by ray 404a and partially transmitted as indicated by ray 405 after striking the top surface of layer 303. Reflected ray 404a may have an intensity E3ar and a phase Φ3ar. The transmitted ray 405 may have an intensity E4 and a phase Φ4. Transmitted ray 405 may again be partially reflected as shown by ray 405a and partially transmitted as shown by ray 406 at the surface of layer 304. The reflected ray 405a may have an intensity E4ar and a phase Φ4ar. The transmitted ray 406 may have an intensity E5 and a phase Φ5. Transmitted ray 406 may be further partially reflected as indicated by ray 406a and partially transmitted as indicated by ray 407 at the surface of layer 305. The reflected ray 406a may have an intensity E5ar and a phase Φ5ar. The transmitted ray 407 may have an intensity E6 and a phase Φ6. At the bottom of the reflector 305, the transmitted light indicated by ray 407 is almost completely reflected, as indicated by ray 407a. The intensity of ray 407a may be E6ar and its phase may be Φ6ar.

Reflected rays 403a, 404a, 405a, 406a, and 407a are transmitted out through the layers of the IMOD and finally out of the device, as shown in FIG. 4A. These rays are transmitted through additional interfaces and thus undergo additional Fresnel reflections. For example, the reflected ray 403a passes through the substrate 301 and emerges as ray 403b. Reflected ray 404a is transmitted through the electrode 302 and the substrate 301 (as shown by ray 404b) and emerges as ray 404c. Similarly, the reflected ray 405a passes through the absorber 303, the electrode 302, and the substrate 301 (as shown by rays 405b, 405c) and emerges as ray 405d.
The reflected ray 406a passes through the optical cavity 304, the absorber 303, the electrode 302, and the substrate 301 (as shown by rays 406b, 406c, 406d) and emerges as ray 406e. Reflected ray 407a passes through the reflector 305, the optical cavity 304, the absorber 303, the electrode 302, and the substrate 301 and emerges as ray 407f.

As described with reference to FIG. 1, the intensity and wavelength of light reflected from the IMOD structure, measured at the top surface of layer 301, involve the coherent superposition of all the reflected rays 402, 403b, 404c, 405d, 406e, and 407f, in which the respective amplitudes and phases of the reflected rays are both taken into account. Other reflected rays not shown in FIG. 4A may also be included in the coherent superposition. Similarly, the total intensity of light in any region within the IMOD structure, e.g., within the absorber 303, can be calculated based on the strengths of the electric fields of the reflected and transmitted waves. Thus, it is possible to design an IMOD by changing the thickness and material of each layer such that the amount of light, or the strength of the electric field, in a given layer is increased or decreased using the interference principle. By controlling the electric field strength in the different layers through the choice of layer thicknesses and materials, the amount of light in the absorber, and thus the amount of light absorbed by the absorber, can be increased or optimized.

The above description is an approximation of the optical process. More detail can be included in a higher-order analysis. For example, only one pass and the resulting reflections have been described above. Of course, light reflected from any of the layers can again be reflected back toward the other interfaces. Thus, light propagates multiple times in any of the layers, including the optical cavity 304.
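The coherent superposition over all of these multiple passes is exactly what a transfer-matrix calculation performs automatically. The following is a minimal normal-incidence sketch of that method, not the modeling software referred to in the text; the indices and thicknesses passed to it are illustrative assumptions, and a real model would use measured material data:

```python
import cmath, math

def tmm_reflectance(n_list, d_list, wavelength_nm):
    """Normal-incidence reflectance of a layer stack via the transfer-matrix
    method. n_list: (possibly complex) refractive indices from the incident
    medium to the exit medium; d_list: thicknesses (nm) of the interior
    layers only; wavelength in nm."""
    M = [[1.0, 0.0], [0.0, 1.0]]  # running product of interface/propagation matrices
    for i in range(len(n_list) - 1):
        n1, n2 = n_list[i], n_list[i + 1]
        r = (n1 - n2) / (n1 + n2)        # Fresnel amplitude coefficients
        t = 2.0 * n1 / (n1 + n2)
        I = [[1.0 / t, r / t], [r / t, 1.0 / t]]   # interface matrix
        M = [[M[0][0] * I[0][0] + M[0][1] * I[1][0],
              M[0][0] * I[0][1] + M[0][1] * I[1][1]],
             [M[1][0] * I[0][0] + M[1][1] * I[1][0],
              M[1][0] * I[0][1] + M[1][1] * I[1][1]]]
        if i < len(d_list):              # phase accumulated crossing layer i+1
            delta = 2.0 * math.pi * n2 * d_list[i] / wavelength_nm
            p, q = cmath.exp(-1j * delta), cmath.exp(1j * delta)
            M = [[M[0][0] * p, M[0][1] * q], [M[1][0] * p, M[1][1] * q]]
    r_total = M[1][0] / M[0][0]          # coherent sum of all multiple reflections
    return abs(r_total) ** 2

# A bare air/glass interface (assumed n = 1.5) reflects about 4%; inserting
# a 100 nm film of the same index between two air half-spaces yields the
# classic quarter-wave interference value at 600 nm.
R_interface = tmm_reflectance([1.0, 1.5], [], 550.0)
R_film = tmm_reflectance([1.0, 1.5, 1.0], [100.0], 600.0)
```

Because absorbing layers enter simply as complex indices, the same machinery can be extended to compute the field strength, and hence the absorption, within a chosen layer, which is the quantity the text describes optimizing.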
These additional reflection effects are not shown in FIG. 4A, but they are taken into account in the coherent superposition. Thus, a more detailed analysis of the optical process can be performed. A mathematical approach can also be used. For example, software can be used to model the system. Some embodiments of such software can calculate reflection and absorption and perform multivariate conditional optimization.

The IMOD stack 300 may be static. In a static IMOD stack, the various layer thicknesses and materials are fixed by the manufacturing process. Some embodiments of the static IMOD stack include an air gap. In other embodiments, for example, the optical cavity can include a dielectric or ITO instead of an air gap. The light output by the static IMOD stack 300 depends on the viewing angle, the wavelength of the light incident thereon, and the state of interference in the viewing plane of the IMOD stack for that particular wavelength. In contrast, in a dynamic IMOD stack, the thickness of the optical cavity 304 can be changed in real time, for example using a MEMS engine, thereby changing the interference state in the viewing plane of the IMOD stack. As with a static IMOD stack, the light output by the dynamic IMOD stack also depends on the viewing angle, the wavelength of the light, and the interference conditions in the viewing plane of the IMOD stack.

FIGS. 4B and 4C show a dynamic IMOD. FIG. 4B shows the IMOD configured in the "open" state, and FIG. 4C shows the IMOD configured in the "closed" or "collapsed" state. The IMOD shown in FIGS. 4B and 4C comprises a substrate 301, a thin film layer 303, and a reflective film 305. The reflective film 305 can include a metal. The thin film layer 303 may include an absorber. Thin film layer 303 can also include additional electrode layers and/or dielectric layers, and thus thin film layer 303 can be described as a multilayer in some embodiments.
In some embodiments, the thin film layer 303 can be attached to the substrate 301. In the "open" state, the thin film layer 303 is separated from the reflective film 305 by a gap 304. In some embodiments, for example as shown in FIG. 4B, the gap 304 may be an air gap. In the "open" state, the thickness of the gap 304 can vary, for example, between 120 nm and 400 nm in some embodiments (e.g., about 260 nm). In some embodiments, the IMOD can be switched from the "open" state to the "closed" state by applying a potential difference between the thin film layer 303 and the reflective film 305. In the "closed" state, the gap between the thin film layer 303 and the reflective film 305 is smaller than the "open"-state gap. For example, the gap in the "closed" state may vary between 30 nm and 90 nm in some embodiments (e.g., about 90 nm). In general, the thickness of the air gap may vary between about 0 nm and about 2000 nm between the "open" and "closed" states in some embodiments. Other thicknesses can also be used in other embodiments.

In the "open" state, one or more frequencies of incident light constructively interfere at the surface of the substrate 301, as described with reference to FIG. 4A. Thus, some frequencies of incident light are not substantially absorbed in the IMOD but are instead reflected from it. The frequencies reflected from the IMOD interfere constructively outside the IMOD. The displayed color observed by a viewer looking at the surface of the substrate 301 corresponds to the frequencies that are substantially reflected from the IMOD and not substantially absorbed by the various layers of the IMOD. The frequencies that constructively interfere and are not substantially absorbed can be changed by changing the thickness of the gap. The reflection and absorption spectra of the IMOD, as well as the absorption spectra of some of its layers, are shown in FIGS. 5A-5D for light normally incident on the IMOD in the "open" state. FIG.
5A shows a graph of the total reflection of the "open"-state IMOD (e.g., IMOD 300 of FIG. 3) as a function of wavelength, seen at normal incidence when light is directed onto the IMOD at normal incidence. The graph of total reflection shows a reflection peak at about 550 nm (e.g., yellow). To an observer watching the IMOD, the IMOD appears yellow. As mentioned above, the position of the peak of the total reflection curve can be shifted by changing the thickness of the air gap or by changing the material and/or thickness of one or more other layers in the stack. For example, the total reflection curve can be shifted by changing the thickness of the air gap. FIG. 5B shows a graph of the total absorption of the IMOD over a wavelength range of about 400 nm to 800 nm. The total absorption curve shows a valley at about 550 nm, corresponding to the reflection peak. FIG. 5C shows a graph of the absorption of the absorber layer of the IMOD (e.g., layer 303 of FIG. 3) over a wavelength range of about 400 nm to 800 nm. FIG. 5D shows the absorption of the IMOD reflector layer (e.g., layer 305 of FIG. 3) over a wavelength range of about 400 nm to 800 nm. The energy absorbed by the reflector is low. The total absorption curve is obtained by adding the absorption curve of the absorber portion of the IMOD and the absorption curve of the reflector portion of the IMOD when the absorption in the other layers is negligible. It should be noted that the transmission through the IMOD stack is substantially negligible, as the lower reflector (e.g., 305 of FIG. 3) is substantially thick.

Referring to FIG. 4C, in the "closed" state, the IMOD absorbs almost all frequencies of incident visible light in the thin film stack 303. Only a small amount of incident light is reflected. The display color seen by a viewer looking at the surface of the substrate 301 may generally be black, reddish black, or purple in some embodiments.
The frequencies absorbed in the thin film stack 303 can be changed or "tuned" by changing the thickness of the gap.

The spectral responses of the various layers of the IMOD in the "closed" state to normally incident light, as viewed normal to the IMOD, are shown in FIGS. 6A-6D. FIG. 6A shows a graph of the total reflection of the IMOD versus wavelength over a wavelength range of about 400 nm to 800 nm. The total reflection is observed to be uniformly low throughout the wavelength range; thus, very little light is reflected out of the interferometric modulator. FIG. 6B shows a graph of the total absorption of the IMOD over the wavelength range of about 400 nm to 800 nm. The total absorption curve shows approximately uniform absorption over the entire wavelength range, corresponding to the graph of total reflection. FIG. 6C shows a graph of the absorption of the absorber layer over a wavelength range of about 400 nm to 800 nm. FIG. 6D shows the absorption of the IMOD reflector layer over the wavelength range of about 400 nm to 800 nm. Note from FIG. 6A that in the "closed" state the IMOD exhibits relatively low total reflection as compared to the total reflection in FIG. 5A. In addition, the IMOD exhibits relatively high total absorption and absorber-layer absorption in the "closed" state (FIGS. 6B and 6C, respectively) as compared to the "open" state (FIGS. 5B and 5C). Reflector absorption is relatively low within the IMOD whether the IMOD is in the "open" state (FIG. 5D) or the "closed" state (FIG. 6D). Thus, most of the electric field strength occurs in the absorber layer, where light is absorbed.

In general, IMOD stacks have viewing angle dependencies that can be considered at the design stage. More generally, the spectral sensitivity characteristics of the IMOD may depend on the incident angle and the viewing angle.
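The direction of the angular shift can be anticipated from the interference condition: at oblique propagation the effective round-trip path in the gap shortens by a factor of cos θ, moving constructive interference to shorter wavelengths. This is a simplified illustrative sketch only (mirror phase shifts are ignored, and the gap and order below are assumed values, not taken from the figures):

```python
import math

def constructive_wavelength(gap_nm, order, theta_deg, n=1.0):
    """Wavelength (nm) of the m-th order constructive-interference peak of
    a cavity of thickness gap_nm at internal propagation angle theta:
    lambda = 2*n*d*cos(theta)/m. Mirror phase shifts are neglected."""
    theta = math.radians(theta_deg)
    return 2.0 * n * gap_nm * math.cos(theta) / order

# Assumed 550 nm air gap, m = 2: the peak sits at 550 nm at normal
# incidence and moves to a shorter wavelength at a 30-degree angle,
# qualitatively matching the shift seen between FIGS. 5A and 7A.
peak_normal = constructive_wavelength(550.0, 2, 0.0)
peak_30deg = constructive_wavelength(550.0, 2, 30.0)
```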
FIGS. 7A-7D are a series of graphs showing modeled absorption and reflection versus wavelength for an IMOD in the "open" state when the incident or viewing angle is 30 degrees with respect to the normal direction of the stack. FIG. 7A shows a graph of the total reflection of the IMOD versus wavelength over a wavelength range of about 400 nm to 800 nm. The graph of total reflection shows a reflection peak at about 400 nm. A comparison of FIGS. 7A and 5A shows that the graph of total reflection versus wavelength shifts along the wavelength axis as the incident or viewing angle changes from normal incidence to 30 degrees. FIG. 7B shows a graph of the total absorption of the IMOD over the wavelength range of about 400 nm to 800 nm. The total absorption curve shows a valley at about 400 nm, corresponding to the reflection peak. Comparing FIGS. 7B and 5B, it can be seen that the valley of the absorption curve also shifts along the wavelength axis when the incident or viewing angle changes from normal incidence to 30 degrees. FIG. 7C shows a graph of the absorption of the absorber of the IMOD (e.g., 303 of FIG. 3) over a wavelength range of about 400 nm to 800 nm. FIG. 7D shows the absorption of the reflector of the IMOD (e.g., 305 of FIG. 3) over a wavelength range of about 400 nm to 800 nm.

FIGS. 8A-8D are a series of graphs showing modeled absorption and reflection versus wavelength for the IMOD of FIG. 4A in the "closed" state when the incident or viewing angle is 30 degrees. FIG. 8A shows a graph of the total reflection of the IMOD versus wavelength over a wavelength range of about 400 nm to 800 nm. The total reflection is observed to be uniformly low throughout the wavelength range; thus, very little light is reflected out of the interferometric modulator. FIG. 8B shows a graph of the total absorption over a wavelength range of about 400 nm to 800 nm.
The total absorption curve shows approximately uniform absorption over the entire wavelength range, corresponding to the graph of total reflection. FIG. 8C shows a graph of the absorption of the absorber layer over a wavelength range of about 400 nm to 800 nm. FIG. 8D shows the absorption of the reflector layer of the IMOD over the wavelength range of about 400 nm to 800 nm. Comparing FIGS. 6A-6D with FIGS. 8A-8D, it can be seen that the spectral sensitivity characteristics of the IMOD in the "closed" state are approximately the same for normal incidence and for an incident or viewing angle of 30 degrees. Thus, it can be inferred that the spectral sensitivity characteristics of the "closed"-state IMOD do not show a strong dependence on the incident angle or the viewing angle.

FIG. 9 shows a typical solar cell 900. A typical solar cell can convert light energy into electrical energy. PV cells are an example of renewable energy sources that have a small carbon footprint and low environmental impact. The use of PV cells can reduce energy generation costs and provide possible cost benefits.

PV cells can have various sizes and shapes, for example, from smaller than a postage stamp to several inches across. Several PV cells can be connected to form PV cell modules several feet long and several feet wide. These modules can include electrical connections, mounting hardware, power conditioners, and batteries that store solar energy for use when the sun is not shining. The modules can in turn be combined and connected to form PV arrays of different sizes and outputs. The size of an array depends on several factors, such as the amount of sunlight available at a particular location and the needs of the consumer.

A photocell has an overall energy conversion efficiency (η, "eta") that can be determined by measuring the power output from the photocell and the light power incident on the solar cell and calculating the ratio.
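That ratio can be written directly. The numbers below are assumed examples, not measurements from the disclosure, using the 1000 W/m² standard-illumination convention discussed in the text:

```python
def overall_efficiency(p_out_watts, irradiance_w_per_m2, area_m2):
    """Overall conversion efficiency eta = electrical power out / optical
    power in, where optical power in = irradiance * cell area."""
    return p_out_watts / (irradiance_w_per_m2 * area_m2)

# Assumed example: 150 W peak from a 1 m^2 cell under the standard
# "air mass 1.5" illumination of 1000 W/m^2 gives eta = 0.15 (15%).
eta = overall_efficiency(150.0, 1000.0, 1.0)
```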
According to one convention, the efficiency of a solar cell can be given by the peak power (watts) produced by a photocell having a surface area of 1 m² exposed to standard solar radiation (referred to as "air mass 1.5"). Standard solar radiation is the amount of solar radiation at the equator at noon on a sunny vernal or autumnal equinox day, and has a power density of 1000 watts/m².

A typical PV cell has an active region disposed between two electrodes and can include a reflector. The reflector can have a reflectivity greater than 50%, 60%, 70%, 80%, or 90% in some embodiments. The reflector may have a lower reflectivity in other embodiments; for example, the reflectivity may be 10%, 20%, 30%, or 40%. In some embodiments, the PV cell additionally comprises a substrate. A substrate can be used to support the active layer and the electrodes. For example, the active layer and the electrodes can be deposited on a substrate during and/or after fabrication of the photovoltaic device and can comprise thin films supported by the substrate. The active layer of the PV cell can comprise a semiconductor material such as silicon. In some embodiments, the active region can include a pn junction formed by contacting an n-type semiconductor material 903 and a p-type semiconductor material 904, as shown in FIG. 9. Such a pn junction has diode-like properties and may therefore also be referred to as a photodiode structure.

Layers 903 and 904 are sandwiched between two electrodes that form a current path. The back electrode 905 can be formed of aluminum or molybdenum or some other conductive material. The back electrode may be rough and unpolished. The front electrode 901 is designed to cover most of the front of the pn junction so as to lower the contact resistance and increase the collection efficiency.
In embodiments where the front electrode is formed of an opaque material, the front electrode may be configured to have holes or gaps so that the illuminating light impinges on the surface of the pn junction. In such embodiments, the front electrode may be configured as a grid, or in the shape of forks or fingers. In some other embodiments, the electrode may be formed of a transparent conductor, for example a transparent conductive oxide (TCO) such as tin oxide (SnO2) or indium tin oxide (ITO). A TCO can provide good electrical contact and conductivity and at the same time be optically transparent to the incoming light. In some embodiments, the PV cell can comprise a layer of antireflective (AR) coating 902 disposed on the front electrode 901. The layer of AR coating 902 can reduce the amount of light reflected from the surface of the n-type layer 903 shown in FIG. 9.

When light strikes the surface of the pn junction, photons transfer energy to electrons in the active region. If the energy transferred by a photon is greater than the band gap of the semiconductor material, an electron may have sufficient energy to enter the conduction band. An internal electric field is created with the formation of the pn junction. The internal electric field acts on the excited electrons and moves them, thereby generating a current in an external circuit 907. The resulting current can be used to power various electrical devices, such as the light bulb 906 shown in FIG. 9.

The efficiency with which optical power is converted to electrical power corresponds to the overall efficiency described above. The overall efficiency depends at least in part on the efficiency with which light is absorbed by the active layer.
This efficiency, referred to herein as the absorption efficiency ηabs, is proportional to the refractive index n of the active layer, the extinction coefficient k, and the square of the electric field amplitude |E(x)|: ηabs ∝ n × k × |E(x)|². The value n is the real part of the complex index of refraction, and the absorption or extinction coefficient k is generally the imaginary part of the complex index of refraction. Thus, the absorption efficiency ηabs can be calculated based on the material properties of a layer and the electric field strength in the layer (e.g., the active layer). The field strength for a particular layer may be referred to herein as the average field strength, i.e., the average of the field strength over the thickness of that layer.

As mentioned above, light absorbed in the active layer generates free carriers, e.g., electron-hole pairs, which can be used to provide power. The overall efficiency, or overall conversion efficiency, depends at least in part on the efficiency with which the electrons and holes generated in the active material are collected by the electrodes. This efficiency is referred to herein as the collection efficiency ηcollection. Thus, the overall conversion efficiency depends on both the absorption efficiency ηabs and the collection efficiency ηcollection.

The absorption efficiency ηabs and collection efficiency ηcollection of the PV cell depend on various factors. For example, the thickness and material used for the electrode layers 901 and 905 can simultaneously affect both the absorption efficiency ηabs and the collection efficiency ηcollection. In addition, the thickness and material used for the PV materials 903 and 904 can affect the absorption efficiency and the collection efficiency.

The overall efficiency can be measured by attaching probes or conductive leads to the electrode layers 901 and 905.
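The proportionality ηabs ∝ n × k × |E(x)|² can be turned into a numerical sketch by integrating over the layer thickness with the trapezoidal rule. This is an illustrative fragment only; in practice the field profile E(x) would come from an optical model of the full stack, and the uniform profile and material values used here are assumptions, not measured data:

```python
def absorption_integral(n, k, e_field, xs):
    """Trapezoidal integral of n * k * |E(x)|**2 over positions xs (nm),
    proportional to the absorption efficiency eta_abs of the layer."""
    total = 0.0
    for i in range(len(xs) - 1):
        f0 = n * k * abs(e_field[i]) ** 2
        f1 = n * k * abs(e_field[i + 1]) ** 2
        total += 0.5 * (f0 + f1) * (xs[i + 1] - xs[i])
    return total

# Assumed values: a uniform unit field across a 330 nm active layer with
# n = 3.5 and k = 0.1 (illustrative numbers only).
value = absorption_integral(3.5, 0.1, [1.0, 1.0, 1.0], [0.0, 165.0, 330.0])
```

Dividing such an integral by the layer thickness gives the average of n·k·|E(x)|², consistent with the average field strength defined in the text.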
The overall efficiency can also be calculated using a model of the photovoltaic device.

As used herein, these efficiencies are relative to standard solar radiation (air mass 1.5). Also, the electric field, absorption efficiency, etc. can be integrated over wavelengths across the solar spectrum. The solar spectrum is well known and includes the wavelengths of light emitted by the sun. These wavelengths include visible, ultraviolet, and infrared wavelengths. In some embodiments, the electric field, absorption efficiency, overall efficiency, etc. are integrated over a portion of the solar spectrum, eg, wavelengths in the visible range, wavelengths in the infrared range, or wavelengths in the ultraviolet range. In some embodiments, the electric field, absorption efficiency, overall efficiency, etc. are calculated over smaller wavelength bands, eg, bandwidths of 10 nm, 100 nm, 200 nm, 300 nm, 400 nm, 500 nm, or 600 nm.

In some embodiments, the pn junction shown in FIG. 9 can be replaced by a pin junction in which an intrinsic or non-doped semiconductor layer is sandwiched between a p-type semiconductor and an n-type semiconductor. The pin junction can have high efficiency compared to the pn junction. In some other embodiments, a PV cell can comprise multiple junctions.

The active region may be formed from various light absorbing materials such as crystalline silicon (c-silicon), amorphous silicon (α-silicon), cadmium telluride (CdTe), copper indium diselenide (CIS), copper indium gallium diselenide (CIGS), light absorbing dyes and polymers, polymers in which light absorbing nanoparticles are disposed, and III-V semiconductors such as GaAs. Other materials can also be used. A light absorbing material in which photons are absorbed and transfer energy, for example to electrons, is referred to herein as the active layer of a PV cell.
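Where the text above describes integrating the absorption over a wavelength band, a minimal trapezoidal-rule sketch can be written as follows. The absorption samples here are hypothetical; a full calculation would use modeled or measured spectra weighted by the AM1.5 solar irradiance:

```python
# Sketch (illustrative only) of integrating a spectral absorption curve
# over a wavelength band, as described above.

def band_integral(wavelengths_nm, values):
    """Trapezoidal integral of `values` over `wavelengths_nm`."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        total += 0.5 * (values[i] + values[i + 1]) * dw
    return total

# Hypothetical absorption sampled every 100 nm across 400-700 nm:
wl = [400.0, 500.0, 600.0, 700.0]
absorption = [0.6, 0.9, 0.9, 0.5]

print(band_integral(wl, absorption))  # band-integrated absorption (nm units)
```

The same routine applies to any of the band choices mentioned above (visible, infrared, ultraviolet, or narrower bands) by changing the wavelength grid.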
The material for the active layer can be selected according to the desired performance and application of the PV cell.

In some embodiments, PV cells can be formed using thin film technology. For example, in one embodiment, a PV cell can be formed by depositing a first layer of TCO on a substrate. A layer of active material (or light absorbing material) is deposited on the first TCO layer. A second TCO layer can be deposited on the layer of active material. In some embodiments, a layer of AR coating can be deposited on top of the second TCO layer. These layers can be deposited using growth methods such as physical vapor deposition, chemical vapor deposition, and electrochemical vapor deposition. The thin film PV cell can include polycrystalline material such as thin film polycrystalline silicon, CIS, CdTe, or CIGS. Some of the advantages of thin film PV cells include, among other things, small device footprint and good scalability of the manufacturing process.

FIG. 10 is a schematic block diagram of a typical thin film PV cell 1000. A typical PV cell 1000 includes a glass substrate 1001 through which light can pass. Provided on the glass substrate 1001 are a first transparent electrode layer 1002, a layer 1003 of PV material containing amorphous silicon, a second transparent electrode layer 1005, and a reflector 1006 that includes aluminum or some other metal such as Mo, Ag, or Au. The second transparent electrode layer 1005 may contain ITO. A portion of the active material can be doped to form an n-type region and a p-type region, and a portion of the active material can be left undoped to form a pin structure. In one design, the thickness of the first transparent electrode layer can be about 900 nm and the thickness of the PV material can be about 330 nm. In one design, the second transparent electrode layer 1005 has a thickness of about 80 nm, and the reflector 1006 has a thickness of about 300 nm.
As illustrated, the first transparent electrode layer 1002 and the second transparent electrode layer 1005 sandwich the amorphous silicon layer 1003 therebetween. The reflector layer 1006 is disposed on the second transparent electrode layer 1005. In a PV cell, photons are absorbed into the active or absorber layer, and some of the absorbed photons can generate electron-hole pairs.

A comparison of FIG. 10 with FIG. 3 shows that the IMOD structure and the typical PV device are similar. For example, the IMOD shown in FIG. 3 and the PV cell shown in FIG. 10 each comprise a stacked structure comprising multiple layers. Both IMOD and PV devices also comprise light absorbing layers (e.g., 303 in FIG. 3 and 1003 in FIG. 10) disposed on a substrate (e.g., 301 in FIG. 3 and 1001 in FIG. 10). The light absorbing layer can be chosen to have similar properties for both IMOD and PV cells. Both the IMOD of FIG. 3 and the PV cell of FIG. 10 comprise reflectors (e.g., 305 in FIG. 3 and 1006 in FIG. 10). Thus, it is believed that the tunability of the IMOD, which allows a desired distribution of the electric field to be formed in the various layers, can be provided to the PV device with a resulting increase in output. For example, an optical cavity may be placed under the active layer (eg, light absorbing layer 1003 of FIG. 10) to tune the PV device so that absorption in all layers except the active or absorbing layer 1003 is reduced while absorption in the active or absorbing layer 1003 is enhanced; in this sense the IMOD may be said to be incorporated into the PV cell, or vice versa.

In the conventional PV cell as shown in FIG. 10, the absorption in the PV material layer 1003 was conventionally thought to be enhanced by the introduction of layer 1005. Thus, the second transparent electrode 1005 was referred to as a reflection enhancing layer.
Also, it was conventionally thought that the absorption in the active layer increases in proportion to the thickness of the second transparent electrode 1005 (see, for example, Non-Patent Document 1). In general, however, the inclusion of layer 1005 does not increase the reflection of reflector layer 1006. Furthermore, the absorption in the active layer does not necessarily increase in proportion to the thickness of the second transparent electrode layer 1005 as conventionally thought. In general, the thicknesses of the first electrode layer 1002 and the second electrode layer 1005 can have an optimum point at which absorption is maximized, as will be shown below.

In addition, in some conventional designs, the thicknesses of the electrode layer 1005 and the reflector layer 1006 are varied to minimize the total amount of light reflected from the PV cell. It is assumed that when light is not reflected from the PV cell, this light is absorbed and the overall efficiency of the photovoltaic device is increased. To this end, the surface of the reflector 1006 can be roughened to increase diffusion and reduce specular reflection from the reflector. Using these methods, it is possible to produce PV cells that appear black. However, the above-described methods directed to reducing reflections from PV devices and producing PV cells that appear black may not be sufficient to increase absorption in the active layer 1003, and thus may not be sufficient to increase the efficiency of the photovoltaic device.

The success of such conventional approaches to increasing the efficiency of PV cells has been limited. However, as mentioned above, the interference principle can be used to "tune" one or more layers in the PV device and optimize the PV cell so that more light is absorbed by the absorbing layer 1003. For example, the principles of interference used in IMOD design can be applied to PV cell processing.
An optical cavity that generates electric field resonances in the active layer can be included in the PV cell, which can increase the electric field strength and absorption in the active layer. For example, the second transparent electrode layer 1005 may be replaced by an optical cavity resonator that includes an air gap or a transparent non-conductive dielectric such as SiO2 to increase absorption in the active layer (or light absorbing layer 1003). Replacing the transparent electrode layer 1005 with an optical cavity does not necessarily enhance the reflection of the reflector, but the optical cavity provides a low absorption layer that can increase the absorption in the active layer by interference.

To illustrate how solar cell efficiency can be increased, consider the conventional solar cell shown in FIG. 11A. FIG. 11A shows a PV cell including a Cu(In,Ga)Se2/CdS ("CIGS/CdS") PV stack. The PV cell comprises an ITO or ZnO conducting electrode layer 1101, a layer 1102 of n-type material comprising CdS, a layer 1103 of p-type material comprising CIGS, a reflector layer 1104 comprising Mo, and a glass substrate 1105. As explained above, the efficiency of the PV cell shown in FIG. 11A can be increased by incorporating the IMOD structure and the principle of interference utilized in IMOD into the PV cell. This can be achieved by introducing a static or dynamic optical resonant layer as shown in FIGS. 11B-11H. In various embodiments, the optical resonant layer causes an electric field resonance in the active layer, which enhances the average electric field therein. In the following description, the following naming convention is adopted for clarity: an optical resonant layer sandwiched between an absorbing layer and a reflector layer is referred to as an "optical cavity resonator," while an optical resonant layer disposed elsewhere in the stack is referred to as an "optical resonant layer."
The terms "resonant" and "resonance" describing a cavity or layer may be used interchangeably.

In FIG. 11B, the optical cavity 1106 containing ITO is sandwiched between the active or absorbing material (layers 1102 and 1103) and the reflector layer 1104. In the embodiment shown in FIG. 11C, the optical cavity 1106 comprises a hollow region. In some embodiments shown in FIG. 11C, the hollow region comprises air or other gas. Replacing the ITO layer with an air gap can reduce the absorption in all layers (including, for example, the optical cavity) except the active layer. Thus, for some embodiments, the choice of the material of the optical cavity can be an important consideration. For example, in embodiments where the optical cavity comprises air or SiO2 as shown in FIG. 11D, the absorption in the active layer can be enhanced relative to an optical cavity comprising ITO as shown in FIG. 11B. The embodiments shown in FIGS. 11B-11D comprise optical cavity resonators comprising a single material or medium through which light propagates. In various embodiments as shown in FIGS. 11E-11H, an interferometrically tunable photovoltaic (iPV) cell can comprise a composite optical cavity resonator that includes two or more layers. For example, in the embodiment shown in FIG. 11E, the optical cavity comprises an ITO layer 1106a and an air layer 1106b. The embodiment shown in FIG. 11F comprises a composite optical cavity including ITO layer 1106a and SiO2 layer 1106b. The embodiment shown in FIG. 11G comprises a composite optical cavity including an SiO2 layer 1106a and an air gap 1106b. The embodiment shown in FIG. 11H can include an ITO layer 1106a, a SiO2 layer 1106b, and an air gap 1106c. Thus, in various embodiments, the optical cavity and other optical resonant layers may include one or more transparent conductive or nonconductive materials, such as conductive or nonconductive oxide or nitride layers. Other materials can also be used.
These layers may be partially transparent.

The optical cavity (or layer) can be dynamic in some embodiments. As shown in FIG. 11I, for example, the reflector layer 1104 can be separated from the active layer by the posts 1107. The reflector layer 1104 may be movable; in particular, it may move in a direction towards or away from the active layer, thereby changing the thickness of the optical cavity resonator. The movement of the reflector layer 1104 can be triggered by applying a voltage between the reflector layer 1104 and the ITO layer 1101 to generate an electrostatic force. The optical cavity can thus be dynamically tuned, which can change, for example, the absorption characteristics of the active layer in response to changes in environmental conditions. FIG. 11J shows an alternative embodiment in which the optical cavity is a composite cavity including a layer 1106a of SiO2 and an air gap 1106b. The dielectric layer 1106a containing SiO2 can be used to electrically insulate the electrodes 1101 and 1104 in the closed state. The process of increasing the absorption efficiency of the iPV cell is described below.

In general, the optical stack can comprise multiple layers, with each interface between layers reflecting part of the incident light. In general, at these interfaces, part of the incident light can also be transmitted (except in the case of the last layer). FIG. 12 shows incident light reflected from various layers of a generalized iPV device having an unspecified number of layers. The incoming wave, characterized by the electric field Ei incident on layer 1201 of the iPV device, is partially reflected and partially transmitted as described above with reference to FIG. 4A. The transmitted wave is characterized by the electric field E1,r propagating towards the right of the drawing. A portion of this wave, characterized by the electric field E'j-1,r, is incident on the interface of the layers 1202 and 1203.
The portion characterized by Ej,r penetrates into the absorber layer 1203. A part of the transmitted electromagnetic wave is absorbed in the absorber 1203. The unabsorbed portion of the wave, characterized by the electric field E'j,r, is incident on the boundary of the layers 1203 and 1204. The portion of the incident electric field E'j,r characterized by Ej+1,r is transmitted into the optical cavity 1204. A small portion of the incoming wave Ei, characterized by the electric field Et, is transmitted out of the iPV device when the metal conductor/reflector 1205 is partially transparent.

At the interfaces of the various layers, part of the incident radiation is likewise reflected. For example, the electric field Ej+1,l represents the portion of the electric field Ej+1,r that is reflected from the boundary of the layers 1204 and 1205 and thus propagates towards the left of the drawing. Similarly, the electric fields Ej,l, E'j,l, E'j-1,l, and E1,l represent waves propagating in the iPV device towards the layer 1201. The reflected wave Er is given by the superposition of the waves reflected from the various layers of the iPV device. The electric fields into and out of a given interface can be calculated using the matrix method, using values of the reflection and transmission coefficients for the various interfaces as well as the phase shifts accumulated across the layers. Once the electric field in a given layer, eg, the active layer, is known, the absorption therein can be determined. For example, the time-averaged magnitude of the Poynting vector, or the time-averaged energy flux (time-averaged power per unit normal area), entering the absorber layer 1203 and exiting from the absorber layer can be calculated. Thus, the total power absorbed by the absorber layer 1203 can be calculated by subtracting the amount of power exiting the absorber layer 1203 from the total power entering the absorber layer 1203.
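The matrix method described above can be sketched for normal incidence using the standard thin-film characteristic-matrix formulation. The complex indices, thicknesses, and layer choices below are rough, assumed values chosen only to illustrate the computation, not the patent's modeled data:

```python
# Characteristic-matrix (transfer-matrix) sketch for a thin-film stack
# at normal incidence. Convention: complex index N = n - i*k.
import cmath

def layer_matrix(N, d_nm, wavelength_nm):
    """Characteristic matrix of one thin film (normal incidence)."""
    delta = 2 * cmath.pi * N * d_nm / wavelength_nm  # phase thickness
    cos_d, sin_d = cmath.cos(delta), cmath.sin(delta)
    return [[cos_d, 1j * sin_d / N], [1j * N * sin_d, cos_d]]

def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def stack_R_T(layers, N_in, N_sub, wavelength_nm):
    """Reflectance and transmittance of a layer stack between an
    incident medium N_in and a substrate N_sub."""
    M = [[1, 0], [0, 1]]
    for N, d in layers:
        M = mat_mul(M, layer_matrix(N, d, wavelength_nm))
    B = M[0][0] + M[0][1] * N_sub
    C = M[1][0] + M[1][1] * N_sub
    r = (N_in * B - C) / (N_in * B + C)   # amplitude reflection coefficient
    t = 2 * N_in / (N_in * B + C)         # amplitude transmission coefficient
    R = abs(r) ** 2
    T = abs(t) ** 2 * N_sub.real / N_in.real
    return R, T

# Hypothetical stack: electrode / absorber / cavity on a metal substrate.
layers = [(2.0 - 0.01j, 54.0), (4.0 - 0.4j, 330.0), (1.46 - 0.0j, 91.0)]
R, T = stack_R_T(layers, N_in=1.0 + 0j, N_sub=0.05 - 3.0j, wavelength_nm=550.0)
absorbed = 1.0 - R - T  # total power absorbed within the thin films
print(R, T, absorbed)
```

The per-layer absorption the text describes (Poynting flux in minus flux out of one layer) follows from the same matrices, evaluated at each layer boundary; here only the stack total, 1 − R − T, is shown for brevity.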
In various embodiments, the efficiency of the iPV device can be increased by increasing the ratio of the time-averaged magnitude of the Poynting vector entering the absorber layer 1203 to the time-averaged magnitude of the Poynting vector exiting the absorber layer 1203.

The power absorbed in any layer of the iPV cell, eg, the absorber layer, may depend on the entire iPV stack as described above. The amount of energy absorbed in a layer of the iPV cell is directly proportional to the product of the refractive index n of the layer, the extinction coefficient k of the layer, and the square of the electric field amplitude in the layer, n × k × |E(x)|². One approach to increasing or optimizing energy absorption in iPV devices is to reduce the amount of energy absorbed in the layers surrounding the absorber layer and thereby increase the amount of energy absorbed in the absorber layer. The amount of energy absorbed in the layers surrounding the absorber layer can be reduced, for example, by selecting a material with a low value of n × k, by reducing the thickness of the surrounding layers, by reducing the strength of the electric field in the surrounding layers, or by any combination of these approaches. For example, in one optimization method, one or more of the following can be used to increase the electric field in the absorber layer of an iPV cell.

A) The materials and thicknesses of the various layers of the iPV stack can be adjusted so that the reflected and transmitted fields reaching the active layer constructively interfere.

B) The strength of the electric field in layers of the iPV device other than the active layer can be reduced, for example, at least in part as a result of destructive interference.
C) A material for the optical cavity resonator can be selected that has a refractive index n providing appropriate phase shift and reflection, together with a low refractive index n and/or a low extinction coefficient k, so that the absorption of the optical cavity is low for wavelengths corresponding to the band gap of the active layer; this results in less light being absorbed by the optical cavity and more light being converted into electrical energy by the active layer. In some embodiments, the composition and thickness of the optical cavity can be such that the electric field within the absorber is increased, eg, for wavelengths having energy corresponding to the band gap of the active layer.

D) More generally, a material having a low product of the refractive index n and the extinction coefficient k, for wavelengths having energy corresponding to the band gap of the active layer, can be used for layers other than the active layer. Energy absorption in all layers except the active or absorber layer of an iPV device can be reduced by reducing the strength of the electric field in the layers of the iPV device other than the active layer and/or by reducing the absorption using materials with low refractive index and/or extinction coefficient k in those layers.

E) Materials with low n and/or k values, and thus low absorption, can also be used, in particular, in layers other than the active layer where the electric field strength is high.

To optimize the iPV device for increased absorption in the active or absorber layer, the thickness of the optical cavity can be selected to enhance the light intensity in the active region through interference effects. In some embodiments, the thickness of the gap in the optical cavity is selected or optimized at the design stage of the iPV cell using modeling software and numerical analysis routines.
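The low-n×k material selection described in items C) through E) above can be illustrated with a trivial ranking. The index values below are rough assumptions at roughly 550 nm, used only for illustration:

```python
# Illustrative comparison of the n × k figure of merit for candidate
# optical-cavity materials. Values are assumed, approximate numbers.

candidates = {
    "air":  {"n": 1.00, "k": 0.0},
    "SiO2": {"n": 1.46, "k": 0.0},
    "ITO":  {"n": 1.90, "k": 0.02},
}

# A lower n*k product means less parasitic absorption in the cavity layer.
ranked = sorted(candidates, key=lambda m: candidates[m]["n"] * candidates[m]["k"])
print(ranked)  # → ['air', 'SiO2', 'ITO']
```

This mirrors the modeled result discussed below, in which an air or SiO2 cavity yields higher active-layer absorption than an ITO cavity.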
The thickness of the gap in the optical cavity can also be dynamically changed in real time by further incorporating a MEMS engine or platform into the IMOD-embedded PV cell structures of FIGS. 11B-11F (see, eg, FIGS. 11G and 11H). However, in various embodiments, the gap is fixed. In some embodiments, the thickness of the active layer can also be altered or optimized, in addition to altering or optimizing the thickness of the optical cavity, to increase the absorption efficiency of the active or absorber layer.

FIG. 13 is a flow chart illustrating an embodiment of a method 1300 of fabricating the iPV device. The process begins at start state 1302 and then moves to state 1304, where the iPV device designer identifies a set of design characteristics and/or processing constraints. An iPV device comprises an optical stack comprising a plurality of layers. In general, the layers comprise an active layer and an optical resonant layer (eg, an optical cavity). Additional layers may include, for example, electrodes and electrically insulating layers. In some embodiments, the optical resonant layer comprises an electrode, an electrically insulating layer, or a layer having other functions in addition to increasing absorption in the active layer. The various parameters (eg, thickness, material) of any of these layers may need to be constrained for one or more reasons. Design characteristics and/or processing constraints include, for example, the in-plane resistance of one or more electrode layers, which may be constrained so that the collected electrons are used for electrical power rather than dissipated as heat or lost to absorption in an inactive layer.
Furthermore, the absorption in the active layer depends both on the thicknesses of all layers in the stack and on the specific materials used, so the materials and thicknesses of the constrained layer(s) are carefully selected in some embodiments.

The method then moves to state 1306, where unconstrained parameters are selected or optimized to increase the efficiency (eg, absorption efficiency) of the active layer. In one embodiment, optimizing efficiency comprises identifying a maximum value of efficiency based on at least one design characteristic. In some embodiments, the efficiency may be optimized for particular wavelengths or wavelength bands (eg, the solar spectrum, visible spectrum, infrared spectrum, or ultraviolet spectrum). This range can be at least 100 nm, 200 nm, 300 nm, 400 nm, 500 nm, 600 nm, and the like. The process of increasing or optimizing the absorption in a particular layer at a particular wavelength or wavelength band may involve calculations based on all or most of the layers in the optical stack. In some embodiments, the exact thicknesses of the layer materials can be calculated to increase or optimize the absorption in the active layer for a particular wavelength or particular wavelength band.

In some embodiments, the layers comprise thin film layers. Thus, these layers are treated as thin films in the design process. A "thin film" can have a thickness less than the coherence length of the incident light, or a thickness on the order of the coherence length of the incident light, eg, less than 5000 nm. For thin films, the phase of the light is taken into account, in what is referred to as coherent superposition, to determine the intensity levels resulting from multiple reflections. As mentioned above, the absorption efficiency of the active layer can be optimized using analysis of the coherent summation of the reflections from multiple interfaces of the iPV device.
In some embodiments, such coherent summation is used to calculate the energy input to and output from a given layer, to determine the absorption in a layer, for example in the active layer, and also to determine its absorption efficiency. This process can use Poynting vectors. Other steps of the method can also include the removal or replacement of layers found in conventional photovoltaic devices.

In some embodiments, the overall efficiency can be increased or optimized by increasing or optimizing the absorption efficiency ηabs. However, as mentioned above, the overall efficiency ηoverall depends on both the efficiency ηabs at which light is absorbed in the active layer to form electron-hole pairs and the efficiency ηcollection at which the electron-hole pairs are collected by the electrodes.

The interference principle can be used to increase or optimize the overall conversion efficiency ηoverall by increasing or optimizing one or both of the parameters ηabs and ηcollection defined above. For example, in some embodiments, the absorption efficiency ηabs can be optimized or maximized without considering the collection efficiency ηcollection. However, parameters modified to increase or optimize the absorption efficiency ηabs can also affect the collection efficiency ηcollection. For example, the thickness of the electrode or the thickness of the active layer can be varied to enhance absorption in the active layer, although this thickness adjustment can also affect the collection efficiency. Thus, in some embodiments, both the collection efficiency ηcollection and the absorption efficiency ηabs are considered and/or optimized to increase or optimize the overall efficiency ηoverall. In some other embodiments, the absorption efficiency ηabs and the collection efficiency ηcollection can be recursively optimized to maximize the overall efficiency ηoverall. Other factors can also be included in the optimization process.
In some embodiments, for example, optimizing the overall efficiency of the iPV device may also take into account heat dissipation or absorption in one or more inactive layers.

The method then proceeds to state 1308, where the photovoltaic device is fabricated according to the processing constraints and the optimized factors. After the designer completes state 1308, the method ends at end state 1310. It will be understood that other steps can be included to improve or optimize the photovoltaic device.

FIG. 14 shows a graph of modeled absorption in the wavelength range from about 400 nm to about 1100 nm for each of the embodiments described in FIGS. 11A-11C. Curve 1401 is the absorbance in absorber layer 1103 for the embodiment shown in FIG. 11A. Curve 1402 is the absorbance in absorber layer 1103 for the embodiment shown in FIG. 11B. Curve 1403 is the absorbance in absorber layer 1103 for the embodiment shown in FIG. 11C. As shown in FIG. 14, according to curve 1402, the modeled absorption in the absorber layer of the embodiment shown in FIG. 11B, at a wavelength of about 550 nm, is approximately 28% higher than the corresponding modeled absorption value (curve 1401) in the absorber layer of the embodiment of FIG. 11A. Further, according to curve 1403, the modeled absorption in the absorber layer of the embodiment shown in FIG. 11C, at a wavelength of about 550 nm, is approximately 35% higher than the corresponding modeled absorption value (curve 1401) in the absorber layer of the embodiment of FIG. 11A. Thus, the embodiments shown in FIGS. 11B and 11C with optical cavity resonators show about 10% to 35% improvement in absorption in the active region as compared to the embodiment shown in FIG. 11A. Comparing curves 1402 and 1403, between the embodiment with the ITO layer in the optical cavity shown in FIG. 11B and the embodiment with air or SiO2 in the optical cavity shown in FIG. 11C, it can be seen that the embodiment shown in FIG.
11C has higher absorption in the absorber layer 1103. This result can be explained as follows. The strength of the electric field in the active or absorber layer is high. The electric field in the optical cavity layer outside the absorber layer drops sharply but does not go to zero. The product of the refractive index n and the extinction coefficient k of ITO is low at wavelengths having energy corresponding to the band gap of the absorber layer (for example, wavelengths between 300 nm and 800 nm), but it is not as low as the product of the index of refraction n and the extinction coefficient k of air or SiO2 at those wavelengths. Therefore, the ITO layer in the optical cavity absorbs electromagnetic waves significantly compared to an air (or SiO2) layer. As a result, the absorption in the absorber layer is reduced. Curve 1403 shows that, when optimized, the modeled absorption in the active layer of the embodiment shown in FIG. 11C is about 90% in the wavelength band from 500 nm to 700 nm.

FIG. 15A shows a diagram of a single pin junction amorphous silicon solar cell structure. This device is similar to that disclosed in chapter 5 of Non-Patent Document 2, except that the PV cell contains multiple ITO layers (replacing the TCO and ZnO layers disclosed by Miro Zeman). The embodiment shown in FIG. 15A comprises a textured glass substrate 1501, a first ITO layer 1502 of about 900 nm thickness, a pin junction of about 330 nm thickness in which the region 1504 is α-Si, an 80 nm thick second ITO layer 1506, and a 300 nm thick Ag or Al layer 1507. The thicknesses of the various layers are consistent with the thicknesses disclosed by Miro Zeman, which were selected to maximize total absorption in the entire stack. This maximization was achieved by varying the thicknesses of the various layers until the PV cell appeared black. The relationship of total absorption versus wavelength is shown in FIG. 15B.
It can be seen that all wavelengths are absorbed uniformly in the PV stack. The relationship of total reflection versus wavelength for the PV device is shown in FIG. 15C. The total reflection from the PV cell is low, and accordingly the PV cell looks black. FIG. 15D shows absorption in the absorber or active layer 1504 of the PV cell. FIGS. 15E-15G show absorption in the first ITO layer 1502, the second ITO layer 1506, and the Ag or Al layer 1507, respectively. As shown in FIGS. 15D and 15E, the amount of electromagnetic radiation absorbed in the active layer 1504 is approximately equal to the amount of electromagnetic radiation absorbed in the first ITO layer 1502. Thus, this design is sub-optimal, because light that would otherwise be converted to electrical energy by the active layer 1504 is instead absorbed in the first ITO layer 1502. The amount of absorption in the second ITO layer 1506 and the Ag or Al layer 1507 is negligibly small.

However, the PV stack of FIG. 15A can be optimized by applying the interference principles of the IMOD design described above. In some embodiments, the values of the refractive index n and the extinction coefficient k for the p, i, and n layers may be substantially similar to one another, and the p, i, and n layers can be treated in the optimization process as a single layer having the combined thickness of the three layers. In one embodiment, optimization can be performed by changing the thicknesses of the first ITO layer 1502 and the second ITO layer 1506 while keeping the thickness of the active layer 1504 constant. FIG. 16A shows a contour plot 1600 showing the relationship of the integrated energy absorbed within the active or absorber layer to the thicknesses of the first ITO layer 1502 and the second ITO layer 1506. Each point in FIG. 16A is the integrated absorption in the active layer (absorption integrated over the wavelength range) for the thicknesses of the first ITO layer 1502 and the second ITO layer 1506 given by the corresponding x (horizontal) and y (vertical) axes. The lighter the shade, the greater the total absorption in the active layer. In the contour plot 1600, the maximum absorption 1610 is obtained when the thicknesses of the first ITO layer 1502 and the second ITO layer 1506 are approximately 54 nm and 91 nm, respectively. Thus, an increased or optimal absorption efficiency results when the thickness of the first ITO layer 1502 is significantly reduced from 900 nm to 54 nm. The plot of FIG. 16A shows that, contrary to the conventional idea, the absorption in the active layer does not increase in proportion to an increase in the thickness of the ITO layer. Instead, the absorption varies non-linearly with changes in thickness, and there may be an optimum ITO thickness that maximizes absorption in the active layer. The increase in absorption in the active layer 1504 is due exclusively to the significant reduction in the amount of electromagnetic radiation absorbed in the first ITO layer. Thus, contour plot 1600 can be used to determine the desired or optimal thicknesses of the electrode layers in the stack to enhance the absorption efficiency of a particular active layer 1504.

FIG. 16B shows the absorption in the active layer of the optimized PV stack. Comparing FIGS. 16B and 15D, it can be seen that the absorption in the active layer of the optimized PV stack is increased by a factor of 2 over the absorption in the active layer of the non-optimized PV stack. FIG. 16C shows the total absorption versus wavelength in the optimized PV stack. This absorption curve shows that the absorption in the wavelength range centered on red is low. Thus, an observer looking at the optimized PV stack observes that the PV cell appears reddish black, as opposed to appearing completely black as in the non-optimized PV stack. This example demonstrates that, in some embodiments, PV cells that appear black do not necessarily have the highest absorption in the active layer.
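A thickness sweep of the kind summarized by contour plot 1600 can be sketched as a two-parameter grid search. The absorption model below is a stand-in toy function, not the patent's optical model; it is constructed only to show the search structure, with its maximum placed at the (54 nm, 91 nm) point reported above:

```python
# Two-parameter thickness sweep, illustrating the grid search behind a
# contour plot like FIG. 16A. The absorption model is a hypothetical
# smooth function; a real sweep would evaluate the full thin-film
# stack model at each (t1, t2) point.
import math

def toy_active_absorption(t1_nm, t2_nm):
    """Assumed placeholder with a single interior maximum at (54, 91)."""
    return math.exp(-((t1_nm - 54) / 60) ** 2 - ((t2_nm - 91) / 80) ** 2)

best = max(
    ((t1, t2) for t1 in range(0, 201) for t2 in range(0, 201)),
    key=lambda p: toy_active_absorption(*p),
)
print(best)  # → (54, 91): thickness pair (nm) maximizing modeled absorption
```

In practice each grid point would be one full-stack optical evaluation, and the resulting array of integrated absorptions is exactly what a contour plot such as 1600 visualizes.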
In some embodiments, even if the absorption in the active layer is high, the resulting device may have some color other than perfect black. Advantageously, in some embodiments, as mentioned above, increasing the energy absorption of the PV absorber results in a linear increase in the overall energy conversion efficiency of the PV cell.

FIG. 17 shows a diagram of a photovoltaic device 1700 similar to the device shown in FIG. 11A. The photovoltaic device 1700 of FIG. 17 comprises thin-film layers including an active region 1701, which includes a Cu(In,Ga)Se2 ("CIGS") p-type layer 1706 and a CdS n-type layer 1707; the active region 1701 is not optimized to maximize absorption efficiency. The photovoltaic device shown in FIG. 17 is similar to that disclosed in Non-Patent Document 3 ("Krc et al."). This embodiment includes a glass substrate 1702, an ITO or ZnO electrode layer 1703, the polycrystalline CIGS p-type layer 1706, the CdS n-type layer 1707, and a Mo or Al reflector layer 1708.

FIGS. 18A-18C are a series of graphs of modeled absorbance versus wavelength for the CIGS p-type layer 1706 and the CdS n-type layer 1707 in the device reported by Krc et al. FIG. 18A shows an absorbance of about 60% for the CIGS p-type layer 1706 over the wavelength band from about 400 nm to about 800 nm, with an absorbance of almost 70% from about 500 nm to about 700 nm. FIG. 18B shows a graph of the absorbance of the CdS n-type layer 1707 over the wavelength range of about 400 nm to about 800 nm, with an absorbance ranging from 0% to 20%. FIG. 18C shows a graph of total absorbance for the active region 1701 over a wavelength range of about 400 nm to about 800 nm; within this range, an average absorbance of about 70% was obtained. The results of the modeled graph of FIG. 18A are approximately the same as the measured absorbance of the CIGS layer shown in FIG. 2 of Krc et al.
As described below, the measured and modeled absorbances reported by Krc et al. and shown in FIGS. 18A-18C improve dramatically if an optical cavity is placed between the active region 1701 and the reflector layer 1708 of the embodiment of FIG. 17.

FIG. 19A is a diagram of a photovoltaic device 1900A after the addition of an optical cavity 1910 between the active region 1701 and the reflector layer 1708 of FIG. 17. In particular, photovoltaic device 1700 has been optimized in accordance with the principles of IMOD design described above. In this embodiment, the optical cavity comprises transparent ITO or ZnO. The thicknesses and optical properties (e.g., refractive index n and extinction coefficient k) of the active region 1901, including the CdS n-type layer 1907 and the CIGS p-type layer 1906, were not changed. Likewise, parameters of the glass substrate 1902 and the Mo or Al reflector layer 1908, such as thickness and refractive index, were not altered by the optimization process. The thicknesses of the ITO or ZnO electrode layer 1904 and the optical cavity 1910 were varied, and the absorption in the active region 1901 was thereby increased. The optimized thickness of the ITO or ZnO electrode layer 1904 was about 30 nm, and the optimized thickness of the optical cavity 1910 was about 70 nm. The absorbances of the CIGS p-type layer 1906 and the CdS n-type layer 1907 were then modeled, as shown in FIGS. 20A-20C. FIG. 19B shows an alternative embodiment of FIG. 19A, in which the optical cavity 1910 comprises an air gap.

FIGS. 20A-20C are a series of graphs of modeled absorbance versus wavelength for the CIGS p-type layer 1906 and the CdS n-type layer 1907 in the optimized photovoltaic device 1900A of FIG. 19A. FIG. 20A shows a modeled graph of absorbance in the CIGS p-type layer 1906 over a wavelength range of about 400 nm to about 800 nm, showing an absorbance of about 60% to 90%. FIG.
20B shows a modeled graph of absorbance in the CdS n-type layer 1907 over a wavelength range of about 400 nm to about 800 nm, with 0% to 30% absorbance. FIG. 20C shows a modeled graph of the total absorption, about 90%, of the CIGS p-type layer 1906 and the CdS n-type layer 1907 over the 400 nm to 800 nm wavelength band. Thus, the absorption efficiency of the combination of the CIGS p-type layer 1906 and the CdS n-type layer 1907 was enhanced by 20% over the 400 nm to 800 nm wavelength band by applying the method described above to the embodiment of FIG. 17.

FIG. 21 is a diagram of an embodiment of an iPV device 2100 optimized by the method described above. The photovoltaic device 2100 includes an active region 2101. The photovoltaic device 2100 also comprises a glass substrate 2102 and an ITO layer 2104 disposed on the active region 2101. The active region 2101 comprises a CIGS p-type layer 2106 and a CdS n-type layer 2107. Two metal layers 2108A and 2108B are disposed over the glass substrate 2102, with the first metal layer 2108A on top of the second metal layer 2108B. The first metal layer 2108A is a reflector and simultaneously an electrode. The second metal layer 2108B is also an electrode. A dielectric material 2108C is disposed between the reflector 2108A and the electrode 2108B so that these electrical paths are electrically isolated from one another. The metal layers 2108A and 2108B each include Mo or Al. In this embodiment, an optical cavity 2110 comprising an air gap is formed between the first metal layer 2108A and the active region 2101. Air absorbs less and has a lower value of k than most other materials, and the refractive index of air is 1.0. Although an air gap may be effective for the purpose of absorption efficiency, air is a nonconductor of electricity, so an air gap alone cannot carry the current generated from the absorbed light. This problem is solved by drawing charge from the active layer using vias.
Accordingly, a first via 2111A electrically connects the first metal layer 2108A to the CIGS p-type layer 2106. A second via 2111B electrically connects the second metal layer 2108B to the ITO layer 2104, passing through the optical cavity 2110, the CIGS p-type layer 2106, and the CdS n-type layer 2107. The second via 2111B is surrounded by an insulating layer, which may, for example, electrically isolate the via from the CIGS p-type layer 2106. When optimized, the ITO layer 2104 has a thickness of 15 nm, the CdS n-type layer 2107 has a thickness of 40 nm, the CIGS p-type layer 2106 has a thickness of 360 nm, and the air-gap optical cavity 2110 has a thickness of 150 nm. The air-gap optical cavity 2110 can be replaced with other transparent dielectrics, such as silicon dioxide (SiO2) or magnesium fluoride (MgF2), or other suitable materials known in the art. In various embodiments, dielectrics with low n × k values are used. In such embodiments, the first via 2111A can advantageously connect the bottom electrode to the CIGS p-type absorber layer 2106. In various other embodiments disclosed herein, as well as in embodiments not yet described, in which an optical resonant layer (e.g., an optical cavity) comprises a non-conductive material, vias can similarly be used to make electrical connections through such non-conducting layers.

FIG. 22 is a diagram of the embodiment shown in FIG. 21 with the via 2111B and the metal electrode layer 2108B removed. In this case, electrical contact can be made, for example, by contacting the top optical resonant layer 2204, which can include a transparent conductive material such as a conductive oxide.

FIG. 23 is a diagram of one embodiment of a photovoltaic device 2300 that is similar to the embodiment of FIG. 21 except that the ITO layer 2104 is removed. Thus, the photovoltaic device 2300 comprises a glass substrate 2302, a second metal layer 2308B disposed on the glass substrate 2302, and a first metal layer 2308A disposed on the second metal layer 2308B.
An air-gap optical cavity 2310 separates the first metal layer 2308A from the CIGS p-type layer 2306 and the CdS n-type layer 2307. As described above, the first metal layer 2308A is a reflector and also an electrode, electrically connected to the base of the CIGS p-type layer 2306 by a first via 2311A. Similarly, the second metal layer 2308B comprises an electrode, electrically connected to the top of the CdS n-type layer 2307 by a second via 2311B. When optimized, the CdS n-type layer 2307 has a thickness of 40 nm, the CIGS p-type layer 2306 has a thickness of 360 nm, and the air-gap optical cavity 2310 has a thickness of 150 nm. As described above, the air-gap optical cavity 2310 can be replaced with silicon dioxide, magnesium fluoride, or other dielectrics. In such an embodiment, the first via 2311A can advantageously connect the electrode 2308A to the CIGS p-type absorber layer 2306.

FIG. 24 is a graph of modeled absorption in the CIGS p-type layer of the photovoltaic device 2300 of FIG. 23 over a wavelength range of about 400 nm to about 1100 nm. From this graph it can be seen that the CIGS p-type layer exhibits an absorption efficiency of over 90% in the wavelength range from about 500 nm to about 750 nm.

In general, a layer that increases absorption in the active layer can be incorporated into the PV device by appropriate selection of the parameters associated with that layer, such as material and dimensions. The parameters of one or more of these layers can be adjusted while keeping the parameters of the other layers constant, or, in some embodiments, one or more parameters of one or more layers can be adjusted together to increase the absorption in the active layer. In some embodiments, one or more parameters of all layers can be adjusted to increase absorption in the active layer. In various embodiments, these parameters can be adjusted at the design stage, for example, by calculating the effect of different parameters on absorption.
An optimization procedure can be used for this purpose; various other techniques can also be used to obtain parameter values that result in improved performance.

For example, FIG. 25A shows how an optical resonant layer 2506 and an optical cavity 2503 can be incorporated into a photovoltaic device and how they can be tuned to increase absorption. This device is a more generalized version of the device shown in FIGS. 19A and 19B. Parameters of the optical resonant layer 2506 and the optical cavity 2503, such as their thicknesses, can be adjusted to tune the device interferometrically and to increase absorption in the active layer.

In some embodiments, the optical resonant layer 2506 and the optical cavity 2503 can comprise electrode layers. In various embodiments, either or both of the optical resonant layer 2506 and the optical cavity 2503 can contain materials that have a low extinction (or absorption) coefficient k and/or a low refractive index n, and thus a low n × k value. For example, as mentioned above, the optical cavity 2503 can include air, a dielectric material such as SiO2, or a transparent conductive oxide (TCO) such as ITO or ZnO. Other materials with low k or near-zero k can also be used to obtain low n × k values, and still other materials are possible. Similarly, the optical resonant layer 2506 may comprise air, a dielectric material having a low extinction (or absorption) coefficient k, a conductive material such as a TCO (e.g., ITO or ZnO), or any other material having a low n × k value. Other materials can also be used.

In some embodiments, hybrid or composite structures are used for the optical cavity and/or the optical resonant layer.
For example, the optical cavity and/or the optical resonant layer can comprise air/dielectric, conductor/dielectric, or air/conductor combinations or mixtures.

In the illustrated embodiment, the active layer of the PV cell comprises an n-type CdS layer 2505 and a p-type CIGS layer 2504. In other embodiments, the active layer can include other materials. The optical stack can be deposited on the substrate 2501 using thin-film processing techniques. The substrate 2501 may comprise glass or another suitable material. In some embodiments, a reflector 2502 can be deposited between the substrate and the remainder of the optical stack, including the optical cavity, the active layer, and the optical resonant layer. The reflector can be formed from Al, Mo, or another reflective material, whether metal or dielectric. In some embodiments, the reflector can comprise a single material or a composite.

The reflector 2502 of FIG. 25A can also be selected to optimize several parameters. For example, the material and thickness of the reflector layer 2502 can be selected to increase or optimize the reflectivity over certain wavelength bands. In other embodiments, the reflector can be selected to reflect a particular wavelength band (such as red) and absorb another wavelength band (such as blue).

As mentioned above, the optical cavity 2503 and the optical resonant layer 2506 may comprise a TCO such as ITO or SnO2. In other embodiments, the optical cavity and the optical resonant layer can comprise a transparent dielectric material, an air gap, or a combination thereof. The materials used for the optical cavity 2503 and the optical resonant layer 2506 need not be the same. FIG. 25B shows an embodiment of an iPV cell in which the optical cavity 2503 comprises an air gap or a dielectric material such as SiO2, and the optical resonant layer 2506 also comprises a nonconductive material such as SiO2.
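The effect of the cavity thickness on the field seen by the absorber can be illustrated with a deliberately simplified standing-wave picture: an ideal mirror with a thin absorber held a distance d above it inside a cavity of index n. This is only a toy model (it ignores the full multilayer response), but it shows why band-integrated absorption peaks at a particular gap rather than growing monotonically with thickness:

```python
import numpy as np

def relative_intensity(d_nm, wl_nm, n_cavity=1.0):
    """Relative standing-wave intensity |E|^2 a distance d above an ideal
    mirror (which forces a field node at its surface)."""
    return np.sin(2 * np.pi * n_cavity * d_nm / wl_nm) ** 2

def best_gap(wavelengths_nm, gaps_nm, n_cavity=1.0):
    """Gap thickness that maximizes the band-averaged intensity at the
    absorber position -- a crude stand-in for integrated absorption."""
    scores = [relative_intensity(d, wavelengths_nm, n_cavity).mean()
              for d in gaps_nm]
    return gaps_nm[int(np.argmax(scores))]

wl = np.linspace(400.0, 800.0, 401)     # visible band, nm
gaps = np.arange(0.0, 301.0, 1.0)       # candidate air-gap thicknesses, nm
print(best_gap(wl, gaps))               # peaks near a quarter-wave of the band
```

The band-averaged intensity rises to a maximum near a quarter-wave gap and then falls rather than growing with gap thickness, which is the qualitative behavior exploited when the cavity thickness is optimized.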
To form a conduction path for electrons from the active layer, vias 2507a and 2507b are formed as shown in FIG. 25B. As also shown in FIG. 25B, the iPV cell comprises a reflector 2502b and an electrode 2502a. In some embodiments, electrode 2502a can comprise the same material as reflector 2502b. The reflector 2502b and the electrode 2502a each include a conductive material. Via 2507a terminates in the reflector 2502b, and via 2507b terminates in the electrode 2502a. These two conductive layers can be provided with metal leads to form external electrical connections. A dielectric material 2502c is disposed between the reflector 2502b and the electrode 2502a so that these electrical paths are electrically isolated from one another. Layers 2502a and 2502b can thus be used as electrical pathways to extract power from the active layer through the vias. In embodiments where the optical resonant layer 2506 comprises a conductive material, via 2507b may extend only to the optical resonant layer 2506; alternatively, in such an embodiment, via 2507b can be eliminated altogether.

FIG. 25C shows another embodiment of an iPV cell comprising a conductive ITO layer 2508 disposed between the active layer and the optical cavity 2503. Conduction paths for electrons coming from the active layer are formed by vias 2507a and 2507b. Via 2507a connects the ITO layer 2508 to the reflector 2502b, and via 2507b connects the n-type CdS layer 2505 to the electrode 2502a. The ITO layer 2508 and the optical cavity 2503 can form a composite optical cavity, as described with respect to FIGS. 11E-11H; thus, the ITO can be considered part of the optical cavity.

As mentioned above, one or more parameters of one or more of the layers in the devices shown in FIGS. 25A-25C can be adjusted, using interference principles, so that interference effects increase the absorption in the active layer.

FIG. 26 shows a simpler device than the devices shown in FIGS. 25A-25C.
This PV device comprises an optical cavity 2603 disposed between the active layer of the iPV device and the reflector 2602. The active layer includes an n-type CdS layer 2605 and a p-type CIGS layer 2604. The reflector layer 2602 may comprise Al, Mo, or another metallic or dielectric reflective material. As mentioned above, the optical cavity can comprise air, a dielectric material, a transparent conductive material having a low n × k value, or a combination thereof. Other materials can also be used. In some embodiments, reflector 2602 may be removed. As mentioned above, one or more parameters of one or more of the layers in the device can be adjusted, based on the principle of interference, to increase absorption in the active layer. In some embodiments, the optical cavity 2603 may be excluded, and one or more parameters of one or more of the remaining layers may still be adjusted to increase absorption in the active layer.

The parameters of the different layers can be selected based on the spectral properties of those layers. For example, gold has a high extinction coefficient k in the wavelength range centered on red and a relatively low extinction coefficient k in the wavelength range centered on blue. However, the refractive index n of gold is low in the wavelength range centered on red and high in the wavelength range centered on blue. As a result, for gold, the product n × k is low in the wavelength range centered on red and high in the wavelength range centered on blue. Thus, a reflector comprising gold predominantly reflects wavelengths centered on red and absorbs wavelengths centered on blue.
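This spectral behavior of gold can be checked with rough numbers. The (n, k) values below are approximate literature figures assumed purely for illustration; they are not taken from this text:

```python
# Approximate literature (n, k) values for gold; assumed here purely for
# illustration and not taken from this text.
gold_nk = {"blue (~450 nm)": (1.40, 1.88), "red (~650 nm)": (0.14, 3.70)}

for band, (n, k) in gold_nk.items():
    print(f"{band}: n*k = {n * k:.2f}")
# The n*k product is roughly 5x smaller in the red band than in the blue,
# consistent with gold reflecting red and absorbing blue.
```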
Thus, the absorption can be tuned by selecting a reflector material that has a low n × k value in the wavelength band corresponding to the useful light absorption region of the active layer (where light is absorbed and converted to power) and a high n × k value at wavelengths outside the useful absorption region of the active layer (where light energy may instead be converted to heat, which can reduce device performance). For example, if it is advantageous not to allow blue light to enter the iPV device, it may be desirable to form the reflector 1104 from gold. In some embodiments, the reflector material can be selected to absorb infrared wavelengths.

Similarly, as described above, the choice of a particular gap distance determines whether a particular color, e.g., red, green, or black, is reflected by the reflector layer (e.g., 1104 in FIGS. 11B-H). For example, the gap distance can be chosen such that the reflector reflects a substantial part of the incident light in the wavelength range corresponding to the band gap of the active or absorber layer, which is then absorbed by the active/absorber layer, so that the IMOD looks black. However, contrary to conventional approaches to increasing the efficiency of solar cells, the above-described method of optimizing iPV devices to increase absorption in the active layer does not always result in devices that appear completely black. In some embodiments, the device may, for example, appear reddish black or another color.

FIG. 27 shows a diagram of a conventional multi-junction photovoltaic device 2700. As shown in FIG. 27, the photovoltaic device 2700 comprises a glass substrate 2702, transparent electrodes 2704A and 2704B, active layers 2706A, 2706B, and 2706C, and a reflector layer 2708. In this embodiment, the substrate 2702 comprises glass, the first and second transparent electrodes 2704A and 2704B comprise ITO, and the reflector layer 2708 comprises Al.
The first active layer 2706A is configured to absorb blue light, the second active layer 2706B is configured to absorb green light, and the third active layer 2706C is configured to absorb red and infrared light. In some embodiments, active layers 2706A, 2706B, and 2706C include similar materials with different band gaps tuned for red, green, or blue. In some embodiments, active layers 2706A, 2706B, and 2706C include different material systems, such as silicon, GaAs, or other combinations of materials known in the art.

In multijunction photovoltaic devices, there are numerous approaches to optimizing the energy absorption at each of the junctions. For example, one approach is to dispose an optical cavity between the combined stack of multijunction active layers (e.g., 2706A-2706C) and the reflector 2708. Another approach is to place an optical resonant layer between each of the active layers forming the multijunction photovoltaic device, and an optical cavity between the last active layer and the reflector. These two approaches are described in more detail below.

FIG. 28A shows a diagram of one optimized version of the multi-junction photovoltaic device shown in FIG. 27. In this embodiment, the three absorber/active layers 2806A, 2806B, and 2806C are configured to absorb light in the "blue," "green," and "red and IR" wavelength bands, respectively. These absorber layers are sandwiched between the optical resonant layer 2804A and the optical cavity 2804B. The optical resonant layer 2804A and the optical cavity 2804B can comprise transparent electrodes, ITO, air gaps, SiO2, or other materials. If the optical resonant layer or the optical cavity comprises a non-conductive material, electrical connections can be made using vias, as shown in FIG. 28B. The labels "red," "green," and "blue" refer only to general ranges of wavelengths and not, for example, to the exact wavelength band of red.
The active layers can absorb other wavelengths. In addition, more or fewer wavelength regions may be included. Other variations are also possible.

FIG. 29A shows a diagram of an optimized multijunction photovoltaic device in which optical resonant layers are disposed not only between the respective active layers but also between the top active layer and the substrate, while an optical cavity is disposed between the bottom active layer and the reflector. For example, the optical resonant layer 2904A is disposed between the substrate 2902 and the junction 2906A. Similarly, optical resonant layers 2904B and 2904C are added to form an alternating stack of optical resonant layers and active layers 2906A, 2906B, and 2906C. The optical cavity 2905 is disposed between the last active layer 2906C and the reflector 2908. Each of the optical resonant layers 2904A-2904C and the optical cavity 2905 can comprise, for example, ITO, an air gap, SiO2, or another medium. If an optical resonant layer or the optical cavity comprises a non-conductive material, electrical connections can be made using vias, as shown in FIG. 29B. Thus, the optical stack of photovoltaic device 2900 comprises an optical resonant layer 2904A comprising ITO, an active layer 2906A configured to absorb wavelengths within the blue range, an optical resonant layer 2904B, an active layer 2906B configured to absorb wavelengths within the green range, an optical resonant layer 2904C, an active layer 2906C configured to absorb wavelengths within the red and infrared range, an optical cavity 2905, and a reflector layer 2908. Multijunction photovoltaic devices can be optimized based on the interference principles described above. For example, in this modeled and optimized multijunction photovoltaic device, the absorbance of each active layer can be enhanced by changing the thickness or material of the other layers present in the optical stack.
The photovoltaic device can further comprise an insulator 2908C and an electrode 2908A.

In some embodiments, the multijunction photovoltaic device comprises fewer optical resonant layers than shown in FIG. 29A. For example, in one embodiment, the optical resonant layer 2904A may be disposed between the substrate 2902 and the active layer 2906A, and the other optical resonant layers 2904B and 2904C may be excluded. In other embodiments, the optical resonant layer 2904B may be disposed between the active layers 2906A and 2906B, and the other optical resonant layers 2904A and 2904C may be excluded. In other embodiments, the optical resonant layer 2904C may be disposed between the active layers 2906B and 2906C, and the other optical resonant layers 2904A and 2904B may be excluded. In still other embodiments, only one of the optical resonant layers 2904A, 2904B, and 2904C may be excluded. The optical cavity 2905 can be included in or excluded from any of these embodiments. The number of active layers included can be increased or decreased, and these active layers can be separated by layers other than optical resonant layers. The number of optical resonant layers used can likewise be increased or decreased. Thus, the number, arrangement, and type of active layers, optical resonant layers, and optical cavities may vary, depending on the design and/or optimization process. As mentioned above, the labels "red," "green," and "blue" refer only to general ranges of wavelengths and not, for example, to the exact wavelength bands of red, green, and blue light. The active layers can absorb other wavelengths. Other variations are also possible.

As mentioned above, the composition and/or thickness of each layer in the different embodiments of the photovoltaic device can be optimized at the design and processing stages, using the methods described above, to increase absorption in the active layer and reduce reflection.
For example, an iPV embodiment can be optimized using the IMOD design principles described above. In some embodiments, a MEMS engine or platform can be provided to dynamically change the thickness of an optical cavity or layer in these embodiments during operation of the iPV cell. Thus, the iPV embodiments described above can be improved as a result of interference effects. Increased absorption of energy in the PV absorber/active region may result in an increase in the overall efficiency of the iPV device.

However, a given design may not be truly optimal in every respect. For example, in embodiments that include a TCO layer in the optical cavity, the electrical losses may be negligible, but the TCO can cause some optical loss. Embodiments that include air or SiO2 in the optical cavity may show a slight decrease in light absorption due to the presence of vias. In some embodiments, the presence of vias for electrical connection may result in a loss of optical aperture.

In some embodiments of the iPV device, the increased or optimized absorption efficiency of the active layer may not depend strongly on the orientation of the incident light with respect to the device. For example, the absorption efficiency for light substantially normal to the iPV device may be approximately the same as the absorption efficiency for light at a high grazing incidence angle (e.g., approximately 89 degrees from the device normal). Thus, the orientation of the solar cell need not be perfectly aligned for optimal absorption efficiency. However, the angle of incidence affects the intensity of the light reaching the active layer and thus the effective energy absorbed: the less light that reaches the solar cell, the less energy is absorbed by the active layer.
It should be noted that, for a given area of a photovoltaic device without active tracking (e.g., without moving the solar cell to follow the sun), the absorbed energy is reduced by a factor of cos(θi) as the angle of incidence θi increases.

However, in some embodiments in which the absorption efficiency changes as a function of the angle of incidence, the iPV stack can be designed for a particular angle of incidence using IMOD principles and interference effects. For example, the thickness of the optical cavity can be adjusted to increase the absorption of the desired wavelength for light incident on the device at non-normal angles. In some embodiments, the optical cavity may be variable (as opposed to fixed), for example, to accommodate the different angles of incidence of the sun at different times of day.

The principles described herein are applicable both to PV devices that are fully reflective (e.g., opaque) and to more transparent devices.

FIG. 30 shows a conventional translucent PV cell. As used herein, the term "translucent" means partially light transmissive and is not limited to 50% transmission. The semitransparent PV cell shown in FIG. 30 is formed by sandwiching a light absorbing layer 3004 between two transparent conductive oxide (TCO) layers 3005 and 3002. This stack can be disposed over the substrate 3001. Metal leads 3007 can be provided on the TCO layer 3005 to form electrical connections. A metal lead similar to 3007 can be provided in all of the embodiments described herein having an upper optical resonant layer comprising a conductive material. Such metal leads can be used in other embodiments as well. For example, in embodiments where the top layer includes a nonconductive material, a metal lead similar to 3007 can be provided on the top nonconductive layer and electrically connected to the electrode layer, e.g., through a via.

In order to optimize the translucent PV cell of FIG.
30 using the principles of optical interference and IMOD design, one approach, shown in FIG. 31, is to dispose an optical cavity 3103 between the light absorbing layer 3104 and a reflector layer 3102. In some embodiments, the top electrode layer 3105 can be an optical resonant layer that includes a transparent electrode. The upper electrode layer 3105 can include, for example, ITO or ZnO. In some embodiments, an AR coating can be disposed on the top electrode layer 3105. The thicknesses and material properties (e.g., refractive index n and extinction coefficient k) of the various layers comprising the PV cell, including the optical cavity 3103, the reflector layer 3102, and the active layer 3104, can be selected to increase absorption in the active layer. The thickness of the reflector can control the degree of transparency. For example, an iPV device with a very thin reflector may have higher transparency than a device with a relatively thick reflector layer. By reducing the thickness of the reflector layer, translucent iPV devices can be produced. For example, in some embodiments, the thickness of the reflector in a translucent iPV device can be in the range of 5 nm to 25 nm. In some embodiments, the thickness of the reflector in a translucent iPV device can be in the range of 1 nm to 500 nm. In various embodiments, the reflector has a reflectivity of at least 10%, 20%, 30%, or 40% or more. In some embodiments, the reflector has a reflectivity of 50%, 60%, 70%, 80%, 90%, or more. In some embodiments, translucent PV cells can be designed to use thinner PV materials than opaque PV cells. The thickness of the reflector layer can be incorporated into the design (e.g., optimization) calculations to increase the absorption in the active layer. A translucent PV cell designed according to the above-described method can be more efficient than the conventional PV cell of FIG. 30 due to the increased absorption efficiency.
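The reflectivity/transparency trade-off for a thin metal reflector can be sketched with a single-film transfer-matrix calculation at normal incidence. The silver-like complex index used below is a rough assumed value near 550 nm, for illustration only:

```python
import numpy as np

def film_RT(n_film, d_nm, wl_nm, n_in=1.0, n_out=1.5):
    """Normal-incidence reflectance and transmittance of a single
    (possibly metallic) film between air (n_in) and glass (n_out)."""
    delta = 2 * np.pi * n_film * d_nm / wl_nm
    m11 = m22 = np.cos(delta)
    m12 = -1j * np.sin(delta) / n_film
    m21 = -1j * n_film * np.sin(delta)
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / denom
    t = 2 * n_in / denom
    return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2

n_ag = 0.05 + 3.3j                      # assumed silver-like index, ~550 nm
for d in (5.0, 15.0, 25.0):             # the 5 nm to 25 nm range cited above
    R, T = film_RT(n_ag, d, 550.0)
    print(f"{d:4.0f} nm: R = {R:.2f}, T = {T:.2f}")
```

As the reflector thins, transmittance rises and reflectance falls; this is the trade-off the reflector-thickness term represents when it is incorporated into the design calculation.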
In other embodiments described herein, as well as in embodiments not yet described, the PV cell may be at least partially transparent or light transmissive.

For example, the multijunction PV devices shown in FIGS. 28A-29B can be made partially light transmissive by the method described above. FIG. 32A also shows an embodiment of a multijunction PV cell that may be at least partially light transmissive. The embodiment shown in FIG. 32A includes a multijunction active material comprising three active or absorber layers 3204a, 3204b, and 3204c. The three absorber layers can absorb light having different frequencies. For example, layer 3204a can absorb light having frequencies in the red and IR regions, layer 3204b can substantially absorb light having frequencies in the green region, and layer 3204c can substantially absorb light having frequencies in the blue region. The active layers can absorb other wavelengths in alternative embodiments. The reflector 3202 is disposed below the multijunction active material. The optical resonant layer 3205 is disposed on the multijunction active material. The thickness and material composition of the optical resonant layer 3205 can be selected or optimized using the interference principles described above such that the absorption in the active material is increased or maximized. In the embodiment shown in FIG. 32A, the optical resonant layer can comprise a transparent conductive material, such as a TCO or a transparent conductive nitride. However, in other embodiments, the optical resonant layer can include a transparent nonconductive dielectric, such as SiO2, or an air gap. In another embodiment, the optical resonant layer can comprise a composite structure as described above. Other materials and designs can also be used. In embodiments where the optical resonant layer comprises a non-conductive material, vias 3206 may be used to form electrical connections, as shown in FIG. 32B.
The optical laminate is disposed on a substrate 3201 as shown in FIGS. 32A and 32B. The substrate may be light transmissive or opaque as described above. Partially transmissive reflector layers may be used in other designs disclosed herein. For example, a partially light-transmitting reflector layer can be used in a PV device having a single active layer. Still other configurations are possible. As FIG. 32A shows, a PV cell can include one or more optical resonant layers without including an optical cavity. Thus, the optical cavity may be excluded in the various PV cells described herein. In the various embodiments described herein, as noted above, although the absorption in the active layer is optimized, in some embodiments the overall efficiency can be increased or optimized by additionally considering the effect of other factors such as collection efficiency. For example, one or more parameters can be adjusted to enhance the combined effect of both absorption efficiency and collection efficiency. In such embodiments, for example, the overall efficiency can be monitored in the optimization process. However, other figures of merit can also be used and can be incorporated into the optimization, design or manufacturing process. As mentioned above, the devices or systems in which this device is incorporated may be modeled, and calculations may be performed to evaluate the performance of the device or system. In some embodiments, the actual performance can be measured. For example, the overall efficiency can be measured by making an electrical connection with the electrodes contacting the active layer. For example, electrical probes 3110 and 3112 are shown in FIG. 31, which make electrical contact with one of the metal leads 3107 and with the reflector 3102, which also serves as an electrode. Electrical probes 3110 and 3112 are electrically connected to a voltmeter 3114 that measures the electrical output of the PV device.
Similar arrangements can be used for the various embodiments disclosed herein. Electrical contact can be made with the metal leads, such as via an electrode layer, to measure the electrical output signal. Other configurations can also be used. Various variations of the methods and structures described herein are also possible. Thus, in the various embodiments described herein, the performance of photovoltaic devices can be improved using interferometric techniques. In some embodiments, an optical cavity disposed between the active layer and the reflector can increase absorption in one or more active layers. However, as mentioned above, an optical resonant layer placed elsewhere may also increase the absorption in one or more active layers and correspondingly increase the efficiency. Thus, as described above, one or more parameters of one or more layers can be adjusted, for example, to increase the efficiency of the device in converting optical power to electrical power. These one or more layers may be layers already used in conventional photovoltaic devices, or layers added to the structure specifically for improved performance. Thus, the optical resonant layer is not limited to layers added to the structure to obtain an improvement. In addition, the optical resonant layer is not limited to the layers described above, but can include other layers tuned to increase absorption in the active layer using the principle of interference. The optical resonant layer or optical cavity may also serve other functions, for example operating as an electrode. Design or optimization may be implemented to increase absorption and efficiency in one or more active layers. In addition, although various techniques are described above as achieving optimization, the methods and structures described herein are not limited to truly optimal solutions.
These techniques can be used, for example, to increase the absorption in the active layer or the overall optical efficiency of the device, but not necessarily to maximize them. Similarly, the use of these techniques can reduce absorption in layers other than the active layer, but does not necessarily minimize that absorption. Likewise, the resulting structure is not necessarily the optimum result, but may nevertheless exhibit improved performance or properties. However, the methods and structures disclosed herein provide a wide range of advantages, including performance advantages for some photovoltaic devices. For example, the absorption efficiency of a photovoltaic device can be improved by using an optical cavity or other optical resonant layer in a PV cell. In some embodiments, for example, the absorption efficiency of one or more active layers is increased by at least about 20% in the presence of at least one optical resonant cavity or optical resonant layer. Here, the absorption values are integrated over the wavelengths in the solar spectrum. In some other photovoltaic devices, the absorption efficiency integrated over wavelengths in the solar spectrum can increase by at least 25%, 30%, 40%, 50%, 60%, 70%, 80%, 90% or more due to the presence of the optical cavity or optical resonant layer. In other embodiments, the increase can be 5% or more, 10% or more, or 20% or more. For some embodiments, these values can be applied when integrating over smaller wavelength bands. Thus, the interference principle can be applied to increase or optimize the efficiency of the active layer for one or more wavelengths. For example, at least one active layer of the plurality of active layers may be configured to absorb light at a wavelength of about 400 nm, with an absorption efficiency greater than 0.7.
At least one active layer of the plurality of active layers can have an absorption efficiency greater than 0.7 and can be configured to absorb light in a wavelength range of 400 nm to 450 nm, or 350 nm to 400 nm. In some embodiments, these one or more active layers can be configured to absorb light in the range of 350 nm to 600 nm, with absorption efficiencies greater than 0.7. In other embodiments, the absorption efficiency can be increased or optimized for a single wavelength in the range of 250 nm to 1500 nm, or alternatively for a bandwidth of at least 50 nm, 100 nm, or 500 nm within the wavelength band of 250 nm to 500 nm. For some embodiments, these values can be applied when integrating over smaller wavelength bands. The overall efficiency of the photovoltaic device can be increased as well. For example, in some photovoltaic devices, the overall conversion efficiency integrated over wavelengths in the solar spectrum can be increased by at least 15%, 20%, 25%, 30%, 40%, 50%, 60%, 70%, 80%, 90% or more in the presence of one or more suitable optical resonant layers. In some embodiments, this increase can be 5% or more or 10% or more. In some embodiments, the overall conversion efficiency of the photovoltaic device is greater than 0.7, 0.8, 0.9, or 0.95. In other embodiments, the overall conversion efficiency may be smaller than these values. For example, the overall conversion efficiency can be at least 0.3, 0.4, 0.5, 0.6 or more. In one embodiment, the overall conversion efficiency may be 0.1 or 0.2 or more. For some embodiments, these values can be applied when integrating over smaller wavelength bands. An increase of at least 5%, 10%, 20%, 25%, 30% or more in the absorption of solar energy in the one or more active layers may be obtained as a result of optical interference. These absorption values can be determined by integrating over the solar spectrum.
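The solar-spectrum-weighted absorption efficiency described above can be sketched as a ratio of two integrals: absorbed power over incident power. The spectra below are illustrative placeholders, not measured AM1.5 data or values from the source.

```python
# Hypothetical absorption spectrum and roughly AM1.5-shaped irradiance values
# (assumed for illustration; real calculations use measured spectral tables).
wavelengths = [400, 500, 600, 700, 800, 900, 1000]        # nm
irradiance  = [1.1, 1.5, 1.6, 1.4, 1.1, 0.9, 0.7]         # W/m^2/nm
absorption  = [0.85, 0.80, 0.72, 0.60, 0.45, 0.30, 0.15]  # fraction absorbed by active layer

def trapz(ys, xs):
    """Trapezoidal numerical integration of samples ys over grid xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(len(xs) - 1))

def solar_weighted_efficiency(absorption, irradiance, wavelengths):
    """Integrated absorbed power divided by integrated incident power."""
    absorbed = trapz([a * s for a, s in zip(absorption, irradiance)], wavelengths)
    incident = trapz(irradiance, wavelengths)
    return absorbed / incident

eff = solar_weighted_efficiency(absorption, irradiance, wavelengths)
print(f"solar-weighted absorption efficiency: {eff:.2f}")
```

Comparing `eff` computed with and without a candidate optical resonant layer in the stack gives the percentage increases discussed in the text; restricting `wavelengths` to a narrower band gives the band-limited figures.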
For some embodiments, these values can be applied when integrating over smaller wavelength bands. In some embodiments, the presence of at least one optical cavity or optical resonant layer increases the average field strength in the one or more active layers by at least 20%, 25%, or 30% when the photovoltaic device is exposed to electromagnetic radiation such as the solar spectrum. In other embodiments, the increase in average field strength is at least 40%, 50%, 60%, 70%, 80%, 90% or more. In some embodiments, the increase is 5% or more, 10% or more, or 15% or more. As described below, the average field strength corresponds to the field averaged over the thickness of the particular layer of interest, e.g., the active layer. For some embodiments, these values can be applied when integrating over smaller wavelength bands. In some embodiments, the increase in average electric field intensity integrated across the solar spectrum for one or more active layers provided by the presence of at least one optical cavity or optical resonant layer is greater than the increase in average field strength integrated over the solar spectrum for the other layers within the photovoltaic device. In some embodiments, the average field strength in the one or more active layers of the photovoltaic device may be at least 1.1 times the average field strength in the one or more active layers of a PV cell without the optical resonant layer. In some other embodiments, the average field strength in the one or more active layers of the photovoltaic device can be at least 1.2 times or 1.3 times the average field in the one or more active layers of a PV cell without the optical resonant layer. In other embodiments, the increase is at least 1.4 times, 1.5 times, 1.6 times, or 1.7 times the average electric field in the active layer of a PV cell without the one or more resonant layers.
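The averaging described above can be sketched as the mean of |E|^2 over positions sampled through the layer's thickness. The standing-wave profiles below are illustrative assumptions, not fields computed from an actual layer stack.

```python
import math

def mean_intensity(E_samples):
    """Average |E|^2 over equally spaced samples through a layer's thickness."""
    return sum(abs(E) ** 2 for E in E_samples) / len(E_samples)

# Hypothetical standing-wave amplitude profiles through an active layer,
# with and without an optical resonant layer (illustrative numbers only:
# the resonant layer is assumed to boost the field amplitude by 1.3x).
n = 50
without_layer = [math.sin(math.pi * i / (n - 1)) for i in range(n)]
with_layer = [1.3 * math.sin(math.pi * i / (n - 1)) for i in range(n)]

ratio = mean_intensity(with_layer) / mean_intensity(without_layer)
print(f"average field intensity increased by a factor of {ratio:.2f}")
```

A uniform 1.3x amplitude boost raises the average intensity by a factor of 1.69, in the range of the 1.1x to 1.7x factors quoted in the text; a real comparison would use field profiles from a transfer-matrix calculation of the two stacks.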
For some embodiments, these values can be applied when integrating over smaller wavelength bands. In some embodiments, the increase in average field strength may be large in layers of the photovoltaic device other than the one or more active layers. However, in such embodiments, the absorption in these other layers of the photovoltaic device may be less than the absorption in the one or more active layers. In some embodiments, the average electric field in the one or more active layers is high compared to the other layers, but in other embodiments, a layer other than the active layer has the highest average electric field strength. Such conditions may be achieved over wavelengths in the solar spectrum or over smaller wavelength bands. In the various embodiments disclosed, the light power absorbed by one or more active layers is increased. In some embodiments, the increase in optical power absorbed by one or more active layers is greater than the increase in optical power absorbed by all other inactive layers of the photovoltaic device combined. The increase in optical power absorbed by one or more active layers may exceed 1.1, 1.2, or 1.3 times the increase in absorbed optical power for other layers in the PV device. In other embodiments, this increase is greater than 1.4, 1.5, 1.6, or 1.7 times the increase in absorbed light power for other layers in the PV cell. As mentioned above, these values may be determined by integrating over the solar spectrum. In addition, these values can be determined for the standard solar radiation known as "Air Mass 1.5". As mentioned above, in some embodiments, these values apply over wavelength bands smaller than the solar spectrum. These values can be applied, for example, to the visible wavelength spectrum, the ultraviolet wavelength spectrum or the infrared wavelength spectrum. These values can be applied to wavelength bands of 100 nm, 200 nm, 300 nm, 400 nm, 500 nm, 600 nm, 700 nm, 800 nm, 900 nm, 1000 nm or more.
These values can be applied to larger or smaller wavelength bands as well. Thus, in some embodiments, these values apply when parameters such as absorption efficiency, overall efficiency, electric field, light power, etc., are integrated over a smaller wavelength band different from the entire solar spectrum. In addition, these values can be values for one or more active layers. For example, a PV cell can be designed to increase the absorption in one or more active layers (such as p-type, intrinsic semiconductor or n-type layers) together or separately. Thus, these values can be applied to any of these layers individually or to any combination of these layers. Similarly, one or more optical resonant layers may contribute to the levels of performance referred to herein. Likewise, the above performance values may result from the design parameters of one optical resonant layer or of a group of two or more optical resonant layers. Various alternative configurations are possible. For example, components (e.g., layers) can be added, removed, or rearranged. Likewise, processes and method steps can be added, removed or reordered. Also, although the terms thin film and layer are used herein, such terms as used herein include laminated films and multilayer structures. Such laminated films and multilayer structures can be adhered to other structures using adhesives, or can be formed on other structures using deposition or other methods. Similarly, the term active layer may be used when including p-type and n-type doped regions, and/or intrinsic portions of the active region. Similarly, other types of materials can also be used. For example, although the active layer can comprise a semiconductor, in some embodiments other materials such as organic materials can also be used. The device of the present disclosure has many possible applications.
Photovoltaic devices may be used, for example, on building structures such as houses or buildings, or in a larger installation such as a solar farm. Photovoltaic devices can be incorporated into vehicles such as cars, planes, ships, spacecraft and the like. Solar cells can be used with electronic devices including, but not limited to, cell phones, computers, and portable commercial devices. Solar cells can be used in military, medical, consumer and scientific applications. Applications beyond the range specifically described herein are also possible. Those skilled in the art will also appreciate that various modifications and alterations can be made without departing from the scope of the present invention. Such modifications and variations are intended to be within the scope of the present invention as defined by the appended claims.
Reference numerals: 101, 102 surfaces; 103 light beam; 104, 105, 107 light paths; 201 top reflector layer; 202 bottom reflector layer; 203 light ray; 204, 207 paths; 300 interferometric modulator stack; 301 glass substrate; 302 electrode layer; 303 absorber layer; 304 optical resonant cavity; 305 Al reflector; 400 IMOD; 401 ray; 402 absorber; 403, 403a, 403b rays; 404a, 404b rays; 404c transmitted ray; 405a, 405b, 405c, 405d, 405e, 405f rays; 406, 406b, 406c, 406d, 406e rays; 407, 407a rays; 407f reflected ray; 900 solar cell; 901 front electrode; 901, 905 electrodes; 902 anti-reflection (AR) coating; 903 n-type layer; 903, 904 PV material; 906 bulb; 907 external circuit; 1000 thin film PV cell; 1001 glass substrate; 1002 first transparent electrode layer; 1003 layer of PV material; 1005 second transparent electrode layer; 1006 reflector; 1101 ITO; 1102 layer of n-type material containing CdS; 1103 layer of p-type material containing CIGS; 1104 reflector layer containing Mo; 1105 glass substrate; 1106 optical cavity; 1106a ITO layer; 1106b SiO2 layer; 1106c air gap; 1107 strut; 1201, 1202, 1203, 1204 layers; 1203 absorber layer; 1204 optical cavity; 1205 metal conductor/reflector; 1300 iPV device; 1302 starting state; 1304, 1306, 1308 states; 1310 end state; 1401, 1402, 1403 curves; 1501 textured glass substrate; 1502 first ITO layer; 1504 region; 1506 second ITO layer; 1507 Ag or Al layer; 1600 contour plot; 1610 absorption; 1700 photovoltaic device; 1701 active region; 1702 glass substrate; 1703 ITO or ZnO electrode layer; 1706 Cu(In,Ga)Se2 ("CIGS") p-type layer; 1707 CdS n-type layer; 1708 Mo or Al reflector layer; 1900 photovoltaic device; 1901 active layer; 1902 glass substrate; 1904 ITO or ZnO electrode layer; 1906 CIGS p-type layer; 1907 CdS n-type layer; 1908 Mo or Al reflector layer; 1910 optical cavity; 2100 iPV device; 2100B metal electrode layer; 2101 active region; 2102 glass substrate; 2104 ITO layer; 2106 CIGS p-type layer; 2107 CdS n-type layer; 2108A first metal layer; 2108a reflector; 2108B second metal layer; 2108b electrode; 2108c dielectric material; 2110 optical cavity; 2111A first via; 2111B second via; 2204 upper optical resonant layer; 2206 polycrystalline Cu(In,Ga)Se2 (CIGS) p-type layer; 2300 photovoltaic device; 2302 glass substrate; 2306 CIGS p-type layer; 2307 CdS n-type layer; 2308A first metal layer; 2308B second metal layer; 2310 air-gap optical cavity; 2311A first via; 2501 substrate; 2502 reflector; 2502a electrode; 2502b reflector; 2503 optical cavity; 2504 p-type CIGS layer; 2505 n-type CdS layer; 2506 optical resonant layer; 2507a, 2507b vias; 2508 conductive ITO layer; 2602 reflector; 2603 optical cavity; 2604 p-type CIGS layer; 2605 n-type CdS layer; 2700 conventional multijunction photovoltaic device; 2702 glass substrate; 2704A, 2704B transparent electrodes; 2706A, 2706B, 2706C active layers; 2708 reflector layer; 2804A first light absorbing layer; 2804B second optical cavity; 2806A, 2806B, 2806C absorber/active layers; 2902 substrate; 2904A, 2904B, 2904C optical resonant layers; 2905 optical cavity; 2906A, 2906B, 2906C active layers (junctions); 2908 reflector; 3001 substrate; 3002, 3005 transparent conductive oxide (TCO) layers; 3004 light absorbing layer; 3007 metal lead; 3010 air-gap optical cavity; 3102 reflector layer; 3103 optical cavity; 3104 light absorbing layer; 3105 top electrode layer; 3107 metal lead; 3110, 3112 electrical probes; 3114 voltmeter; 3201 substrate; 3204a, 3204b, 3204c active or absorber layers; 3206 via.
Copper interconnects are formed by depositing substantially pure copper into the lower portion of an interconnect opening. The upper portion of the interconnect opening is then filled with doped copper followed by a planarization process. The resulting copper interconnect exhibits reduced electromigration while maintaining low overall resistivity.
What is claimed is:
1. A semiconductor device, comprising: a semiconductor substrate; a plurality of levels of dielectric layers and conductive layers formed on the semiconductor substrate; and an interconnect formed in at least one of the dielectric layers, the interconnect electrically connecting at least two of the conductive layers or one of the conductive layers and an active region in the semiconductor substrate, the interconnect comprising: a layer of doped copper formed along the bottom and sidewalls of the interconnect, the layer of doped copper including 7% to about 12% by weight of a dopant element, a lower portion comprising pure copper, wherein the layer of doped copper encapsulates the pure copper in the lower portion, and an upper portion comprising doped copper formed directly on the pure copper lower portion, the upper portion including 7% to about 12% by weight of a dopant element.
2. The semiconductor device of claim 1, wherein the interconnect represents at least one of a contact, a via and an interconnect line.
3. The semiconductor device of claim 1, wherein the lower portion comprises about 40% to about 90% of the interconnect.
4. The semiconductor device of claim 1, wherein the dopant element comprises at least one of tin, zirconium, strontium, palladium, magnesium, chromium, and tantalum.
5. The semiconductor device of claim 1, wherein the doped copper covers a substantial portion of an upper surface of the interconnect.
6. The semiconductor device of claim 1, wherein the layer of doped copper formed along the bottom and sidewalls includes at least 8% by weight of a dopant element.
7. The semiconductor device of claim 1, wherein the upper portion includes at least 8% by weight of a dopant element.
8. A semiconductor device, comprising: a semiconductor substrate; a plurality of levels of dielectric layers and conductive layers formed on the semiconductor substrate; and an interconnect formed in at least one of the dielectric layers, the interconnect electrically connecting at least two of the conductive layers or one of the conductive layers and an active region in the semiconductor substrate, the interconnect comprising: a copper alloy seed layer formed along the bottom and sidewalls of the interconnect, a lower portion comprising substantially pure copper formed over the seed layer, the copper alloy seed layer surrounding the substantially pure copper in the lower portion and impeding electromigration of the substantially pure copper, and an upper portion comprising doped copper, the doped copper including 7% to about 12% by weight of a dopant element, wherein the upper portion has increased electromigration resistance as compared to the lower portion of the interconnect.
9. The semiconductor device of claim 8, wherein the dopant element in the upper portion comprises at least one of zirconium, strontium, palladium, magnesium, chromium, and tantalum.
10. The semiconductor device of claim 8, wherein the copper alloy seed layer includes about 0.3% to about 12% by weight of a dopant element, the dopant element comprising at least one of tin, zirconium, strontium, palladium, magnesium, chromium, and tantalum.
11. The semiconductor device of claim 8, wherein the interconnect represents at least one of a contact, a via and an interconnect line.
12. The semiconductor device of claim 8, wherein the lower portion comprises about 40% to about 90% of the interconnect.
13. The semiconductor device of claim 8, wherein the doped copper covers a substantial portion of an upper surface of the interconnect and exhibits increased electromigration resistance as compared to the lower portion of the interconnect.
14. The semiconductor device of claim 8, wherein the upper portion comprising doped copper includes at least 9% by weight of a dopant element.
15. The semiconductor device of claim 8, wherein the copper alloy seed layer includes at least 9% by weight of a dopant element.
16. A semiconductor device, comprising: a semiconductor substrate; a plurality of levels of dielectric layers and conductive layers formed on the semiconductor substrate; and an interconnect formed in at least one of the dielectric layers, the interconnect electrically connecting at least two of the conductive layers or one of the conductive layers and an active region in the semiconductor substrate, the interconnect comprising: a lower portion comprising substantially pure copper, the lower portion comprising about 40% to about 90% of the interconnect, and an upper portion formed directly on the substantially pure copper lower portion, the upper portion including 7% to about 12% by weight of a dopant element, wherein the upper portion has increased electromigration resistance as compared to the lower portion of the interconnect.
17. The semiconductor device of claim 16, wherein the dopant element comprises at least one of zirconium, strontium, palladium, magnesium, chromium, and tantalum.
18. The semiconductor device of claim 16, further comprising: a layer of doped copper formed along the bottom and sidewalls of the interconnect, the doped copper encapsulating the substantially pure copper in the lower portion of the interconnect.
19. The semiconductor device of claim 18, wherein the layer of doped copper comprises at least one of tin, zirconium, strontium, palladium, magnesium, chromium, and tantalum.
20. The semiconductor device of claim 16, wherein the interconnect represents at least one of a contact, a via and an interconnect line.
21. The semiconductor device of claim 16, wherein the doped copper covers a substantial portion of an upper surface of the interconnect.
22. The semiconductor device of claim 16, wherein the upper portion includes at least about 10% by weight of a dopant element.
23. The semiconductor device of claim 16, further comprising: a layer of doped copper formed along the bottom and sidewalls of the interconnect, the layer of doped copper including at least about 10% by weight of a dopant element.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to the following commonly assigned, copending application: Ser. No. 09/593,669, filed Jun. 14, 2000, entitled: METHOD OF MANUFACTURING A SEMICONDUCTOR DEVICE HAVING COPPER INTERCONNECTS.
TECHNICAL FIELD
The present invention relates to a semiconductor device and a method of manufacturing a semiconductor device having copper interconnects. The present invention has particular applicability to high density semiconductor devices with submicron design features.
BACKGROUND ART
The escalating requirements for high density and performance associated with ultra large scale integration semiconductor devices require design features of 0.25 microns and under, increased transistor and circuit speeds, high reliability and increased manufacturing throughput. The reduction of design features to 0.25 microns and under challenges the limitations of conventional methodology.
Conventional semiconductor devices typically comprise a semiconductor substrate, normally made of monocrystalline silicon, and multiple dielectric and conductive layers formed thereon. In a conventional semiconductor device 100 illustrated in FIG. 1, substrate 1 is provided with field oxide 2 for isolating an active region including source/drain regions 3, and a gate electrode 4, typically of doped polysilicon, above the semiconductor substrate with gate oxide 5 therebetween. Interlayer dielectric layer 6, typically silicon dioxide, is then deposited thereover and openings formed using conventional photolithographic and etching techniques. The openings are filled with conductive material to establish electrical contact between subsequently deposited conductive layer 8 and source/drain regions 3 through contacts 7, and to transistor gate electrode 4.
Dielectric layer 9, typically silicon dioxide, is deposited on conductive layer 8, and another conductive layer 10, typically aluminum or an aluminum-base alloy, is formed on dielectric layer 9 and electrically connected to conductive layer 8 through vias 11.With continued reference to FIG. 1, conductive layer 10 is the uppermost conductive layer and, hence, constitutes the wire bonding layer. Dielectric layer 12, also typically silicon dioxide, is deposited, and a protective dielectric scratch resistant topside layer 13 is deposited thereon. Protective dielectric layer 13 typically includes a nitride layer, such as silicon nitride (Si3N4). Alternatively, protective dielectric layer 13 may include a dual topcoat comprising a nitride layer on an oxide layer. The protective dielectric layer 13 provides scratch protection to the semiconductor device 100 and protection against moisture and impurity contamination during subsequent processing. After deposition of protective dielectric layer 13, conventional photolithographic etching techniques are employed to form an opening to expose wire bonding layer 10 for external connection via bonding pad 14 and electrically conductive wires 15 or an external connection electrode (not shown).Although only two conductive layers 8 and 10 are depicted in FIG. 1 for illustrative convenience, conventional semiconductor devices may include more than two conductive layers, e.g., five conductive metal layers, depending on design requirements. Also in the interest of illustrative convenience, FIG. 1 does not illustrate any particular type of plug or barrier layer technology. However, such technology is conventional and, therefore, the details of such features are not set forth herein.As device features continue to shrink in size, the interconnect structures, such as contacts 7 and vias 11 enable the semiconductor device 100 to offer more packing density, higher speeds and more flexibility in circuit design. 
Various metals, such as aluminum and aluminum-base alloys, have typically been used to form the electrical interconnects. More recently, copper and copper-base alloys have also been used to fill the openings to form the electrical interconnects. In such cases, the copper is typically deposited via a single electroplating process. That is, a single plating solution employing one type of plating chemistry is supplied to an electroplating chamber where the electroplating proceeds to fill the openings that form the interconnects. One problem with copper interconnects is that copper has low electromigration resistance and readily diffuses through silicon dioxide, the typical dielectric interlayer used in the manufacture of semiconductor devices. In some prior processes, a dopant has been added to the copper to improve the electromigration resistance of the copper. The dopant element forms intermetallic compounds with the copper and increases the electromigration resistance of the copper. In processes that employ copper alloys, however, the copper alloy is typically deposited throughout the entire opening that will form the interconnect, or deposited and annealed to diffuse the dopant element throughout the entire interconnect structure. This use of copper alloys may help solve electromigration problems, but including the dopant throughout the entire interconnect increases the resistivity of the interconnect.
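The resistivity penalty of doping the entire interconnect, versus doping only part of it, can be sketched with a simple mixture estimate. The doped-copper resistivity used here is an assumed illustrative value, not a figure from the source; for current flowing along an interconnect line the pure and doped portions conduct in parallel, while for current through a via they are in series.

```python
def effective_resistivity(f_pure, rho_pure, rho_doped):
    """Effective resistivity of a line whose cross-section is a fraction
    f_pure pure copper conducting in parallel with doped copper.
    (For a via, the series estimate would be f*rho_pure + (1-f)*rho_doped.)"""
    sigma = f_pure / rho_pure + (1.0 - f_pure) / rho_doped  # parallel conductances
    return 1.0 / sigma

RHO_CU = 1.7     # uOhm*cm, approximate bulk resistivity of pure copper
RHO_DOPED = 4.0  # uOhm*cm, assumed value for copper with several wt% dopant

for f in (0.0, 0.4, 0.7, 0.9):
    rho = effective_resistivity(f, RHO_CU, RHO_DOPED)
    print(f"{f:.0%} pure Cu -> rho_eff ~ {rho:.2f} uOhm*cm")
```

Under these assumptions, a line that is 40% to 90% pure copper stays much closer to the pure-copper resistivity than a fully doped line, which is the tradeoff the two-portion structure exploits.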
This increased resistivity leads to slower processing associated with the semiconductor device.
DISCLOSURE OF THE INVENTION
There exists a need for a semiconductor device and a method for manufacturing a semiconductor device that improves electromigration problems associated with copper interconnects while maintaining low resistivity of the interconnect. These and other needs are met by the present invention, where substantially pure copper is introduced into the lower portion of an interconnect opening followed by the introduction of doped copper at the top portion of the opening. The copper interconnect is then planarized, resulting in a copper interconnect having reduced electromigration and low overall resistivity. Additional advantages and other features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following, or may be learned from the practice of the invention. The advantages and features of the invention may be realized and obtained as particularly pointed out in the appended claims. According to the present invention, the foregoing and other advantages are achieved in part by a method of forming an interconnect in a semiconductor device. The method includes forming an opening in a dielectric layer and depositing substantially pure copper to fill a portion of the opening. The method also includes depositing doped copper over the substantially pure copper to fill the opening and planarizing the semiconductor device so that the filled opening is substantially coplanar with an upper surface of the dielectric layer. According to another aspect of the invention, a semiconductor device is provided. The semiconductor device comprises a semiconductor substrate and a plurality of levels of dielectric layers and conductive layers formed on the semiconductor substrate.
The semiconductor device also includes an interconnect formed in at least one of the dielectric layers. The interconnect electrically connects at least two of the conductive layers or one of the conductive layers and an active region in the semiconductor substrate. The interconnect includes a lower portion comprising substantially pure copper and an upper portion comprising doped copper.
Other advantages and features of the present invention will become readily apparent to those skilled in this art from the following detailed description. The embodiments shown and described provide illustration of the best mode contemplated for carrying out the invention. The invention is capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawings are to be regarded as illustrative in nature, and not as restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
Reference is made to the attached drawings, wherein elements having the same reference number designation represent like elements throughout.
FIG. 1 schematically illustrates the cross-section of a conventional semiconductor device.
FIG. 2A schematically illustrates the formation of interconnect openings in a dielectric layer in accordance with an embodiment of the present invention.
FIG. 2B schematically illustrates the partial filling of the interconnect openings in FIG. 2A in accordance with an embodiment of the present invention.
FIG. 3 illustrates filling the remainder of the interconnect openings in FIG. 2B, in accordance with an embodiment of the present invention.
FIG. 4 illustrates the cross-section of the semiconductor device of FIG. 3 after planarization, in accordance with an embodiment of the present invention.
FIG. 5 illustrates the cross-section of an interconnect formed in accordance with an embodiment of the present invention.
BEST MODE FOR CARRYING OUT THE INVENTION
The present invention addresses and solves the problems of electromigration associated with copper interconnects while maintaining low overall resistivity of the interconnects.
FIG. 2A illustrates the cross-section of a semiconductor device 200 formed in accordance with an embodiment of the present invention. Referring to FIG. 2A, a dielectric layer 22, such as silicon dioxide or another material having a low dielectric constant (K), is formed above semiconductor substrate 20, typically comprising monocrystalline silicon. The dielectric layer 22 may also be formed of several films. For example, the dielectric layer 22 may be a composite including a low K material, a nitride layer formed thereon to serve as an anti-reflective coating (ARC) for subsequent lithographic and etching steps, and a TEOS or nitride capping layer to protect the low K material. The dielectric layer 22 is shown directly above the substrate 20. It should be understood, however, that dielectric layer 22 may be an interlayer dielectric layer formed a number of layers above the surface of semiconductor substrate 20. For example, dielectric layer 22 may be an interlayer dielectric formed above a number of conductive and other dielectric layers (not shown) in semiconductor device 200.
Openings 24 are formed in dielectric layer 22 using conventional photolithographic and etching techniques. These openings 24 may represent holes for forming contacts or vias, or trenches for forming interconnect lines. In FIG. 2A, three openings 24 are shown for simplicity and to illustrate various sized openings having different aspect ratios.
The present invention, however, may be used to form any number of interconnects having any particular feature sizes and aspect ratios, based on the particular circuit requirements.

As discussed previously, conventional practices for forming interconnects use aluminum, aluminum-base alloys, copper or copper-base alloys. The present invention departs from conventional practices by depositing copper and a copper-base alloy in two plating processes to form the interconnect structure. FIG. 2B illustrates an exemplary embodiment of the present invention in which the first plating deposits essentially pure copper into the openings 24 using a non-conformal electroplating chemistry.

For example, the first plating process may use a plating solution that includes additives that enhance bottom filling of openings 24. Any conventional additive chemistry that is designed to enhance bottom filling, such as Nanoplate 2001 or Ultrafill 2001, both manufactured by Shipley Company of Marlborough, Mass., may be mixed with the plating solution used in the first plating process. Other plating chemistries designed to enhance the filling of the bottom portion of openings may also be used.

The bottom-enhanced filling chemistry is prepared in a conventional plating solution tank and supplied to a conventional electroplating chamber to begin the first plating process. Power is supplied to the electroplating chamber where the semiconductor device 200 acts as one of the two electrodes. The electroplating proceeds to deposit the essentially pure copper into openings 24. The electroplating is monitored so that a predetermined amount of copper is deposited into the openings 24. According to an exemplary embodiment of the present invention, a layer of essentially pure copper 30 is deposited on semiconductor device 200 until the openings are about 40% to about 90% filled. For example, FIG. 2B shows the openings 24 being about 70% filled.
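The endpoint monitoring described above (terminating the first plating once the openings are roughly 40% to 90% filled) can be related to plating time through Faraday's law of electrolysis. The sketch below is illustrative only: the opening depth, current density, and current efficiency are assumed values for illustration, not parameters taken from this disclosure.

```python
# Sketch: estimating electroplating time to reach a target copper fill
# depth via Faraday's law. All process values (current density, opening
# depth, fill fraction) are illustrative assumptions, not values from
# the disclosure, which specifies only a ~40%-90% fill target.

F = 96485.0          # Faraday constant, C/mol
M_CU = 63.55         # molar mass of copper, g/mol
RHO_CU = 8.96        # density of copper, g/cm^3
N_ELECTRONS = 2      # Cu(2+) + 2e- -> Cu

def plating_time_s(target_thickness_cm, current_density_a_cm2, efficiency=1.0):
    """Time to deposit a given copper thickness at a given current density."""
    # Deposition rate (cm/s) = J * M / (n * F * rho), scaled by efficiency.
    rate = current_density_a_cm2 * M_CU / (N_ELECTRONS * F * RHO_CU) * efficiency
    return target_thickness_cm / rate

# Example: fill 70% of a 0.5 um (5e-5 cm) deep opening at 20 mA/cm^2.
t = plating_time_s(0.7 * 5e-5, 0.020)
print(f"approximate plating time: {t:.1f} s")
```

In practice the monitored quantity would be the integrated plating charge rather than elapsed time, but the proportionality is the same.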
The particular percentage that the essentially pure copper 30 fills the openings 24 may be optimized based on the particular circuit requirements, as described in more detail below. After the predetermined amount of pure copper 30 has been deposited, the first plating process is terminated.

The present invention further departs from conventional methodology by using a second plating process to deposit doped copper over the pure copper. According to an exemplary embodiment of the present invention, a second plating solution employing a conformal filling chemistry is prepared in a plating solution tank to fill the remaining portion of the openings 24. The second plating solution also contains a dopant to form a copper alloy in the unfilled portions of the openings 24. The dopant element used to form the copper alloy may include tin, zirconium, strontium, palladium, magnesium, chromium or tantalum. Alternatively, any other dopant element that is known to increase the electromigration resistance of copper may be used. According to an exemplary embodiment, the plating solution is designed so that the percentage by weight of the dopant element in the copper alloy ranges from about 0.3% to about 12.0%, based on the particular dopant element and the particular circuit requirements, as described in more detail below. Other percentages of the dopant element may be used in alternate embodiments of the present invention.

The second plating solution is then supplied to the electroplating chamber to deposit the doped copper into the unfilled portions of the openings 24. According to an exemplary embodiment, the second plating process may be an electroless plating process. In this case, no electrical potential needs to be applied to the semiconductor device 200 while the plating process occurs. The electroless plating process relies on the autocatalytic deposition of the doped copper by the interaction of the agents in the plating solution.
Alternatively, the second plating process may be an electroplating process. In either case, the second plating process deposits a layer of doped copper 32 on the semiconductor device 200 until the openings 24 are completely filled, as illustrated in FIG. 3. The layer of doped copper 32 also forms over the dielectric layer 22.

According to an exemplary embodiment of the present invention, the same plating chamber may be used for both platings, i.e., the copper plating and the copper alloy plating. In this scenario, the plating solution from the first plating is drained or returned to a holding tank and the second plating solution is supplied to the plating chamber. The supply of the two plating solutions may slightly overlap to preserve continuity and keep the surface of semiconductor device 200 from drying or starving for plating solution. Alternatively, a separate plating chamber may be used for the second plating process. In this scenario, the semiconductor device 200 is transported from the first plating chamber to a second plating chamber. Using separate plating chambers enables the first plating chamber to reuse existing plating solution used in the first electroplating process and also enables the second plating chamber to reuse plating solution used in the second plating process.

After the doped copper layer 32 has been deposited, the semiconductor device 200 is subjected to a chemical mechanical polishing (CMP) process. The CMP removes excess copper alloy 32 over the dielectric layer 22 and the filled openings 24 and planarizes the copper deposited in openings 24 with the upper surface of the dielectric layer 22.

FIG. 4 illustrates the results of the CMP of semiconductor device 200. After CMP, the pure copper 30 remains in the bottom portion of the openings 24 and the copper alloy 32 remains in the upper portion of the openings 24.
Additionally, the upper surfaces of the filled interconnect openings 24 are substantially coplanar with the upper surface of the dielectric layer 22. In this manner, subsequent processing steps may be performed over a substantially planar and smooth surface.

The resulting interconnect structures illustrated in FIG. 4 advantageously include the copper alloy in the areas requiring reduced electromigration, such as the upper surface of the interconnects. Additionally, the essentially pure copper is located throughout the remainder of the interconnect structure, resulting in low overall resistivity and higher speed interconnects. The particular percentage of the interconnect filled with copper versus the copper alloy may be optimized to provide increased electromigration benefits and maintain low overall resistivity. For example, in situations where operating speed is more important than increased electromigration resistance, the percentage of the interconnects that include the copper alloy may be reduced. Additionally, in such situations, the percentage of the dopant element in the copper alloy may also be reduced to further decrease the overall resistivity. In this manner, the preferential deposition of copper and a copper alloy may be optimized to provide improved electromigration resistance while maintaining low overall resistivity.

It should be understood that FIG. 4 does not illustrate any diffusion barrier layer that may be deposited in openings 24 prior to deposition of the copper layer 30. Such diffusion barrier layers are well known and further impede the electromigration of copper into various dielectric layers. It should also be noted that FIG. 4 does not illustrate a copper alloy seed layer that may be deposited on a diffusion barrier layer to enhance the adhesion of the undoped copper layer 30 during electroplating.

According to an exemplary embodiment of the present invention, a copper alloy seed layer may be deposited along the bottom and sidewall portions of the interconnect openings 24 to carry electrical current for electroplating. In depositing a relatively thin seed layer in the interconnect openings, the techniques disclosed in co-pending U.S. patent application Ser. No. 09/561,622, now U.S. Pat. No. 6,228,759, entitled: "Method of Forming an Alloy Precipitate to Surround Interconnect to Minimize Electromigration," may be used.

For example, FIG. 5 illustrates a copper alloy seed layer 50 conformally deposited along the sidewalls 40 and bottom 42 of the respective interconnect openings 24 in semiconductor device 300. The copper alloy seed layer 50 may be deposited using any conventional process, such as chemical vapor deposition (CVD), ionized metal plasma (IMP) deposition, physical vapor deposition (PVD) or other known processes to conformally deposit a relatively thin layer in the interconnect openings 24. The thickness of the copper alloy seed layer 50 may range from about 200 Å to about 1000 Å, depending on the particular circuit requirements.

The dopant element in the copper alloy seed layer 50 may include magnesium, aluminum, zinc, zirconium, tin, nickel, palladium, silver or gold. Alternatively, other dopant elements may be used in copper alloy seed layer 50. The percentage by weight of the dopant element in the copper alloy seed layer 50 may be optimized based on the particular device requirements.
For example, the percentage by weight of the dopant element in copper alloy seed layer 50 may range from about 0.3% to about 12%, based on the particular dopant and the particular circuit requirements.

After the copper alloy seed layer 50 is deposited, the first and second plating processes described with regard to FIGS. 2B and 3 are performed to deposit the essentially pure copper layer 30 and the doped copper layer 32. A CMP follows to planarize the semiconductor device 300, resulting in the semiconductor device 300 illustrated in FIG. 5. Advantageously, the doped copper seed layer 50 essentially encapsulates the pure copper 30, resulting in improved electromigration resistance throughout the entire interconnect structure.

Thus, in accordance with the present invention, an interconnect is formed using essentially pure copper and doped copper. Advantageously, the resulting interconnect structure exhibits improved electromigration resistance in areas more susceptible to electromigration while maintaining low overall resistivity. A copper alloy seed layer 50 formed along the bottom and sidewalls of the interconnect opening further improves the electromigration resistance of the interconnect, thereby further improving the reliability of the semiconductor device. The present invention is also cost effective and can be easily integrated into conventional processing.

In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, the present invention can be practiced without resorting to the specific details set forth herein.
In other instances, well known processing structures have not been described in detail, in order not to unnecessarily obscure the thrust of the present invention.

The dielectric and conductive layers used in manufacturing a semiconductor device in accordance with the present invention can be deposited by conventional deposition techniques. For example, metallization techniques, such as various types of chemical vapor deposition (CVD) processes, including low pressure chemical vapor deposition (LPCVD) and enhanced chemical vapor deposition (ECVD), can be employed.

The present invention is applicable in the manufacturing of semiconductor devices and particularly in semiconductor devices with design features of 0.25 microns and below, resulting in increased transistor and circuit speeds and improved reliability. The present invention is applicable to the formation of any of various types of semiconductor devices, and hence, details have not been set forth in order to avoid obscuring the thrust of the present invention. In practicing the present invention, conventional photolithographic and etching techniques are employed and, hence, the details of such techniques have not been set forth herein in detail.

Only the preferred embodiments of the invention and a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the invention is capable of use in various other combinations and environments and is capable of modifications within the scope of the inventive concept as expressed herein.

For example, the present invention has been described with the example of single level interconnects formed by creating openings in a dielectric layer and filling the openings. The present invention is also applicable to other situations where interconnects are formed, such as dual damascene techniques which form a conductive via that contacts an upper trench section.
In this scenario, the pure copper may be deposited in the conductive via and a portion of the conductive trench. The upper portion of the conductive trench may then be filled with doped copper.
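The copper/copper-alloy trade-off described in the foregoing embodiments can be illustrated with a simple parallel-conduction model: along the length of the line, the pure-copper lower portion and the doped-copper upper portion conduct in parallel, so their conductances add. The resistivity values in the sketch below are rough, assumed figures chosen only for illustration (they are not taken from this disclosure), and they show how the effective line resistivity falls as the pure-copper fill fraction grows.

```python
# Sketch: effective resistivity of an interconnect whose lower fraction
# is pure Cu and whose upper fraction is doped Cu. Along the line the
# two layers conduct in parallel, so conductances add. The resistivity
# values are illustrative assumptions, not figures from the disclosure.

RHO_PURE_CU = 1.7    # micro-ohm-cm, approximate bulk copper
RHO_DOPED_CU = 3.0   # micro-ohm-cm, assumed value for a Cu alloy

def effective_resistivity(pure_fraction, rho_pure=RHO_PURE_CU, rho_doped=RHO_DOPED_CU):
    """Parallel combination: sigma_eff = f/rho_pure + (1 - f)/rho_doped."""
    sigma = pure_fraction / rho_pure + (1.0 - pure_fraction) / rho_doped
    return 1.0 / sigma

# Sweep the disclosed 40%-90% pure-copper fill range.
for f in (0.4, 0.7, 0.9):
    print(f"pure-Cu fill {f:.0%}: rho_eff ~ {effective_resistivity(f):.2f} micro-ohm-cm")
```

This is the quantitative intuition behind reducing the copper-alloy percentage (or its dopant concentration) when operating speed matters more than electromigration margin.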
A microelectronic substrate and method for removing conductive material from a microelectronic substrate. In one embodiment, the microelectronic substrate includes a conductive or semiconductive material with a recess having an initially sharp corner at the surface of the conductive material. The corner can be blunted or rounded, for example, by applying a voltage to an electrode in fluid communication with an electrolytic fluid disposed adjacent to the corner. Electrical current flowing through the corner from the electrode can oxidize the conductive material at the corner, and the oxidized material can be removed with a chemical etch process.
CLAIMS

1. A method for processing a microelectronic substrate, comprising: disposing an electrolytic fluid adjacent to a conductive material of the microelectronic substrate, the conductive material having a first surface in a first plane and a recess in the first surface, the recess being bounded by a second surface in a second plane, the conductive material further having a corner between the first and second surfaces; and removing at least part of the conductive material from the corner by positioning first and second electrodes in fluid communication with the electrolytic fluid and coupling at least one of the electrodes to a source of electrical potential. 2. The method of claim 1 wherein the microelectronic substrate has a face surface and the recess extends generally transverse to the face surface, further wherein removing at least part of the conductive material includes positioning two electrodes to face toward the face surface, coupling at least one of the electrodes to a source of electrical potential, and disposing an electrolytic fluid between the face surface and the electrodes. 3. The method of claim 1, further comprising: emitting electrical signals from an electrode spaced apart from the microelectronic substrate; receiving the electrical signals at the corner of the conductive material; oxidizing at least part of the conductive material at the corner by passing the electrical signals through the conductive material; and exposing an oxidized portion of the conductive material to a chemical etchant. 4. The method of claim 1 wherein the first surface of the conductive material is positioned proximate to a generally non-conductive material, with the generally non-conductive material positioned between the first surface and at least one of the electrodes, and wherein removing at least part of the conductive material from the corner includes removing conductive material engaged with the generally non-conductive material. 5.
The method of claim 1, further comprising: disposing a generally non-conductive layer on the conductive material; and removing at least part of the non-conductive layer to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 6. The method of claim 1, further comprising: disposing an oxide layer on the conductive material; disposing a nitride layer on the oxide layer; and removing at least part of the nitride layer and part of the oxide layer to expose the corner of the conductive material before removing conductive material from the corner. 7. The method of claim 1 wherein removing the conductive material includes oxidizing at least a portion of the conductive material by passing electrical current through the portion, and exposing the portion to an etchant. 8. The method of claim 1, further comprising selecting the electrolyte to include water and at least one of hydrochloric acid and hydrofluoric acid. 9. The method of claim 1 wherein removing at least part of the conductive material includes passing electrical current into the conductive material at a rate of from about one to about 500 milliamps per square centimeter. 10. The method of claim 1 wherein removing at least part of the conductive material includes selecting the source of electrical potential to provide about 15 volts rms to the conductive material. 11. The method of claim 1 wherein removing at least part of the conductive material includes selecting a current passing through the conductive material to vary at approximately 60 Hz. 12. The method of claim 1 wherein removing at least part of the conductive material includes selecting a current passing through the conductive material to be an alternating current. 13. The method of claim 1, further comprising selecting the electrolytic fluid to include water, hydrochloric acid, and hydrofluoric acid in a ratio of about 500:1:1. 14.
The method of claim 1, further comprising selecting the conductive material to include doped silicon. 15. The method of claim 1, further comprising selecting at least one of the first and second electrodes to include at least one of platinum, tantalum and graphite. 16. The method of claim 1, further comprising positioning at least one of the first and second electrodes a distance of from about one millimeter to about two millimeters from the microelectronic substrate. 17. The method of claim 1, further comprising disposing an insulating layer on walls of the recess after removing material from the corner. 18. The method of claim 1, further comprising disposing a dielectric material in the recess. 19. The method of claim 1 wherein removing at least part of the conductive material includes reducing a rate at which the conductive material is removed from the corner by rounding the corner. 20. A method for processing a microelectronic substrate, comprising: disposing a generally non-conductive material adjacent to a conductive material of the microelectronic substrate; forming a recess extending through the generally non-conductive material and into the conductive material, the recess defining a corner at least proximate to an interface between the conductive material and the generally non-conductive material; and removing at least part of the conductive material from the corner to at least partially blunt the corner by exposing the corner to an electrical potential. 21. The method of claim 20 wherein removing at least part of the conductive material includes positioning a first electrode and a second electrode proximate to and spaced apart from the microelectronic substrate, coupling at least one of the electrodes to a source of electrical potential, passing an electrical current from at least one of the electrodes to the corner to oxidize conductive material at the corner, and exposing oxidized conductive material at the corner to an etchant. 22.
The method of claim 20, further comprising: emitting electrical signals from an electrode spaced apart from the microelectronic substrate; receiving the electrical signals at the corner of the conductive material; oxidizing at least part of the conductive material at the corner by passing the electrical signals through the conductive material; and exposing an oxidized portion of the conductive material to a chemical etchant. 23. The method of claim 20 wherein removing at least part of the conductive material from the corner includes removing conductive material engaged with the generally non-conductive material. 24. The method of claim 20, further comprising removing at least part of the non-conductive material to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 25. The method of claim 20, further comprising: disposing an oxide layer on the conductive material; disposing a nitride layer on the oxide layer; and removing at least part of the nitride layer and part of the oxide layer to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 26. The method of claim 20 wherein removing the conductive material includes oxidizing at least a portion of the conductive material by passing electrical current through the portion, and exposing the portion to an etchant. 27. The method of claim 20 wherein removing at least part of the conductive material includes passing electrical current into the conductive material at a rate of about 100 milliamps. 28. The method of claim 20, wherein removing at least part of the conductive material includes passing electrical current into the conductive material at a potential of about 15 volts rms. 29. The method of claim 20 wherein removing at least part of the conductive material includes passing a current through the conductive material at a frequency of approximately 60 Hz. 30. 
The method of claim 20 wherein removing at least part of the conductive material includes selecting a current passing through the conductive material to be an alternating current. 31. The method of claim 20, further comprising selecting the conductive material to include doped silicon. 32. The method of claim 20, wherein removing at least part of the conductive material includes positioning first and second electrodes in fluid communication with the corner, coupling at least one of the electrodes to a source of electrical potential, and selecting at least one of the first and second electrodes to include at least one of platinum, tantalum and graphite. 33. The method of claim 20, further comprising disposing an insulating layer on walls of the aperture after removing material from the corner. 34. The method of claim 20, further comprising forming a transistor gate in the recess. 35. The method of claim 20 wherein the microelectronic substrate has a face surface and the recess extends generally transverse to the face surface, further wherein removing at least part of the conductive material includes positioning two electrodes to face toward the face surface, coupling at least one of the electrodes to a source of electrical potential, and disposing an electrolytic fluid between the face surface and the electrodes. 36. The method of claim 20, further comprising reducing a rate at which material is removed from the corner by rounding the corner. 37. 
A method for processing a microelectronic substrate, comprising: forming an oxide layer on a doped silicon material of the microelectronic substrate; disposing a nitride layer on the oxide layer; etching a recess through the nitride layer and the oxide layer and into the conductive material; removing a portion of the nitride layer and the oxide layer proximate to the recess to expose a corner of the conductive material ; disposing an electrolytic fluid adjacent to the corner of the conductive material; oxidizing at least part of the conductive material at the corner by positioning first and second electrodes proximate to and spaced apart from the microelectronic substrate and in fluid communication with the electrolytic fluid, and coupling at least one of the electrodes to a source of electrical potential; and removing at least part of the oxidized material by exposing the oxidized material to an etchant; and reducing a rate at which material is removed from the corner by rounding the corner to reduce a flow of electrical current from the at least one electrode to the corner. 38. The method of claim 37, wherein removing a portion of the nitride layer and the oxide layer proximate to the recess includes removing material from the nitride layer at a first rate and removing material from the oxide layer at a second rate, with the first rate approximately equal to the second rate. 39. The method of claim 37, further comprising removing the oxide layer and the nitride layer with an etchant after removing at least part of the oxidized material. 40. The method of claim 37 wherein removing a portion of the nitride layer and the oxide layer proximate to the recess includes disposing an etchant adjacent to the nitride layer and the oxide layer, with the etchant having a chemical composition approximately the same as a chemical composition of the electrolytic fluid. 41. 
A method for processing a microelectronic substrate, comprising: forming a recess in a conductive material of the microelectronic substrate, the recess defining a corner at an intersection of the aperture and a plane of the conductive material; forming a conductive microelectronic feature in the recess; and controlling electromagnetic emanations from the conductive microelectronic feature by rounding the corner defined by the recess, wherein rounding the corner includes electrically coupling a source of electrical potential to the corner to oxidize the conductive material, and removing oxidized material from the corner by exposing the oxidized material to an etchant. 42. The method of claim 41 wherein forming a recess in a conductive material includes forming a recess in a semiconductor material. 43. The method of claim 41 wherein rounding the corner includes positioning a first electrode and a second electrode proximate to and spaced apart from the microelectronic substrate, coupling at least one of the first and second electrodes to a source of electrical potential, passing an electrical current from at least one of the first and second electrodes through an electrolytic fluid to the corner to oxidize conductive material at the corner, and exposing oxidized conductive material at the corner to an etchant. 44. The method of claim 41, further comprising: emitting electrical signals from an electrode spaced apart from the microelectronic substrate; receiving the electrical signals at the corner of the conductive material; oxidizing at least part of the conductive material at the corner by passing the electrical signals through the conductive material; and exposing an oxidized portion of the conductive material to a chemical etchant. 45. 
The method of claim 41 wherein the conductive material is positioned proximate to a generally non-conductive material, with the generally non-conductive material positioned between the plane of the conductive material and at least one electrode, and wherein removing at least part of the conductive material from the corner includes removing conductive material engaged with the generally non-conductive material. 46. The method of claim 41, further comprising: disposing a non-conductive layer on the conductive material; and removing at least part of the non-conductive layer to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 47. The method of claim 41, further comprising: disposing an oxide layer on the conductive material; disposing a nitride layer on the oxide layer; and removing at least part of the nitride layer and part of the oxide layer to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 48. The method of claim 41, further comprising selecting the conductive material to include doped silicon. 49. The method of claim 41, further comprising disposing an insulating layer on walls of the recess after rounding the corner. 50. The method of claim 41, further comprising forming a transistor gate in the recess. 51. The method of claim 41 wherein the microelectronic substrate has a face surface and the recess extends generally transverse to the face surface, further wherein rounding the corner includes positioning two electrodes to face toward the face surface, coupling at least one of the electrodes to a source of electrical potential, and disposing an electrolytic fluid between the face surface and the electrodes. 52. 
A microelectronic substrate formed by a process, comprising : disposing a generally non-conductive material adjacent to a conductive material of the microelectronic substrate; forming a recess extending through the generally non-conductive material and into the conductive material, the recess defining a corner at least proximate to an interface between the conductive material and the generally non-conductive material; and removing at least part of the conductive material from the corner to at least partially blunt the corner by exposing the corner to an electrical potential. 53. The microelectronic substrate of claim 52 wherein removing at least part of the conductive material includes positioning a first electrode and a second electrode proximate to and spaced apart from the microelectronic substrate, coupling at least one of the electrodes to a source of electrical potential, passing an electrical current from at least one of the electrodes to the corner to oxidize conductive material at the corner, and exposing oxidized conductive material at the corner to an etchant. 54. The microelectronic substrate of claim 52 wherein the process further comprises: emitting electrical signals from an electrode spaced apart from the microelectronic substrate; receiving the electrical signals at the corner of the conductive material; oxidizing at least part of the conductive material at the corner by passing the electrical signals through the conductive material; and exposing an oxidized portion of the conductive material to a chemical etchant. 55. The microelectronic substrate of claim 52 wherein removing at least part of the conductive material from the corner includes removing conductive material engaged with the generally non-conductive material. 56. 
The microelectronic substrate of claim 52 wherein the process further comprises removing at least part of the non-conductive material to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 57. The microelectronic substrate of claim 52 wherein the process further comprises: disposing an oxide layer on the conductive material; disposing a nitride layer on the oxide layer; and removing at least part of the nitride layer and part of the oxide layer to expose the corner of the conductive material before removing at least part of the conductive material from the corner. 58. The microelectronic substrate of claim 52 wherein removing the conductive material includes oxidizing at least a portion of the conductive material by passing electrical current through the portion, and exposing the portion to an etchant. 59. The microelectronic substrate of claim 52 wherein the process further comprises selecting the conductive material to include doped silicon. 60. The microelectronic substrate of claim 52 wherein removing at least a portion of the conductive material includes selecting a current passing through the conductive material to be an alternating current. 61. The microelectronic substrate of claim 52 wherein the process further comprises disposing an insulating layer on walls of the recess after removing material from the corner. 62. The microelectronic substrate of claim 52 wherein the process further comprises forming a transistor gate in the recess. 63. The microelectronic substrate of claim 52 wherein removing at least part of the conductive material includes positioning two electrodes to face toward a face surface of the microelectronic substrate, coupling at least one of the electrodes to a source of electrical potential, and disposing an electrolytic fluid between the face surface and the electrodes. 64. 
A microelectronic substrate formed by a process, comprising: disposing an electrolytic fluid adjacent to a conductive material of the microelectronic substrate, the conductive material having a first surface in a first plane and a recess in the first surface, the recess being bounded by a second surface in a second plane, the conductive material further having a corner between the first and second surfaces; and removing at least part of the conductive material from the corner by positioning first and second electrodes in fluid communication with the electrolytic fluid and coupling at least one of the electrodes to a source of electrical potential.

65. The microelectronic substrate of claim 64 wherein the recess extends generally transverse to a face surface of the microelectronic substrate, and wherein removing at least part of the conductive material includes positioning two electrodes to face toward the face surface of the microelectronic substrate, coupling at least one of the electrodes to a source of electrical potential, and disposing an electrolytic fluid between the face surface and the electrodes.

66. The microelectronic substrate of claim 64 wherein the process further comprises: emitting electrical signals from at least one of the electrodes with the electrode spaced apart from the microelectronic substrate; receiving the electrical signals at the corner of the conductive material; oxidizing at least part of the conductive material at the corner by passing the electrical signals through the conductive material; and exposing an oxidized portion of the conductive material to a chemical etchant.

67.
The microelectronic substrate of claim 64 wherein the first surface of the conductive material is positioned proximate to a generally non-conductive material, with the generally non-conductive material positioned between the first surface and at least one of the electrodes, and wherein removing at least part of the conductive material from the corner includes removing conductive material engaged with the generally non-conductive material.

68. The microelectronic substrate of claim 64, further comprising: disposing an oxide layer on the conductive material; disposing a nitride layer on the oxide layer; and removing at least part of the nitride layer and part of the oxide layer to expose the corner of the conductive material.

69. The microelectronic substrate of claim 64 wherein removing the conductive material includes oxidizing at least a portion of the conductive material by passing electrical current through the portion, and exposing the portion to an etchant.

70. The microelectronic substrate of claim 64 wherein removing at least part of the conductive material includes selecting a current passing through the conductive material to be an alternating current.

71. The microelectronic substrate of claim 64, further comprising forming a transistor gate in the recess.
MICROELECTRONIC SUBSTRATE HAVING CONDUCTIVE MATERIAL WITH BLUNT CORNERED APERTURES, AND ASSOCIATED METHODS FOR REMOVING CONDUCTIVE MATERIAL

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. Application No. 09/651,779 (attorney docket number 108298515US), titled "Methods and Apparatus for Removing Conductive Material From a Microelectronic Substrate," filed 30 August 2000, and U.S. Application No. 09/888,084 (attorney docket number 108298515US1), titled "Methods and Apparatus for Electrical, Mechanical and/or Chemical Removal of Conductive Material From a Microelectronic Substrate," filed 21 June 2001, and U.S. Application No. 09/888,002 (attorney docket number 108298515US3), titled "Methods and Apparatus for Electrically and/or Chemically-Mechanically Removing Conductive Material From a Microelectronic Substrate," filed 21 June 2001, all of which are incorporated herein in their entireties by reference.

TECHNICAL FIELD

This invention relates to methods and apparatuses for removing conductive and/or semiconductor material from microelectronic substrates.

BACKGROUND

Microelectronic substrates and substrate assemblies typically include a semiconductor material having features, such as transistors and transistor gates, that are linked with conductive lines. One conventional method for forming transistor gates (shown schematically in Figures 1A-C) is shallow trench isolation (STI). Referring first to Figure 1A, a typical STI process includes doping a semiconductor substrate 10 to form an at least partially conductive material 11. An oxide layer 14 is disposed on the conductive material 11, and a nitride layer 15 is disposed on the oxide layer 14. A mask 16 having mask openings 17 is then positioned over the nitride layer 15, and the semiconductor substrate 10 is etched to form apertures 60, shown in Figure 1B.
As shown in Figure 1C, the apertures 60 are coated with a gate oxide layer 61, and a gate material 62 is disposed adjacent to the gate oxide 61. Accordingly, the gate oxide 61 can electrically isolate adjacent gates. The nitride layer 15 and the oxide layer 14 can then be removed.

One drawback with the STI structure described above with reference to Figures 1A-C is that the conductive material 11 has sharp corners 63 (shown in Figures 1B and 1C) at the edges of the apertures 60. The sharp corners 63 can emit electromagnetic radiation (generally in the manner of an antenna) which can interfere with the operation of adjacent semiconductor features. One conventional approach to addressing this drawback is to oxidize material at the sharp corners 63 by exposing the semiconductor substrate 10 to a high temperature environment (e.g., about 1050°C). The oxidized material is then removed (for example, with an etchant) to blunt the corners. One drawback with this approach is that the curvature that can be achieved with a high temperature process may be limited. Another drawback is that the high temperature can damage portions or components of the semiconductor substrate. Still another drawback is that the high-temperature process can be expensive, which can increase the cost of the products formed from the semiconductor substrate.

One conventional technique for removing bulk conductive material from semiconductor substrates includes applying an alternating current to a conductive layer via an intermediate electrolyte to remove portions of the layer. In one arrangement, shown in Figure 2A, a conventional apparatus 60 includes a first electrode 20a and a second electrode 20b coupled to a current source 21.
The first electrode 20a is attached directly to a metallic layer 11a of a semiconductor substrate 10 and the second electrode 20b is at least partially immersed in a liquid electrolyte 31 disposed on the surface of the metallic layer 11a by moving the second electrode downwardly until it contacts the electrolyte 31. A barrier 22 protects the first electrode 20a from direct contact with the electrolyte 31. The current source 21 applies alternating current to the substrate 10 via the electrodes 20a and 20b and the electrolyte 31 to remove conductive material from the conductive layer 11a. The alternating current signal can have a variety of wave forms, such as those disclosed by Frankenthal et al. in a publication entitled "Electroetching of Platinum in the Titanium-Platinum-Gold Metallization on Silicon Integrated Circuits" (Bell Laboratories), incorporated herein in its entirety by reference.

One drawback with the arrangement shown in Figure 2A is that it may not be possible to remove material from the conductive layer 11a in the region where the first electrode 20a is attached because the barrier 22 prevents the electrolyte 31 from contacting the substrate 10 in this region. Alternatively, if the first electrode 20a contacts the electrolyte in this region, the electrolytic process can degrade the first electrode 20a. Still a further drawback is that the electrolytic process may not uniformly remove material from the substrate 10. For example, "islands" of residual conductive material having no direct electrical connection to the first electrode 20a may develop in the conductive layer 11a. The residual conductive material can interfere with the formation and/or operation of the conductive lines, and it may be difficult or impossible to remove with the electrolytic process unless the first electrode 20a is repositioned to be coupled to such "islands."
One approach to addressing some of the foregoing drawbacks is to attach a plurality of first electrodes 20a around the periphery of the substrate 10 to increase the uniformity with which the conductive material is removed. However, islands of conductive material may still remain despite the additional first electrodes 20a. Another approach is to form the electrodes 20a and 20b from an inert material, such as carbon, and remove the barrier 22 to increase the area of the conductive layer 11a in contact with the electrolyte 31. However, such inert electrodes may not be as effective as more reactive electrodes at removing the conductive material, and the inert electrodes may still leave residual conductive material on the substrate 10.

Figure 2B shows still another approach to addressing some of the foregoing drawbacks in which two substrates 10 are partially immersed in a vessel 30 containing the electrolyte 31. The first electrode 20a is attached to one substrate 10 and the second electrode 20b is attached to the other substrate 10. An advantage of this approach is that the electrodes 20a and 20b do not contact the electrolyte. However, islands of conductive material may still remain after the electrolytic process is complete, and it may be difficult to remove conductive material from the points at which the electrodes 20a and 20b are attached to the substrates 10.

SUMMARY

The present invention is directed toward microelectronic substrates that include conductive materials having recesses with rounded corners, and methods for forming such microelectronic substrates. A method in accordance with one aspect of the invention includes disposing an electrolytic fluid adjacent to a conductive material of the microelectronic substrate.
The conductive material has a first surface in a first plane and a recess in the first surface, with the recess being bounded by a second surface in a second plane. The conductive material further has a corner between the first and second surfaces. The method can further include removing at least part of the conductive material from the corner by positioning first and second electrodes in fluid communication with the electrolytic fluid, and coupling at least one of the electrodes to a source of electrical potential. Removing the conductive material from the corner can be self-limiting, with the rate at which the conductive material is removed decreasing as the corner is rounded.

In another aspect of the invention, a method for forming a microelectronic substrate can include disposing a generally non-conductive material adjacent to a conductive material of the microelectronic substrate. The method can further include forming a recess extending through the generally non-conductive material and into the conductive material, with the recess defining a corner at least proximate to an interface between the conductive material and the generally non-conductive material. The method can still further include removing at least part of the conductive material from the corner by exposing the corner to an electrical potential to at least partially blunt the corner.

The invention is also directed toward a microelectronic substrate formed by a process that can include disposing a generally non-conductive material adjacent to a conductive material of the microelectronic substrate and forming a recess extending through the generally non-conductive material and into the conductive material. The recess defines a corner at least proximate to an interface between the conductive material and the generally non-conductive material. The process can further include removing at least part of the conductive material from the corner to at least partially blunt the corner.
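The self-limiting behavior described above can be illustrated with a toy numerical model. The sketch below is not from the patent; it simply assumes, for illustration, that the electrolytic removal rate at the corner scales inversely with the corner radius (the field concentration weakens as the corner blunts), so the radius grows quickly at first and then levels off.

```python
# Toy model (illustrative assumption, not the patent's process): removal
# rate at the corner is taken to vary as 1/r, so each step removes less
# material than the step before it.

def blunt_corner(initial_radius_nm, rate_const, dt, steps):
    """Return the corner radius after each time step under a 1/r removal law."""
    radius = initial_radius_nm
    history = [radius]
    for _ in range(steps):
        removal_rate = rate_const / radius   # removal slows as the corner rounds
        radius += removal_rate * dt
        history.append(radius)
    return history

history = blunt_corner(initial_radius_nm=1.0, rate_const=5.0, dt=0.1, steps=50)
increments = [b - a for a, b in zip(history, history[1:])]
# Each step removes less than the one before: the process is self-limiting.
assert all(later < earlier for earlier, later in zip(increments, increments[1:]))
```

Because the per-step removal shrinks monotonically, the corner radius approaches a gentle curvature without requiring an externally timed stop, which is the qualitative point made in the summary above.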
In another aspect of the invention, the microelectronic substrate can be formed by a process that includes disposing an electrolytic fluid adjacent to a conductive material of a microelectronic substrate, with the conductive material having a first surface in a first plane and a recess in the first surface. The recess can be bounded by a second surface in a second plane, with the conductive material having a corner between the first and second surfaces. The process can further include removing at least part of the conductive material from the corner by positioning first and second electrodes in fluid communication with the electrolytic fluid, and coupling at least one of the electrodes to a source of electrical potential.

BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1A-C are schematic illustrations of a shallow trench isolation process for forming semiconductor features in a semiconductor substrate in accordance with the prior art.

Figures 2A-B are partially schematic, side elevational views of apparatuses for removing conductive material from a semiconductor substrate in accordance with the prior art.

Figure 3 is a partially schematic, side elevational view of an apparatus having a support member and a pair of electrodes for removing conductive material from a microelectronic substrate in accordance with an embodiment of the invention.

Figure 4 is a partially schematic, side elevational view of an apparatus for removing conductive material and sensing characteristics of the microelectronic substrate from which the material is removed in accordance with another embodiment of the invention.

Figure 5 is a partially schematic, side elevational view of an apparatus that includes two electrolytes in accordance with still another embodiment of the invention.

Figure 6 is a partially schematic, plan view of a substrate adjacent to a plurality of electrodes in accordance with still further embodiments of the invention.
Figure 7 is a cross-sectional side elevational view of an electrode and a substrate in accordance with yet another embodiment of the invention.

Figure 8A is a partially schematic, isometric view of a portion of a support for housing electrode pairs in accordance with still another embodiment of the invention.

Figures 8B-8C are isometric views of electrodes in accordance with still further embodiments of the invention.

Figures 9A and 9B schematically illustrate a circuit and waveform for electrolytically processing a microelectronic substrate in accordance with yet another embodiment of the invention.

Figures 10A-F schematically illustrate a process for rounding or blunting the corners of apertures in a conductive material of a microelectronic substrate in accordance with an embodiment of the invention.

Figure 11 is a partially schematic illustration of a process for rounding or blunting the corners of apertures in a conductive material of a microelectronic substrate in accordance with another embodiment of the invention.

DETAILED DESCRIPTION

The present disclosure describes methods and apparatuses for removing conductive materials from a microelectronic substrate and/or substrate assembly used in the fabrication of microelectronic devices. Many specific details of certain embodiments of the invention are set forth in the following description and in Figures 3-11 to provide a thorough understanding of these embodiments. One skilled in the art, however, will understand that the present invention may have additional embodiments, or that the invention may be practiced without several of the details described below.

Figures 3-9B and the associated discussion refer generally to devices for removing conductive material from microelectronic substrates in accordance with embodiments of the invention.
Figures 10A-11 and the associated discussion refer generally to techniques for rounding or blunting corners of conductive materials using, for example, apparatuses of the type described with reference to Figures 3-9B. As used herein, the term conductive materials includes, but is not limited to, metals, such as copper, platinum and aluminum, and semiconductor materials, such as doped silicon and/or polysilicon. The term microelectronic substrate refers generally to substrates and substrate assemblies configured to support microelectronic features, such as semiconductor devices.

Figure 3 is a partially schematic, side elevational view of an apparatus 160 for removing conductive material from a microelectronic substrate or substrate assembly 110 in accordance with an embodiment of the invention. In one aspect of this embodiment, the apparatus 160 includes a vessel 130 containing an electrolyte 131, which can be in a liquid or a gel state. As used herein, the terms electrolyte and electrolytic fluid refer generally to electrolytic liquids and gels. Structures in fluid communication with electrolytic fluids are accordingly in fluid communication with electrolytic liquids or gels.

The microelectronic substrate 110 has an edge surface 112 and two face surfaces 113. A support member 140 supports the microelectronic substrate 110 relative to the vessel 130 so that a conductive layer 111 on at least one of the face surfaces 113 of the substrate 110 contacts the electrolyte 131. The conductive layer 111 can include metals such as platinum, tungsten, tantalum, gold, copper, or other conductive materials.

In another aspect of this embodiment, the support member 140 is coupled to a substrate drive unit 141 that moves the support member 140 and the substrate 110 relative to the vessel 130. For example, the substrate drive unit 141 can translate the support member 140 (as indicated by arrow "A") and/or rotate the support member 140 (as indicated by arrow "B").
The apparatus 160 can further include a first electrode 120a and a second electrode 120b (referred to collectively as electrodes 120) supported relative to the microelectronic substrate 110 by a support member 124. In one aspect of this embodiment, the support member 124 is coupled to an electrode drive unit 123 for moving the electrodes 120 relative to the microelectronic substrate 110. For example, the electrode drive unit 123 can move the electrodes toward and away from the conductive layer 111 of the microelectronic substrate 110 (as indicated by arrow "C"), and/or transversely (as indicated by arrow "D") in a plane generally parallel to the conductive layer 111. Alternatively, the electrode drive unit 123 can move the electrodes in other fashions, or the electrode drive unit 123 can be eliminated when the substrate drive unit 141 provides sufficient relative motion between the substrate 110 and the electrodes 120.

In either embodiment described above with reference to Figure 3, the electrodes 120 are coupled to a current source 121 with leads 128 for supplying electrical current to the electrolyte 131 and the conductive layer 111. In operation, the current source 121 supplies an alternating current (single phase or multiphase) to the electrodes 120. The current passes through the electrolyte 131 and reacts electrochemically with the conductive layer 111 to remove material (for example, atoms or groups of atoms) from the conductive layer 111. The electrodes 120 and/or the substrate 110 can be moved relative to each other to remove material from selected portions of the conductive layer 111, or from the entire conductive layer 111.

In one aspect of an embodiment of the apparatus 160 shown in Figure 3, a distance D1 between the electrodes 120 and the conductive layer 111 is set to be smaller than a distance D2 between the first electrode 120a and the second electrode 120b. Furthermore, the electrolyte 131 generally has a higher resistance than the conductive layer 111.
Accordingly, the alternating current follows the path of least resistance from the first electrode 120a, through the electrolyte 131 to the conductive layer 111 and back through the electrolyte 131 to the second electrode 120b, rather than from the first electrode 120a directly through the electrolyte 131 to the second electrode 120b. Alternatively, a low dielectric material (not shown) can be positioned between the first electrode 120a and the second electrode 120b to decouple direct electrical communication between the electrodes 120 that does not first pass through the conductive layer 111.

One feature of an embodiment of the apparatus 160 shown in Figure 3 is that the electrodes 120 do not contact the conductive layer 111 of the substrate 110. An advantage of this arrangement is that it can eliminate the residual conductive material resulting from a direct electrical connection between the electrodes 120 and the conductive layer 111, described above with reference to Figures 2A and 2B. For example, the apparatus 160 can eliminate residual conductive material adjacent to the contact region between the electrodes and the conductive layer because the electrodes 120 do not contact the conductive layer 111.

Another feature of an embodiment of the apparatus 160 described above with reference to Figure 3 is that the substrate 110 and/or the electrodes 120 can move relative to each other to position the electrodes 120 at any point adjacent to the conductive layer 111. An advantage of this arrangement is that the electrodes 120 can be sequentially positioned adjacent to every portion of the conductive layer to remove material from the entire conductive layer 111. Alternatively, when it is desired to remove only selected portions of the conductive layer 111, the electrodes 120 can be moved to those selected portions, leaving the remaining portions of the conductive layer 111 intact.
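The least-resistance argument above can be checked with a simple lumped-resistance comparison. The numbers below are hypothetical, chosen purely to illustrate the geometry: with the electrode-to-layer gap D1 smaller than the electrode-to-electrode spacing D2, and the electrolyte far more resistive than the conductive layer, the series route down through the electrolyte, along the layer, and back up has lower total resistance than the direct electrode-to-electrode route through the electrolyte alone.

```python
# Hypothetical lumped-resistance sketch of the current path preference.

def path_resistance(length_mm, resistivity_ohm_mm):
    """1-D lumped resistance of a path segment (unit cross-section assumed)."""
    return length_mm * resistivity_ohm_mm

RHO_ELECTROLYTE = 100.0   # assumed ohm*mm; electrolyte is highly resistive
RHO_LAYER = 0.01          # assumed ohm*mm; conductive layer is not
D1 = 1.0                  # electrode-to-layer distance, mm (D1 < D2)
D2 = 10.0                 # electrode-to-electrode distance, mm

# Direct route: electrode to electrode through electrolyte over distance D2.
direct = path_resistance(D2, RHO_ELECTROLYTE)
# Indirect route: down through electrolyte (D1), along the layer (D2), back up (D1).
via_layer = 2 * path_resistance(D1, RHO_ELECTROLYTE) + path_resistance(D2, RHO_LAYER)

assert via_layer < direct  # current prefers the route through the conductive layer
```

The same comparison explains why inserting a low-dielectric spacer between the electrodes (raising the direct path's impedance further) reinforces the preferred route through the conductive layer.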
Figure 4 is a partially schematic, side elevational view of an apparatus 260 that includes a support member 240 positioned to support the substrate 110 in accordance with another embodiment of the invention. In one aspect of this embodiment, the support member 240 supports the substrate 110 with the conductive layer 111 facing upwardly. A substrate drive unit 241 can move the support member 240 and the substrate 110, as described above with reference to Figure 3. First and second electrodes 220a and 220b are positioned above the conductive layer 111 and are coupled to a current source 221. A support member 224 supports the electrodes 220 relative to the substrate 110 and is coupled to an electrode drive unit 223 to move the electrodes 220 over the surface of the conductive layer 111 in a manner generally similar to that described above with reference to Figure 3.

In one aspect of the embodiment shown in Figure 4, the apparatus 260 further includes an electrolyte vessel 230 having a supply conduit 237 with an aperture 238 positioned proximate to the electrodes 220. Accordingly, an electrolyte 231 can be disposed locally in an interface region 239 between the electrodes 220 and the conductive layer 111, without necessarily covering the entire conductive layer 111. The electrolyte 231 and the conductive material removed from the conductive layer 111 flow over the substrate 110 and collect in an electrolyte receptacle 232. The mixture of electrolyte 231 and conductive material can flow to a reclaimer 233 that removes most of the conductive material from the electrolyte 231. A filter 234 positioned downstream of the reclaimer 233 provides additional filtration of the electrolyte 231, and a pump 235 returns the reconditioned electrolyte 231 to the electrolyte vessel 230 via a return line 236.
In another aspect of the embodiment shown in Figure 4, the apparatus 260 can include a sensor assembly 250 having a sensor 251 positioned proximate to the conductive layer 111, and a sensor control unit 252 coupled to the sensor 251 for processing signals generated by the sensor 251. The control unit 252 can also move the sensor 251 relative to the substrate 110. In a further aspect of this embodiment, the sensor assembly 250 can be coupled via a feedback path 253 to the electrode drive unit 223 and/or the substrate drive unit 241. Accordingly, the sensor 251 can determine which areas of the conductive layer 111 require additional material removal and can move the electrodes 220 and/or the substrate 110 relative to each other to position the electrodes 220 over those areas. Alternatively (for example, when the removal process is highly repeatable), the electrodes 220 and/or the substrate 110 can move relative to each other according to a pre-determined motion schedule.

The sensor 251 and the sensor control unit 252 can have any of a number of suitable configurations. For example, in one embodiment, the sensor 251 can be an optical sensor that detects removal of the conductive layer 111 by detecting a change in the intensity, wavelength or phase shift of the light reflected from the substrate 110 when the conductive material is removed. Alternatively, the sensor 251 can emit and detect reflections of radiation having other wavelengths, for example, x-ray radiation. In still another embodiment, the sensor 251 can measure a change in resistance or capacitance of the conductive layer 111 between two selected points. In a further aspect of this embodiment, one or both of the electrodes 220 can perform the function of the sensor 251 (as well as the material removal function described above), eliminating the need for a separate sensor 251.
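The feedback loop just described can be sketched as a simple selection policy. The data model below (region names, remaining-thickness readings, target thickness) is an assumption for illustration, not the patent's implementation: the sensor assembly reports how much conductive material remains in each region, and the drive units position the electrodes over whichever region still needs the most removal.

```python
# Minimal sketch (assumed data model) of the sensor-to-drive-unit feedback:
# pick the region furthest above the target thickness, or report completion.

def select_next_region(thickness_map, target_nm):
    """Return the region needing the most removal, or None when all are done."""
    pending = {region: t for region, t in thickness_map.items() if t > target_nm}
    if not pending:
        return None           # all regions at target: stop repositioning
    return max(pending, key=pending.get)

scan = {"center": 12.0, "mid": 30.0, "edge": 55.0}   # hypothetical readings, nm
assert select_next_region(scan, target_nm=10.0) == "edge"
assert select_next_region({"center": 8.0}, target_nm=10.0) is None
```

In the alternative open-loop mode the text mentions (a highly repeatable process), this selection step would simply be replaced by a pre-determined motion schedule.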
In still further embodiments, the sensor 251 can detect a change in the voltage and/or current drawn from the current source 221 as the conductive layer 111 is removed. In any of the embodiments described above with reference to Figure 4, the sensor 251 can be positioned apart from the electrolyte 231 because the electrolyte 231 is concentrated in the interface region 239 between the electrodes 220 and the conductive layer 111. Accordingly, the accuracy with which the sensor 251 determines the progress of the electrolytic process can be improved because the electrolyte 231 will be less likely to interfere with the operation of the sensor 251. For example, when the sensor 251 is an optical sensor, the electrolyte 231 will be less likely to distort the radiation reflected from the surface of the substrate 110 because the sensor 251 is positioned away from the interface region 239.

Another feature of an embodiment of the apparatus 260 described above with reference to Figure 4 is that the electrolyte 231 supplied to the interface region 239 is continually replenished, either with a reconditioned electrolyte or a fresh electrolyte. An advantage of this feature is that the electrochemical reaction between the electrodes 220 and the conductive layer 111 can be maintained at a high and consistent level.

Figure 5 is a partially schematic, side elevational view of an apparatus 360 that directs alternating current to the substrate 110 through a first electrolyte 331a and a second electrolyte 331b. In one aspect of this embodiment, the first electrolyte 331a is disposed in two first electrolyte vessels 330a, and the second electrolyte 331b is disposed in a second electrolyte vessel 330b. The first electrolyte vessels 330a are partially submerged in the second electrolyte 331b.
The apparatus 360 can further include electrodes 320, shown as a first electrode 320a and a second electrode 320b, each coupled to a current supply 321 and each housed in one of the first electrolyte vessels 330a. Alternatively, one of the electrodes 320 can be coupled to ground. The electrodes 320 can include materials such as silver, platinum, copper and/or other materials, and the first electrolyte 331a can include sodium chloride, potassium chloride, copper sulfate and/or other electrolytes that are compatible with the material forming the electrodes 320.

In one aspect of this embodiment, the first electrolyte vessels 330a include a flow restrictor 322, such as a permeable isolation membrane formed from Teflon, sintered materials such as sintered glass, quartz or sapphire, or other suitable porous materials that allow ions to pass back and forth between the first electrolyte vessels 330a and the second electrolyte vessel 330b, but do not allow the second electrolyte 331b to pass inwardly toward the electrodes 320 (for example, in a manner generally similar to a salt bridge). Alternatively, the first electrolyte 331a can be supplied to the electrode vessels 330a from a first electrolyte source 339 at a pressure and rate sufficient to direct the first electrolyte 331a outwardly through the flow restrictor 322 without allowing the first electrolyte 331a or the second electrolyte 331b to return through the flow restrictor 322. In either embodiment, the second electrolyte 331b remains electrically coupled to the electrodes 320 by the flow of the first electrolyte 331a through the restrictor 322.

In one aspect of this embodiment, the apparatus 360 can also include a support member 340 that supports the substrate 110 with the conductive layer 111 facing toward the electrodes 320. For example, the support member 340 can be positioned in the second electrolyte vessel 330b.
In a further aspect of this embodiment, the support member 340 and/or the electrodes 320 can be movable relative to each other by one or more drive units (not shown).

One feature of an embodiment of the apparatus 360 described above with reference to Figure 5 is that the first electrolyte 331a can be selected to be compatible with the electrodes 320. An advantage of this feature is that the first electrolyte 331a can be less likely than conventional electrolytes to degrade the electrodes 320. Conversely, the second electrolyte 331b can be selected without regard to the effect it has on the electrodes 320 because it is chemically isolated from the electrodes 320 by the flow restrictor 322. Accordingly, the second electrolyte 331b can include hydrochloric acid or another agent that reacts aggressively with the conductive layer 111 of the substrate 110.

Figure 6 is a top plan view of the microelectronic substrate 110 positioned beneath a plurality of electrodes having shapes and configurations in accordance with several embodiments of the invention. For purposes of illustration, several different types of electrodes are shown positioned proximate to the same microelectronic substrate 110; however, in practice, electrodes of the same type can be positioned relative to a single microelectronic substrate 110. In one embodiment, electrodes 720a and 720b can be grouped to form an electrode pair 770a, with each electrode 720a and 720b coupled to an opposite terminal of a current source 121 (Figure 3). The electrodes 720a and 720b can have an elongated or strip-type shape and can be arranged to extend parallel to each other over the diameter of the substrate 110. The spacing between adjacent electrodes of an electrode pair 770a can be selected to direct the electrical current into the substrate 110, as described above with reference to Figure 3.
In an alternate embodiment, electrodes 720c and 720d can be grouped to form an electrode pair 770b, and each electrode 720c and 720d can have a wedge or "pie" shape that tapers inwardly toward the center of the microelectronic substrate 110. In still another embodiment, narrow, strip-type electrodes 720e and 720f can be grouped to form electrode pairs 770c, with each electrode 720e and 720f extending radially outwardly from the center 113 of the microelectronic substrate 110 toward the periphery 112 of the microelectronic substrate 110. In still another embodiment, a single electrode 720g can extend over approximately half the area of the microelectronic substrate 110 and can have a semicircular planform shape. The electrode 720g can be grouped with another electrode (not shown) having a shape corresponding to a mirror image of the electrode 720g, and both electrodes can be coupled to the current source 121 to provide alternating current to the microelectronic substrate in any of the manners described above with reference to Figures 3-5.

Figure 7 is a partially schematic, cross-sectional side elevational view of a portion of the substrate 110 positioned beneath the electrode 720c described above with reference to Figure 6. In one aspect of this embodiment, the electrode 720c has an upper surface 771 and a lower surface 772 opposite the upper surface 771 and facing the conductive layer 111 of the substrate 110. The lower surface 772 can taper downwardly from the center 113 of the substrate 110 toward the perimeter 112 of the substrate 110 in one aspect of this embodiment to give the electrode 720c a wedge-shaped profile. Alternatively, the electrode 720c can have a plate-type configuration with the lower surface 772 positioned as shown in Figure 7 and the upper surface 771 parallel to the lower surface 772.
One feature of either embodiment is that the electrical coupling between the electrode 720c and the substrate 110 can be stronger toward the periphery 112 of the substrate 110 than toward the center 113 of the substrate 110. This feature can be advantageous when the periphery 112 of the substrate 110 moves relative to the electrode 720c at a faster rate than does the center 113 of the substrate 110, for example, when the substrate 110 rotates about its center 113. Accordingly, the electrode 720c can be shaped to account for relative motion between the electrode and the substrate 110. In other embodiments, the electrode 720c can have other shapes. For example, the lower surface 772 can have a curved rather than a flat profile. Alternatively, any of the electrodes described above with reference to Figure 6 (or other electrodes having shapes other than those shown in Figure 6) can have a sloped or curved lower surface. In still further embodiments, the electrodes can have other shapes that account for relative motion between the electrodes and the substrate 110.

Figure 8A is a partially schematic view of an electrode support 473 for supporting a plurality of electrodes in accordance with another embodiment of the invention. In one aspect of this embodiment, the electrode support 473 can include a plurality of electrode apertures 474, each of which houses either a first electrode 420a or a second electrode 420b. The first electrodes 420a are coupled through the apertures 474 to a first lead 428a and the second electrodes 420b are coupled to a second lead 428b. Both of the leads 428a and 428b are coupled to a current supply 421. Accordingly, each pair 470 of first and second electrodes 420a and 420b defines part of a circuit that is completed by the substrate 110 and the electrolyte(s) described above with reference to Figures 3-5.
In one aspect of this embodiment, the first lead 428a can be offset from the second lead 428b to reduce the likelihood for short circuits and/or capacitive coupling between the leads. In a further aspect of this embodiment, the electrode support 473 can have a configuration generally similar to any of those described above with reference to Figures 1-7. For example, any of the individual electrodes (e.g., 720a, 720c, 720e, or 720g) described above with reference to Figure 6 can be replaced with an electrode support 473 having the same overall shape and including a plurality of apertures 474, each of which houses one of the first electrodes 420a or the second electrodes 420b. In still a further aspect of this embodiment, the electrode pairs 470 shown in Figure 8A can be arranged in a manner that corresponds to the proximity between the electrodes 420a, 420b and the microelectronic substrate 110 (Figure 7), and/or the electrode pairs 470 can be arranged to correspond to the rate of relative motion between the electrodes 420a, 420b and the microelectronic substrate 110. For example, the electrode pairs 470 can be more heavily concentrated near the periphery 112 of the substrate 110 or in other regions where the relative velocity between the electrode pairs 470 and the substrate 110 is relatively high (see Figure 7). Accordingly, the increased concentration of electrode pairs 470 can provide an increased electrolytic current to compensate for the high relative velocity. Furthermore, the first electrode 420a and the second electrode 420b of each electrode pair 470 can be relatively close together in regions (such as the periphery 112 of the substrate 110) where the electrodes are close to the conductive layer 111 (see Figure 7) because the close proximity to the conductive layer 111 reduces the likelihood for direct electrical coupling between the first electrode 420a and the second electrode 420b.
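The relationship between electrode-pair concentration and relative velocity can be illustrated with a small calculation: for a substrate rotating about its center, the linear velocity of a point grows with its radius, so the periphery sweeps past the electrodes faster than the center. The sketch below is illustrative only; the substrate radius and rotation rate are hypothetical values, not figures taken from the description.

```python
import math

def relative_velocity(radius_m: float, rpm: float) -> float:
    """Linear velocity (m/s), v = omega * r, of a point at the given
    radius on a substrate rotating at the given rate."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity in rad/s
    return omega * radius_m

# Hypothetical 200 mm substrate (0.1 m radius) spinning at 30 rpm:
v_center = relative_velocity(0.0, 30.0)  # the center has no linear motion
v_edge = relative_velocity(0.1, 30.0)    # the periphery moves fastest
```

Because v_edge exceeds v_center, concentrating electrode pairs 470 toward the periphery (as the text describes) delivers proportionally more electrolytic current where the substrate surface moves past the electrodes most quickly.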
In still a further aspect of this embodiment, the amplitude, frequency and/or waveform shape supplied to different electrode pairs 470 can vary depending on factors such as the spacing between the electrode pair 470 and the microelectronic substrate 110, and the relative velocity between the electrode pair 470 and the microelectronic substrate 110.

Figures 8B and 8C illustrate electrodes 820 (shown as first electrodes 820a and second electrodes 820b) arranged concentrically in accordance with still further embodiments of the invention. In one embodiment shown in Figure 8B, the first electrode 820a can be positioned concentrically around the second electrode 820b, and a dielectric material 829 can be disposed between the first electrode 820a and the second electrode 820b. The first electrode 820a can define a complete 360° arc around the second electrode 820b, as shown in Figure 8B, or alternatively, the first electrode 820a can define an arc of less than 360°. In another embodiment, shown in Figure 8C, the first electrode 820a can be concentrically disposed between two second electrodes 820b, with the dielectric material 829 disposed between neighboring electrodes 820. In one aspect of this embodiment, current can be supplied to each of the second electrodes 820b with no phase shifting. Alternatively, the current supplied to one second electrode 820b can be phase-shifted relative to the current supplied to the other second electrode 820b. In a further aspect of the embodiment, the current supplied to each second electrode 820b can differ in characteristics other than phase, for example, amplitude. One feature of the electrodes 820 described above with respect to Figures 8B and 8C is that the first electrode 820a can shield the second electrode(s) 820b from interference from other current sources. For example, the first electrode 820a can be coupled to ground to shield the second electrodes 820b.
An advantage of this arrangement is that the current applied to the substrate 110 (Figure 7) via the electrodes 820 can be more accurately controlled.

Figure 9A is a schematic circuit representation of some of the components described above with reference to Figures 3-8C. As shown schematically in Figure 9A, the current source 521 is coupled to the first electrode 520a and the second electrode 520b with leads 528a and 528b respectively. The electrodes 520a and 520b are coupled to the microelectronic substrate 110 with the electrolyte 531 in an arrangement that can be represented schematically by two sets of parallel capacitors and resistors. A third capacitor and resistor schematically indicate that the microelectronic substrate 110 "floats" relative to ground or another potential. In one aspect of an embodiment shown in Figure 9A, the current source 521 can be coupled to an amplitude modulator 522 that modulates the signal produced by the current source 521, as is shown in Figure 9B. Accordingly, the current source 521 can generate a high-frequency wave 904, and the amplitude modulator 522 can superimpose a low-frequency wave 902 on the high-frequency wave 904. For example, the high-frequency wave 904 can include a series of positive or negative voltage spikes contained within a square wave envelope defined by the low-frequency wave 902. Each spike of the high-frequency wave 904 can have a relatively steep rise time slope to transfer charge through the dielectric to the electrolyte, and a more gradual fall time slope. The fall time slope can define a straight line, as indicated by high-frequency wave 904, or a curved line, as indicated by high-frequency wave 904a.
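The amplitude-modulated waveform of Figure 9B can be sketched numerically as a high-frequency spike train gated by a low-frequency square-wave envelope. The frequencies, the peak voltage, and the linear-decay fraction below are illustrative assumptions, not parameters stated in the description.

```python
import math

def modulated_waveform(t: float, f_env: float, f_carrier: float,
                       v_peak: float) -> float:
    """Voltage at time t (seconds): each carrier cycle starts with a
    steep (here, instantaneous) rise and decays linearly over 80% of
    the cycle; a square-wave envelope flips the spike polarity."""
    # Low-frequency square envelope: positive half cycle, then negative.
    env = 1.0 if math.fmod(t * f_env, 1.0) < 0.5 else -1.0
    # High-frequency spike with a gradual linear fall-time slope.
    phase = math.fmod(t * f_carrier, 1.0)
    spike = max(0.0, 1.0 - phase / 0.8)
    return env * v_peak * spike
```

Replacing the linear decay with, say, an exponential one would give the curved fall-time slope the text attributes to high-frequency wave 904a.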
In other embodiments, the high-frequency wave 904 and the low-frequency wave 902 can have other shapes depending, for example, on the particular characteristics of the dielectric material and electrolyte adjacent to the electrodes 420, the characteristics of the substrate 110, and/or the target rate at which material is to be removed from the substrate 110. An advantage of this arrangement is that the high frequency signal can transmit the required electrical energy from the electrodes 520a and 520b to the microelectronic substrate 110, while the low frequency superimposed signal can more effectively promote the electrochemical reaction between the electrolyte 531 and the conductive layer 111 of the microelectronic substrate 110. Accordingly, any of the embodiments described above with reference to Figures 3-8C can include an amplitude modulator in addition to a current source.

Figures 10A-F schematically illustrate a process for forming features in a microelectronic substrate in accordance with another embodiment of the invention, using any of the devices described above with reference to Figures 3-8C. In one aspect of this embodiment, the process can include forming shallow trench isolation (STI) features, and in other embodiments, the process can include forming other types of features. In any of these embodiments, the process can include rounding or blunting corners of a conductive material, as described in greater detail below.

Figure 10A illustrates a portion of a microelectronic substrate 1010 having a face surface 1013 with a conductive, partially conductive, and/or semiconductive material 1011 (referred to collectively as a conductive material 1011). For example, in one embodiment, the conductive material 1011 can include silicon doped with boron or phosphorous. In other embodiments, the conductive material 1011 can include other conductive or semiconductive materials.
In any of these embodiments, the process can further include forming apertures in the conductive material 1011, for example, to support a dielectric material or other microelectronic feature. In one aspect of this embodiment, the process can include disposing an oxide layer 1014 on the conductive material 1011, and then disposing a nitride layer 1015 on the oxide layer 1014. A mask 1016 having openings 1017 corresponding to the desired location of the microelectronic features is positioned adjacent to the nitride layer 1015, and the microelectronic substrate 1010 is exposed to an etchant. As shown in Figure 10B, the etchant can remove material positioned beneath the openings 1017 to form apertures 1060 or other recesses that extend through the nitride layer 1015, through the oxide layer 1014, and through an upper surface 1065 of the conductive material 1011. Accordingly, the apertures 1060 can include sidewalls 1064 generally transverse to the upper surface 1065, and corners 1063 at the junction between the sidewalls 1064 and the upper surface 1065.

Referring now to Figure 10C, the nitride layer 1015 and the oxide layer 1014 can be etched away from the corners 1063 before the corners 1063 are rounded or blunted. For example, in one aspect of this embodiment, a liquid etchant having about 500 parts water to about one part hydrofluoric acid and about one part hydrochloric acid can etch back the nitride layer 1015 and the oxide layer 1014 at approximately the same rate to expose the upper surface 1065 of the conductive material 1011 near the corners 1063. In a further aspect of this embodiment, the etching process can be completed at a temperature of about 60°C. In an alternate embodiment, the step of etching the nitride layer 1015 and the oxide layer 1014 back from the corners 1063 can be eliminated, as described in greater detail below with reference to Figure 11.
As shown in Figure 10D, the exposed corners 1063 can be rounded or blunted to form rounded corners 1063a (shown in Figure 10D in broken lines). For example, in one aspect of this embodiment, an electrolytic fluid 1031 can be disposed adjacent to the corners 1063 and placed in fluid communication with a first electrode 1020a and a second electrode 1020b (collectively referred to as electrodes 1020). In a further aspect of this embodiment, the electrodes 1020 can be spaced apart from the microelectronic substrate 1010 by a distance of from about one millimeter to about two millimeters. In other embodiments, this spacing can have other values. At least one of the electrodes 1020 can be coupled to a source of electrical potential, such as an alternating current source, in a manner generally similar to that described above with reference to Figures 3-9B. Accordingly, electrical current will tend to flow from one of the electrodes 1020 through the electrolytic fluid 1031 to the corners 1063 to oxidize the conductive material at the corners 1063. The electrical current can travel through the conductive material 1011 and back through the electrolytic fluid 1031 to the other electrode 1020 to complete an electrical circuit. The oxidized material at the corners 1063 can be removed by chemical interaction with the electrolytic fluid to form the rounded corners 1063a. In one aspect of this embodiment, electrical current can be introduced into the electrolytic fluid 1031 at a rate of from about one to about 500 mA/cm2 (and in a particular embodiment, about 50 mA/cm2), a frequency of about 60 Hz, and a voltage of about 15 Vrms. Alternatively, the electrical current can have other characteristics. In any of these embodiments, the composition of the electrolytic fluid 1031 can be the same as the composition of the etchant used to etch back the oxide layer 1014 and the nitride layer 1015.
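Because the description specifies the current as a density, the total current delivered scales with the wetted electrode area. A minimal worked example, using the cited 50 mA/cm2 density and an electrode area chosen purely for illustration:

```python
def total_current_ma(density_ma_per_cm2: float, area_cm2: float) -> float:
    """Total electrolytic current (mA): current density times the
    wetted electrode area."""
    return density_ma_per_cm2 * area_cm2

# The description cites about 50 mA/cm^2; the 4 cm^2 electrode area
# below is a hypothetical value chosen only for illustration.
i_total = total_current_ma(50.0, 4.0)  # 200 mA total
```

At the low end of the stated range (about 1 mA/cm2), the same 4 cm2 area would draw only 4 mA, which is why the cited range spans more than two orders of magnitude of total current.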
In a further aspect of this embodiment, the constituents of the electrolytic fluid 1031 can be selected to reduce or eliminate etching at the sidewalls 1064 of the apertures 1060. For example, when the conductive material 1011 includes silicon, hydrochloric acid in the electrolytic fluid 1031 can reduce the pH of the fluid to at least reduce etching at the sidewalls 1064. Accordingly, the electrolytic fluid 1031 can be (a) sufficiently conductive to conduct electrical current to the corners 1063 to oxidize conductive material at the corners 1063, (b) sufficiently reactive to remove oxidized material from the corners 1063, and (c) not so reactive as to remove un-oxidized material from the sidewalls 1064 of the aperture 1060. Alternatively, ethylene glycol can be added to the electrolytic fluid 1031 to reduce the etching rate of the silicon sidewalls 1064. In other embodiments, other chemicals can be disposed in the electrolytic fluid 1031 to control the rate of material removal at the sidewalls 1064, while still allowing material from the corners 1063 to be removed, as discussed above.

Figure 10E illustrates a portion of the microelectronic substrate 1010 shown in Figure 10D after the corners 1063 (Figure 10D) have been rounded to form the blunted corners 1063a. In one aspect of this embodiment, the cross-sectional shape of the corners 1063a can define an approximately circular arc. In other embodiments, the blunted corners 1063a can have other shapes. In any of these embodiments, the blunted corners 1063a are rounder or less sharp than the sharp corners 1063 shown in Figure 10D. Figure 10F shows a gate oxide material 1066 disposed in the apertures 1060 to coat the sidewalls 1064. The gates can then be formed by disposing a conventional gate material 1067 on the gate oxide 1066 within the apertures 1060.
One feature of an embodiment of the process described above with reference to Figures 10A-F is that the initially sharp corners 1063 formed at the junction between the sidewalls 1064 and the upper surface 1065 of the conductive material 1011 can be blunted or rounded without elevating the temperature of the microelectronic substrate 1010 significantly above room temperature. Accordingly, the blunted corners 1063a can be less likely to emit electromagnetic signals during operation of the microelectronic substrate 1010, which can create interference with other features of the microelectronic substrate 1010. Additionally, the microelectronic substrate can be less expensive to manufacture and more reliable as a result of spending less time in a high temperature environment. Another feature of an embodiment of the process described above with reference to Figures 10A-F is that the process can be self-limiting. For example, as conductive material 1011 at the corners 1063 oxidizes and etches away, the corners 1063 become blunter and less likely to attract electrical current more rapidly than other conductive surfaces in fluid communication with the electrodes 1020. Accordingly, the process may not need to be monitored as closely as other material removal processes.

Figure 11 is a partially schematic illustration of a process for rounding or blunting conductive corners of a microelectronic substrate 1110 in accordance with another embodiment of the invention. In one aspect of this embodiment, the microelectronic substrate 1110 can include conductive material 1111, an oxide layer 1114, and a nitride layer 1115 arranged generally in the same manner as that described above with reference to Figure 10B. Apertures 1160 are etched through the nitride layer 1115 and the oxide layer 1114 and into the conductive material 1111, also in a manner generally similar to that described above with reference to Figure 10B.
The apertures 1160 can have sidewalls 1164 that form sharp corners 1163 where the apertures 1160 intersect an upper surface 1165 of the conductive material 1111. In a further aspect of this embodiment, a first electrode 1120a and a second electrode 1120b can be positioned in fluid communication with an electrolyte 1131 disposed on the microelectronic substrate 1110 to round the initially sharp corners 1163 without first etching back the oxide layer 1114 and the nitride layer 1115 from the corners 1163. Accordingly, the oxide layer 1114 and the nitride layer 1115 can initially overhang the rounded corners 1163a, at least until the oxide layer 1114 and the nitride layer 1115 are removed from the microelectronic substrate 1110. An advantage of this process is that it can eliminate the step described above with reference to Figure 10C.

From the foregoing, it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. For example, the foregoing processes can be used to form features other than STI features. Accordingly, the invention is not limited except as by the appended claims.
To provide methods, apparatus, systems and articles of manufacture to generate a graphics processing unit (GPU) long instruction trace.
SOLUTION: An example apparatus includes at least one memory and at least one processor. The at least one processor executes instructions to at least: identify a first routine based on an identifier of a second routine executed by the GPU, the first routine being based on an emulation of the second routine; execute the first routine to determine a first value of a GPU state of the GPU, the first routine having (i) a first argument associated with the second routine and (ii) a second argument corresponding to a second value of the GPU state prior to executing the first routine; and control a workload of the GPU based on the first value of the GPU state.
SELECTED DRAWING: Figure 1
1. An apparatus to profile hardware, the apparatus comprising: at least one memory; and at least one processor to execute instructions to at least: identify a first routine based on an identifier of a second routine executed by a graphics processing unit (GPU), the first routine being based on an emulation of the second routine; execute the first routine to determine a first value of a GPU state of the GPU, the first routine having (i) a first argument associated with the second routine and (ii) a second argument corresponding to a second value of the GPU state prior to executing the first routine; and control a workload of the GPU based on the first value of the GPU state.

2. The apparatus of claim 1, wherein the GPU state is a state of a first register in an architectural register file associated with a hardware thread of the GPU or of a second register in a general-purpose register file of the hardware thread.

3. The apparatus of any one of claims 1-2, wherein the identifier is a first identifier extracted from an encoded binary file, and the at least one processor is to: insert one or more profile routines into a kernel executed by a hardware thread of the GPU; and determine the first value, the second value, and a hardware thread identifier from a long instruction trace, the long instruction trace being generated by the hardware thread in response to execution of the one or more profile routines by the hardware thread, the first value corresponding to a GPU register value after execution of the kernel by the hardware thread, the second value corresponding to a GPU register value before execution of the kernel by the hardware thread, and the hardware thread identifier identifying the hardware thread.

4. The apparatus of claim 3, wherein the hardware thread is a first hardware thread, the long instruction trace is a first long instruction trace associated with the first hardware thread, the encoded binary file includes the first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads, and the encoded binary file represents a multithreaded GPU trace.

5. The apparatus of claim 3, wherein the kernel includes a device access instruction executed by the hardware thread, and the at least one processor is to: determine one or more first register values of respective first registers of a general-purpose register file of the GPU; determine one or more second register values of respective second registers of an architectural register file of the GPU; and store the one or more first register values, the one or more second register values, one or more third register values, and the device access instruction in the long instruction trace, the one or more third register values corresponding to respective one or more destination registers associated with the device access instruction.

6. The apparatus of any one of claims 1-2, wherein the at least one processor is to: determine a utilization of the GPU based on the first GPU state; compare the utilization to a threshold; and, in response to determining that the threshold is not satisfied based on the comparison, control the workload of the GPU by at least one of an adjustment of the second routine or an increase in a number of computational tasks executed by the GPU.

7. The apparatus of any one of claims 1-2, wherein the first routine is an instrumented routine including an emulation routine, and the at least one processor is to: insert a first callback routine into the instrumented routine before the emulation routine, the first callback routine to invoke a first application programming interface (API) to provide the second GPU state to an application; and insert a second callback routine into the instrumented routine after the emulation routine, the second callback routine to invoke the first API or a second API to provide the first GPU state to the application.

8. At least one storage device comprising instructions that, when executed, cause at least one processor to at least: identify a first routine based on an identifier of a second routine executed by a graphics processing unit (GPU), the first routine being based on an emulation of the second routine; execute the first routine to determine a first value of a GPU state of the GPU, the first routine having (i) a first argument associated with the second routine and (ii) a second argument corresponding to a second value of the GPU state prior to executing the first routine; and control a workload of the GPU based on the first value of the GPU state.

9. The at least one storage device of claim 8, wherein the GPU state is a state of a first register in an architectural register file associated with a hardware thread of the GPU or of a second register in a general-purpose register file of the hardware thread.

10. The at least one storage device of any one of claims 8-9, wherein the identifier is a first identifier extracted from an encoded binary file, and the instructions, when executed, cause the at least one processor to: insert one or more profile routines into a kernel executed by a hardware thread of the GPU; and determine the first value, the second value, and a hardware thread identifier from a long instruction trace, the long instruction trace being generated by the hardware thread in response to execution of the one or more profile routines by the hardware thread, the first value corresponding to a GPU register value after execution of the kernel by the hardware thread, the second value corresponding to a GPU register value before execution of the kernel by the hardware thread, and the hardware thread identifier identifying the hardware thread.

11. The at least one storage device of claim 10, wherein the hardware thread is a first hardware thread, the long instruction trace is a first long instruction trace associated with the first hardware thread, the encoded binary file includes the first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads, and the encoded binary file represents a multithreaded GPU trace.

12. The at least one storage device of claim 10, wherein the kernel includes a device access instruction executed by the hardware thread, and the instructions, when executed, cause the at least one processor to: determine one or more first register values of respective first registers of a general-purpose register file of the GPU; determine one or more second register values of respective second registers of an architectural register file of the GPU; and store the one or more first register values, the one or more second register values, one or more third register values, and the device access instruction in the long instruction trace, the one or more third register values corresponding to respective one or more destination registers associated with the device access instruction.

13. The at least one storage device of any one of claims 8-9, wherein the instructions, when executed, cause the at least one processor to: determine a utilization of the GPU based on the first GPU state; compare the utilization to a threshold; and, in response to determining that the threshold is not satisfied based on the comparison, control the workload of the GPU by at least one of an adjustment of the second routine or an increase in a number of computational tasks executed by the GPU.

14. The at least one storage device of any one of claims 8-9, wherein the first routine is an instrumented routine including an emulation routine, and the instructions, when executed, cause the at least one processor to: insert a first callback routine into the instrumented routine before the emulation routine, the first callback routine to invoke a first application programming interface (API) to provide the second GPU state to an application; and insert a second callback routine into the instrumented routine after the emulation routine, the second callback routine to invoke the first API or a second API to provide the first GPU state to the application.

15. An apparatus to profile hardware, the apparatus comprising: means for identifying a first routine based on an identifier of a second routine executed by a graphics processing unit (GPU), the first routine being based on an emulation of the second routine; means for executing the first routine to determine a first value of a GPU state of the GPU, the first routine having (i) a first argument associated with the second routine and (ii) a second argument corresponding to a second value of the GPU state prior to executing the first routine; and means for controlling a workload of the GPU based on the first value of the GPU state.

16. The apparatus of claim 15, wherein the GPU state is a state of a first register in an architectural register file associated with a hardware thread of the GPU or of a second register in a general-purpose register file of the hardware thread.

17. The apparatus of any one of claims 15-16, wherein the identifier is a first identifier extracted from an encoded binary file, the apparatus further including: means for inserting one or more profile routines into a kernel executed by a hardware thread of the GPU; and means for determining the first value, the second value, and a hardware thread identifier from a long instruction trace generated by the hardware thread in response to execution of the one or more profile routines by the hardware thread, the first value corresponding to a GPU register value before execution of the kernel by the hardware thread, the second value corresponding to a GPU register value after execution of the kernel by the hardware thread, and the hardware thread identifier identifying the hardware thread.

18. The apparatus of claim 17, wherein the hardware thread is a first hardware thread, the long instruction trace is a first long instruction trace associated with the first hardware thread, the encoded binary file includes the first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads, and the encoded binary file represents a multithreaded GPU trace.

19. The apparatus of claim 17, wherein the kernel includes a device access instruction executed by the hardware thread, and the means for executing is to: determine one or more first register values of respective first registers of a general-purpose register file of the GPU; determine one or more second register values of respective second registers of an architectural register file of the GPU; and store the one or more first register values, the one or more second register values, one or more third register values, and the device access instruction in the long instruction trace, the one or more third register values corresponding to respective one or more destination registers associated with the device access instruction.

20. The apparatus of any one of claims 15-16, further including means for determining a utilization of the GPU based on the first GPU state, wherein the means for controlling is to, in response to determining that the utilization does not satisfy a threshold, control the workload of the GPU by at least one of an adjustment of the second routine or an increase in a number of computational tasks executed by the GPU.

21. The apparatus of any one of claims 15-16, wherein the first routine is an instrumented routine including an emulation routine, and the means for executing is to: insert a first callback routine into the instrumented routine before the emulation routine, the first callback routine to invoke a first application programming interface (API) to provide the second GPU state to an application; and insert a second callback routine into the instrumented routine after the emulation routine, the second callback routine to invoke the first API or a second API to provide the first GPU state to the application.

22. A system to profile hardware, the system comprising: a graphics processing unit (GPU) including a hardware thread, the hardware thread to: determine a first value of a GPU state; execute a GPU routine included in a kernel; determine a second value of the GPU state; and generate a long instruction trace including the GPU routine, the first value, and the second value; and a central processing unit (CPU) to: insert one or more profile routines into the kernel; identify a first routine based on an identifier of the GPU routine, the first routine being based on an emulation of the GPU routine; execute the first routine to replay the execution of the GPU routine and determine the second value of the GPU state, the first routine having (i) a first argument associated with the GPU routine and (ii) a second argument corresponding to the first value of the GPU state; and control a workload of the GPU based on the execution of the first routine.

23. The system of claim 22, wherein the GPU state is a state of a first register in an architectural register file associated with the hardware thread of the GPU or of a second register in a general-purpose register file of the hardware thread.

24. The system of any one of claims 22-23, wherein the identifier is a first identifier extracted from an encoded binary file, the encoded binary file includes the long instruction trace, the CPU is to determine the first value, the second value, and a hardware thread identifier from the encoded binary file, and the hardware thread identifier identifies the hardware thread.

25. The system of claim 24, wherein the hardware thread is a first hardware thread, the long instruction trace is a first long instruction trace associated with the first hardware thread, one or more second hardware threads of the GPU are to generate one or more second long instruction traces in response to one or more executions of the kernel, the encoded binary file includes the first long instruction trace and the one or more second long instruction traces, and the encoded binary file represents a multithreaded GPU trace.
The present disclosure relates generally to computers, and more particularly to methods and apparatus to generate graphics processing unit long instruction traces.

Software developers try to develop code that can be executed as efficiently as possible. To better understand code execution, profiling is used to measure different code execution statistics, such as execution time, memory consumption, and so on. In some examples, profiling is implemented by inserting profiling instructions into the code. Such profiling instructions can be used to store and analyze information about code execution.

FIG. 1 is a block diagram in which an exemplary graphics processing unit (GPU) long instruction trace (GLIT) engine inserts profiling instructions into an exemplary GPU kernel executed by an exemplary GPU.

FIG. 2 is a diagram of an exemplary implementation of an exemplary portion of the GPU of FIG. 1.

FIG. 3 shows an exemplary format of an exemplary long instruction trace.

FIG. 4 is a block diagram of an exemplary implementation of the GLIT engine of FIG. 1.

FIG. 5 is a diagram of an exemplary system in which the exemplary GPU of FIG. 1 and/or the exemplary GPU portion of FIG. 2 can be implemented to control the behavior of the hardware threads of an exemplary execution unit.

FIG. 6 is a diagram of an exemplary GPU long instruction trace for the exemplary GPU of FIG. 1 and/or the exemplary GPU portion of FIG. 2.

FIG. 7 is a diagram of an exemplary system for generating and analyzing the GLIT of FIG. 6.

FIG. 8 is a diagram of an exemplary system for emulating and analyzing the GLIT of FIG. 6.

FIG. 9 shows an exemplary kernel and an exemplary instrumented kernel.

FIG. 10 is an exemplary workflow diagram for emulating the execution of the exemplary instrumented kernel of FIG. 9.

FIG. 11 shows exemplary source code for emulating the execution of the exemplary instrumented kernel of FIG. 9.

FIG. 12 shows exemplary source code for emulating the execution of an exemplary software thread.

FIG. 13 shows exemplary source code for emulating the execution of an exemplary instrumented software thread.

FIG. 14 shows exemplary source code for implementing an emulation routine.

FIG. 15 is a flowchart representing machine-readable instructions that can be executed to implement the GLIT engine of FIG. 1 and/or FIG. 4 to improve the operation of the exemplary GPU of FIG. 1 and/or the exemplary GPU portion of FIG. 2.

FIG. 16 is a flowchart representing machine-readable instructions that can be executed to implement the GLIT engine of FIG. 1 and/or FIG. 4 to emulate one or more exemplary GLITs.

FIG. 17 is another flowchart representing machine-readable instructions that can be executed to implement the GLIT engine of FIG. 1 and/or FIG. 4 to improve the operation of the exemplary GPU of FIG. 1 and/or the exemplary GPU portion of FIG. 2.

FIG. 18 is a block diagram of an exemplary processing platform structured to execute the machine-readable instructions of FIGS. 11-17 to implement the exemplary GLIT engine of FIGS. 1 and/or 4.

FIG. 19 is a block diagram of an exemplary software distribution platform for distributing software (e.g., software corresponding to the exemplary computer-readable instructions of FIGS. 11-17) to client devices such as consumers (e.g., for license, sale, and/or use), retailers (e.g., for sale, resale, license, and/or sublicense), and/or original equipment manufacturers (OEMs) (e.g., for inclusion in products to be distributed to, for example, retailers and/or direct-buy customers).

The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements, unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and/or in fixed relation to each other.

Unless otherwise stated, descriptors such as "first", "second", and "third" are not used to impute or otherwise indicate any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are used merely as labels and/or arbitrary names to distinguish elements for ease of understanding of the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor, such as "second" or "third". In such instances, it should be understood that such descriptors are used merely to distinguish those elements that might otherwise share the same name.

Developers want to create the most computationally efficient machine-readable code to perform a desired task on a processor such as a central processing unit (CPU). In some cases, a developer creates machine-readable code for the CPU and analyzes the efficiency of the machine-readable code using a CPU simulator that executes a long instruction trace (LIT). A LIT is a snapshot of the architectural state of the CPU. The architectural state may include the state of system memory, which may include the values of memory registers associated with the CPU. Some such LITs may include a list of the system interrupts needed to simulate system events such as direct memory access (DMA) traffic.
Some LITs include a snapshot of the entire system memory in response to execution of user and/or kernel instructions.

Developers develop CPU kernels and use profilers and/or profiling systems to collect statistics about CPU kernel behavior (e.g., operating parameters, performance statistics, etc.) and to gain a better understanding of the efficiency of the CPU kernel executed by the CPU. A profiler inserts additional instructions into the CPU kernel to collect such behavioral statistics. Such profilers and/or profiling systems can be used to determine CPU utilization, because the operating system running on the CPU provides visibility of CPU utilization for each of the CPU cores and threads. Developers may not be able to apply such LIT and/or profiling techniques to alternative types of processors, such as graphics processing units (GPUs).

A GPU is an electronic circuit that executes instructions to modify the contents of a buffer. Typically, the buffer is a frame buffer used to output information to a display device (e.g., a monitor, a touchscreen, etc.). In recent years, GPUs have also been used for tasks that are not necessarily related to generating output images.

The GPU executes instruction packages commonly referred to as kernels, compute kernels, and/or shaders. The term kernel is generally used for general-purpose computational tasks, such as Open Computing Language (OpenCL) tasks, C for Media tasks, and the like. Typically, the term shader is used when the kernel is used for graphics-related tasks, such as DirectX tasks, Open Graphics Library (OpenGL) tasks, pixel shader/shading tasks, vertex shader/shading tasks, and the like. The exemplary approaches disclosed herein use the term kernel, but such approaches are equally well suited to uses of the term shader. Such a kernel generally corresponds to an inner loop of a program that is iterated multiple times. As used herein, a GPU kernel refers to a kernel in binary form.
GPU programmers develop kernels/shaders in a high-level programming language, such as High-Level Shader Language (HLSL) or OpenCL, and then compile the code into a binary version of the kernel, which is executed by the GPU. The exemplary approaches disclosed herein apply to the binary version of the kernel.

Like CPU developers, GPU developers want to create the most computationally efficient machine-readable code to perform a desired task on the GPU. However, profilers and/or profiling systems may not be efficient for GPU developers to analyze their machine-readable code. Unlike a CPU, which has an operating system running on the CPU, a GPU does not have an operating system running on the GPU, and therefore the GPU does not have on-GPU facilities to measure operational statistics, such as busy and idle time intervals or register values in response to kernel execution, at the granularity of the GPU's execution units and hardware threads. Some GPU device vendors offer GPU profiling tools, but such tools are limited and are not efficient for dynamically applying complex analyses of GPU workloads at the level of each particular GPU instruction without compromising the performance of GPU execution.

The examples disclosed herein improve GPU profiling, which can be used to identify improvements to GPU operation, by generating and analyzing GPU long instruction traces (GLITs). In some disclosed examples, the GLIT captures the state of the GPU (e.g., the GPU state) in response to the GPU executing an instrumented kernel (e.g., an instrumented GPU kernel). Some of the examples disclosed herein improve the operation of the GPU by measuring GPU operating parameters based on an analysis of the GLIT and determining whether to adjust the operation of the GPU based on the measured operating parameters.
In some disclosed examples, a processor such as a CPU can determine, based on the GLIT, one or more operating parameters (e.g., operating statistics, performance statistics, etc.) associated with the GPU, including at least one of a GPU state, an execution time parameter, a busy time parameter, an idle time parameter, an occupancy parameter, or a utilization parameter.

As used herein, an instrumented kernel refers to a kernel that includes profiling and/or trace instructions that, when executed by hardware, statistically measure and/or monitor the execution of the kernel. As used herein, a GPU state refers to one or more first values stored in a general purpose register file (GRF) and/or one or more second values stored in an architecture register file (ARF) associated with a hardware thread of the GPU. For example, a GPU can have a hardware thread with a GRF including a plurality of first registers and an ARF including a plurality of second registers. In such an example, a first value of a first one of the GRF registers may be a first GPU state, a first value of a first one of the ARF registers may be a second GPU state, and so on.

As used herein, the execution time of the GPU refers to the time interval, time duration, etc., that a hardware thread of the GPU and/or, more generally, the GPU uses to execute a kernel (e.g., an instrumented kernel). As used herein, GPU busy time refers to the time interval, time duration, etc., during which a GPU hardware thread is busy performing a computational task. As used herein, GPU idle time refers to the time interval, time duration, etc., during which a GPU hardware thread is not performing a computational task. As used herein, GPU occupancy refers to the set of busy and/or idle time intervals associated with an execution unit and/or a hardware thread of the GPU during the execution of one or more computational tasks.
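The per-thread register state described above can be sketched as a minimal data model. This is an illustrative sketch only: the snapshot shape and register-map representation are assumptions, not the GLIT binary layout.

```python
from dataclasses import dataclass, field

@dataclass
class ThreadState:
    """Hypothetical per-hardware-thread state: a GRF and an ARF."""
    tid: int                                   # hardware thread identifier (TID)
    grf: dict = field(default_factory=dict)    # register number -> value
    arf: dict = field(default_factory=dict)

    def snapshot(self):
        # An immutable copy usable as one "GPU state" sample.
        return (self.tid, dict(self.grf), dict(self.arf))

state = ThreadState(tid=3)
state.grf[0] = 111         # first value of GRF register r0 (before the kernel)
before = state.snapshot()  # first GPU state
state.grf[0] = 222         # value of r0 after the kernel runs
after = state.snapshot()   # second GPU state
```

Taking one snapshot before and one after kernel execution yields exactly the pair of register values the GLIT is described as recording.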
As used herein, GPU utilization refers to the ratio of the busy time to the total time associated with performing one or more computational tasks.

In some disclosed examples, the CPU inserts additional instructions into the kernel to collect information corresponding to one or more operating parameters associated with the execution of the kernel. The additional instructions include profiling instructions to instruct the GPU to generate a GLIT, which can include a hardware thread identifier (TID), a GPU state of the hardware thread, an opcode identifying a GPU instruction, a type of the GPU instruction (e.g., a "read SEND" or end-of-thread (EOT) instruction), timestamps associated with a start and/or end time of the execution of the kernel, and/or a combination thereof. For example, when the GPU executes the kernel including the additional instructions, the GPU can store in the GLIT (i) a first value of a GRF register before executing the kernel, (ii) a second value of the GRF register after executing the kernel, and/or (iii) a hardware thread identifier corresponding to the hardware thread that executed the kernel. The GPU can store the GLIT in a trace buffer in memory.

In some disclosed examples, the CPU can retrieve the GLIT from the trace buffer and replay the GLIT to analyze the GPU. For example, the CPU can emulate the execution of the kernel based on the first and/or second values of the GRF registers. In some examples, the CPU provides output data from the emulated execution of the kernel to a GPU profiling tool and can register a callback routine (e.g., register it using a software application, an operating system (OS), and/or a combination thereof) to determine one or more operating parameters associated with the GPU. Advantageously, the GPU profiling tool may be utilized to determine the efficiency of the kernel executed by the GPU.
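The retrieve-and-replay flow described above can be sketched as follows: the CPU walks trace records, emulates each traced register update, and invokes registered callbacks so that a profiling tool can derive operating parameters. The record layout and callback signature are illustrative assumptions, not the actual GLIT engine interface.

```python
callbacks = []

def register_callback(fn):
    """Register a profiling-tool callback invoked on each replayed record."""
    callbacks.append(fn)

def replay(records):
    """Replay (slot, tid, reg_num, value) records by emulating register updates."""
    regs = {}
    for slot, tid, reg_num, value in records:
        regs[reg_num] = value            # emulate the traced register update
        for fn in callbacks:
            fn(slot, tid, dict(regs))    # hand the emulated state to the tool

seen = []
register_callback(lambda slot, tid, regs: seen.append((slot, tid, regs[0])))
# r0 = 111 before the kernel (slot 0), r0 = 222 after the kernel (slot 1).
replay([(0, 7, 0, 111), (1, 7, 0, 222)])
```

The callback receives the emulated register state at each step, which is the hook a profiling tool would use to compute busy time, occupancy, and similar parameters.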
For example, the GPU profiling tool may determine, based on the one or more operating parameters, that the GPU can perform additional computational tasks, fewer computational tasks, etc., and may thereby identify improvements to the kernel and/or, more generally, to the operation of the GPU, the scheduling of operations by the CPU, and the like.

FIG. 1 is a block diagram showing an exemplary system 100 including an exemplary GPU long instruction trace (GLIT) engine 102, in which the GLIT engine 102 inserts first exemplary profiling instructions 104A-104C into an exemplary first kernel 106 to generate a second exemplary kernel 108. In this example, the first kernel 106 is a GPU kernel to be executed by an exemplary GPU 110. In this example, the second kernel 108 is an instrumented kernel (e.g., an instrumented GPU kernel). Alternatively, the first kernel 106 may be another type of kernel, such as a kernel executed by a neural network processor, a vision processing unit (VPU), or the like.

The GPU 110 may be implemented by a plurality of execution units arranged in slices (e.g., GPU slices). For example, the GPU 110 may be implemented by a plurality of slices (e.g., 3 slices, 6 slices, 12 slices, etc.). An exemplary implementation of a GPU slice 200 is shown in the example of FIG. 2. Referring to FIG. 2, the GPU slice 200 includes three exemplary subslices 202 and 24 exemplary execution units 204. In this example, each of the subslices 202 includes eight of the execution units 204. The execution units 204 are independent computational units used to execute three-dimensional (3D) shaders, media, and general-purpose computing on graphics processing unit (GPGPU) kernels. For example, the execution units 204 may be implemented as multithreaded hardware capable of performing multi-issue single instruction, multiple data (SIMD) operations. In this example, each of the execution units 204 may be implemented with seven exemplary threads (e.g., hardware threads, GPU threads, etc.) 206.

In the example shown in FIG. 2, the GPU slice 200 includes an exemplary fixed function unit 207 in communication with one(s) of the subslices 202. The fixed function unit 207 may be implemented by hardware that is not partially and/or otherwise fully programmable (e.g., by a user, an application, etc.). Alternatively, the GPU slice 200 may not include the fixed function unit 207. For example, the fixed function unit 207 may be emulated by a programmable shader and/or otherwise implemented.

In the example shown in FIG. 2, the GPU slice 200 includes an exemplary cache memory 210. In this example, the cache memory 210 is implemented by a level 3 (L3) data cache that includes an exemplary atomic barrier 212 and an exemplary shared local memory 214. Alternatively, the cache memory 210 may be implemented by any other type of memory, data storage, and the like.

In the example shown in FIG. 2, one(s) of the subslices 202 is/are in communication with the cache memory 210 via at least one of an exemplary sampler (e.g., a texture sampler) 216 or an exemplary data port 218. In some examples, the sampler 216 may be implemented as a self-contained functional block (e.g., a hardware, firmware, and/or software block) within a graphics core. In some examples, the sampler 216 can receive messages from other agents in the graphics core, fetch data from an external memory source sometimes referred to as a "surface," perform operations on the data, and/or return results in a standard format to the requester (or, if requested, directly to an intermediate memory buffer (e.g., directly to a render target texture (RTT))). In some examples, the sampler 216 can return filtered and/or blended pixels from positions in a texture map.

In this example, the sampler 216 and/or the data port 218 can read data from the cache memory 210 at a rate of 64 bytes per cycle.
For example, the sampler 216 can sample the GPU state of one(s) of the threads 208 of the execution units 204 by reading values from the first registers of the corresponding ARF and/or the second registers of the corresponding GRF implemented by the corresponding one(s) of the threads 208. Alternatively, the sampler 216 and/or the data port 218 can read data from the cache memory 210 at any other rate. In this example, the data port 218 can write data to the cache memory 210 at a rate of 64 bytes per cycle. Alternatively, the data port 218 can write data to the cache memory 210 at any other rate.

In the example shown in FIG. 2, one(s) of the execution units 204 is/are in communication with an exemplary local thread dispatcher 220. In this example, the local thread dispatcher 220 may be implemented using hardware that obtains instructions, such as the second kernel 108 of FIG. 1, and stores the instructions in an exemplary instruction cache 222. For example, the instruction cache 222 may be implemented using memory capable of storing instructions (e.g., non-volatile memory, volatile memory, etc.).

In this example, the local thread dispatcher 220 can dispatch, distribute, and/or otherwise transmit instructions, such as the second kernel 108, to one(s) of the execution units 204 for execution. For example, the local thread dispatcher 220 can spread instances of the second kernel 108 across available one(s) of the execution units 204 for execution. In some examples, hundreds or thousands of instances of the second kernel 108 can be operated in parallel and/or otherwise executed on available one(s) of the execution units 204, with each instance processing a subset of the data intended by an application, such as the application 120 of FIG. 1. As used herein, a "job" or "software thread" refers to an instance of the second kernel 108 to be dispatched to one of the threads 208 and/or, more generally, to one of the execution units 204.

In the example shown in FIG. 2, one(s) of the execution units 204 receive and/or otherwise obtain instructions (e.g., a kernel) to be executed from an exemplary instruction fetch interface 224. For example, one(s) of the execution units 204 may obtain a kernel, such as the second kernel 108 of FIG. 1, for execution from the instruction fetch interface 224. The instruction fetch interface 224 may allocate the kernel to one(s) of the threads 208 of the execution unit 204. In this example, each of one(s) of the threads 208 may be implemented with 128 32-byte registers. For example, each of one(s) of the threads 208 can have an exemplary general purpose register file (GRF) and an exemplary architecture register file (ARF). Data read or written by a thread 208 may be stored in the corresponding GRF of the thread 208. In this example, the GRF may be implemented with 128 general-purpose registers, with each one(s) of the general-purpose registers storing 32 bytes. A data element address in the GRF may be indicated by a register number (e.g., r0 to r127 for a GRF of 128 general-purpose registers) and a sub-register number.

In the example shown in FIG. 2, the ARF may be implemented using a register file including registers used to implement particular instruction set architecture (ISA) functionality. For example, instruction pointers and/or condition flags may be implemented using ARF registers. As used herein, "ISA functionality" refers to aspects of a processor that are visible to programs and programmers (e.g., developers) and independent of a particular implementation, including data types, registers, memory access, addressing modes, exceptions, instruction encoding, and the instruction set itself. In some examples, a GPU hardware thread, such as one(s) of the threads 208, may execute instructions corresponding to the ISA functionality.
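The GRF layout described above, 128 general-purpose registers of 32 bytes each addressed by a register number and a sub-register number, can be modeled with a small sketch. The register count and width come from the text; the flat byte-offset arithmetic is an assumption for illustration.

```python
GRF_REGISTERS = 128   # r0..r127, per the text
REG_BYTES = 32        # 32 bytes per general-purpose register

grf = bytearray(GRF_REGISTERS * REG_BYTES)

def write_subregister(reg: int, subreg: int, data: bytes) -> None:
    """Write `data` at register `reg`, byte offset `subreg` within it."""
    off = reg * REG_BYTES + subreg
    grf[off:off + len(data)] = data

def read_subregister(reg: int, subreg: int, n: int) -> bytes:
    """Read `n` bytes starting at register `reg`, sub-register byte `subreg`."""
    off = reg * REG_BYTES + subreg
    return bytes(grf[off:off + n])

# Address a 4-byte data element at r5.4 (register 5, sub-register byte 4).
write_subregister(5, 4, b"\x01\x02\x03\x04")
value = read_subregister(5, 4, 4)
```

The (register, sub-register) pair maps to a single byte offset, which is why a 128 x 32-byte GRF behaves like a 4 KiB per-thread scratch space.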
In some examples, each instruction may be a vector instruction that can operate in different SIMD modes on different floating-point and integer data types. In some examples, each of the instructions may have a corresponding opcode. For example, the GPU architecture may support a limited number of opcodes (e.g., 60 opcodes, 80 opcodes, 100 opcodes, etc.).

In the example shown in FIG. 2, one(s) of the threads 208 may be in communication with an exemplary thread arbiter 226. In this example, the thread arbiter 226 may be implemented by hardware that obtains data output(s) from the threads 208 and determines whether the data output(s) correspond to an exemplary SEND instruction 228, an exemplary branch instruction 230, or an exemplary SIMD floating-point unit (FPU) instruction 232. In this example, the SEND instruction 228 may be generated by a thread 208 in response to the thread 208 completing execution of a kernel. In this example, the branch instruction 230 may be generated by a thread 208 in response to executing a kernel including conditional instructions, such as "if", "do", and "while" instructions. In this example, the FPU instruction 232 may be generated by a thread 208 in response to the thread performing a floating-point computation.

Returning to the example shown in FIG. 1, the GPU 110 may execute the profiling instructions 104A-104C to generate an exemplary GLIT 112. In this example, the GPU 110 stores the GLIT 112 in an exemplary trace buffer 114. In this example, the trace buffer 114 is stored in an exemplary memory 116. The GLIT 112 includes GLIT data generated and/or otherwise output by the GPU 110 in response to executing the profiling instructions 104A-104C included in the second kernel 108, in response to being configured by the GLIT engine 102 to generate GLIT data, etc. For example, the GLIT 112 may include GLIT data that implements and/or otherwise stores a snapshot of the architectural state of the GPU 110.
In some examples, the architectural state of the GPU 110 can include first values stored in the GRF and/or second values stored in the ARF associated with a hardware thread of the GPU 110, such as one of the threads 208 of FIG. 2. In some examples, the GLIT 112 stores data associated with one(s) of the SEND instruction 228, the branch instruction 230, or the SIMD FPU instruction 232 of FIG. 2 and/or corresponding timestamps. The GLIT engine 102 can obtain and analyze the GLIT 112 to better understand the execution of the second kernel 108 by the GPU 110. The GLIT engine 102 can determine to adjust the operation of the GPU 110 based on the analysis of the GLIT 112.

In some examples, the profiling instructions 104A-104C, when executed by the GPU 110, implement profile routines (e.g., machine-readable code and/or software profile routines) that generate, determine, and/or store operational information, such as counters, hardware thread identifiers, register values, timestamps, etc., that can be used to better understand the execution of the second kernel 108. For example, the profiling instructions 104A-104C can profile and/or otherwise characterize the execution of the second kernel 108 by the GPU 110.

In some examples, one(s) of the profiling instructions 104A-104C are inserted at a first address (e.g., a first position) of the kernel (e.g., the beginning of the first kernel 106) to initialize variables used for profiling. In some examples, one(s) of the profiling instructions 104A-104C are inserted among the original instructions (e.g., between instructions of the first kernel 106). In some examples, when one(s) of the profiling instructions 104A-104C inserted at a second address (e.g., a second position) of the kernel (e.g., after an instruction of the first kernel 106) are executed, the GPU 110 collects and/or stores metrics accessible by the GLIT engine 102.
In some examples, one(s) of the profiling instructions 104A-104C are inserted at the end of the kernel (e.g., the end of the first kernel 106) to perform cleanup (e.g., freeing memory locations). However, such profiling instructions 104A-104C may additionally or alternatively be inserted at any location(s) and in any order.

In the example shown in FIG. 1, the exemplary CPU 118 includes, and/or otherwise implements, an exemplary application 120, an exemplary GPU driver 122, and an exemplary GPU compiler 124. The application 120 is a software application that can be used to display output from the GPU 110 on one or more display devices when the GPU 110 performs graphics-related tasks, such as DirectX tasks, OpenGL tasks, pixel shader/shading tasks, vertex shader/shading tasks, and the like. In some examples, the application 120 may be implemented as one or more dynamic link libraries (DLLs). Additionally or alternatively, the application 120 may be used to display and/or otherwise process output from the GPU 110 when the GPU 110 performs non-graphics-related tasks. Additionally or alternatively, the application 120 may be used by a GPU programmer to facilitate the development of kernels/shaders in a high-level programming language, such as HLSL, OpenCL, and the like. For example, the application 120 can be a profiling tool, such as a GPU profiling tool or a GPU analysis tool.

In the example shown in FIG. 1, the application 120 transmits a task (e.g., a computational task, a graphics-related task, a non-graphics-related task, etc.) to the GPU driver 122. In some examples, the GPU driver 122 receives the task and instructs the GPU compiler 124 to compile the code associated with the task into a binary version (e.g., binary code, binary instructions, machine-readable instructions, etc.) to generate the first kernel 106.
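The insertion positions discussed above (initialization at the start, collection between original instructions, cleanup at the end) can be sketched as follows. The kernel is modeled as a plain list of instruction strings rather than a real GPU binary, and the TRACE mnemonics are illustrative.

```python
def instrument(kernel, init_insn, collect_insn, cleanup_insn):
    """Return a copy of `kernel` with profiling instructions inserted at the
    start, after each original instruction, and at the end."""
    out = [init_insn]                  # first position: initialize profiling state
    for insn in kernel:
        out.append(insn)               # original kernel instruction
        out.append(collect_insn)       # second position: collect metrics
    out.append(cleanup_insn)           # final position: cleanup
    return out

kernel = ["mov r0, r1", "add r2, r0, r3"]
instrumented = instrument(kernel, "TRACE(0, TID)", "TRACE(1, TID)", "TRACE(2, TID)")
```

Note that the original instructions are untouched; instrumentation only interleaves new instructions around them, which is what allows an already compiled kernel to be profiled without recompilation.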
The GPU compiler 124 transmits the compiled binary version of the first kernel 106 to the GPU driver 122.

In some examples, the GLIT engine 102 configures, programs, and/or otherwise controls the GPU 110 to output data to the trace buffer 114. For example, the GLIT engine 102 may instruct the GPU driver 122 to control the GPU 110 to dump and/or otherwise output the data and/or information described below in connection with FIG. 3 at particular execution points of a kernel, such as the first kernel 106 or the second kernel 108. In some examples, the GLIT engine 102 may instruct the GPU driver 122 to cause the GPU 110 to output data associated with instructions executed by the GPU 110 to the trace buffer 114. For example, the GLIT engine 102 can cause the GPU 110 to output data associated with instructions executed by the GPU 110, such as GPU instructions (e.g., instructions included in the first kernel 106, the second kernel 108, etc.) and device access instructions (e.g., memory access instructions, instructions that cause the sampler 216 of FIG. 2 to access the cache memory 210 of the GPU 110, etc.).

In some examples, in response to the GPU 110 executing a GPU instruction (e.g., an add instruction, a move instruction, etc.), the GPU 110 can output to the trace buffer 114 the GPU instruction, a first value of a register before executing the GPU instruction, and a second value of the register after executing the GPU instruction. In some examples, in response to the GPU 110 executing a device access instruction that causes the sampler 216 to transmit a register value, the GPU 110 may output the device access instruction, the register value, etc., to the trace buffer 114. Advantageously, in some such examples, the GLIT engine 102 can control the GPU 110 to output GLIT data to the trace buffer 114 without instrumenting the kernel.

In some examples, the GLIT engine 102 may control the GPU 110 to output GLIT data to the trace buffer 114 via binary instrumentation.
For example, the GLIT engine 102 may obtain a first kernel 106 (eg, binary format) from the GPU driver 122. The GLIT engine 102 can instrument the first kernel 106 by inserting additional instructions such as profiling instructions 104A to 104C into the first kernel 106. For example, the GLIT engine 102 may modify the first kernel 106 to generate an instrumented GPU kernel, such as the second kernel 108. That is, the GLIT engine 102 produces a second kernel 108 without performing any compilation of the first kernel 106. In this way, an already compiled GPU kernel can be instrumented and / or profiled. The second kernel 108 is passed to the GPU 110 via memory 116. For example, the GLIT engine 102 can send a second kernel 108 to the GPU driver 122, which can store the second kernel 108 in memory 116 for withdrawal by the GPU 110.In some examples, the GPU 110 executes profiling instructions 104A-104C that generate one or more GLIT 112s. In this example, the profiling instructions 104A-104C include an exemplary first profiling instruction 104 of "TRACE (0, TID)" inserted in the first position, and the first profiling instruction 104A is a trace (1). For example, it corresponds to generating one of GLIT112). For example, the trace may refer to a sequence of data records written (eg, dynamically written) to a memory buffer such as the trace buffer 114. In some examples, the first trace operation is the read operation of a register associated with the hardware thread (eg, a hardware register) and the storage of the first value read from the register in the first variable. It may be implemented using behavior. In such an example, the first trace operation is the first of the GLIT 112 that includes (i) the first value and / or (ii) the thread identifier (TID) associated with the hardware thread that accessed the register. May be implemented by generating.In the example shown in FIG. 
1, the profiling instructions 104A-104C include an exemplary second profiling instruction 104B of "TRACE (1, TID)" inserted at a second position, and the second profiling instruction 104B corresponds to a second trace operation. In some examples, the second trace operation may be implemented using a read operation of the register associated with the hardware thread and a store operation of a second value read from the register in a second variable. For example, the second value may differ from the first value of the first trace operation because it may be generated in response to the GPU 110 running the second kernel 108. In such an example, the second trace operation may be implemented by generating a second one of the GLITs 112 that includes (i) the second value and/or (ii) the TID associated with the hardware thread that accessed the register.

In the example shown in FIG. 1, the profiling instructions 104A-104C include an exemplary third profiling instruction 104C of "TRACE (2, TID)" inserted at a third position, and the third profiling instruction 104C corresponds to a third trace operation. In some examples, the third trace operation may be implemented using a read operation of the register associated with the hardware thread and a store operation of a third value read from the register in a third variable. For example, because the third value may be generated in response to the GPU 110 running the second kernel 108, it may differ from the first value of the first trace operation and/or the second value of the second trace operation. In such an example, the third trace operation may be implemented by generating a third one of the GLITs 112 that includes (i) the third value and/or (ii) the TID associated with the hardware thread that accessed the register. In some examples, in response to executing the profiling instructions 104A-104C, and/or more generally the second kernel 108, the GPU 110 stores the GLITs 112 in the trace buffer 114.
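The trace operations above can be modeled as a register read followed by a record append into the trace buffer. The register naming and the (trace id, TID, value) record layout below are illustrative assumptions.

```python
# Illustrative model of a trace operation: each TRACE(n, TID) reads a hardware
# register and appends a (trace id, thread id, value) record to the trace
# buffer. Register names and the record layout are assumed for illustration.

trace_buffer = []

def trace(trace_id, tid, registers, reg_name="r1"):
    value = registers[reg_name]                  # read the hardware register
    trace_buffer.append((trace_id, tid, value))  # record into the trace buffer

registers = {"r1": 7}
trace(0, 3, registers)   # first trace operation observes the value before work
registers["r1"] += 5     # kernel work changes the register
trace(1, 3, registers)   # second trace operation observes the changed value
```

The two records differ in their stored value precisely because the kernel ran between the two probes, which is the property the document uses to distinguish the first and second trace operations.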
The trace buffer 114 includes exemplary records (e.g., data records) 126 in which the GLITs 112 can be implemented. For example, the records 126 can implement GLIT data from the GPU 110. In some examples, the records 126, and/or more generally the GLITs 112, may be encoded in a binary format based on the exemplary GLIT format 300 shown in the example of FIG. 3.

Referring to FIG. 3, the GLIT format 300 is shown in clear text and represents an exemplary binary data format in which one(s) of the GLITs 112 of FIG. 1 can be implemented and/or otherwise handled. For example, the GLIT format 300 can be used to implement an exemplary binary file (e.g., an encoded binary file) that the GPU 110 can use to store the GLITs 112. Alternatively, the GLIT format 300 may be implemented using any other format. In some examples, the CPU 118 of FIG. 1 can obtain the records 126 from the trace buffer 114. In such an example, the CPU 118 may generate one of the GLITs 112 to include one(s) of the records 126 based on the GLIT format 300. In some examples, the GLIT format 300 may be implemented as a buffer in an encoded binary format containing a plurality of exemplary records (e.g., data records) 302. For example, the records 302 may implement the records 126 of FIG. 1. In such an example, a first one of the records 302 may correspond to a first one of the records 126 of FIG. 1. In some examples, the GLIT format 300 may be generated in an atomic manner. For example, the GPU 110 may generate the GLITs 112 sequentially in the GLIT format 300 such that a first one of the records 302 is adjacent to a second one of the records 302, with the first one of the records 302 preceding the second one of the records 302. Alternatively, the GLIT(s) 112 having the GLIT format 300 may be generated in a non-atomic manner, using, for example, round-robin techniques.
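One way to picture an encoded binary record of this kind is a fixed-layout `struct` packing. The three fields and their widths below are a hypothetical layout for illustration, not the actual GLIT format 300.

```python
# Sketch of encoding a trace record into an encoded binary buffer, loosely in
# the spirit of the GLIT format 300. The field layout (trace id, thread id,
# register value, each 32-bit little-endian) is a hypothetical assumption.
import struct

RECORD = struct.Struct("<III")  # trace id, thread id, register value (assumed)

def encode_record(trace_id, tid, value):
    return RECORD.pack(trace_id, tid, value)

def decode_record(data):
    return RECORD.unpack(data)

blob = encode_record(2, 1, 0xDEAD)   # a 12-byte binary record
fields = decode_record(blob)         # recovered (2, 1, 0xDEAD)
```

A consumer such as the CPU 118 can then walk a buffer of such fixed-size records sequentially, which matches the document's description of records 302 laid out adjacently in an atomic, sequential manner.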
The GPU 110 can generate the records 302 from a plurality of hardware threads, such as the threads 208 of FIG. 2. In the example shown in FIG. 3, the GLIT format 300 includes data records 302 having administrative properties, such as a format version (VERSION) of the GLIT format 300, a GEN model identifier (GEN MODEL ID), and the like. For example, the GEN MODEL ID can refer to a particular architecture of the GPU 110. In some examples, the CPU 118 can determine the behavior, specifications, etc. of the GPU 110 based on the GEN MODEL ID.

In the example shown in FIG. 3, the GLIT format 300 contains decoded information of kernel instructions, such as those of the second kernel 108 of FIG. 1. For example, INST_DECODE_T INST0 may correspond to a decoded version of a first kernel instruction, such as INSTR1 DST, SRC1, SRC2 of FIG. 1 of the second kernel 108. In some examples, INST_DECODE_T INST1 may correspond to a decoded version of a second kernel instruction, such as INSTR2 DST, SRC1, SRC2 of FIG. 1 of the second kernel 108. In some examples, the decoded kernel instructions can implement data that the GLIT engine 102 can use to emulate and/or otherwise simulate the execution of the instructions of the second kernel 108 by the GPU 110.

In the example shown in FIG. 3, the GLIT format 300 includes exemplary operating parameters such as the number of instructions (NUMBER OF INSTRUCTIONS) (e.g., the number of instructions in the second kernel 108), the number of associated basic blocks (BBLs) (NUMBER OF RELEVANT BBLs), the number of SEND instructions (NUM OF SENDS) (e.g., the number of SEND instructions 228 of FIG. 2), data associated with each of the SEND instructions (e.g., SEND0 DATA, SEND1 DATA, etc.), the maximum number of hardware threads (MAX NUM OF HW THREADS) (e.g., the maximum number of threads 208 of FIG. 2), and a hardware thread identifier count (HW TID COUNT). For example, a BBL can refer to a contiguous set of instructions with a single entry point and a single exit point.
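The basic-block notion above can be sketched as splitting an instruction list at control-flow boundaries. The set of block-ending opcodes below is an illustrative assumption.

```python
# Sketch of dividing a kernel into basic blocks (BBLs): contiguous runs of
# instructions with a single entry point and a single exit point. The opcodes
# treated as block-ending control flow here are illustrative assumptions.
BLOCK_ENDING_OPCODES = {"BR", "JMP"}  # assumed control-flow opcodes

def split_into_bbls(instructions):
    bbls, current = [], []
    for inst in instructions:
        current.append(inst)
        if inst[0] in BLOCK_ENDING_OPCODES:  # control flow ends the block
            bbls.append(current)
            current = []
    if current:                              # trailing fall-through block
        bbls.append(current)
    return bbls

kernel = [("ADD",), ("BR",), ("MOV",), ("JMP",), ("MUL",)]
bbls = split_into_bbls(kernel)   # three BBLs, split after BR and JMP
```

Counting `len(bbls)` then corresponds to the NUMBER OF RELEVANT BBLs operating parameter described for the GLIT format 300.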
In such an example, a kernel, such as the second kernel 108, may be logically divided into one or more BBLs. Additionally or alternatively, the GLIT format 300 may include operating parameters corresponding to different types of instructions, such as load instructions. For example, NUM OF SENDS may be replaced by the number of load instructions (NUM OF LOADS), SEND0 DATA may be replaced by LOAD0 DATA, SEND0 DESTINATION VALUES may be replaced by LOAD0 DESTINATION VALUES, and/or combinations thereof.

In some examples, the GLIT format 300 may be implemented to store data associated with device access instructions, such as SEND and READ SEND instructions. For example, the GLIT format 300 may include an offset value (OFFSET), a destination register (DST), a number of registers (NUM OF REG), and the like. In some examples, the GLIT format 300 may be implemented to include header data (e.g., CE, DMASK, CR0.0, etc.) associated with device access instruction data (e.g., SEND destination value data, such as SEND0 DESTINATION VALUES, SEND1 DESTINATION VALUES, etc.), such as a value of a first register of the ARF (e.g., a CE register) associated with the GPU 110, a value of a second register of the ARF (e.g., a dispatch mask (DMASK) register), and the like. Additionally or alternatively, there may be fewer or more records than the records 302 shown in FIG. 3. Advantageously, the GLIT engine 102 may obtain the GLITs 112 of FIG. 1 based on and/or otherwise having the GLIT format 300, which may be used to improve the profiling of the GPU 110.

In the example shown in FIG. 3, a GLIT based on the GLIT format 300 can store data associated with a plurality of hardware threads, such as the threads 208 of FIG. 2. For example, one(s) of the GLITs 112 based on the GLIT format 300 may store first data corresponding to a first one of the threads 208, second data corresponding to a second one of the threads 208, and so on.
In this example, the first data may correspond to the first one of the threads 208 having an identifier of TID 0, such as NUM OF BBL RECORDS, BBL ID, HEADER, SEND0 DESTINATION VALUES, SEND1 DESTINATION VALUES, etc. In this example, the second data may correspond to the second one of the threads 208 having an identifier of TID 1, such as NUM OF BBL RECORDS, BBL ID, HEADER, SEND0 DESTINATION VALUES, SEND1 DESTINATION VALUES, etc. In this example, the GLIT format 300 may list the first data, the second data, and so on sequentially. Alternatively, the GLIT format 300 may list the first data, the second data, etc. in any order and/or format.

Referring back to the example shown in FIG. 1, the GLIT engine 102 retrieves the trace buffer 114 from the memory 116 (e.g., iteratively retrieves, periodically retrieves, etc.). In some examples, the GLIT engine 102 determines one or more operating parameters associated with the second kernel 108, and/or more generally the GPU 110. For example, the GLIT engine 102 may determine a GPU state, an execution time parameter, a busy time parameter, an idle time parameter, an occupancy time parameter, and/or a utilization parameter associated with the GPU 110. In some examples, the GLIT engine 102 adjusts the operation of the GPU 110 based on the one or more operating parameters. For example, the GLIT engine 102 may instruct the CPU 118 to schedule an increase in the instructions executed by the GPU 110, a decrease in the instructions executed by the GPU 110, and the like, based on the one or more operating parameters.

In the example shown in FIG. 1, the memory 116 includes one or more kernels, such as the second kernel 108, the trace buffer 114, and exemplary GPU data 128. Alternatively, the memory 116 need not store one or more kernels. In some examples, the memory 116 may be implemented by volatile memory, non-volatile memory (e.g., flash memory), and/or a combination thereof.
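Because a GLIT can interleave data from multiple hardware threads, a consumer retrieving the trace buffer typically groups records by thread identifier before analysis. The record shape below reuses the (trace id, TID, value) tuple assumed in earlier sketches.

```python
# Sketch of grouping retrieved trace records per hardware thread identifier
# (TID), as a GLIT consumer might after pulling the trace buffer from memory.
# The (trace id, tid, value) record shape is an illustrative assumption.
from collections import defaultdict

def group_by_thread(records):
    per_thread = defaultdict(list)
    for trace_id, tid, value in records:
        per_thread[tid].append((trace_id, value))  # keep per-thread order
    return dict(per_thread)

records = [(0, 0, 10), (0, 1, 20), (1, 0, 11)]
grouped = group_by_thread(records)   # {0: [(0, 10), (1, 11)], 1: [(0, 20)]}
```

This accommodates both layouts the document mentions: sequentially listed per-thread data and data listed in any order both reduce to the same per-TID grouping.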
In some examples, the GPU data 128 corresponds to data generated by the GPU 110 in response to executing at least the second kernel 108. For example, the GPU data 128 can include graphics-related data, information output to a display device, and the like.

FIG. 4 is a block diagram of an exemplary implementation of the GLIT engine 102 of FIG. 1 for improving the operation of the GPU 110 of FIG. 1. In some examples, the GLIT engine 102 instruments the binary shader/kernel before it is sent to the GPU 110. The GLIT engine 102 can collect the GLITs 112 of FIG. 1, obtained based on the GLIT format 300 of FIG. 3, from the memory 116 of FIG. 1. The GLIT engine 102 can emulate the operation of the GPU 110 based on the records 126 stored in the GLITs 112. The GLIT engine 102 can determine operating parameters associated with the GPU 110 that can be used to determine improvements in the operation of the GPU 110, the CPU 118, and the like.

In the example shown in FIG. 4, the GLIT engine 102 includes an exemplary instruction generator 410, an exemplary trace extractor 420, an exemplary trace emulator 430, an exemplary trace analyzer 440, an exemplary hardware configurator 450, and an exemplary storage 460. In this example, the storage 460 includes and/or otherwise stores exemplary GLITs 470. In this example, at least one of the instruction generator 410, the trace extractor 420, the trace emulator 430, the trace analyzer 440, the hardware configurator 450, and the storage 460 may be in communication with other one(s) thereof via an exemplary bus 480. For example, the bus 480 may be implemented by an I2C (Inter-Integrated Circuit) bus, an SPI (Serial Peripheral Interface) bus, and/or a PCI (Peripheral Component Interconnect) bus.

In the example shown in FIG. 4, the GLIT engine 102 includes an instruction generator 410 that instruments a kernel, such as the first kernel 106 of FIG. 1.
For example, the instruction generator 410 can access the first kernel 106 (e.g., access the first kernel 106 from memory contained in the CPU 118). The instruction generator 410 can instrument the first kernel 106 to generate the second kernel 108 of FIG. 1. For example, the instruction generator 410 can generate the binary code associated with the profiling instructions 104A-104C of FIG. 1 and insert it into the first kernel 106 in order to generate the second kernel 108. In some examples, the instruction generator 410 provides and/or otherwise transmits the second kernel 108 to the GPU driver 122 of FIG. 1. In such an example, in response to retrieving the second kernel 108 from the instruction generator 410, the GPU driver 122 can store the second kernel 108 in the memory 116 for later retrieval by the GPU 110.

In some examples, the instruction generator 410 implements means for inserting one or more profile routines, such as one or more of the profiling instructions 104A-104C, into a kernel executed by one of the threads 208 of the GPU 110. In some examples, the means for inserting may be implemented by one or more analog or digital circuits, logic circuits, programmable processors, programmable controllers, GPUs, digital signal processors (DSPs), application-specific integrated circuits (ASICs), programmable logic devices (PLDs), and/or field programmable logic devices (FPLDs) (e.g., field programmable gate arrays (FPGAs)). In some examples, the means for inserting may be implemented by at least one of block 1602 of FIG. 16 or block 1802 of FIG. 18.

In some examples, the instruction generator 410 implements means for generating binary code (e.g., binary instructions, machine-readable instructions, etc.) based on the profiling instructions 104A-104C. In some examples, the instruction generator 410 implements means for inserting the generated binary code at one or more places or locations within the first kernel 106 to generate the second kernel 108.
In the example shown in FIG. 4, the GLIT engine 102 includes a trace extractor 420 that retrieves and/or collects the GLITs 112 from the memory 116 of FIG. 1 and/or, more specifically, from the trace buffer 114. In some examples, the trace extractor 420 extracts the GLITs 112 from the trace buffer 114 and/or extracts the records 126 from the GLITs 112. In some examples, the trace extractor 420 processes a GLIT 112 by traversing the GLIT 112 from a first position (e.g., a start) of the GLIT format 300 to a second position (e.g., an end) of the GLIT format 300, extracting the records 126 along the way. For example, the trace extractor 420 can extract, identify, and/or otherwise determine the first record 302, the second record 302, etc. of FIG. 3 from the GLIT format 300 of FIG. 3.

In some examples, the trace extractor 420 extracts the records 126 from the GLIT 112 by decoding the binary kernel representation of the GLIT 112 in order to generate decoded binary data. In some examples, the trace extractor 420 extracts instruction identifiers and/or opcodes from the decoded binary data. For example, the trace extractor 420 can extract a first opcode corresponding to a SEND instruction executed by the GPU 110, a second opcode corresponding to a READ SEND instruction, a third opcode corresponding to a branch instruction, and the like. In some examples, the trace extractor 420 classifies and/or otherwise groups one(s) of the records 126 based on at least one of the instruction identifiers or the opcodes corresponding to the one(s) of the records 126.

In some examples, the trace extractor 420 stores associations between the opcodes and emulation routines (e.g., machine-readable code, firmware and/or software routines). For example, the trace extractor 420 can identify that the first opcode corresponds to a first emulation routine.
In such an example, the first emulation routine, when executed, can represent an algorithm, machine-readable instructions, or the like that mimics and/or otherwise performs the same or a substantially similar function as the SEND instruction corresponding to the first opcode. In some examples, the trace extractor 420 stores the records 126, the instruction identifiers, the opcodes, the associations, etc. in the storage 460.

In some examples, the trace extractor 420 implements means for identifying a first routine based on an identifier of a second routine executed by the GPU 110, where the first routine is based on emulation of the second routine. In some examples, the trace extractor 420 implements means for extracting the GLITs 112 from the trace buffer 114 and/or the records 126 from the GLITs 112. In some examples, the means for identifying and/or the means for extracting may be implemented by one or more analog or digital circuits, logic circuits, programmable processors, programmable controllers, GPUs, DSPs, ASICs, PLDs, and/or FPLDs. In some examples, the means for identifying may be implemented by at least one of blocks 1602, 1604, 1606, 1608 of FIG. 16.

In the example shown in FIG. 4, the trace emulator 430 emulates and/or otherwise replays the GLITs 112 of FIG. 1 to perform an analysis of the behavior of the GPU 110. For example, the trace emulator 430 can replay the execution of the second kernel 108 by the GPU 110 based on the data stored in the GLITs 112. In some examples, the trace emulator 430 can replay one or more executions of the second kernel 108 by each of the threads 208 of FIG. 2 based on the data stored in the GLIT 112 corresponding to each of the threads 208. In some examples, the trace emulator 430 executes emulation routines that simulate the routines executed by the GPU 110.
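The opcode-to-emulation-routine association described above can be sketched as a dispatch table: each decoded opcode maps to a routine that mimics the instruction's effect on a modeled register state. The opcodes, routines, and register model are illustrative assumptions.

```python
# Sketch of associating opcodes with emulation routines and dispatching replay
# through the association, as the trace extractor / trace emulator might.
# Opcodes, routine bodies, and the dict-based register state are illustrative.

def emulate_add(state, dst, src1, src2):
    state[dst] = state[src1] + state[src2]   # mimic an ADD instruction

def emulate_mov(state, dst, src):
    state[dst] = state[src]                  # mimic a MOV instruction

EMULATION_ROUTINES = {"ADD": emulate_add, "MOV": emulate_mov}

def replay(instruction, state):
    opcode, *operands = instruction
    EMULATION_ROUTINES[opcode](state, *operands)  # dispatch on the opcode

state = {"r1": 2, "r2": 3}
replay(("ADD", "r0", "r1", "r2"), state)  # state["r0"] becomes 5
replay(("MOV", "r3", "r0"), state)        # state["r3"] becomes 5
```

Replaying a whole trace is then a loop over decoded instructions, with the trace's recorded register values used to seed or check `state`.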
For example, the trace emulator 430 can extract one(s) of the data records 126 from the GLIT 112 and input the extracted one(s) of the data records 126 as arguments to an emulation routine that simulates the execution of an instruction (e.g., an add instruction, a subtraction instruction, a multiply instruction, etc.) by the GPU 110. In such an example, the extracted one(s) of the data records 126 can represent a state of the GPU 110, such as a value of a register (e.g., of the ARF, the GRF, etc.) associated with a thread of interest processed by the GPU 110.

In some examples, the trace emulator 430 instruments the emulation routines with callback routines (e.g., callback instructions) to facilitate analysis by a developer or user associated with the application 120 of FIG. 1, the CPU 118 of FIG. 1, etc. For example, the trace emulator 430 can include, in an emulation routine, high-level language (HLL) instructions that can represent machine-readable instructions. In such an example, in response to the trace emulator 430 executing the instrumented emulation routine, the trace emulator 430 can call an API and provide and/or otherwise transmit output data associated with the execution of the instrumented emulation routine to a higher-level analysis construct, such as the application 120. Advantageously, the trace emulator 430 can instrument and execute emulation routines to generate data and provide the data to a GPU profiling tool, which can be used to identify improvements to the behavior of the GPU 110, the CPU 118, and/or combinations thereof.

In some examples, the trace emulator 430 implements means for executing a first routine to determine a first value of a GPU state of the GPU, the first routine having (i) a first argument associated with a second routine and (ii) a second argument corresponding to a second value of the GPU state before executing the first routine.
In some examples, the GPU state is a state of a first register in the ARF associated with a hardware thread of the GPU or a state of a second register in the GRF of the hardware thread. In some examples, the identifier may be a first identifier extracted from an encoded binary file, and the means for executing determines, in response to the execution of one or more profile routines by the hardware thread, the first value, the second value, and a hardware thread identifier from a long instruction trace generated by the hardware thread. In such an example, the first value can correspond to a GPU register value after the hardware thread executes the kernel, the second value can correspond to a GPU register value before the hardware thread executes the kernel, and the hardware thread identifier can identify the hardware thread.

In some examples, the means for executing determines one or more first register values of one or more respective first registers of a GRF of the GPU, determines one or more second register values of one or more respective second registers of an ARF of the GPU, and/or stores the one or more first register values, the one or more second register values, one or more third register values, and a device access instruction (e.g., a SEND instruction, a READ SEND instruction, etc.) in a long instruction trace, such as a GLIT. In some examples, the one or more third registers may correspond to one or more respective destination registers associated with the device access instruction.

In some examples, the means for executing inserts a first callback routine into the instrumented routine before the emulation routine, where the first callback routine can call a first application programming interface (API) to provide the second GPU state to the application.
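The pre- and post-callback instrumentation of an emulation routine can be sketched as a wrapper that reports the modeled GPU state before and after the routine runs. The callback signatures and the event list standing in for an analysis API are illustrative assumptions.

```python
# Sketch of instrumenting an emulation routine with callbacks: a pre-callback
# reports the GPU state before emulation (the "second" value) and a
# post-callback reports it after (the "first" value). The callback signatures
# and the events list standing in for an analysis API are assumptions.

events = []

def pre_callback(state):
    events.append(("before", dict(state)))   # snapshot prior GPU state

def post_callback(state):
    events.append(("after", dict(state)))    # snapshot resulting GPU state

def instrumented_emulate(routine, state, *args):
    pre_callback(state)     # callback inserted before the emulation routine
    routine(state, *args)   # the emulation routine itself
    post_callback(state)    # callback inserted after the emulation routine

def emulate_add(state, dst, src1, src2):
    state[dst] = state[src1] + state[src2]

state = {"r1": 1, "r2": 2}
instrumented_emulate(emulate_add, state, "r0", "r1", "r2")
```

In a real tool the two callbacks would call out through an API to the higher-level analysis construct; here they simply append snapshots so the before/after register states can be compared.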
In some examples, the means for executing inserts a second callback routine into the instrumented routine after the emulation routine, where the second callback routine can call the first API or a second API to provide the first GPU state to the application. In some examples, the means for executing may be implemented by one or more analog or digital circuits, logic circuits, programmable processors, programmable controllers, GPUs, DSPs, ASICs, PLDs, and/or FPLDs. In some examples, the means for executing may be implemented by at least one of block 1508 of FIG. 15, blocks 1612, 1614, 1616 of FIG. 16, or block 1714 of FIG. 17.

In the example shown in FIG. 4, the GLIT engine 102 includes a trace analyzer 440 that determines one or more operating parameters associated with the GPU 110 of FIG. 1. In some examples, the trace analyzer 440 implements means for determining a GPU state, an execution time parameter, a busy time parameter, an idle time parameter, an occupancy time parameter, and/or a utilization parameter associated with the GPU 110. In some examples, the trace analyzer 440 determines the one or more operating parameters based on the emulation of the operation of the GPU 110 by replaying the GLITs 112. For example, the trace analyzer 440 can determine the GPU state of the first one of the threads 208 of FIG. 2 by identifying a change in a register value of the GRF corresponding to the first one of the threads 208 in response to running the second kernel 108. In some examples, the trace analyzer 440 can calculate an execution time parameter for the first one of the threads 208 by determining the amount of time the first one of the threads needs to run the second kernel 108. In some examples, the trace analyzer 440 can determine a utilization parameter for the first one of the threads 208 by calculating the ratio of the busy time of the first one of the threads 208 to the total time of a time period of interest.
In some examples, the trace analyzer 440 determines aggregate operating parameters based on two or more of the threads 208. For example, the trace analyzer 440 can calculate an aggregate execution time parameter, an aggregate utilization parameter, and the like. In such an example, the trace analyzer 440 can determine the aggregate utilization parameter by calculating the ratio of one or more busy ones of the threads 208 to the total number of the threads 208 over the duration or time period of interest.

In some examples, the trace analyzer 440 implements means for determining a GPU operating parameter based on the GPU state. For example, the means for determining can determine a GPU utilization based on the first GPU state. In some examples, the means for determining may be implemented by one or more analog or digital circuits, logic circuits, programmable processors, programmable controllers, GPUs, DSPs, ASICs, PLDs, and/or FPLDs. In some examples, the means for determining may be implemented by at least one of block 1510 of FIG. 15 or block 1716 of FIG. 17.

In the example shown in FIG. 4, the GLIT engine 102 includes a hardware configurator 450 that adjusts the operation of the GPU 110 and/or the CPU 118 based on the GLITs 112, one or more operating parameters associated with the GLITs 112, and the like. In some examples, the hardware configurator 450 delivers, provides, and/or otherwise communicates the one or more operating parameters to the application 120 of FIG. 1. For example, the hardware configurator 450 can report and/or otherwise communicate the GPU state, hardware thread utilization, execution unit utilization, etc. associated with the GPU 110 to a developer (e.g., a software developer, a processor designer, a GPU engineer, etc.) using a performance analysis tool (e.g., a GPU profiling tool), a graphical user interface (GUI) included in the performance analysis tool, and the like.
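The per-thread and aggregate utilization calculations described above reduce to simple busy-time ratios. The timing values below are illustrative.

```python
# Sketch of per-thread and aggregate utilization parameters computed from
# busy-time measurements, in the spirit of the trace analyzer. The timing
# values are illustrative; units cancel in the ratios.

def thread_utilization(busy_time, total_time):
    """Ratio of a thread's busy time to the total time of the period of interest."""
    return busy_time / total_time

def aggregate_utilization(busy_times, total_time):
    """Ratio of total busy thread-time to total available thread-time."""
    return sum(busy_times) / (total_time * len(busy_times))

per_thread = thread_utilization(95.0, 100.0)          # one thread, 95% busy
aggregate = aggregate_utilization([40.0, 80.0], 100.0)  # two threads, 60% overall
```

The 95% per-thread value mirrors the worked example in the surrounding text, where a GPU busy for 95% of a measured interval is deemed fully loaded.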
In such an example, the developer can improve the software by, for example, improving the load balancing of computational tasks or providing different data distributions among the hardware threads, execution units, etc. of the GPU 110.

In some examples, the hardware configurator 450 can improve the operation of the GPU 110 by invoking hardware, software, firmware, and/or any combination of hardware, software, and/or firmware (e.g., the GPU driver 122, the CPU 118, etc.). For example, the hardware configurator 450 can generate and transmit an instruction (e.g., a command, one or more machine-readable instructions, etc.) to the GPU driver 122, the CPU 118, etc. of FIG. 1. In response to receiving and/or otherwise executing the instruction, the GPU driver 122, the CPU 118, etc. can be invoked to determine whether to adjust the operation of the GPU 110. For example, the GPU driver 122, and/or more generally the CPU 118, may be invoked to coordinate the scheduling of computational tasks, jobs, workloads, etc. performed by the GPU 110 based on the one or more operating parameters.

In some examples, the hardware configurator 450 invokes and/or otherwise directs the GPU driver 122 to analyze the one or more operating parameters based on the GLITs 112. For example, the GPU driver 122, and/or more generally the CPU 118, can compare the operating parameters to operating parameter thresholds (e.g., a GPU state threshold, an execution time threshold, a busy time threshold, an idle time threshold, a utilization threshold, etc.). For example, when invoked, the GPU driver 122, and/or more generally the CPU 118, can determine that the utilization of the GPU 110 is 95%, corresponding to the GPU 110 being busy for 95% of a measured time interval. The GPU driver 122 can compare the 95% utilization with an 80% utilization threshold and determine, based on the utilization meeting the utilization threshold, that the GPU 110 should not accept more computational tasks.
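The driver's threshold comparison described above can be sketched as a small scheduling decision. The 80% threshold mirrors the example in the text; the decision labels are placeholders, not a real driver API.

```python
# Sketch of the utilization-threshold comparison a GPU driver might perform
# when deciding whether to schedule additional work. The 80% threshold mirrors
# the example in the text; the returned action labels are placeholders.
UTILIZATION_THRESHOLD = 0.80

def schedule_decision(utilization):
    if utilization >= UTILIZATION_THRESHOLD:
        return "hold"      # GPU busy: do not accept more computational tasks
    return "increase"      # bandwidth available: schedule additional tasks

busy_case = schedule_decision(0.95)   # 95% utilization meets the threshold
idle_case = schedule_decision(0.40)   # 40% utilization does not
```

The two inputs correspond to the 95% and 40% utilization scenarios worked through in the surrounding text.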
As used herein, a job or workload may refer to a set of one or more computational tasks performed by one or more hardware threads, such as the threads 208 of FIG. 2.

In some examples, the GPU driver 122, and/or more generally the CPU 118, can determine that the utilization of the GPU 110 is 40% when invoked by the hardware configurator 450. The GPU driver 122 can compare the 40% utilization with the 80% utilization threshold and determine that the GPU 110 has bandwidth available to perform more computational tasks. For example, the GPU driver 122 can determine that the 40% utilization does not meet the 80% utilization threshold. The GPU driver 122 can adjust or modify resource scheduling to facilitate tasks performed by the GPU 110 in response to determining that the utilization of the GPU 110 does not meet the utilization threshold. For example, the GPU driver 122 can increase the amount of computational tasks currently being and/or to be performed by the GPU 110 based on a utilization parameter that can be determined based on the GLITs 112 of FIG. 1.

In some examples, the hardware configurator 450 implements means for improving and/or otherwise optimizing resource scheduling by the CPU 118 (e.g., hardware scheduling, memory allocation, etc.). For example, a developer can develop and/or improve a hardware scheduling function or mechanism by analyzing the one or more operating parameters associated with the GPU 110.

In some examples, the hardware configurator 450 implements means for controlling a workload of the GPU based on the first value of the GPU state. In some examples, the means for controlling responds to a determination that an operating parameter (e.g., busy time, utilization, etc.) does not satisfy a threshold.
In such an example, the means for controlling may control the workload of the GPU 110 by at least one of adjusting a routine (e.g., one or more instructions included in the second kernel 108) executed by the GPU 110 or increasing the number of computational tasks. In some examples, the means for controlling may be implemented by one or more analog or digital circuits, logic circuits, programmable processors, programmable controllers, GPUs, DSPs, ASICs, PLDs, and/or FPLDs. In some examples, the means for controlling may be implemented by at least one of blocks 1512, 1514 of FIG. 15 or block 1720 of FIG. 17.

In the example of FIG. 4, the GLIT engine 102 includes a storage 460 for recording data, such as the GLITs 470. For example, the GLITs 470 may include one or more of the GLITs 112 of FIG. 1. In such an example, the GLITs 470 may be stored in the storage 460 in an encoded binary format, such as the GLIT format 300 of FIG. 3. In some examples, the storage 460 records and/or otherwise stores one(s) of the records 126 of FIG. 1, instruction identifiers, opcodes, data associated with one(s) of the instruction identifiers and/or one(s) of the opcodes, one or more emulation routines, one or more associations between one(s) of the one or more emulation routines and one(s) of the instruction identifiers and/or one(s) of the opcodes, and/or combinations thereof. In some examples, the storage 460 stores instrumented versions of the emulation routines, such as an emulation routine that may include a callback routine that invokes a data transfer via one or more APIs.

The storage 460 of this example may be implemented by volatile memory (e.g., synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), etc.) and/or non-volatile memory (e.g., flash memory). The storage 460 may additionally or alternatively be implemented by one or more double data rate (DDR) memories, such as DDR, DDR2, DDR3, DDR4, and mDDR (mobile DDR).
The storage 460 may additionally or alternatively be implemented by one or more mass storage devices, such as hard disk drives (HDDs), compact disk (CD) drives, digital versatile disk (DVD) drives, solid-state disk (SSD) drives, and the like. In the illustrated example, the storage 460 is shown as a single storage, but the storage 460 may be implemented by any number (e.g., at least one storage disk or device) and/or any type of storage. Further, the data stored in the storage 460 may be in any data format, such as binary data, comma-delimited data, tab-delimited data, structured query language (SQL) structures, and the like.

An exemplary manner of implementing the GLIT engine 102 of FIG. 1 is shown in FIG. 4, but one or more of the elements, processes, and/or devices shown in FIG. 4 may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Further, the exemplary instruction generator 410, the exemplary trace extractor 420, the exemplary trace emulator 430, the exemplary trace analyzer 440, the exemplary hardware configurator 450, the exemplary storage 460, the exemplary GLITs 470, and/or, more generally, the exemplary GLIT engine 102 of FIG. 1 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the exemplary instruction generator 410, the exemplary trace extractor 420, the exemplary trace emulator 430, the exemplary trace analyzer 440, the exemplary hardware configurator 450, the exemplary storage 460, the exemplary GLITs 470, and/or, more generally, the exemplary GLIT engine 102 could be implemented by one or more analog or digital circuits, logic circuits, programmable processors, and/or programmable controllers.
It may be implemented by a processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (PLD) and / or a field programmable logic device (FPLD) (eg, a field programmable gate array (FPGA)). When reading any of the claims of the device or system of this patent to cover purely software and / or firmware implementations, an exemplary instruction generator 410, an exemplary trace extractor 420, exemplary. At least one of a trace emulator 430, an exemplary trace analyzer 440, an exemplary hardware configurator 450, an exemplary storage 460, and / or an exemplary GLIT 470 is a memory containing software and / or firmware. , DVD, CD, Blu-ray® discs and other non-temporary computer-readable storage devices or storage discs are expressly defined herein. Further, the exemplary GLIT engine 102 of FIG. 1 may include, and / or substitute for, one or more elements, processes and / or devices in addition to or instead of those shown in FIG. It may include one or more of any or all of the elements, processes and devices. As used herein, the phrase "communicating" includes direct and / or indirect communication via one or more intermediate components, including its variants, and direct physics. It does not require targeted (eg, wired) and / or constant communication, but rather includes periodic intervals, scheduled intervals, aperiodic intervals, and / or selective communication in one-off events.FIG. 5 is a diagram of an exemplary system 500 capable of mounting the GPU 110 of FIG. 1 or a portion thereof and / or the GPU slice 200 of FIG. 2 or a portion thereof. In this example, the system 500 may be utilized to control the operation of the exemplary execution unit hardware thread 502. In this example, the execution unit hardware thread 502 may implement one of threads 208 in FIG.In the example shown in FIG. 
5, the system 500 includes the execution unit hardware thread 502, an exemplary gateway sharing function 504, an exemplary thread dispatcher 506, and an exemplary device 508. In this example, the system 500 may demonstrate different mechanisms, techniques, etc., for modifying and/or otherwise controlling the behavior of the execution unit hardware thread 502.

In the example illustrated in FIG. 5, the system 500 includes the gateway sharing function 504, which implements inter-thread communication control. In this example, the gateway sharing function 504 communicates asynchronously with the execution unit hardware thread 502. Alternatively, the gateway sharing function 504 may interact with the execution unit hardware thread 502 on a synchronous basis. In some examples, the gateway sharing function 504 may be implemented as hardware that performs thread-to-thread (e.g., hardware thread-to-hardware thread) synchronization. In some examples, the gateway sharing function 504 can facilitate remote register write operations. For example, the gateway sharing function 504 can obtain a write request from a first register of a first one of the threads 208 of FIG. 2 and forward the write request to a second register of a second one of the threads 208 of FIG. 2.

In some examples, the gateway sharing function 504 implements active thread-to-thread communication based on direct register access. For example, a first thread (e.g., a requester thread) may be able to write to the GRF register space of another thread (e.g., a receiver thread). Such direct register access between two threads in a multiprocessor environment is sometimes referred to as remote register access. A remote register access may implement a read or a write operation. In some examples, the architecture of the GPU 110 may support remote register writes but may not (natively) support remote register reads. In such examples, the gateway sharing function 504 facilitates writes to remote registers via message passing.
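The gateway-mediated remote register write described above can be modeled in a few lines. This is a software sketch of the idea, not the hardware; the class and method names are assumptions for illustration:

```python
# Illustrative model of a gateway-mediated remote register write: a
# requester thread does not write another thread's GRF directly, but
# sends a request to a gateway, which completes the write on its behalf.
# Names (HardwareThread, Gateway, remote_write) are assumptions.

class HardwareThread:
    def __init__(self, tid, num_grf=16):
        self.tid = tid
        self.grf = [0] * num_grf  # general register file (GRF)

class Gateway:
    def __init__(self, threads):
        self.threads = {t.tid: t for t in threads}

    def remote_write(self, requester_tid, receiver_tid, reg, value):
        # Forward the requester's write message to the receiver's GRF,
        # completing the register write on behalf of the requester.
        receiver = self.threads[receiver_tid]
        receiver.grf[reg] = value

t0, t1 = HardwareThread(0), HardwareThread(1)
gw = Gateway([t0, t1])
gw.remote_write(requester_tid=0, receiver_tid=1, reg=3, value=42)
```

Note that only the receiver's register file changes; the requester's own GRF is untouched, mirroring a write-only remote access.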
In some examples, the requester thread may send the gateway sharing function 504 a message requesting that the gateway write to the GRF register space of the receiver thread. The gateway sharing function 504 may send a write message to the receiver thread to complete the register write on behalf of the requester. The requester thread and the receiver thread may be on the same execution unit or on different execution units of the GPU 110.

In the example illustrated in FIG. 5, the system 500 includes the thread dispatcher 506, which provides initial register values as an input payload (e.g., an input data payload) to the execution unit hardware thread 502. In some examples, the thread dispatcher 506 may be implemented as a hardware functional unit that arbitrates thread initiation requests from the fixed function units 207 of FIG. 2 and instantiates the threads 208 on the execution units 204. For example, the thread dispatcher 506 can determine to which one(s) of the execution units 204, and to which thread(s) 208 of that execution unit 204, a job or software thread is dispatched. In some examples, based on that determination, the thread dispatcher 506 can load an initial GPU state into an idle one of the threads 208 and start its execution. In this example, the thread dispatcher 506 provides the initial register values to register files, such as the GRF and/or the ARF of the execution unit hardware thread 502, on a synchronous basis. In some examples, the thread dispatcher 506 may implement the local thread dispatcher 220 of FIG. 2.

In the example illustrated in FIG. 5, the system 500 includes the device 508, which executes responses to device access instructions from the execution unit hardware thread 502. In some examples, the device 508 may implement the sampler 216 of FIG. 2, the data port 218 of FIG. 2, the shared local memory 214 of FIG. 2, and/or the cache memory 210 of FIG. 2. In some examples, the device 508 facilitates execution of device access requests.
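The dispatcher behavior described above (pick an idle hardware thread, load an initial register payload, start execution) can be sketched as follows. All names and the payload layout are assumptions for illustration, not the hardware interface:

```python
# Minimal sketch of a thread dispatcher that selects an idle hardware
# thread, loads an initial register payload into its GRF, and marks it
# busy. Names (Thread, dispatch) are illustrative assumptions.

class Thread:
    def __init__(self, tid):
        self.tid = tid
        self.idle = True
        self.grf = {}

def dispatch(threads, payload):
    # Find an idle thread, load the initial GPU state, mark it busy,
    # and report which thread received the job.
    for t in threads:
        if t.idle:
            t.grf.update(payload)  # initial register values (input payload)
            t.idle = False
            return t.tid
    return None  # no idle thread available

threads = [Thread(0), Thread(1)]
tid = dispatch(threads, {"r0": 7, "r1": 9})
```

A second call to `dispatch` would land on thread 1, which is one simple way the dispatch order of software threads becomes observable.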
For example, a device access request may be implemented by any instruction that causes the execution unit hardware thread 502 to write data to and/or read data from the device 508. In some examples, a device access request may be implemented by a SEND instruction, a READ SEND instruction, a LOAD instruction, etc. For example, the execution unit hardware thread 502 may execute a device access request by generating a SEND instruction in response to completing execution of a kernel, such as the second kernel 108 of FIG. 1. In this example, the SEND instruction is known at consumption time because it is generated in response to completing execution of the kernel.

In some examples, in response to executing a SEND instruction, the execution unit hardware thread 502 may transmit one or more register values associated with the execution unit hardware thread 502 to the device 508. In some examples, in response to executing a READ SEND instruction, the execution unit hardware thread 502 may request one or more register values stored in the device 508. In such examples, the device 508 may prepare a response to the READ SEND instruction by transmitting the data read from the requested registers stored in the device 508 to the execution unit hardware thread 502.

In some examples, the GLIT 112 of FIG. 1 may capture the different mechanisms, techniques, etc., that modify and/or otherwise control the behavior of the execution unit hardware thread 502. For example, the GLIT 112 may include a first GPU state of the execution unit hardware thread 502 at a first time. The first GPU state may correspond to first values of the ARF, first values of the GRF, etc., of the execution unit hardware thread 502 during an initialization state, such as receiving the initial register values from the thread dispatcher 506 at the first time in preparation for executing the second kernel 108.
In some examples, the GLIT 112 may include a second GPU state of the execution unit hardware thread 502 at a second time after the first time. The second GPU state can correspond to second values of the ARF, second values of the GRF, etc., in response to the gateway sharing function 504 executing one or more remote register write operations from different one(s) of the threads 208 of FIG. 2. In some examples, the one or more remote register write operations may change one or more of the first values of the GRF into one or more of the second values. In some examples, the GLIT 112 may include a third GPU state of the execution unit hardware thread 502 at a third time after the second time. The third GPU state may correspond to third values of the ARF, third values of the GRF, etc., in response to the execution unit hardware thread 502 invoking the device 508 by generating a SEND instruction to read data from the ARF, the GRF, etc.

FIG. 6 is a diagram of an exemplary GLIT 600 for the GPU 110 of FIG. 1, the GPU slice 200 of FIG. 2, and/or the execution unit hardware thread 502 of FIG. 5. In some examples, the GLIT 600 of FIG. 6 may implement one(s) of the GLIT 112 of FIG. 1. In some examples, the GLIT 600 may be encoded into a binary kernel having a format based on the GLIT format 300 of FIG. 3. For example, the GLIT 600 may be implemented by an encoded binary file representing an exemplary execution of the kernel 108 of FIG. 1 by one of the threads 208 of FIG. 2.

In some examples, one of the GLIT 112 of FIG. 1 may include a plurality of binary kernels. In some examples, the GLIT 600 may implement one of the plurality of binary kernels. Advantageously, because the second kernel 108 of FIG. 1 can be distributed to the plurality of threads 208 of FIG. 2 for execution, the plurality of binary kernels can implement multi-threaded GPU tracing.

In this example, the GLIT 600 corresponds to a hardware thread of an exemplary processor, such as the GPU 110 of FIG. 1, the GPU slice 200 of FIG. 2, the thread 208 of FIG.
2, the execution unit hardware thread 502 of FIG. 5, etc., and may implement a LIT for a single software thread executing thereon. In some examples, a GLIT for an execution unit, such as the execution unit 204 of FIG. 2, may be implemented by a group or collection of the GLITs for the one(s) of the threads 208 of that execution unit 204. In some examples, a GLIT for a subslice, such as the subslice 202 of FIG. 2, may be implemented by a group or collection of the GLITs for the one(s) of the execution units 204.

In the example illustrated in FIG. 6, the GLIT 600 begins at an exemplary start point 602, at which the GPU state is initialized based on initial values of the ARF registers of the hardware thread and initial values of the GRF registers of the hardware thread. In this example, the initial values of the GRF registers may be for the entire GRF register file. Alternatively, the initial values of the GRF registers may be for a partial number of the GRF registers.

In this example, the initial values of the ARF registers may be for a partial number of the registers of the ARF, such as a first register value corresponding to the ARF dispatch mask, a second register value corresponding to the hardware thread identifier (TID), a third register value corresponding to the execution mask, and a fourth register value corresponding to a control register. Alternatively, the initial values of the ARF registers may be for the entire ARF register file.

After the GPU state is initialized at the start point 602, the GLIT 600 includes a first exemplary event (EVENT 1) 604 at a first time after the start point 602, a second exemplary event (EVENT 2) 606 at a second time after the first time, a third exemplary event (EVENT 3) 608 at a third time after the second time, and a fourth exemplary event (EVENT 4) 610 at a fourth time after the third time.
In this example, the events 604, 606, 608, 610 are READ SEND instructions, which can represent the hardware thread sending messages to external hardware, such as the device 508 of FIG. 5, the sampler 216 of FIG. 2, and/or the cache memory 210 of FIG. 2. For example, the first event 604 can represent a read from global memory, such as the cache memory 210, using the value of the destination register of the hardware thread, represented by DST. In another example, the second event 606 can represent an access to a sampler, such as the sampler 216 of FIG. 2, using the value of the destination register of the hardware thread, represented by DST. Additionally or alternatively, a GLIT such as the GLIT 600 of FIG. 6 can include fewer or more events than those illustrated in FIG. 6.

Advantageously, the information associated with the GLIT 600 of FIG. 6 is encoded into a binary format, such as the GLIT format 300 of FIG. 3, and stored in the memory 116 of FIG. 1 in the binary format for later access and/or retrieval by a processor, such as the CPU 118 of FIG. 1. For example, the initial values of the GPU state, such as the GRF register values and the ARF register values at the start point 602, may be encoded using the GLIT format 300. In some examples, one(s) of the events 604, 606, 608, 610 may be stored using the GLIT format 300. In such examples, register values, such as the values of the ARF and/or GRF registers before and/or after one(s) of the events 604, 606, 608, 610, may be stored using the GLIT format 300.

FIG. 7 is a diagram of an exemplary system 700 to generate and analyze the GLIT 600 of FIG. 6. The system 700 of FIG. 7 includes an exemplary GPU 702 and an exemplary CPU 704. In some examples, the GPU 702 can implement the GPU 110 of FIG. 1 and/or the GPU slice 200 of FIG. 2. In some examples, the CPU 704 can implement the CPU 118 of FIG. 1.

In the example illustrated in FIG. 7, the GPU 702 executes an exemplary kernel 706.
In this example, the kernel 706 is an instrumented kernel that can implement the second kernel 108 of FIG. 1. In this example, the kernel 706 is distributed to, and/or otherwise scheduled for execution by, a plurality of exemplary hardware threads 708 of the GPU 702. For example, each of the hardware threads 708 may be implemented by one of the threads 208 of FIG. 2. In this example, a first one of the hardware threads 708 may have a hardware thread identifier of TID 0, a second one of the hardware threads 708 may have a hardware thread identifier of TID 1, and an Nth one of the hardware threads 708 may have a hardware thread identifier of TID N.

In the example illustrated in FIG. 7, the hardware threads 708 may execute instances of the kernel 706 to generate exemplary GLIT data 710. For example, the first one of the hardware threads 708 can generate and/or otherwise output GLIT DATA 0, the second one of the hardware threads 708 can generate and/or otherwise output GLIT DATA 1, and the Nth one of the hardware threads 708 can generate and/or otherwise output GLIT DATA N. In some examples, the GPU 702 can store the GLIT data 710 as the records 126 in the trace buffer 114 of the memory 116 of FIG. 1.

In some examples, the GLIT data 710 may include at least one of a GPU state (e.g., one or more ARF register values, one or more GRF register values, etc., of a hardware thread) or data associated with the kernel 706. For example, the data associated with the kernel 706 can include a GPU instruction included in the kernel 706, an opcode corresponding to the instruction, an instruction identifier corresponding to the instruction, etc. In some examples, a portion of the GLIT data 710 can implement one(s) of the records 126 of FIG. 1, one(s) of the records 302 of FIG. 3, etc.
For example, a first portion of GLIT DATA 0 may include an instruction of the kernel 706, which may be stored by the GPU 702 in the trace buffer 114 as INST_DECODE INST0 of the GLIT format 300 of FIG. 3.

In some examples, the CPU 704 obtains and/or otherwise retrieves the GLIT data 710 from a buffer stored in memory, such as the trace buffer 114 stored in the memory 116 of FIG. 1, and generates exemplary GLITs 712 based on the GLIT data 710. In some examples, the GLITs 712 can implement the GLIT 112 of FIG. 1. In some examples, the CPU 704 can generate a first one of the GLITs 712 corresponding to the first one of the hardware threads 708 based on GLIT DATA 0, a second one of the GLITs 712 corresponding to the second one of the hardware threads 708 based on GLIT DATA 1, a third one of the GLITs 712 corresponding to a third one of the hardware threads 708, and so on. In such examples, the first GLIT 712, the second GLIT 712, the third GLIT 712, etc., may be generated by arranging and/or otherwise organizing respective ones of the GLIT data 710 into files (e.g., binary files) based on the GLIT format 300 of FIG. 3. In some examples, the CPU 704 can generate one(s) of the GLITs 712 by arranging and/or otherwise organizing GLIT DATA 0, GLIT DATA 1, GLIT DATA N, etc., into binary files based on the GLIT format 300.

In this example, the CPU 704 implements an exemplary GLIT replay application 714, which replays the execution of the kernel 706 by the GPU 702, based on the GLITs 712, by simulating the execution of the kernel 706. In some examples, the GLIT replay application 714 can implement the application 120 of FIG. 1. For example, the GLIT replay application 714 may be a software application that instruments emulation routines (e.g., emulation instructions, emulation software routines, etc.) corresponding to simulations of the GPU routines (e.g., GPU instructions, GPU kernel routines, etc.) used to execute the kernel 706.
In some examples, the instrumented emulation routines invoke exemplary APIs 716 to communicate and/or otherwise transmit data to an exemplary hardware profiling analysis tool 718. For example, the GLIT replay application 714 can instrument a first emulation routine with a first callback routine before the execution of an instruction included in the first emulation routine (e.g., an instruction simulating the execution of the kernel 706) and/or with a second callback routine after the execution of the instruction.

In some examples, in response to executing the first callback routine, the GLIT replay application 714 invokes one of the APIs 716 before executing the instruction included in the first emulation routine and can provide the hardware profiling analysis tool 718 with a first GPU state corresponding to a first value of a GRF register of the GPU 702. In some examples, in response to executing the second callback routine, the GLIT replay application 714 invokes one of the APIs 716 after executing the instruction included in the first emulation routine and can provide the hardware profiling analysis tool 718 with a second GPU state corresponding to a second value of the GRF register. In some examples, the first GPU state may be the same as the second GPU state; that is, the GRF register was not changed in response to executing the first emulation routine. In some examples, the first GPU state may be different from the second GPU state; that is, the GRF register was modified in response to executing the first emulation routine, indicating that the execution of the kernel 706 modified the GRF register.

In some examples, the hardware profiling analysis tool 718 may be implemented by the application 120 of FIG. 1. For example, the hardware profiling analysis tool 718 may analyze the replayed and/or otherwise emulated execution of the kernel 706 to identify improvements to the operation of at least one of the GPU 702 or the CPU 704.
It may be implemented as a software application that analyzes the replayed and/or otherwise emulated execution of the kernel. In some examples, the hardware profiling analysis tool 718 may be implemented by one or more DLLs. Additionally or alternatively, the hardware profiling analysis tool 718 may analyze the operation of any other type of hardware processor, such as a neural network processor or a VPU.

In some examples, the hardware profiling analysis tool 718 can identify improvements based on changes in the GRF registers, as described above. In some examples, the hardware profiling analysis tool 718 can determine that a change in a GRF register is not a typical or expected result and can notify a developer to modify the second kernel 108 to improve execution by the GPU 702. In some examples, the hardware profiling analysis tool 718 can determine that an absence of detected changes in the GRF registers indicates that the distribution of the kernel 706 to the hardware threads of the GPU 702 is not an efficient distribution, and can notify the developer to modify the scheduling of the second kernel 108 to improve the distribution of the kernel 706.

FIG. 8 is a diagram of another exemplary system 800 to emulate and analyze the GLIT 600 of FIG. 6. For example, the system 800 of FIG. 8 may implement the system 700 of FIG. 7, or a portion thereof. In this example, the system 800 includes an exemplary GLIT replay application 802 and a plurality of exemplary tools 804, 806, 808. For example, the GLIT replay application 802 may implement the GLIT replay application 714 of FIG. 7. In some examples, the tools 804, 806, 808 may implement the hardware profiling analysis tool 718 of FIG. 7. For example, one or more of the tools 804, 806, 808 may be implemented as software applications that analyze the execution of a kernel by a GPU by replaying the execution using the data stored in and/or otherwise included in the GLIT 810.
In some examples, one or more of the tools 804, 806, 808 may be implemented as one or more DLLs that perform different analyses of the kernel execution. For example, a first tool 804 of the tools 804, 806, 808 can profile the kernel execution using a first set of analysis routines, features, etc., a second tool 806 of the tools 804, 806, 808 can profile the kernel execution using a second set of analysis routines, features, etc., and/or a third tool 808 of the tools 804, 806, 808 can profile the kernel execution using a third set of analysis routines, features, etc., where one or more of the first set, the second set, and/or the third set may be different from each other.

In the example illustrated in FIG. 8, the GLIT replay application 802 obtains an exemplary GLIT 810. For example, the GLIT 810 may implement the GLIT 112 of FIG. 1, the GLIT 470 of FIG. 4, the GLIT 600 of FIG. 6, and/or the GLIT 712 of FIG. 7. In some examples, the GLIT 810 may be an encoded binary kernel, and the GLIT replay application 802 may decode the GLIT 810. For example, the GLIT replay application 802 may decompress and/or otherwise extract data stored in a binary format, such as the GLIT format 300 of FIG. 3. In some examples, the GLIT replay application 802 can associate portions of the extracted data with exemplary hardware thread identifiers (TID0-TIDN) 812.

As illustrated in FIG. 8, the GLIT replay application 802 communicates with one(s) of the tools 804, 806, 808 via one or more exemplary APIs 814. For example, the GLIT replay application 802 can instrument an emulation routine that simulates the execution of a GPU kernel by including callback routines before and/or after the execution of the instrumented emulation routine. In some examples, the callback routines may include a "CALLBACKBEFORE()" callback routine, which, when executed, may invoke a first one of the APIs 814 before the instructions included in the instrumented emulation routine are executed.
The first API may be invoked to provide data, such as the GPU state, to corresponding one(s) of the tools 804, 806, 808. For example, the "CALLBACKBEFORE()" callback routine may call "GETSTART()" to provide the GPU state. In some examples, the callback routines may include a "CALLBACKAFTER()" callback routine, which, when executed, may invoke a second one of the APIs 814 after the instructions included in the instrumented emulation routine are executed, to provide data, such as the GPU state, to corresponding one(s) of the tools 804, 806, 808. For example, the "CALLBACKAFTER()" callback routine may call "GETSTART()" to provide the GPU state. Additionally or alternatively, fewer or more APIs 814 than the APIs 814 illustrated in FIG. 8 may be used. Additionally or alternatively, one or more APIs 814 different from the APIs 814 illustrated in FIG. 8 may be used. For example, the one or more APIs 814 may be implemented with a PIN API, which can be used to insert machine-readable code (e.g., C code, C++ code, etc.) at one or more locations in the kernel.

FIG. 9 depicts a first exemplary kernel 902 and a second exemplary kernel 904 that can be executed by a GPU, such as the GPU 110 of FIG. 1, a slice of a GPU, such as the GPU slice 200 of FIG. 2, and/or the GPU 702 of FIG. 7. In this example, the first kernel 902 may implement a non-instrumented kernel. For example, the first kernel 902 may implement the first kernel 106 of FIG. 1. In this example, the second kernel 904 may implement an instrumented kernel, such as the second kernel 108 of FIG. 1 and/or the kernel 706 of FIG. 7. In this example, the second kernel 904 may correspond to an instrumented version of the first kernel 902.

In the example illustrated in FIG. 9, the first kernel 902 includes exemplary instructions such as a move (MOV) instruction, an or (OR) instruction, a multiply (MUL) instruction, and an and (AND) instruction.
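The before/after callback mechanism described above can be sketched as a wrapper around an emulation routine. This is a minimal illustrative model; the function names, the dict-based GPU state, and the `mov_emul` routine are assumptions, not the patent's actual implementation:

```python
# Sketch of instrumenting an emulation routine with before/after
# callbacks that expose the GPU state to an analysis tool, in the
# spirit of the CALLBACKBEFORE()/CALLBACKAFTER() routines described
# above. All names and the state layout are illustrative assumptions.

observed = []  # stands in for the data an analysis tool would receive

def callback_before(state):
    observed.append(("before", dict(state)))  # snapshot pre-execution state

def callback_after(state):
    observed.append(("after", dict(state)))   # snapshot post-execution state

def instrument(emulation_routine):
    # Wrap the routine so the callbacks fire around its execution.
    def wrapped(state):
        callback_before(state)
        emulation_routine(state)
        callback_after(state)
    return wrapped

def mov_emul(state):
    state["r1"] = state["r0"]  # emulate MOV r1, r0

run = instrument(mov_emul)
run({"r0": 5, "r1": 0})
```

Comparing the two snapshots shows which registers the emulated instruction modified, which is exactly the signal a profiling tool needs.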
In response to executing the MOV, OR, MUL, and AND instructions, the first kernel 902 triggers the execution of a first SEND instruction (SEND) and a second SEND instruction (SEND). In this example, the SEND instructions are read instructions from global memory, such as the cache memory 210 of FIG. 2 or the device 508 of FIG. 5. In this example, the first SEND instruction implements a first read operation of two 32-byte-wide registers (e.g., global memory register r12 is 32 bytes wide, global memory register r13 is 32 bytes wide, etc.). In this example, the second SEND instruction implements a second read operation of two 32-byte-wide registers (e.g., global memory registers r9 and r10).

In the example illustrated in FIG. 9, the second kernel 904 includes the MOV, OR, MUL, and SEND instructions of the first kernel 902. In this example, the second kernel 904 includes exemplary instrumentation instructions (TRACE) 906, 908, 910 that generate exemplary GLITs, such as the GLIT 112 of FIG. 1 and/or the GLIT 600 of FIG. 6. In this example, the instrumentation instructions 906, 908, 910 include a first exemplary trace instruction (TRACE (TID, R0-R15, CE, DMASK ...)) 906, which traces the full input payload of the GRF registers and a portion or subset of the ARF registers associated with the hardware thread of the GPU executing the second kernel 904. For example, the first trace instruction 906 can read the GRF registers r0 through r15 of the hardware thread, and at least the CE and DMASK registers of the ARF of the hardware thread. In such an example, the input payload represented by r0 through r15 can include sixteen 32-byte registers (e.g., r0, r1, r2, ... r15).

In the illustrated example, the instrumentation instructions 906, 908, 910 include a second exemplary trace instruction 908, which traces the resulting destination values after execution of the first SEND instruction.
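Each trace instruction emits a record carrying the executing thread's TID and the offset of the original instruction in the non-instrumented kernel. A hypothetical sketch of collecting such records and reading the software-thread dispatch order from the offset-zero entries (record fields and values here are assumptions for illustration):

```python
# Sketch of trace records carrying (TID, instruction offset), and of
# deriving software-thread dispatch order from them: the first record a
# thread emits is at offset zero, so the arrival order of offset-zero
# records reflects the order in which threads were dispatched. The
# record fields and example values are illustrative assumptions.

trace_buffer = [
    {"tid": 2, "offset": 0},
    {"tid": 2, "offset": 0x40},
    {"tid": 0, "offset": 0},
    {"tid": 0, "offset": 0x40},
    {"tid": 1, "offset": 0},
]

def dispatch_order(records):
    return [r["tid"] for r in records if r["offset"] == 0]

order = dispatch_order(trace_buffer)
```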
For example, in response to the execution of the first SEND instruction, the second trace instruction 908 can obtain the resulting values of the destination registers (e.g., r12 and r13 of global memory). In some examples, the second trace instruction 908, when executed, generates a trace record (e.g., one of the records 126 of FIG. 1, one of the records 302 of FIG. 3, etc.) that can include the TID of the hardware thread that executed the second kernel 904 and a first offset (e.g., a first offset value) of the original instruction in the first kernel 902.

In the illustrated example, the instrumentation instructions 906, 908, 910 include a third exemplary trace instruction 910, which traces the resulting destination values after execution of the second SEND instruction. For example, in response to the execution of the second SEND instruction, the third trace instruction 910 can obtain the resulting values of the destination registers (e.g., r9 and r10 of global memory). In some examples, the third trace instruction 910, when executed, generates a trace record (e.g., one of the records 126 of FIG. 1, one of the records 302 of FIG. 3, etc.) that can include the TID of the hardware thread that executed the second kernel 904 and a second offset (e.g., a second offset value) of the original instruction in the first kernel 902. Advantageously, the order of the trace records with respect to offset zero can provide the order of software thread dispatch. For example, the dispatch order of the second kernel 904 may be determined based on the first offset value and the second offset value relative to offset zero.

FIG. 10 is a diagram of an exemplary workflow 1000 to emulate the execution of an instrumented GPU kernel, such as the second kernel 108 of FIG. 1, the kernel 706 of FIG. 7, and/or the second kernel 904 of FIG. 9. For example, the workflow 1000 may be implemented by the GLIT engine 102 of FIG. 1 and/or FIG. 4.
In this example, the workflow 1000 is implemented by exemplary kernel instruction static data 1002, an exemplary opcode emulation table 1004, exemplary emulation routines 1006, and exemplary GPU states 1008, 1010. Alternatively, any other exemplary workflow may be utilized to emulate the execution of the instrumented GPU kernel.

In the example illustrated in FIG. 10, the kernel instruction static data 1002 may correspond to instructions decoded from a binary kernel. For example, the second kernel 108 may include a plurality of exemplary encoded instructions in a binary format. In some examples, the trace extractor 420 of FIG. 4 can extract and/or otherwise decode the encoded instructions from the second kernel 108 to generate the kernel instruction static data 1002. In this example, the kernel instruction static data 1002 includes a first exemplary instruction indexed by a first instruction identifier (INST 0), which can correspond to a first decoded instruction from the second kernel 108 of FIG. 1. For example, INST 0 can correspond to the first SEND instruction of the second kernel 904 of FIG. 9 (e.g., SEND (16) R12 R6 0XC 0X4205E00).

In the example illustrated in FIG. 10, the opcode emulation table 1004 can store opcodes supported by a particular GPU architecture, such as the architecture of the GPU 110 of FIG. 1. In this example, the opcode emulation table 1004 includes a first exemplary opcode (OPCODE 0), which can correspond to a first type of instruction supported by, and/or otherwise configured to be executed at call time by, the GPU 110.

In the example illustrated in FIG. 10, the emulation routines 1006 can correspond to instructions (e.g., machine-readable instructions) that, when executed, can simulate the execution of instructions configured to be executed by the GPU 110.
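The pairing of opcodes with emulation routines, and a replay loop over decoded kernel instructions, can be sketched as follows. ADD_EMUL and SUB_EMUL are named in the text, but their bodies, the instruction fields, and the register values below are assumptions for illustration:

```python
# Sketch of an opcode emulation table mapping opcodes to emulation
# routines, plus a small replay loop over decoded kernel instructions
# (the kernel instruction static data). Routine bodies, instruction
# fields, and values are illustrative assumptions.

def add_emul(grf, inst):
    grf[inst["dst"]] = grf[inst["src0"]] + grf[inst["src1"]]

def sub_emul(grf, inst):
    grf[inst["dst"]] = grf[inst["src0"]] - grf[inst["src1"]]

OPCODE_EMULATION_TABLE = {
    "OPCODE 0": add_emul,  # corresponds to ADD_EMUL
    "OPCODE 1": sub_emul,  # corresponds to SUB_EMUL
}

def replay(kernel_static_data, grf):
    # Look up each instruction's opcode and run its emulation routine
    # against the GRF state.
    for inst in kernel_static_data:
        OPCODE_EMULATION_TABLE[inst["opcode"]](grf, inst)
    return grf

grf = {"r0": 10, "r1": 4, "r2": 0, "r3": 0}
replay(
    [{"opcode": "OPCODE 0", "dst": "r2", "src0": "r0", "src1": "r1"},
     {"opcode": "OPCODE 1", "dst": "r3", "src0": "r2", "src1": "r1"}],
    grf,
)
```

Iterating the table-driven loop over the decoded instructions is what lets a replayer step through a kernel's recorded execution instruction by instruction.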
In this example, the emulation routines 1006 include a first exemplary emulation routine (ADD_EMUL), which can support emulation of an add operation supported by, and/or otherwise configured to be executed at call time by, the GPU 110. In this example, the opcodes of the opcode emulation table 1004 correspond to respective ones of the emulation routines 1006. For example, OPCODE 0 corresponds to ADD_EMUL, OPCODE 1 corresponds to SUB_EMUL, and so on.

In the example illustrated in FIG. 10, the GPU states 1008, 1010 include an exemplary GRF state 1008 and an exemplary ARF state 1010. In this example, the GRF state 1008 includes the values of the registers stored in the GRF implemented by a hardware thread of the GPU, such as one of the threads 208 of FIG. 2. In this example, the GRF state 1008 is implemented using 128 registers (r0-r127). In this example, the ARF state 1010 includes the values of the registers stored in the ARF implemented by a hardware thread of the GPU, such as one of the threads 208 of FIG. 2.

In this example, the ARF state 1010 includes a portion of the ARF. For example, the portion of the ARF may include a first register value (F0.0) that stores a value at a first end of a first floating-point saturation range, a second register value (F0.1) that stores a value at a second end of the first floating-point saturation range, a third register value (F1.0) that stores a value at a first end of a second floating-point saturation range, a fourth register value (F1.1) that stores a value at a second end of the second floating-point saturation range, a fifth register value (IP) that stores a value of the instruction pointer, a sixth register value that stores a value of the DMASK register, a seventh register value that stores a value of the CE register, an eighth register value (ACC0) that stores a value of an accumulation register, a ninth register value (A0) that stores an address register, and a notification register (N0).
, And a tenth register value that stores the value of the execution mask. As an example, the IP register can implement a pointer to the current instruction in GPU memory. In some examples, each of the threads 208 may have their own IP. Additional or alternative, the portion of ARF can include less or more ARF states than shown in the example of FIG.In an exemplary operation, the trace extractor 420 of FIG. 4 can decode a GLIT such as the GLIT 600 of FIG. 6 to generate and / or otherwise output the decoded binary data and GPU state. In some examples, the trace extractor 420 stores a portion of the decoded binary data as kernel instruction static data 1002 that utilizes the instruction identifier as an index. In some examples, the trace extractor 420 stores a portion of the decoded binary data as GPU states 1008, 1010. In some examples, the trace extractor 420 is one of the kernel instruction static data 1002 (s), an opcode in the opcode emulation table 1004, one of the emulation routines 1006 (s), and. / Or associate one (s) of GPU states 1008 and 1010. For example, the trace extractor 420 can determine that INST 0 corresponds to OPCODE 0 and OPCODE 0 corresponds to ADD_EMUL. In such an example, the trace extractor 420 can store at least one association of INST 0, OPCODE 0, ADD_EMUL, or the corresponding one of the GPU states 1008, 1010. For example, the trace extractor 420 can store the association in the storage 460 of FIG.In an exemplary operation, the trace emulator 430 of FIG. 4 can emulate the execution of a GPU kernel in which kernel instruction static data 1002 and / or GPU states 1008 and 1010 are generated. In some examples, the trace emulator 430 can replay a GLIT such as the GLIT 600 by selecting INST 0 to run. In some examples, in response to selecting INST 0 to run, the trace emulator 430 calls the first emulation routine of ADD_EMUL and Arguments INST 0, OPCODE 0, or to the first emulation routine. 
Advantageously, the trace emulator 430 can replay the execution of a GPU kernel, such as the second kernel 108 of FIG. 1, by executing (e.g., iteratively executing) one(s) of the emulation routines 1006 that correspond to the GPU kernel instructions represented by the information contained in the kernel instruction static data 1002.

Flowcharts and/or source code representative of exemplary hardware logic, machine-readable instructions, hardware-implemented state machines, and/or any combination thereof for implementing the exemplary GLIT engine 102 of FIGS. 1 and/or 4 are shown in FIGS. 11-17. The machine-readable instructions may be one or more executable programs, or portion(s) of an executable program, for execution by a computer processor and/or processor circuitry, such as the processor 1812 shown in the exemplary processor platform 1800 discussed below in connection with FIG. 18. The program may be embodied in software stored on a non-transitory computer-readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 1812, but the entire program and/or portions thereof may alternatively be executed by a device other than the processor 1812 and/or embodied in firmware or dedicated hardware. Further, although the exemplary program is described with reference to the source code and/or flowcharts illustrated in FIGS. 11-17, many other methods of implementing the exemplary GLIT engine 102 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated) structured to perform the corresponding operations without executing software or firmware.
Such hardware circuits may be implemented by analog and/or digital circuitry, FPGAs, ASICs, comparators, operational amplifiers (op-amps), logic circuits, etc. The processor circuitry may be distributed in different network locations and/or local to one or more devices (e.g., a multi-core processor in a single machine, multiple processors distributed across a server rack, etc.).

The machine-readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, and the like. The machine-readable instructions described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) or data structures that may be utilized to create, manufacture, and/or produce machine-executable instructions. For example, the machine-readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine-readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine-readable instructions may be stored in multiple parts that are individually compressed, encrypted, and stored on separate computing devices, and the parts may be decrypted, decompressed, and combined.
Together, the parts form a set of executable instructions that implement one or more functions that may together form a program such as that described herein.

In another example, the machine-readable instructions may be stored in a state in which they may be read by a processor circuit, but which requires addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an API, etc., in order to execute the instructions on a particular computing device or other device. In another example, the machine-readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine-readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine-readable media, as used herein, may include machine-readable instructions and/or program(s) regardless of the particular format or state of the machine-readable instructions and/or program(s) when stored or otherwise at rest or in transit.

The machine-readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine-readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HTML (HyperText Markup Language), SQL (Structured Query Language), etc.

As mentioned above, the exemplary processes of FIGS. 11-17 may be implemented using executable instructions (e.g., computer- and/or machine-readable instructions) stored on a non-transitory computer- and/or machine-readable medium such as an HDD, flash memory, read-only memory, a CD, a DVD, a cache, random-access memory, and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
As used herein, the term non-transitory computer-readable medium is expressly defined to include any type of computer-readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.

"Including" and "comprising" (and all forms and tenses thereof) are used herein as open-ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as a transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open-ended. The term "and/or," when used, for example, in a form such as A, B, and/or C, refers to any combination or subset of A, B, C, such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.

As used herein, singular references (e.g., "a", "an", "first", "second", etc.) do not exclude a plurality. The term "a" or "an" entity, as used herein, refers to one or more of that entity. The terms "a" (or "an"), "one or more", and "at least one" may be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements, or method actions may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.

FIG. 11 illustrates exemplary source code 1100 for emulating the execution of an exemplary instrumented kernel, such as the second kernel 108 of FIG. 1, the kernel 706 of FIG. 7, and/or the second kernel 904 of FIG. 9. Alternatively, any other source code may be executed to emulate the execution of the instrumented kernel. In some examples, the source code 1100 of FIG. 11 may be representative of machine-readable instructions that may be executed by the trace emulator 430 of FIG. 4 and/or, more generally, the GLIT engine 102 of FIGS. 1 and/or 4. For example, the trace emulator 430 may execute the source code 1100 to emulate (e.g., iteratively emulate) the instructions contained in an instrumented kernel executed by a GPU, such as the GPU 110 of FIG. 1.

In some examples, in response to executing the source code 1100, the trace emulator 430 may select one of the instructions contained in the kernel instruction static data 1002 of FIG. 10.
For example, the trace emulator 430 can select the instruction corresponding to INST 0. In some examples, in response to executing the source code 1100, the trace emulator 430 can determine whether the instruction corresponding to INST 0 is a SEND instruction to global memory (e.g., the cache memory 210) or to a sampler (e.g., the sampler 216 of FIG. 2). If the trace emulator 430 determines that the instruction is a SEND instruction to global memory or to a sampler, the trace emulator 430 can update register values from the trace. For example, the trace emulator 430 may update the register values based on the GPU states 1008, 1010 before and/or after executing the instruction.

In some examples, if the trace emulator 430 determines that the instruction is not a SEND instruction to global memory or to a sampler, the trace emulator 430 can emulate the instruction. For example, the trace emulator 430 may emulate the instruction by calling one of the emulation routines in the emulation routine table 1006. In this example, the trace emulator 430 may execute (e.g., iteratively execute) the source code 1100 for one or more of the instructions contained in the kernel instruction static data 1002 of FIG. 10.

FIG. 12 shows exemplary source code 1200 for emulating the execution of an exemplary software thread. Alternatively, any other source code may be executed to emulate the execution of the software thread. In some examples, the source code 1200 of FIG. 12 may be representative of machine-readable instructions that may be executed by the trace emulator 430 of FIG. 4 and/or, more generally, the GLIT engine 102 of FIGS. 1 and/or 4. For example, the trace emulator 430 may execute the source code 1200 to emulate an instance in which a kernel, such as the second kernel 108, is dispatched to a hardware thread, such as one of the threads 208, by emulating the instructions contained in the kernel.
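A minimal sketch of such a software-thread emulation loop, assuming a hypothetical program table keyed by instruction-pointer offset and a hypothetical end-of-thread flag on the last instruction, could look like:

```python
# Hypothetical sketch of the thread-emulation loop of FIG. 12: the next
# instruction is selected from the instruction-pointer (IP) GPU state,
# dispatched to its emulation routine, and the loop runs until the
# end-of-thread (EOT) marker is reached.

def emulate_thread(instructions, state):
    # `instructions` maps an IP offset to (routine, operands, is_eot).
    trace = []
    while True:
        ip = state["IP"]
        routine, operands, is_eot = instructions[ip]
        routine(state, *operands)   # emulate the instruction
        trace.append(ip)            # record which offset was emulated
        if is_eot:
            break
    return trace

def add_emul(state, dst, src0, src1):
    state[dst] = state[src0] + state[src1]
    state["IP"] += 1                # advance to the next instruction

def nop_emul(state):
    state["IP"] += 1

program = {
    0: (add_emul, ("r2", "r0", "r1"), False),
    1: (nop_emul, (), True),        # last instruction raises EOT
}
state = {"IP": 0, "r0": 2, "r1": 3, "r2": 0}
executed = emulate_thread(program, state)
```

The dictionary-based state and the `is_eot` flag are illustrative stand-ins for the ARF IP register and the EOT instruction described in the text.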
In some examples, in response to executing the source code 1200, the trace emulator 430 emulates the instructions contained in the kernel by determining an offset identifier (e.g., an offset value) (OffsetToID) corresponding to one of the instructions in the kernel. For example, the trace emulator 430 may determine the offset identifier based on the GPU state (State.IP) of the IP register value of the ARF. In some examples, in response to executing the source code 1200, the trace emulator 430 returns an instruction (ins), such as INST 0 of FIG. 10, based on the instruction identifier. In some examples, in response to executing the source code 1200, the trace emulator 430 identifies the opcode based on the instruction. In some examples, in response to executing the source code 1200, the trace emulator 430 identifies one of the emulation routines in the emulation routine table 1006 based on the opcode. In some examples, in response to executing the source code 1200, the trace emulator 430 executes the identified emulation routine, utilizing the instruction and one or more GPU states (State) as exemplary arguments. In this example, the trace emulator 430 executes (e.g., iteratively executes) the source code 1200 until an end-of-thread (EOT) instruction is generated, which may be generated in response to execution of the last kernel instruction. For example, the EOT instruction may be generated in response to INST N being emulated.

FIG. 13 shows exemplary source code 1300 for emulating the execution of an exemplary instrumented software thread. Alternatively, any other source code may be executed to emulate the execution of the instrumented software thread. In some examples, the source code 1300 of FIG. 13 may be representative of machine-readable instructions that may be executed by the trace emulator 430 of FIG. 4 and/or, more generally, the GLIT engine 102 of FIGS. 1 and/or 4.
For example, the trace emulator 430 may execute the source code 1300 to emulate an instance in which a kernel, such as the second kernel 108, is dispatched to a hardware thread, such as one of the threads 208, by emulating the instructions contained in the kernel.

In some examples, the source code 1300 may be implemented by instrumenting the source code 1200 of FIG. 12 with a first exemplary instrumentation routine (e.g., an instrumentation routine, an instrumentation instruction, etc.) 1302 and a second exemplary instrumentation routine 1304. For example, the trace emulator 430 may execute the first instrumentation routine 1302 before executing the emulation routine (EmulRoutes) and execute the second instrumentation routine 1304 after executing the emulation routine.

In some examples, in response to executing the first instrumentation routine 1302, the trace emulator 430 executes a callback routine (e.g., "CallbackBefore();") to invoke an API and provide the GPU state of the hardware thread that executed the software thread to a higher-level component, such as the application 120 of FIG. 1 or the hardware profiling analysis tool 718 of FIG. 7.

In some examples, in response to executing the second instrumentation routine 1304, the trace emulator 430 executes a callback routine (e.g., "CallbackAfter();") to invoke an API and provide the GPU state of the hardware thread that executed the software thread to a higher-level component, such as the application 120 of FIG. 1 or the hardware profiling analysis tool 718 of FIG. 7. Advantageously, by registering the callback routines with a higher-level component, the trace emulator 430 can provide the GPU state of the hardware thread before and/or after executing the emulation routine, making it possible to determine the change in GPU state in response to the execution of the emulation routine.

FIG. 14 shows exemplary source code 1400 for implementing an emulation routine.
Alternatively, other source code may be executed to implement the emulation routine. In some examples, the source code 1400 may implement one(s) of the emulation routines in the emulation routine table 1006 and/or the emulation routines (EmulRoutes) of FIG. 12. In some examples, the source code 1400 of FIG. 14 may be representative of machine-readable instructions that may be executed by the trace emulator 430 of FIG. 4 and/or, more generally, the GLIT engine 102 of FIGS. 1 and/or 4 to simulate the execution of a GPU kernel instruction, such as an instruction contained in the second kernel 108 of FIG. 1.

In some examples, in response to executing the source code 1400, the trace emulator 430 may prepare data for emulating the instruction by determining a first source operand (src0) and a second source operand (src1). For example, the trace emulator 430 may determine the first source operand based on a first GPU state, such as the GRF state associated with the hardware thread that executed the GPU kernel instruction. In some examples, in response to executing the source code 1400, the trace emulator 430 may determine the second source operand based on a second GPU state, such as the ARF state associated with the hardware thread that executed the GPU kernel instruction.

In some examples, in response to executing the source code 1400, the trace emulator 430 may emulate the instruction by determining an execution mask (exec_mask), a destination register (dst), and the next IP register (next_ip). In some examples, in response to executing the source code 1400, the trace emulator 430 may commit a new GPU state, for processing, based on the GPU state, the destination register, and the next IP register. For example, the trace emulator 430 may store the new GPU state for subsequent processing and/or analysis.

FIG. 15 is a flowchart representative of machine-readable instructions 1500 that may be executed to implement the GLIT engine 102 of FIGS.
1 and/or 4 to improve the operation of the GPU. The machine-readable instructions 1500 of FIG. 15 begin at block 1502, at which the GLIT engine 102 instruments a kernel executed by a graphics processing unit (GPU). For example, the instruction generator 410 (FIG. 4) may instrument the first kernel 106 of FIG. 1 by inserting the profiling instructions 104A-104C of FIG. 1 to generate the second kernel 108 of FIG. 1.

At block 1504, the GLIT engine 102 sends the instrumented kernel to the GPU for execution. For example, the instruction generator 410 can provide the second kernel 108 for storage in the memory 116 of FIG. 1. In some examples, the GPU 110 of FIG. 1 can obtain the second kernel 108 from the instruction generator 410, the GPU driver 122 of FIG. 1, and/or the memory 116.

At block 1506, the GLIT engine 102 obtains a GPU long instruction trace (GLIT) from the GPU in response to the GPU executing the instrumented kernel. For example, in response to obtaining the second kernel 108, the GPU 110 may execute the second kernel 108. In some examples, in response to executing the second kernel 108, the GPU 110 can generate the GLIT 112 of FIG. 1, the GLIT 600 of FIG. 6, etc. For example, the GPU 110 can encode the GLIT 112 of FIG. 1, the GLIT 600 of FIG. 6, etc., in a binary format such as the GLIT format 300 of FIG. 3.

At block 1508, the GLIT engine 102 emulates the GLIT. For example, the trace extractor 420 (FIG. 4) can decode an encoded binary kernel capable of implementing the GLIT 112, the GLIT 600 of FIG. 6, etc. In some examples, the trace emulator 430 (FIG. 4) can instrument an emulation routine to provide the GPU state to the application 120 of FIG. 1 via one or more APIs before and/or after executing the instrumented emulation routine.
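The instrumented emulation described above for FIGS. 13 and 14 can be sketched as follows; the dictionary register file, operand names, and callback function names (mirroring the "CallbackBefore();"/"CallbackAfter();" examples) are illustrative assumptions:

```python
# Hypothetical sketch combining FIGS. 13 and 14: an emulation routine
# reads source operands from the register state, computes a destination
# value and the next IP, and is wrapped with callback routines invoked
# before and after emulation so a higher-level tool can observe the
# change in GPU state.

observed = []

def callback_before(state):
    # Snapshot the GPU state before the emulation routine runs.
    observed.append(("before", dict(state)))

def callback_after(state):
    # Snapshot the GPU state after the emulation routine runs.
    observed.append(("after", dict(state)))

def add_emulation_routine(state, dst, src0, src1):
    a = state[src0]        # first source operand (src0)
    b = state[src1]        # second source operand (src1)
    state[dst] = a + b     # destination register (dst)
    state["IP"] += 1       # next IP register (next_ip); commit new state

def instrumented(routine):
    # Wrap an emulation routine with the before/after callbacks.
    def wrapper(state, *operands):
        callback_before(state)
        routine(state, *operands)
        callback_after(state)
    return wrapper

state = {"IP": 0, "r0": 4, "r1": 6, "r2": 0}
instrumented(add_emulation_routine)(state, "r2", "r0", "r1")
```

Comparing the "before" and "after" snapshots yields the per-instruction state delta that the callbacks expose to a higher-level component such as a profiling tool.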
An exemplary process that may be executed to implement block 1508 is described below in connection with FIG. 16.

At block 1510, the GLIT engine 102 determines operating parameters of the GPU based on the emulated GLIT. For example, the trace analyzer 440 (FIG. 4) can determine a GPU state, an execution time parameter, a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter based on the emulation of the GLIT 112, the GLIT 600, etc.

At block 1512, the GLIT engine 102 determines whether to adjust the GPU workload based on the operating parameters. For example, the hardware configurator 450 (FIG. 4) can determine to increase the number of instructions executed by the GPU 110 in response to determining that the utilization of one(s) of the threads 208 of FIG. 2 and/or, more generally, the GPU slice 200 of FIG. 2 is less than a utilization threshold. In some examples, the hardware configurator 450 can determine that one or more of the threads 208 are not utilized based on the corresponding GPU state not changing its value in response to the distribution of the second kernel 108 to the GPU 110. In some examples, in response to determining that one(s) of the threads 208 are underutilized and/or unutilized based on the utilization threshold not being satisfied, the hardware configurator 450 may increase the number of instructions executed by the one(s) of the threads 208.

If, at block 1512, the GLIT engine 102 determines not to adjust the GPU workload based on the operating parameters, control proceeds to block 1516 to determine whether to generate another instrumented kernel. If, at block 1512, the GLIT engine 102 determines to adjust the GPU workload based on the operating parameters, then, at block 1514, the GLIT engine 102 invokes the GPU driver to adjust the GPU workload.
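The block 1512 decision can be sketched as a simple threshold check; the threshold value, thread names, and decision labels below are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch of block 1512: decide whether to adjust the GPU
# workload based on an operating parameter (here, per-thread utilization
# measured against a utilization threshold).

UTILIZATION_THRESHOLD = 0.75  # illustrative value

def adjust_workload(thread_utilizations):
    # Return a per-thread adjustment decision: increase the instructions
    # dispatched to underutilized threads, leave the others unchanged.
    decisions = {}
    for thread_id, utilization in thread_utilizations.items():
        if utilization < UTILIZATION_THRESHOLD:
            decisions[thread_id] = "increase instructions"
        else:
            decisions[thread_id] = "no change"
    return decisions

decisions = adjust_workload({"thread0": 0.40, "thread1": 0.90})
```

In the described system, the resulting decisions would be carried out by invoking the GPU driver (block 1514) rather than returned as strings.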
For example, the hardware configurator 450 can instruct the GPU driver 122 to increase the number of instructions executed by the GPU 110, decrease the number of instructions executed by the GPU 110, adjust the scheduling of the second kernel 108 among one(s) of the execution units 204, and/or a combination thereof.

In response to invoking the GPU driver at block 1514 to adjust the GPU workload, at block 1516 the GLIT engine 102 determines whether to generate another instrumented kernel. For example, the instruction generator 410 can determine to instrument a different kernel than the first kernel 106 of FIG. 1. In some examples, the instruction generator 410 determines to re-instrument the first kernel 106 by adding, subtracting, and/or modifying one(s) of the profiling instructions 104A-104C, by adding, subtracting, and/or modifying one(s) of the kernel instructions (e.g., INSTR1, INSTR2, etc. of FIG. 1), and/or a combination thereof.

At block 1518, the GLIT engine 102 determines whether to continue analyzing the GPU. For example, the trace emulator 430 can determine to continue the analysis of the GPU 110 in order to determine operating parameters associated with the GPU 110. In some examples, the trace emulator 430 can determine to continue the analysis by re-executing and/or re-emulating the GLIT 112 of FIG. 1, the GLIT 600 of FIG. 6, etc. If, at block 1518, the GLIT engine 102 determines to continue analyzing the GPU, control returns to block 1506 to obtain another GLIT from the GPU in response to the GPU executing the instrumented kernel; otherwise, the exemplary machine-readable instructions 1500 of FIG. 15 conclude.

FIG. 16 is a flowchart representative of machine-readable instructions 1600 that may be executed to implement the GLIT engine 102 of FIGS. 1 and/or 4 to emulate one or more exemplary GLITs.
In some examples, the machine-readable instructions 1600 may implement block 1508 of FIG. 15. The machine-readable instructions 1600 of FIG. 16 begin at block 1602, at which the GLIT engine 102 selects a graphics processing unit (GPU) long instruction trace (GLIT) to emulate. For example, the trace extractor 420 (FIG. 4) can select a first one of the GLIT 112 of FIG. 1, a first one of the GLIT 470 of FIG. 4, etc., to emulate. In some examples, the first one of the GLIT 112 may include one or more binary kernels, including a first binary kernel. In some examples, the first binary kernel may contain data corresponding to and/or otherwise associated with the GLIT 600 of FIG. 6. In such an example, the first binary kernel may have a binary format such as the GLIT format 300 of FIG. 3.

At block 1604, the GLIT engine 102 decodes the GLIT to generate decoded GLIT data including routines executed by the GPU. For example, the trace extractor 420 can decode the first binary kernel to generate and/or otherwise output the record 126 of FIG. 1, the record 302 of FIG. 3, etc. In some examples, the trace extractor 420 can identify the kernel instruction static data 1002 of FIG. 10 based on the record 126 of FIG. 1, the record 302 of FIG. 3, etc. For example, the trace extractor 420 can identify routines executed by the GPU 110, such as addition instructions, multiplication instructions, SEND instructions, READ SEND instructions, and/or a combination thereof.

At block 1606, the GLIT engine 102 stores the decoded GLIT data based on instruction identifiers. For example, the trace extractor 420 can store the kernel instruction static data 1002 by using the instruction identifiers decoded from the first binary kernel as an index. In some examples, the trace extractor 420 can store the decoded GLIT data in the storage 460 (FIG. 4).

At block 1608, the GLIT engine 102 identifies an emulation routine based on an identifier of a routine executed by the GPU.
For example, the trace extractor 420 can identify a first one of the emulation routines in the emulation routine table 1006 based on the opcode corresponding to a first one of the routines in the kernel instruction static data 1002.

At block 1610, the GLIT engine 102 stores an association of at least one of the instruction identifiers or the emulation routines. For example, the trace extractor 420 can associate one(s) of the instruction identifiers (e.g., INST 0, INST 1, INST 2, etc.) of the kernel instruction static data 1002 of FIG. 10, one(s) of the opcodes (e.g., OPCODE 0, OPCODE 1, OPCODE 2, etc.) of the opcode emulation table 1004 of FIG. 10, and/or one(s) of the emulation routines (e.g., ADD_EMUL, SUB_EMUL, MUL_EMUL, etc.) of the emulation routine table 1006 of FIG. 10. In some examples, the trace extractor 420 can store the association in the storage 460.

At block 1612, the GLIT engine 102 instruments the emulation routine with a callback routine. For example, the trace emulator 430 (FIG. 4) can instrument one(s) of the emulation routines included in the emulation routine table 1006 of FIG. 10 by inserting exemplary instrumentation instructions, such as the first instrumentation routine 1302 of FIG. 13 and/or the second instrumentation routine 1304 of FIG. 13, into the source code 1300 of FIG. 13.

At block 1614, the GLIT engine 102 registers the callback routine to invoke an application programming interface (API). For example, the trace emulator 430 can register an instrumented version of the emulation routines in the emulation routine table 1006 with the application 120 of FIG. 1, an OS running on the CPU 118 of FIG. 1, and/or a combination thereof.

At block 1616, the GLIT engine 102 executes the instrumented emulation routine to invoke the API and observe the GPU state.
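The associations stored at blocks 1606-1610 might be organized as a simple index, sketched below with the identifier and routine names used in FIG. 10; the table layout itself is a hypothetical illustration:

```python
# Hypothetical sketch of blocks 1606-1610: store decoded GLIT data
# indexed by instruction identifier and associate each identifier with
# an opcode and the name of its emulation routine.

OPCODE_TO_ROUTINE = {0: "ADD_EMUL", 1: "SUB_EMUL", 2: "MUL_EMUL"}

def build_associations(decoded_instructions):
    # `decoded_instructions` is a list of (instruction_id, opcode) pairs
    # decoded from the binary kernel.
    storage = {}
    for instruction_id, opcode in decoded_instructions:
        storage[instruction_id] = {
            "opcode": opcode,
            "emulation_routine": OPCODE_TO_ROUTINE[opcode],
        }
    return storage

associations = build_associations([("INST 0", 0), ("INST 1", 2)])
```

In the described system this index would live in the storage 460 and also reference the corresponding GPU states 1008, 1010.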
For example, in response to executing a registered callback routine contained in the instrumented emulation routine, the trace emulator 430 may execute the registered callback routine to invoke one or more APIs and observe the GPU state. In some examples, the GPU state may correspond to the GPU states 1008, 1010 of FIG. 10. For example, in response to invoking the one or more APIs, the trace emulator 430 can observe a first value of the GPU state of one of the threads 208 before execution of the second kernel 108 and/or a second value of the GPU state of that one of the threads 208 after execution of the second kernel 108.

At block 1618, the GLIT engine 102 determines whether to select another GLIT to emulate. For example, the trace emulator 430 and/or the trace analyzer 440 can determine to select another one of the GLIT 112 to emulate. If, at block 1618, the GLIT engine 102 determines to select another GLIT to process, control returns to block 1602 to select another GLIT to emulate. If, at block 1618, the GLIT engine 102 determines not to select another GLIT to emulate, control returns to block 1510 of the exemplary machine-readable instructions 1500 of FIG. 15 to determine the operating parameters of the GPU based on the emulated GLIT. Additionally or alternatively, the exemplary machine-readable instructions 1600 of FIG. 16 may conclude.

FIG. 17 is a flowchart representative of machine-readable instructions 1700 that may be executed to implement the GLIT engine 102 of FIGS. 1 and/or 4 to improve the operation of the GPU. The machine-readable instructions 1700 of FIG. 17 begin at block 1702, at which the GLIT engine 102 and/or, more generally, the CPU 118 of FIG. 1 inserts profiling routines into a kernel including graphics processing unit (GPU) instructions to be executed by a GPU. For example, the instruction generator 410 (FIG.
4) may insert the profiling instructions 104A-104C into the first kernel 106 to generate the second kernel 108 of FIG. 1 to be executed by the GPU 110 of FIG. 1. In some examples, the first kernel 106 and the second kernel 108 include GPU instructions such as addition instructions, multiplication instructions, SEND instructions, READ SEND instructions, and/or a combination thereof.

At block 1704, the GPU 110 distributes the kernel for execution by hardware threads (HWTs) of the GPU. For example, the instruction generator 410 can provide the second kernel 108 for storage in the memory 116 of FIG. 1. In some examples, the GPU 110 of FIG. 1 can obtain the second kernel 108 from the instruction generator 410, the GPU driver 122 of FIG. 1, and/or the memory 116. For example, the local thread dispatcher 220 of FIG. 2 may obtain the second kernel 108 and distribute the second kernel 108 to one(s) of the threads 208 for execution.

At block 1706, the GPU 110 determines first register values of respective first registers of a general-purpose register file (GRF) of the HWTs. For example, a first thread of the threads 208 can determine one or more first register values of one or more respective first registers of a first GRF implemented by the first thread. In some examples, a second thread of the threads 208 can determine one or more second register values of one or more respective second registers of a second GRF implemented by the second thread.

At block 1708, the GPU 110 determines second register values of respective second registers of an architecture register file (ARF) of the HWTs. For example, the first thread of the threads 208 can determine one or more third register values of one or more respective third registers of a first ARF implemented by the first thread. In some examples, the second thread of the threads 208 can determine one or more fourth register values of one or more respective fourth registers of a second ARF implemented by the second thread.
At block 1710, the GPU 110 determines third register values in response to the HWTs executing the GPU instructions. For example, the first thread of the threads 208 can determine one or more fifth register values of one or more respective first destination registers in response to the first thread executing a SEND instruction to the sampler 216 of FIG. 2, the cache memory 210 of FIG. 2, etc. In some examples, the second thread of the threads 208 can determine one or more sixth register values of one or more respective second destination registers in response to the second thread executing a SEND instruction to the sampler 216 of FIG. 2, the cache memory 210 of FIG. 2, etc.

At block 1712, the GPU 110 stores the first register values, the second register values, the third register values, and the GPU instructions in a GPU long instruction trace (GLIT). For example, the first thread of the threads 208 can store at least one of the one or more first register values, the one or more third register values, the one or more fifth register values, or one or more of the GPU instructions in an encoded binary file that can implement a GLIT, such as one of the GLITs 112 of FIG. 1, one of the GLITs 470 of FIG. 4, or the GLIT 600 of FIG. 6. In some examples, the second thread of the threads 208 can store at least one of the one or more second register values, the one or more fourth register values, the one or more sixth register values, or one or more of the GPU instructions in the encoded binary file.

At block 1714, the GLIT engine 102 and/or, more generally, the CPU 118 inserts callback routines into a routine to invoke APIs to provide information from the GLIT to an application. For example, the trace emulator 430 (FIG. 4) can insert the first instrumentation routine 1302 and/or the second instrumentation routine 1304 of FIG. 13 into the source code 1300 of FIG. 13 to provide data from the GLIT 112, such as GPU states, to the application 120 of FIG.
1 via one or more APIs.

At block 1716, the GLIT engine 102 and/or, more generally, the CPU 118 determines operating parameters of the GPU based on the GLIT, including a utilization of the GPU. For example, the trace analyzer 440 (FIG. 4) can determine one or more operating parameters of the GPU 110, including the utilization of the GPU 110, based on the data from the GLIT 112.

At block 1718, the GLIT engine 102 and/or, more generally, the CPU 118 compares the operating parameters to thresholds. For example, the trace analyzer 440 can compare the utilization to a threshold, such as a utilization threshold. In some examples, the trace analyzer 440 can compare a busy time, an occupancy, etc. of the GPU 110 to a busy-time threshold, an occupancy threshold, etc.

At block 1720, the GLIT engine 102 and/or, more generally, the CPU 118 adjusts a number of computational tasks executed by the GPU based on the comparison. For example, the hardware configurator 450 (FIG. 4) can determine to increase the number of computational tasks executed by the GPU 110 or a different GPU based on the comparison of the utilization to the utilization threshold. In some examples, in response to determining that a utilization of 70% of the GPU 110 is below a utilization threshold of 90% and therefore does not satisfy the utilization threshold, the hardware configurator 450 may instruct and/or cause the GPU driver 122 of FIG. 1 to increase the number of computational tasks, kernels, etc. executed by the GPU 110. The exemplary machine-readable instructions 1700 of FIG. 17 end in response to adjusting the number of computational tasks executed by the GPU based on the comparison at block 1720.

FIG. 18 is a block diagram of an exemplary processor platform 1800 configured to execute the instructions of FIGS. 11-17 to implement the GLIT engine 102 of FIGS. 1 and/or 4.
The processor platform 1800 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), or any other type of computing device.

The processor platform 1800 of the example shown includes a processor 1812. The processor 1812 of the example shown is hardware. For example, the processor 1812 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers of any desired family or manufacturer. The hardware processor may be a semiconductor-based (e.g., silicon-based) device. In this example, the processor 1812 implements the exemplary instruction generator 410, the exemplary trace extractor 420, the exemplary trace emulator 430, the exemplary trace analyzer 440, and the exemplary hardware configurator 450.

The processor 1812 of the example shown includes local memory 1813 (e.g., a cache). The processor 1812 of the example shown communicates via a bus 1818 with main memory including volatile memory 1814 and non-volatile memory 1816. The volatile memory 1814 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of random access memory device. The non-volatile memory 1816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1814, 1816 is controlled by a memory controller.

The processor platform 1800 of the example shown also includes an interface circuit 1820. The interface circuit 1820 may be implemented by any type of interface standard, such as an Ethernet® interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI Express interface.

In the example shown, one or more input devices 1822 are connected to the interface circuit 1820.
The input devices 1822 permit a user to enter data and/or commands into the processor 1812. The input devices can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touch screen, a trackpad, a trackball, an isopoint device, and/or a voice recognition system.

One or more output devices 1824 are also connected to the interface circuit 1820 of the example shown. The output devices 1824 can be implemented, for example, by display devices (e.g., a light-emitting diode (LED), an organic light-emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touch screen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuit 1820 of the example shown thus typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.

The interface circuit 1820 of the example shown also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate the exchange of data with external machines (e.g., computing devices of any kind) via the network 1826. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.

The processor platform 1800 of the example shown also includes one or more mass storage devices 1828 for storing software and/or data. Examples of such mass storage devices 1828 include floppy disk drives, hard disk drives, compact disc drives, Blu-ray disc drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. In this example, the one or more mass storage devices 1828 implement the storage 460 of FIG.
4 that stores the exemplary GLITs 470 of FIG. 4.

The machine-executable instructions 1832 of FIGS. 11-17 may be stored in the mass storage device 1828, in the volatile memory 1814, in the non-volatile memory 1816, and/or on a removable non-transitory computer-readable storage medium such as a CD or DVD.

A block diagram illustrating an exemplary software distribution platform 1905 for distributing software, such as the exemplary computer-readable instructions 1832 of FIG. 18, to third parties is shown in FIG. 19. The exemplary software distribution platform 1905 may be implemented by any computer server, data facility, cloud service, etc. capable of storing the software and transmitting it to other computing devices. The third parties may be customers of the entity owning and/or operating the software distribution platform. For example, the entity that owns and/or operates the software distribution platform can be a developer, a seller, and/or a licensor of software such as the exemplary computer-readable instructions 1832 of FIG. 18. The third parties may be consumers, users, retailers, OEMs, etc. that purchase and/or license the software for use and/or resale and/or sublicensing. In the example shown, the software distribution platform 1905 includes one or more servers and one or more storage devices. As described above, the storage devices store the computer-readable instructions 1832, which may correspond to the exemplary computer-readable instructions 1100, 1200, 1300, 1400, 1500, 1600, 1700 of FIGS. 11-17. The one or more servers of the exemplary software distribution platform 1905 are in communication with a network 1910, which may correspond to any one or more of the Internet and/or any of the exemplary networks 1826 described above. In some examples, the one or more servers respond to requests to transmit the software to a requesting party as part of a commercial transaction.
Payments for delivery, sale, and/or licensing of the software may be handled by the one or more servers of the software distribution platform 1905 and/or via a third-party payment entity. The servers enable purchasers and/or licensors to download the computer-readable instructions 1832 from the software distribution platform 1905. For example, the software, which may correspond to the exemplary computer-readable instructions 1832 of FIG. 18, may be downloaded to the exemplary processor platform 1800, which executes the computer-readable instructions 1832 to implement the exemplary GLIT engine 102 of FIGS. 1 and/or 4. In some examples, one or more servers of the software distribution platform 1905 periodically offer, transmit, and/or force updates to the software (e.g., the exemplary computer-readable instructions 1832 of FIG. 18) to ensure that improvements, patches, updates, etc. are distributed and applied to the software at the end-user devices.

From the foregoing, it will be appreciated that exemplary systems, methods, devices, and manufactured articles have been disclosed that can be used to improve the operation of hardware processors such as GPUs. The disclosed systems, methods, devices, and manufactured articles define LITs for different hardware processors, such as GPUs, and facilitate the development of flexible analysis tools that can be written in high-level languages such as C, C++, etc. Advantageously, such analysis tools can analyze the behavior of a hardware processor to generate profiling data at the granularity level of a single hardware thread of the hardware processor.
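The trace-replay and threshold-based adjustment flow described above (e.g., blocks 1714-1720 of FIG. 17) can be sketched in a high-level language. The following Python sketch is illustrative only: the record layout, the callback signatures, and the toy utilization metric are simplifying assumptions for exposition, not the actual GLIT encoding or the APIs of the disclosed engine.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical record mirroring the GLIT contents described above: a GPU
# instruction plus the register state captured before and after a hardware
# thread executed it (cf. blocks 1706-1712).
@dataclass
class GlitRecord:
    thread_id: int
    instruction: str
    regs_before: Dict[str, int]
    regs_after: Dict[str, int]

def emulate_glit(records: List[GlitRecord],
                 on_before: Callable[[GlitRecord], None],
                 on_after: Callable[[GlitRecord], None]) -> float:
    """Replay a trace, invoking callback routines around each instruction
    (cf. block 1714), and return a toy utilization figure: the fraction of
    replayed instructions that changed any register value."""
    busy = 0
    for rec in records:
        on_before(rec)   # callback exposing the pre-execution GPU state
        if rec.regs_after != rec.regs_before:
            busy += 1
        on_after(rec)    # callback exposing the post-execution GPU state
    return busy / len(records) if records else 0.0

def adjust_workload(utilization: float, threshold: float, tasks: int) -> int:
    """Blocks 1718-1720: when the utilization does not satisfy the
    threshold, increase the number of computational tasks dispatched."""
    return tasks + 1 if utilization < threshold else tasks
```

For instance, replaying a two-record trace in which only the first instruction altered a register yields a utilization of 0.5, which falls short of a 0.9 threshold, so `adjust_workload` raises the task count.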
Advantageously, the disclosed systems, methods, equipment, and manufacturing articles can generate multithreaded traces because the same kernel can be distributed to multiple threads of the hardware processor.The disclosed systems, methods, devices, and manufacturing articles can improve the development of models such as kernel debugging, memory, cache, and samplers that can be used to improve the operation of the GPU. For example, the disclosed systems, methods, equipment, and manufacturing articles may be used to improve the behavior of a computing device's hardware processor by increasing the amount of computational tasks performed by the hardware processor. Improve the efficiency of use. Accordingly, the disclosed methods, devices, and manufactured articles are intended to improve one or more of the functions of the computer.Exemplary methods, devices, systems, and manufactured articles for generating long instruction traces for graphics processing units are disclosed herein. Further examples and combinations thereof include:Example 1 is to identify the first routine based on at least one memory and at least one processor, the identifier of the second routine executed by the graphics processing unit (GPU). Routine 1 is to identify and execute the first routine to determine the first value of the GPU state of the GPU, based on the emulation of the second routine, where the first routine is ( i) To execute, with a first argument associated with the second routine, and (ii) a second argument corresponding to the second value of the GPU state before executing the first routine. 
Includes a device comprising controlling a GPU workload based on a first value of GPU state, and at least one processor executing at least an instruction to do so.Example 2 is the apparatus according to Example 1, wherein the GPU state is the state of the first register in the architecture register file associated with the hardware thread of the GPU or the second register in the general purpose register file of the hardware thread. include.In Example 3, the identifier is a first identifier extracted from an encoded binary file, where at least one processor inserts one or more profile routines into the kernel executed by the GPU's hardware threads. That is, determining the value of 1, the second value, and the hardware thread identifier from the long instruction trace, which is in response to the execution of one or more profile routines by the hardware thread. , Generated by the hardware thread, the first value corresponds to the GPU register value after the hardware thread has executed the kernel, and the second value corresponds to the GPU register value before the hardware thread has executed the kernel. However, the hardware thread identifier includes the device according to any one of Examples 1 and 2, which identifies and determines the hardware thread.In Example 4, the hardware thread is the first hardware thread, the long instruction trace is the first long instruction trace associated with the first hardware thread, and the encoded binary file is An encoded binary file containing a first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads represents a multithreaded GPU trace. , The apparatus according to any one of Examples 1 to 3.In Example 5, the kernel contains device access instructions executed by hardware threads, where at least one processor is one or more first of each of one or more of the general purpose register files of the GPU. 
Determining a register value and determining one or more second register values in one or more of each second register in the GPU's architecture register file and one or more first register values, One or more second register values, one or more third register values, and device access instructions are stored in the long instruction trace, where one or more third register values are device access instructions. The device according to any one of Examples 1 to 4, which performs storage and corresponds to each one or more destination registers associated with.In Example 6, at least one processor determines the GPU utilization based on the first GPU state, compares the utilization with the threshold, and the threshold is satisfied based on the comparison. Controlling the GPU workload by performing at least one of a second routine adjustment, or an increase in the number of computational tasks performed by the GPU, in response to determining that it is not. The apparatus according to any one of Examples 1 to 5 is included.In Example 7, the first routine is an instrumentation routine that includes an emulation routine, wherein at least one processor inserts the first callback routine into the instrumentation routine before the emulation routine. The callback routine of 1 calls the first application programming interface (API) to provide the application with a second GPU state, inserts, and a second callback routine in the instrumentation routine after the emulation routine. The second callback routine calls the first API or the second API to provide, insert, and insert the first GPU state into the application, eg 1. The device according to any one of 6 to 6.Example 8 is at least one storage device containing an instruction, which, when executed, is based on the identifier of a second routine executed by the graphics processing unit (GPU) on at least one processor. 
Identifying the first routine, the first routine executing a first routine for identifying and determining the first value of the GPU state of the GPU, based on the emulation of the second routine. That is, the first routine corresponds to (i) the first argument associated with the second routine, and (ii) the second value of the GPU state before executing the first routine. Includes at least one storage device that has a second argument to execute and that controls the workload of the GPU based on the first value of the GPU state.Example 9 is at least described in Example 8, wherein the GPU state is the state of the first register in the architecture register file associated with the hardware thread of the GPU, or the second register in the general purpose register file of the hardware thread. Includes one storage device.In Example 10, the identifier is the first identifier extracted from the encoded binary file, and the instruction, when executed, to at least one processor, to the kernel executed by the GPU's hardware thread. Inserting one or more profile routines and determining a value of 1, a second value, and a hardware thread identifier from a long instruction trace, a long instruction trace is one or more by a hardware thread. Generated by the hardware thread in response to the execution of the profile routine in, the first value corresponds to the GPU register value after the hardware thread has executed the kernel, and the second value is the hardware thread's kernel. 
Corresponds to the GPU register value before execution, and the hardware thread identifier includes the storage device according to any one of Examples 8 to 9, which causes the hardware thread to be identified, determined, and performed.In Example 11, the hardware thread is the first hardware thread, the long instruction trace is the first long instruction trace associated with the first hardware thread, and the encoded binary file is An encoded binary file containing a first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads is a multithreaded GPU trace. Represents include at least one storage device according to any one of Examples 8-10.In Example 12, the kernel contains a device access instruction executed by a hardware thread, and when the instruction is executed, it is in at least one processor with one or more first registers of each of the general purpose register files of the GPU. Determining one or more first register values of, and determining one or more second register values of one or more of each second register of one or more of the GPU's architecture register files. The above first register value, one or more second register values, one or more third register values, and device access instructions are stored in the long instruction trace, and one or more third registers are stored. At least one storage device according to any one of Examples 8 to 10, wherein the register value of is stored and stored, corresponding to each one or more destination registers associated with the device access instruction. including.In Example 13, when an instruction is executed, at least one processor determines the GPU utilization based on the first GPU state, compares the utilization with a threshold, and compares. 
In response to determining that the threshold is not met based on, at least one of the adjustments of the second routine, or the increase in the number of computational tasks performed by the GPU, is performed on the GPU. Includes at least one storage device according to any one of Examples 8-12, which controls and causes the workload to be performed.In Example 14, the first routine is an instrumentation routine that includes an emulation routine, and when an instruction is executed, it causes at least one processor to have a first callback routine in the instrumentation routine before the emulation routine. To insert, the first callback routine calls the first application programming interface (API) to provide the application with a second GPU state, insert, and instrument after the emulation routine. Inserting a second callback routine into the routine, the second callback routine calls the first API or the second API to provide the application with the first GPU state, insert. It includes at least one storage device according to any one of Examples 8 to 13, which causes the operation to be performed.Example 15 is a means for identifying the first routine based on the identifier of the second routine executed by the graphics processing unit (GPU), wherein the first routine emulates the second routine. Based on the means and means for executing the first routine for determining the first value of the GPU state of the GPU, the first routine is (i) the first associated with the second routine. And (ii) a means having a second argument corresponding to the second value of the GPU state before executing the first routine, and a GPU workload based on the first value of the GPU state. 
Includes devices, including means for controlling.Example 16 is the apparatus of Example 15, wherein the GPU state is the state of the first register in the architecture register file associated with the hardware thread of the GPU or the second register in the general purpose register file of the hardware thread. include.In Example 17, the identifier is a first identifier extracted from an encoded binary file, a means for inserting one or more profile routines into the kernel executed by the hardware thread of the GPU, and hardware. Performs to determine a first value, a second value, and a hardware thread identifier from a long instruction trace generated by a hardware thread in response to the execution of one or more profile routines by the hardware thread. The first value corresponds to the GPU register value before the hardware thread executes the kernel, and the second value corresponds to the GPU register value after the hardware thread executes the kernel. , The hardware thread identifier comprises the device according to any one of Examples 15-16, further comprising means for identifying the hardware thread.In Example 18, the hardware thread is the first hardware thread, the long instruction trace is the first long instruction trace associated with the first hardware thread, and the encoded binary file is An encoded binary file containing a first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads represents a multithreaded GPU trace. , A device according to any one of Examples 15-17.In Example 19, the kernel contains a device access instruction executed by a hardware thread, and the means for executing it is one or more first of each first register of one or more of the general purpose register files of the GPU. 
To determine the register value of, and to determine one or more second register values of one or more of each second register of the GPU's architecture register file, and to determine one or more first register values. One or more second register values, one or more third register values, and device access instructions are stored in the long instruction trace, where one or more third register values are device access. Includes the device according to any one of Examples 15-18, which corresponds to, stores, and performs for each one or more destination registers associated with the instruction.Example 20 further includes means for determining the GPU utilization based on the first GPU state, and the means for controlling is the first in response to the determination that the utilization does not meet the threshold. Any one of Examples 15-19, which controls the workload of the GPU by performing at least one of two routine adjustments or an increase in the number of increased computational tasks performed by the GPU. Including the device described in.In Example 21, the first routine is an instrumentation routine that includes an emulation routine, and the means for execution is to insert the first callback routine into the instrumentation routine before the emulation routine. The first callback routine calls the first application programming interface (API) to provide the application with a second GPU state, insert it, and make a second callback to the instrumentation routine after the emulation routine. Inserting a routine, wherein the second callback routine calls the first API or the second API to provide, insert, and insert a first GPU state into the application, eg. Includes at least one storage device according to any one of 15-20.Example 22 is a graphics processing unit (GPU) having a hardware thread, which determines the first value of the GPU state and executes a GPU routine contained in the kernel to execute the GPU state. 
A GPU and a central processing unit (CPU) that determines the second value of and generates a GPU routine, a first value, and a long instruction trace containing the second value. , Inserting one or more profile routines into the kernel and identifying the first routine based on the identifier of the GPU routine, the first routine being identified based on the emulation of the GPU routine. And to execute the first routine that replays the execution of the GPU routine that determines the second value of the GPU state, where the first routine is (i) the first argument associated with the GPU routine. , And (ii) having a second argument corresponding to the first value of the GPU state, performing execution and controlling the GPU workload based on the execution of the first routine, CPU. And, including, including the system.Example 23 is the system according to Example 22, wherein the GPU state is the state of the second register in the architecture register file associated with the hardware thread of the GPU or the second register in the general purpose register file of the hardware thread. include.In Example 24, the identifier is the first identifier extracted from the encoded binary file, the encoded binary file contains a long instruction trace, and the CPU has a first value, a second value. , And the system according to any one of Examples 22-23, wherein the hardware thread identifier determines the hardware thread identifier from the encoded binary file and identifies the hardware thread.In Example 25, the hardware thread is the first hardware thread, the long instruction trace is the first long instruction trace associated with the first hardware thread, and the encoded binary file is One or more second hardware threads include a first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads. 
In response to one or more executions, one or more second long instruction traces are generated and the encoded binary file is described in any one of Examples 22-24 representing a multithreaded GPU trace. Including the system of.In Example 26, the kernel contains a device access instruction executed by a hardware thread, and the GPU contains one or more first register values in one or more of each first register in the GPU's general purpose register file. Determining and determining one or more second register values in one or more of each one or more second registers in the GPU's architecture register file and one or more first register values. One or more second register values, one or more third register values, and device access instructions are stored in the long instruction trace, where one or more third register values are device access. Includes the system according to any one of Examples 22-25, which corresponds to, stores, and performs for each one or more destination registers associated with the instruction.In Example 27, the CPU determines the GPU utilization rate based on the first GPU state, compares the utilization rate with the threshold value, and the threshold value is not satisfied based on the comparison. In response to a determination, eg, perform at least one of a GPU routine c-tuning, or an increase in the number of computational tasks of a computational task performed by the GPU, to control the workload of the GPU. 22-26 includes the system according to any one of 22 and 26.In Example 28, the first routine is an instrumentation routine that includes an emulation routine, further includes an application, and the CPU inserts the first callback routine into the instrumentation routine before the emulation routine. The first callback routine then calls the first application programming interface (API) to provide the application with a second GPU state, insert it, and put it in the instrumentation routine after the emulation routine. 
inserting a second callback routine, the second callback routine to invoke the first API or a second API to provide the first GPU state to the application, the system of any one of Examples 22-27.

Example 29 is a method including: identifying a first routine based on an identifier of a second routine executed by a graphics processing unit (GPU), the first routine identified based on an emulation of the second routine; executing the first routine to determine a first value of a GPU state of the GPU, the first routine having (i) a first argument associated with the second routine and (ii) a second argument corresponding to a second value of the GPU state before execution of the first routine; and controlling a workload of the GPU based on the first value of the GPU state.

Example 30 includes the method of Example 29, wherein the GPU state includes a state of a first register in an architecture register file associated with a hardware thread of the GPU or a state of a second register in a general-purpose register file of the hardware thread.

Example 31 includes the method of any one of Examples 29-30, wherein the identifier is a first identifier extracted from an encoded binary file, the method further including: inserting one or more profile routines into a kernel to be executed by a hardware thread of the GPU; determining the second value of the GPU state before the hardware thread executes the kernel; and, in response to the hardware thread executing the kernel, generating a long instruction trace including the first value of the GPU state, the second value, and a second identifier corresponding to the hardware thread.

Example 32 includes the method of any one of Examples 29-31, wherein the hardware thread is a first hardware thread, the long instruction trace is a first long instruction trace associated with the first hardware thread, and the encoded binary file, which includes the first long instruction trace and one or more second long instruction traces associated with one or more second hardware threads, represents a multithreaded GPU trace.

Example 33 includes the method of any one of Examples 29-32, wherein the kernel includes a device access instruction to be executed by the hardware thread, the method further including: determining one or more first register values of respective first registers of a general-purpose register file of the GPU; determining one or more second register values of respective second registers of an architecture register file of the GPU; and storing the one or more first register values, the one or more second register values, one or more third register values, and the device access instruction in the long instruction trace, the one or more third register values corresponding to respective destination registers associated with the device access instruction.

Example 34 includes the method of any one of Examples 29-33, further including: determining a utilization of the GPU based on the first value of the GPU state; comparing the utilization with a threshold; and, in response to determining that the threshold is not satisfied based on the comparison, performing at least one of tuning a GPU routine or increasing a number of computation tasks executed by the GPU to control the workload of the GPU.

Example 35 includes the method of any one of Examples 29-34, wherein the first routine is an instrumentation routine that includes an emulation routine, the method further including: inserting a first callback routine into the instrumentation routine before the emulation routine, the first callback routine to invoke a first application programming interface (API) to provide the second GPU state to the application; and inserting a second callback routine into the instrumentation routine after the emulation routine, the second callback routine to invoke the first API or a second API to provide the first GPU state to the application.

Although certain example systems, methods, apparatus, and articles of manufacture are disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.

The following claims are hereby incorporated into this description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.
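The workload-control step recited in Example 34 can be sketched as follows. This is an illustrative reading only: the names (`GpuState`, the cycle-count fields, the returned action strings) are assumptions for the sketch, not identifiers from the disclosure.

```python
# Hypothetical sketch of Example 34: derive a utilization figure from
# captured GPU state, compare it with a threshold, and adjust the GPU
# workload when the threshold is not satisfied.

from dataclasses import dataclass

@dataclass
class GpuState:
    active_cycles: int   # cycles the hardware thread spent executing (assumed field)
    total_cycles: int    # cycles elapsed over the sampled window (assumed field)

def utilization(state: GpuState) -> float:
    """Utilization as the fraction of cycles spent executing."""
    return state.active_cycles / state.total_cycles

def control_workload(state: GpuState, threshold: float) -> str:
    """Leave the workload alone when the threshold is satisfied; otherwise
    request more computation tasks (one of the two actions Example 34 names)."""
    if utilization(state) >= threshold:
        return "no-op"
    return "increase-tasks"
```

For instance, `control_workload(GpuState(60, 100), 0.8)` yields `"increase-tasks"`, since a 60% utilization falls below the 80% threshold.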
Methods and systems to perform performance measurement job operations in a network function virtualization (NFV) network are discussed. An example system includes an apparatus configured to be employed within a network manager (NM) of the NFV network, comprising one or more processors configured to: output an NM request comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM); and process an EM response received from the EM, in response to the NM request, wherein the EM response comprises a parameter indicating a result of the respective PM jobs. The NM request can comprise a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs.
CLAIMS
What is claimed is:
1. An apparatus configured to be employed within a Network Manager (NM), comprising: one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to: output an NM request comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM) in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs; and process an EM response received from the EM, in response to the NM request, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.
2. The apparatus of claim 1, wherein the instructions comprise further operations, for execution via the one or more processors, to process an EM notification received from the EM when the one or more PM jobs are in operation, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.
3. The apparatus of any of claims 1-2, wherein the request to create a PM job includes an instance identifier, iOCInstanceList, that identifies the VNF instance for which the PM job is to be created, and the EM response comprises a PM job identifier, Jobid, for the PM job created and a parameter, status, indicating a result of the PM job creation.
4.
The apparatus of any of claims 1-2, wherein the request to stop the PM job includes a PM job identifier, Jobid, that identifies the PM job to be stopped, and the EM response comprises a parameter, status, indicating a result of stopping the PM job.
5. The apparatus of any of claims 1-2, wherein the request to suspend the PM job includes a PM job identifier, Jobid, that identifies the PM job to be suspended, and the EM response comprises a parameter, status, indicating a result of the PM job suspension.
6. The apparatus of any of claims 1-2, wherein the request to resume the PM job includes a PM job identifier, Jobid, that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the EM response comprises a parameter, status, indicating a result of the PM job resumption.
7. The apparatus of any of claims 1-2, wherein the request to list one or more PM jobs includes criteria to list the PM jobs, jobIdList, and the EM response comprises information on PM jobs that match the criteria, jobInfoList, and a parameter, status, indicating a result of the PM job listing.
8.
An apparatus configured to be employed within an Element Manager (EM), comprising: one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to: process a network manager (NM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an NM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs; output an EM request comprising a request associated with the one or more PM jobs to a virtualized network function manager (VNFM), wherein the EM request is generated based on the received NM request; process a VNFM response received from the VNFM in response to the EM request, wherein the VNFM response comprises information on respective identifiers that identify the one or more PM jobs; and output an EM response, generated based on the received VNFM response, to the NM, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.
9.
The apparatus of claim 8, wherein the instructions comprise further operations, for execution via the one or more processors, to: process a VNFM notification received from the VNFM in response to the EM request when the one or more PM jobs are in operation, wherein the VNFM notification indicates an availability of the VR PM data associated with the respective VNF instances; and output an EM notification, generated based on the received VNFM notification, to the NM, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.
10. The apparatus of any of claims 8-9, wherein the NM request and the EM request to create a PM job include a respective instance identifier that identifies the VNF instance for which the PM job is to be created, wherein the VNFM response comprises a PM job identifier for the PM job created, and wherein the EM response comprises a PM job identifier for the PM job created and a parameter, status, indicating a result of the PM job creation.
11. The apparatus of any of claims 8-9, wherein the NM request and the EM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, wherein the VNFM response comprises an identifier for the PM job deleted, and wherein the EM response comprises a parameter, status, indicating a result of the PM job deletion.
12. The apparatus of any of claims 8-9, wherein the NM request and the EM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, wherein the VNFM response comprises an identifier for the PM job suspended, and wherein the EM response comprises a parameter, status, indicating a result of the PM job suspension.
13.
The apparatus of any of claims 8-9, wherein the NM request and the EM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VNFM response comprises an identifier for the PM job resumed and the EM response comprises a parameter, status, indicating a result of the PM job resumption.
14. The apparatus of any of claims 8-9, wherein the NM request and the EM request to list PM jobs include criteria for listing one or more PM jobs meeting the criteria, and wherein the VNFM response and the EM response comprise the list of the one or more PM jobs that meet the criteria.
15. An apparatus configured to be employed within a virtualized network function manager (VNFM), comprising: one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to: process an element manager (EM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an EM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs; and output a VNFM response to the EM, in response to the EM request, wherein the VNFM response comprises information on the respective identifiers that identify the one or more PM jobs.
16.
The apparatus of claim 15, wherein the instructions comprise further operations, for execution via the one or more processors, to: output a VNFM request comprising a request associated with the one or more PM jobs to a virtualized infrastructure manager (VIM), wherein the VNFM request is generated based on the received EM request; and process a VIM response received from the VIM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs when the request associated with the one or more PM jobs is successfully completed, prior to generating the VNFM response.
17. The apparatus of claim 16, wherein the instructions comprise further operations, for execution via the one or more processors, to: process a VIM notification received from the VIM in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances; and output a VNFM notification, generated based on the received VIM notification, to the EM, wherein the VNFM notification indicates an availability of the VR PM data associated with the VNF instance.
18. The apparatus of any of claims 16-17, wherein the EM request and the VNFM request to create a PM job include an object instance identifier that identifies the VNF instance for which a PM job is to be created, and wherein the VIM response and the VNFM response comprise a PM job identifier for the PM job created.
19. The apparatus of any of claims 16-17, wherein the EM request and the VNFM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, and wherein the VIM response and the VNFM response comprise an identifier for the PM job deleted.
20.
The apparatus of any of claims 16-17, wherein the EM request and the VNFM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job suspended.
21. The apparatus of any of claims 16-17, wherein the EM request and the VNFM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job resumed.
22. The apparatus of any of claims 16-17, wherein the EM request and the VNFM request to list one or more PM jobs include criteria to list the one or more PM jobs meeting the criteria, and wherein the VIM response and the VNFM response comprise the list of the one or more PM jobs that meet the criteria.
23. An apparatus configured to be employed within a virtualized infrastructure manager (VIM), comprising: one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to: process a virtualized network function manager (VNFM) request comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM in a network function virtualization (NFV) network, wherein each of the one or more PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs; and output a VIM response to the VNFM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that
identify the one or more PM jobs.
24. The apparatus of claim 23, wherein the instructions comprise further operations, for execution via the one or more processors, to output a VIM notification in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances.
25. The apparatus of any of claims 23-24, wherein the VNFM request to create a PM job includes an object instance identifier that identifies the VNF instance for which the PM job is to be created, and wherein the VIM response comprises a PM job identifier for the PM job created.
METHOD AND SYSTEM TO PERFORM PERFORMANCE MEASUREMENTS JOB OPERATIONS

REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 62/312,344 filed March 23, 2016, entitled "METHOD AND SYSTEM TO PERFORM PERFORMANCE MEASUREMENTS JOB OPERATIONS", the contents of which are herein incorporated by reference in their entirety.

FIELD

[0002] The present disclosure relates to wireless technology, and more specifically to performance measurements of virtual network functions (VNFs) of a network function virtualization network of a wireless network.

BACKGROUND

[0003] Network Function Virtualization (NFV) involves the replacement of physical network nodes with Virtual Network Functions (VNFs) implemented via Virtualization Resources (VRs) that perform the same function as the physical node.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Fig. 1 depicts a network function virtualization (NFV) system/network that facilitates performing performance measurement (PM) job operations in connection with various aspects described herein.

[0005] Fig. 2 illustrates an example flow diagram that depicts a signal flow between the various entities in an NFV network that facilitates performing the PM job operations according to various aspects described herein.

[0006] Fig. 3a is a diagram illustrating components of a network in accordance with some embodiments.

[0007] Fig. 3b is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein.

[0008] Fig. 4 illustrates a block diagram of an apparatus included within a Network Manager (NM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein.

[0009] Fig.
5 illustrates a block diagram of an apparatus included within an Element Manager (EM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein.

[0010] Fig. 6 illustrates a block diagram of an apparatus included within a Virtualized Network Function Manager (VNFM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein.

[0011] Fig. 7 illustrates a block diagram of an apparatus included within a Virtualized Infrastructure Manager (VIM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein.

[0012] Fig. 8 illustrates a flowchart of a method for a network manager (NM) of a network function virtualization network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure.

[0013] Fig. 9 illustrates a flowchart of a method for an element manager (EM) of a network function virtualization network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure.

[0014] Fig. 10 illustrates a flowchart of a method for a Virtualized Network Function Manager (VNFM) of a network function virtualization network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure.

[0015] Fig.
11 illustrates a flowchart of a method for a Virtualized Infrastructure Manager (VIM) of a network function virtualization network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure.

DETAILED DESCRIPTION

[0016] In one embodiment of the disclosure, an apparatus configured to be employed within a network manager (NM) in a network function virtualization (NFV) network of a wireless network is disclosed. The apparatus comprises one or more processors and a memory including instructions comprising operations, for execution via the one or more processors, to output an NM request comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM) in the NFV network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. In some embodiments, the instructions comprise further operations, for execution via the one or more processors, to process an EM response received from the EM, in response to the NM request, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[0017] In one embodiment of the disclosure, an apparatus configured to be employed within an element manager (EM) in a network function virtualization (NFV) network of a wireless network is disclosed.
The apparatus comprises one or more processors and a memory including instructions comprising operations, for execution via the one or more processors, to process a network manager (NM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an NM in the NFV network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs, and to output an EM request comprising a request associated with the one or more PM jobs to a virtualized network function manager (VNFM), wherein the EM request is generated based on the received NM request. In some embodiments, the instructions comprise further operations, for execution via the one or more processors, to process a VNFM response received from the VNFM in response to the EM request, wherein the VNFM response comprises information on respective identifiers that identify the one or more PM jobs, and to output an EM response, generated based on the received VNFM response, to the NM, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[0018] In one embodiment of the disclosure, an apparatus configured to be employed within a virtualized network function manager (VNFM) in a network function virtualization (NFV) network of a wireless network is disclosed.
The apparatus comprises one or more processors and a memory including instructions comprising operations, for execution via the one or more processors, to process an element manager (EM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an EM in the NFV network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. In some embodiments, the instructions comprise further operations, for execution via the one or more processors, to output a VNFM response to the EM, in response to the EM request, wherein the VNFM response comprises information on the respective identifiers that identify the one or more PM jobs.

[0019] In one embodiment of the disclosure, an apparatus configured to be employed within a virtualized infrastructure manager (VIM) in a network function virtualization (NFV) network of a wireless network is disclosed.
The apparatus comprises one or more processors and a memory including instructions comprising operations, for execution via the one or more processors, to process a virtualized network function manager (VNFM) request comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM in the NFV network, wherein each of the one or more PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. In some embodiments, the instructions comprise further operations, for execution via the one or more processors, to output a VIM response to the VNFM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs.

[0020] The present disclosure will now be described with reference to the attached drawing figures, wherein like reference numerals are used to refer to like elements throughout, and wherein the illustrated structures and devices are not necessarily drawn to scale. As utilized herein, terms "component," "system," "interface," and the like are intended to refer to a computer-related entity, hardware, software (e.g., in execution), and/or firmware. For example, a component can be a processor (e.g., a microprocessor, a controller, or other processing device), a process running on a processor, a controller, an object, an executable, a program, a storage device, a computer, a tablet PC, and/or a user equipment (e.g., mobile phone, etc.) with a processing device.
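The four embodiments above describe a request that is relayed down the management chain (NM to EM to VNFM to VIM) and a response that propagates back up. A minimal sketch of that relay follows; every function and message field here is an assumption made for illustration, using plain dictionaries in place of the Itf-N messages the disclosure names.

```python
# Hypothetical relay of a "create PM job" request through the NFV
# management chain: NM -> EM -> VNFM -> VIM, with the response (including
# the PM job identifier and a status parameter) propagating back up.

def vim_handle(vnfm_request: dict) -> dict:
    # VIM performs the PM job operation; for a create request it assigns
    # a fresh job identifier (placeholder value).
    job_id = vnfm_request.get("job_id") or "pm-job-1"
    return {"job_ids": [job_id]}

def vnfm_handle(em_request: dict) -> dict:
    # VNFM generates a VIM request based on the received EM request and
    # wraps the VIM response into a VNFM response.
    vim_response = vim_handle({"op": em_request["op"],
                               "job_id": em_request.get("job_id")})
    return {"job_ids": vim_response["job_ids"]}

def em_handle(nm_request: dict) -> dict:
    # EM generates a VNFM request based on the received NM request, then
    # reports the result to the NM via the `status` parameter.
    vnfm_response = vnfm_handle({"op": nm_request["op"],
                                 "job_id": nm_request.get("job_id")})
    return {"status": "success", "job_ids": vnfm_response["job_ids"]}

def nm_create_pm_job(vnf_instance: str) -> dict:
    # NM outputs the request and processes the EM response.
    return em_handle({"op": "create", "ioc_instance_list": [vnf_instance]})
```

A call such as `nm_create_pm_job("vnf-mme-1")` returns `{"status": "success", "job_ids": ["pm-job-1"]}`, mirroring how the EM response carries both the created job's identifier and a status back to the NM.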
By way of illustration, both an application running on a server and the server itself can be components. One or more components can reside within a process, and a component can be localized on one computer and/or distributed between two or more computers. A set of elements or a set of other components can be described herein, in which the term "set" can be interpreted as "one or more."

[0021] Further, these components can execute from various computer readable storage media having various data structures stored thereon, such as with a module, for example. The components can communicate via local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network, such as the Internet, a local area network, a wide area network, or a similar network with other systems via the signal).

[0022] As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry, in which the electric or electronic circuitry can be operated by a software application or a firmware application executed by one or more processors. The one or more processors can be internal or external to the apparatus and can execute at least a part of the software or firmware application. As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts; the electronic components can include one or more processors therein to execute software and/or firmware that confer(s), at least in part, the functionality of the electronic components.

[0023] Use of the word "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or".
That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Furthermore, to the extent that the terms "including", "includes", "having", "has", "with", or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."

[0024] As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some embodiments, the circuitry may be implemented in, or functions associated with the circuitry may be implemented by, one or more software or firmware modules. In some embodiments, circuitry may include logic, at least partially operable in hardware.

[0025] Embodiments described herein may be implemented into a system using any suitably configured hardware and/or software. FIG. 3a illustrates components of a network in accordance with some embodiments. In various aspects, part(s) or all of one or more of the components illustrated in connection with FIG. 3a, and network functions associated therewith, can be implemented as virtual network functions (VNFs) or VNF instances in connection with various aspects described herein.
An Evolved Packet Core (EPC) network 380 is shown to include a Home Subscriber Server (HSS) 383, a Mobility Management Entity (MME) 384, a Serving GateWay (SGW) 385, a Packet Data Network (PDN) GateWay (PGW) 386, and a Policy and Charging Rules Function (PCRF) 387.

[0026] The HSS 383 comprises one or more databases for network users, including subscription-related information to support the network entities' handling of communication sessions. For example, the HSS 383 may provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. The EPC network 380 may comprise one or several HSSs 383, depending on the number of mobile subscribers, on the capacity of the equipment, on the organization of the network, etc.

[0027] The MME 384 is similar in function to the control plane of legacy Serving General packet radio service (GPRS) Support Nodes (SGSN). The MMEs 384 manage mobility aspects in access such as gateway selection and tracking area list management. The EPC network 380 may comprise one or several MMEs 384.

[0028] The SGW 385 terminates the interface toward an Evolved UMTS (Universal Mobile Telecommunications System) Terrestrial Radio Access Network (E-UTRAN), and routes data packets between the E-UTRAN and the EPC network 380. In addition, the SGW 385 may be a local mobility anchor point for inter-eNodeB handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.

[0029] The PGW 386 terminates an SGi interface toward the PDN. The PGW 386 routes data packets between the EPC network 380 and external networks, and may be a node for policy enforcement and charging data collection. The PCRF 387 is the policy and charging control element of the EPC network 380.
In a non-roaming scenario, there may be a single PCRF in the Home Public Land Mobile Network (HPLMN) associated with a User Equipment's (UE) Internet Protocol Connectivity Access Network (IP-CAN) session. In a roaming scenario with local breakout of traffic, there may be two PCRFs associated with a UE's IP-CAN session: a Home PCRF (H-PCRF) within the HPLMN and a Visited PCRF (V-PCRF) within a Visited Public Land Mobile Network (VPLMN). The PCRF 387 may be communicatively coupled to an application server (alternatively referred to as an application function (AF)). Generally, the application server is an element offering applications that use Internet Protocol (IP) bearer resources with the core network (e.g., UMTS Packet Services (PS) domain, Long Term Evolution (LTE) PS data services, etc.). The application server may signal the PCRF 387 to indicate a new service flow and select the appropriate Quality of Service (QoS) and charging parameters. The PCRF 387 may provision this rule into a Policy and Charging Enforcement Function (PCEF) (not shown) with the appropriate traffic flow template (TFT) and QoS class of identifier (QCI), which commences the QoS and charging as specified by the application server.

[0030] The components of the EPC 380 may be implemented in one physical node or separate physical nodes. In some embodiments, Network Functions Virtualization (NFV) is utilized to virtualize any or all of the above described network node functions via executable instructions stored in one or more computer readable storage mediums (described in further detail below). A logical instantiation of the EPC network 380 may be referred to as a network slice 381. A logical instantiation of a portion of the EPC network 380 may be referred to as a network sub-slice 382 (e.g., the network sub-slice 382 is shown to include the PGW 386 and the PCRF 387).

[0031] Fig.
3b is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 3b shows a diagrammatic representation of hardware resources 300 including one or more processors (or processor cores) 310, one or more memory/storage devices 320, and one or more communication resources 330, each of which are communicatively coupled via a bus 340. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 302 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 300.

[0032] The processors 310 (e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP) such as a baseband processor, an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 312 and a processor 314. The memory/storage devices 320 may include main memory, disk storage, or any suitable combination thereof.

[0033] The communication resources 330 may include interconnection and/or network interface components or other suitable devices to communicate with one or more peripheral devices 304 and/or one or more databases 306 via a network 308.
For example, the communication resources 330 may include wired communication components (e.g., for coupling via a Universal Serial Bus (USB)), cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components. [0034] Instructions 350 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 310 to perform any one or more of the methodologies discussed herein. The instructions 350 may reside, completely or partially, within at least one of the processors 310 (e.g., within the processor's cache memory), the memory/storage devices 320, or any suitable combination thereof. Furthermore, any portion of the instructions 350 may be transferred to the hardware resources 300 from any combination of the peripheral devices 304 and/or the databases 306. Accordingly, the memory of processors 310, the memory/storage devices 320, the peripheral devices 304, and the databases 306 are examples of computer-readable and machine-readable media. [0035] Various embodiments described herein can facilitate performing performance measurement (PM) job operations associated with PM jobs configured to collect virtualized resources (VR) performance measurement (PM) data associated with Virtualized Network Functions (VNFs) deployed in a network function virtualization infrastructure (NFVI) of an NFV network. [0036] The performance of application software is tightly coupled to the hardware resources on which the application software is running. For example, a web application may run so slowly that it takes minutes to display a web page that contains multimedia content, such as pictures, video, audio, text, etc. When this happens, a common response is to launch a task manager to see how the computer hardware is performing.
The task manager can display the statistics of CPU, memory, disk, and network (i.e., Wi-Fi or Ethernet) usage. [0037] The following are a few example scenarios that can be found from the task manager: (1) CPU 100%, memory 100%, network 70%, which may indicate that the computer is using all its resources to process the multimedia content; (2) CPU 10%, memory 40%, network 10%, which may indicate that the application server is too busy to provide the content on time; or (3) CPU 5%, memory 85%, network 30%, which may indicate that the application is pending on the availability of certain resources (e.g., memory) that have been exhausted due to unknown reasons. [0038] These different scenarios result from different circumstances, each of which is analogous to scenarios that can occur in connection with Network Function Virtualization (NFV). To ensure that the VNFs deployed on the NFV infrastructure (NFVI) are able to deliver a consistent and acceptable service quality to end users, as well as to isolate and correct failure conditions in the most timely manner, virtualized resource performance measurements are required. These performance measurements need to reflect the way VNFs are impacted by the NFVI services, and the inherent nature of the services being offered by the NFVI, for example, CPU, virtual machines, memory, and virtual networks. [0039] In various embodiments, techniques described herein can be employed to perform performance measurement job operations associated with PM jobs configured to collect virtualized resources (VR) performance measurement (PM) data (e.g., CPU usage, memory usage, etc.) associated with Virtualized Network Functions (VNFs) deployed in a network function virtualization infrastructure (NFVI).
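The three task-manager scenarios above can be mirrored by a toy classifier. The thresholds and diagnosis labels below are hypothetical, chosen only to reflect the illustrative percentages in the text, not any normative rule:

```python
# Toy illustration of the task-manager scenarios described above.
# Thresholds and labels are hypothetical, not normative.

def diagnose(cpu: int, memory: int, network: int) -> str:
    """Map resource-usage percentages to a likely cause."""
    if cpu >= 90 and memory >= 90:
        return "client saturated"    # scenario (1): all local resources busy
    if cpu <= 20 and memory <= 50 and network <= 20:
        return "server too busy"     # scenario (2): client idle, waiting on server
    if cpu <= 20 and memory >= 80:
        return "resource exhausted"  # scenario (3): pending on exhausted memory
    return "unknown"

print(diagnose(100, 100, 70))  # scenario (1) -> client saturated
print(diagnose(10, 40, 10))    # scenario (2) -> server too busy
print(diagnose(5, 85, 30))     # scenario (3) -> resource exhausted
```

The same style of reasoning over VR PM counters (CPU, memory, virtual network usage) motivates the PM jobs described in the remainder of this disclosure.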
In particular, in this disclosure, an apparatus and a method to perform PM job operations that include PM job creation, PM job deletion, PM job suspension, PM job resume, PM job listing and PM availability notification are proposed. [0040] Referring to FIG. 1, illustrated is a diagram of a network function virtualization (NFV) system/network 100 that facilitates performing performance measurement (PM) job operations in connection with various aspects described herein. The system illustrated in FIG. 1 comprises a Network Manager (NM) 102, a Network Function Virtualization (NFV) Orchestrator (NFVO) 104, an Element Manager (EM) 106, a set of Virtualized Network Functions (VNFs) or VNF instances, for example VNF1 108a and VNF2 108b, virtualized by Virtualization Resources (VRs) of an NFV Infrastructure (NFVI) 108, a VNF Manager (VNFM) 110, and a Virtualized Infrastructure Manager (VIM) 112. The solid lines between these entities indicate the various reference points that facilitate data exchange between these entities, while the dashed and dotted lines indicate the flow of VR PM data. [0041] For example, the reference point Ve-Vnfm-Em 109 between the EM 106 and the VNFM 110 facilitates a signal/data flow between the EM 106 and the VNFM 110, the reference point Itf-N 103 between the NM 102 and the EM 106 facilitates a signal/data flow between the NM 102 and the EM 106, the reference point Vi-Vnfm 111 between the VNFM 110 and the VIM 112 facilitates a signal/data flow between the VNFM 110 and the VIM 112, and the reference point Nf-Vi 107 between the VIM 112 and the NFVI 108 facilitates a signal/data flow between the VIM 112 and the NFVI 108. In some embodiments, the NFV network 100 enables performance of various PM job operations that include PM job creation, PM job deletion, PM job suspension, PM job resume, PM job listing and PM availability notification.
In some embodiments, the various reference points, for example, Ve-Vnfm-Em 109, Itf-N 103, Vi-Vnfm 111, Nf-Vi 107, etc., support a capability to perform a signal/data flow associated with the various PM job operations. [0042] For example, in one embodiment, in order to collect VR PM data associated with the VNFs (e.g., 108a, 108b) deployed on the NFVI 108, a PM job can be created. In such embodiments, the NM can create a PM job at the EM by sending a request to the EM that contains the measurement types, and the periods for which the collection of the VR PM data is to be performed. Similarly, the EM can create a VR PM job at the VNFM by sending a request that contains the same information as received from the NM. Further, the VNFM can create a VR PM job at the VIM by sending a request that contains the same information as received from the EM. The VIM can request the NFVI to collect the VR PM data based on a schedule and time period defined in the VR PM job. In some embodiments, the VR PM data collected by the NFVI 108 are stored in data repositories, for example, repository 112a, repository 110a or repository 106a. In other embodiments, other PM job operations, for example, PM job deletion, PM job suspension, etc., can also be performed. The signal flow associated with the different PM job operations is described in detail in subsequent embodiments below. [0043] Fig. 2 illustrates an example flow diagram that depicts the signal flow between the various entities in a network function virtualization (NFV) network 200 that facilitates performing PM job operations according to various aspects described herein. The NFV network 200 comprises a network manager (NM) 202, an element manager (EM) 204, a virtualized network function manager (VNFM) 206 and a virtualized infrastructure manager (VIM) 208.
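The management topology and reference points just described can be sketched as a simple data structure. This is structure only, assuming the entity chain NM → EM → VNFM → VIM → NFVI from Fig. 1; no real signaling is performed:

```python
# Minimal sketch of the NFV management topology of Fig. 1, using the
# reference-point names from the text. Illustrative only.

REFERENCE_POINTS = {
    "Itf-N":      ("NM", "EM"),    # NM 102 <-> EM 106
    "Ve-Vnfm-Em": ("EM", "VNFM"),  # EM 106 <-> VNFM 110
    "Vi-Vnfm":    ("VNFM", "VIM"), # VNFM 110 <-> VIM 112
    "Nf-Vi":      ("VIM", "NFVI"), # VIM 112 <-> NFVI 108
}

def path(src: str, dst: str) -> list:
    """Return the chain of entities a PM job request traverses."""
    chain = ["NM", "EM", "VNFM", "VIM", "NFVI"]
    i, j = chain.index(src), chain.index(dst)
    return chain[i:j + 1]

print(path("NM", "VIM"))  # a PM job request cascades NM -> EM -> VNFM -> VIM
```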
In some embodiments, the NFV network 200 further comprises an NFV infrastructure (NFVI) with virtualized network functions (VNFs) deployed using the virtualized resources of the NFVI (not shown). In some embodiments, the NFV network 200 is similar to the NFV network 100 described above with respect to Fig. 1. In some embodiments, in order to perform a PM job operation associated with PM jobs, the NM 202 within the NFV network 200 is configured to initiate a signal flow associated therewith. The PM job operations include PM job creation, PM job deletion, PM job suspension, PM job resume and PM job listing. [0044] For example, in some embodiments, the NM 202 initiates a PM job operation by outputting/sending an NM request 210 comprising a request associated with one or more performance measurement (PM) jobs to the EM 204. In some embodiments, the NM request 210 comprises a request to create a PM job, a request to stop/delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs. Upon receiving the NM request 210, the EM 204 is configured to output an EM request 212 comprising a request associated with the one or more PM jobs to a virtualized network function manager (VNFM) 206. In some embodiments, the EM request 212 is generated at the EM 204, based on the received NM request 210. Upon receiving the EM request 212, the VNFM 206 is configured to output a VNFM request 214 comprising a request associated with the one or more PM jobs to a virtualized infrastructure manager (VIM) 208. In some embodiments, the VNFM request 214 is generated at the VNFM 206, based on the received EM request 212. [0045] Upon processing the VNFM request 214, the VIM 208 is configured to output a VIM response 216 to the VNFM 206, in response to the VNFM request 214. In some embodiments, the VIM response 216 comprises information on identifiers that identify the one or more PM jobs.
Upon receiving the VIM response 216, the VNFM 206 is configured to output a VNFM response 218 to the EM 204, based on the received VIM response 216. In some embodiments, the VNFM response 218 comprises information on identifiers that identify the one or more PM jobs. Upon receiving the VNFM response 218, the EM 204 is configured to output an EM response 220 to the NM 202. In some embodiments, the EM response 220 is generated at the EM 204, based on the received VNFM response 218, and the EM response 220 comprises a parameter indicating a result of the respective PM jobs. [0046] In one embodiment, where the PM job operation comprises a PM job creation, the NM request 210 comprises a request to create a PM job. In such embodiments, the NM request 210 includes job information parameters, for example, an object instance identifier, for example, a VNF instance identifier, iOCInstanceList, and/or a VNF class identifier, iOCName, that identifies the VNF instance for which the PM job is to be created. In some embodiments, the NM request 210 further includes parameters that identify the measurement types to be collected (e.g., measurementCategoryList), reporting period, start time of PM job operation, stop time of PM job operation, granularity period, schedule of the PM job operation, priority of the PM job, etc. Similarly, for PM job creation, the EM request 212 comprises an object instance identifier (e.g., sourceSelector) that identifies the VNF instances for which the PM job is to be created and parameters that identify the measurement types to be collected (e.g., performanceMetric and performanceMetricGroup). In addition, the EM request 212 comprises parameters including a collection period for the respective VNFM (e.g., VNFM 206), a reporting period for the VNFM, reportingBoundary that identifies a stop time for reporting, etc.
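The cascading create flow described above (NM request 210 → EM request 212 → VNFM request 214 → VIM response 216, with responses 218 and 220 travelling back up) can be sketched as follows. Parameter names follow the text; the functions, payload shapes and values are simplified stand-ins, not the normative interfaces:

```python
# Sketch of the PM-job-creation cascade of Fig. 2. Illustrative only.
import itertools

_job_ids = itertools.count(1)  # stand-in for VIM-side job allocation

def vim_create_pm_job(request: dict) -> dict:
    # VIM 208 creates the PM job and returns pmJobId (VIM response 216).
    return {"pmJobId": next(_job_ids)}

def vnfm_create_pm_job(em_request: dict) -> dict:
    # VNFM 206 forwards the same information to the VIM (VNFM request 214)
    # and relays the identifier back (VNFM response 218).
    vim_response = vim_create_pm_job(dict(em_request))
    return {"pmJobId": vim_response["pmJobId"]}

def em_create_pm_job(nm_request: dict) -> dict:
    # EM 204 forwards the request to the VNFM (EM request 212), then
    # reports status and jobId to the NM (EM response 220). Here jobId
    # maps one-to-one onto pmJobId, as the text allows.
    vnfm_response = vnfm_create_pm_job(dict(nm_request))
    return {"status": "success", "jobId": vnfm_response["pmJobId"]}

nm_request = {                       # NM request 210
    "iOCInstanceList": ["vnf-1"],    # VNF instance(s) to measure
    "measurementCategoryList": ["cpu.usage", "memory.usage"],
    "granularityPeriod": 300,        # seconds (illustrative value)
}
print(em_create_pm_job(nm_request))
```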
Further, the VNFM request 214 comprises an object instance identifier (e.g., resourceSelector) that identifies the VNF instances for which the PM job is to be created, and parameters that identify the measurement types to be collected (e.g., performanceMetric and performanceMetricGroup). In addition, the VNFM request 214 comprises parameters including a collection period for the respective VIM (e.g., VIM 208), a reporting period for the VIM, reportingBoundary that identifies a stop time for reporting, etc. [0047] Furthermore, upon successful creation of a PM job, the VIM response 216 comprises a PM job identifier, for example, pmJobId, that identifies the PM job created. Similarly, the VNFM response 218 comprises a PM job identifier, for example, pmJobId, that identifies the PM job created. In addition, the EM response 220 comprises a parameter, status, indicating a result of the PM job creation and a PM job identifier, for example, jobId, that identifies the PM job created, when the PM job is successfully created. In some embodiments, the PM job identifier, jobId, and the PM job identifier, pmJobId, are the same, or there is a one-to-one mapping between the PM job identifiers jobId and pmJobId. In some embodiments, the EM response 220 further comprises a parameter, for example, unsupportedList, that identifies one or more reasons for an unsuccessful or a partially successful creation of the PM job, when the PM job is not successfully created. [0048] Table 1 below indicates an example use case of a PM job creation, according to one embodiment of the disclosure. Table 1: Use Case of PM job creation [0049] Similarly, Table 2 below indicates an example use case of a PM job creation, according to another embodiment of the disclosure. Table 2: Use Case of PM job creation [0050] In one embodiment, where the PM job operation comprises a PM job deletion, the NM request 210 comprises a request to stop/delete a PM job.
In such embodiments, it is assumed that the PM job to be deleted already exists or is created. In such embodiments, the NM request 210 includes a PM job identifier, for example, jobId, that identifies the PM job that is to be deleted/stopped. Similarly, for PM job deletion, the EM request 212 comprises a PM job identifier, for example, pmJobId, that identifies the PM job that is to be deleted/stopped. Further, the VNFM request 214 comprises a PM job identifier, for example, pmJobId, that identifies the PM job that is to be deleted/stopped. In some embodiments, the PM job identifier, jobId, and the PM job identifier, pmJobId, are the same, or there is a one-to-one mapping between the PM job identifiers jobId and pmJobId. Furthermore, upon successful deletion of a PM job, the VIM response 216 comprises an identifier, for example, deletedPmJobId, that identifies the PM job that is successfully deleted. Similarly, the VNFM response 218 comprises an identifier, for example, deletedPmJobId, that identifies the PM job that is successfully deleted. In addition, the EM response 220 comprises a parameter, status, indicating a result of the PM job deletion. [0051] Table 3 below indicates an example use case of a PM job deletion, according to one embodiment of the disclosure. Table 3: Use Case of PM job deletion [0052] Table 4 below indicates an example use case of a PM job deletion, according to another embodiment of the disclosure. Table 4: Use Case of PM job deletion [0053] In one embodiment, where the PM job operation comprises a PM job suspension, the NM request 210 comprises a request to suspend a PM job. In such embodiments, it is assumed that the PM job to be suspended already exists and is collecting VNF related virtualized resources (VR) PM data. In such embodiments, the NM request 210 includes a PM job identifier, for example, jobId, that identifies the PM job that is to be suspended.
Similarly, for PM job suspension, the EM request 212 comprises a PM job identifier, for example, pmJobId or jobId, that identifies the PM job that is to be suspended. Further, the VNFM request 214 comprises a PM job identifier, for example, pmJobId or jobId, that identifies the PM job that is to be suspended. In some embodiments, the PM job identifier, jobId, and the PM job identifier, pmJobId, are the same, or there is a one-to-one mapping between the PM job identifiers jobId and pmJobId. Furthermore, upon successful suspension of a PM job, the VIM response 216 comprises an identifier that identifies the PM job that is successfully suspended. Similarly, the VNFM response 218 comprises an identifier that identifies the PM job that is successfully suspended. In some embodiments, the PM job identifier and the identifier that identifies the PM job that is successfully suspended are different. In addition, the EM response 220 comprises a parameter, status, indicating a result of the PM job suspension. [0054] Table 5 below indicates an example use case of a PM job suspension, according to one embodiment of the disclosure. Table 5: Use Case of PM job suspension [0055] Table 6 below indicates an example use case of a PM job suspension, according to another embodiment of the disclosure. Table 6: Use Case of PM job suspension [0056] In one embodiment, where the PM job operation comprises a PM job resumption, the NM request 210 comprises a request to resume a PM job. In such embodiments, it is assumed that the PM job for collecting VNF related virtualized resources (VR) PM data is suspended. In such embodiments, the NM request 210 includes a PM job identifier, for example, jobId, that identifies the PM job that is to be resumed. Similarly, for PM job resumption, the EM request 212 comprises a PM job identifier, for example, pmJobId or jobId, that identifies the PM job that is to be resumed.
Further, the VNFM request 214 comprises a PM job identifier, for example, pmJobId or jobId, that identifies the PM job that is to be resumed. In some embodiments, the PM job identifier, jobId, and the PM job identifier, pmJobId, are the same, or there is a one-to-one mapping between the PM job identifiers jobId and pmJobId. Furthermore, upon successful resumption of a PM job, the VIM response 216 comprises an identifier that identifies the PM job that is successfully resumed. Similarly, the VNFM response 218 comprises an identifier that identifies the PM job that is successfully resumed. In some embodiments, the PM job identifier and the identifier that identifies the PM job that is successfully resumed are different. In addition, the EM response 220 comprises a parameter, status, indicating a result of the PM job resumption. [0057] Table 7 below indicates an example use case of a PM job resumption, according to one embodiment of the disclosure. Table 7: Use Case of PM job resumption [0058] Table 8 below indicates an example use case of a PM job resumption, according to another embodiment of the disclosure. Table 8: Use Case of PM job resumption [0059] In one embodiment, where the PM job operation comprises a PM job listing, the NM request 210 comprises a request to list one or more PM jobs that match a criteria (e.g., a query criteria or a search criteria). In such embodiments, it is assumed that one or more PM jobs for collecting VNF related virtualized resources (VR) PM data already exist. In such embodiments, the NM request 210 includes a query criteria to list one or more PM jobs that match the query criteria. In some embodiments, the query criteria include a parameter, for example, jobIdList, that identifies the PM jobs to be listed. Similarly, for PM job listing, the EM request 212 comprises a query criteria to list one or more PM jobs that match the query criteria.
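The NM request 210 payloads for the lifecycle operations discussed above (delete/stop, suspend, resume, list) can be sketched as follows. The field names jobId and jobIdList follow the text; the dispatcher function itself is an illustrative stand-in:

```python
# Sketch of NM request 210 payloads per PM job operation. Illustrative only.

def build_nm_request(operation: str, **params) -> dict:
    """Build a simplified NM request payload for a PM job lifecycle operation."""
    if operation in ("delete", "suspend", "resume"):
        # These operations identify an existing PM job by its identifier.
        return {"operation": operation, "jobId": params["job_id"]}
    if operation == "list":
        # Listing matches PM jobs against a query criteria (jobIdList).
        return {"operation": "list", "jobIdList": params["job_ids"]}
    raise ValueError("unsupported PM job operation")

print(build_nm_request("suspend", job_id=7))
print(build_nm_request("list", job_ids=[7, 9]))
```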
In some embodiments, the query criteria comprise a parameter, for example, queryFilter, that identifies the query criteria. Further, the VNFM request 214 comprises a query criteria to list one or more PM jobs that match the query criteria. In some embodiments, the query criteria comprise a parameter, for example, queryFilter, that identifies the query criteria. In some embodiments, the query criteria, jobIdList, and the query criteria, queryFilter, are the same. Furthermore, after the query criteria is matched, the VIM response 216 comprises a parameter, for example, pmJobDetails, that identifies/lists the PM jobs that match the query criteria. Similarly, the VNFM response 218 comprises a parameter, for example, pmJob, that identifies/lists the PM jobs that match the query criteria. In some embodiments, the parameters pmJobDetails and pmJob are the same. In addition, the EM response 220 comprises a parameter, status, indicating a result of the PM job listing and a parameter, for example, jobInfoList, that lists the PM jobs that match the query criteria. [0060] Table 9 below indicates an example use case of a PM job listing, according to one embodiment of the disclosure. Table 9: Use Case of PM job listing [0061] Table 10 below indicates an example use case of a PM job listing, according to another embodiment of the disclosure. Table 10: Use Case of PM job listing [0062] Once the PM job is successfully created or the PM job is in operation (i.e., the PM job is collecting VNF related VR PM data), the VIM 208 is configured to request the NFVI to collect VR PM data based on the parameters defined in the PM job. In such embodiments, the VIM 208 is further configured to output a VIM notification 226 to the VNFM 206, when VNF related VR PM data associated with the PM job is available.
In some embodiments, the VIM notification 226 includes an identifier, for example, PerformanceInformationAvailableNotification, that indicates an availability of VNF related VR PM data, and an object instance identifier, for example, objectInstanceId, that identifies the VNF instance for which the VR PM data is available. Further, the VNFM 206 is configured to output a VNFM notification 224 to the EM 204, wherein the VNFM notification 224 indicates an availability of the VR PM data associated with the PM job. In some embodiments, the VNFM notification 224 is generated based on the received VIM notification 226 and comprises an object instance identifier, for example, objectInstanceId, that identifies the VNF instance for which the VR PM data is available. [0063] In addition, the EM 204 is configured to output an EM notification 222 to the NM 202, wherein the EM notification 222 indicates an availability of the VR PM data associated with the PM job. In some embodiments, the EM notification 222 is generated based on the received VNFM notification 224, and comprises an identifier, for example, notifyFileReady, that indicates the availability of the VNF related VR PM data.
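The availability-notification cascade just described (VIM notification 226 → VNFM notification 224 → EM notification 222 → NM) can be sketched as follows. The identifier names PerformanceInformationAvailableNotification, notifyFileReady and objectInstanceId follow the text; the plumbing functions are illustrative stand-ins:

```python
# Sketch of the VR PM data availability-notification cascade of Fig. 2.
# Illustrative only.

def vim_notify(object_instance_id: str) -> dict:
    # VIM notification 226: data for a VNF instance is available.
    return {"type": "PerformanceInformationAvailableNotification",
            "objectInstanceId": object_instance_id}

def vnfm_notify(vim_notification: dict) -> dict:
    # VNFM notification 224, generated from the received VIM notification.
    return {"objectInstanceId": vim_notification["objectInstanceId"]}

def em_notify(vnfm_notification: dict) -> dict:
    # EM notification 222: notifyFileReady tells the NM the data is ready.
    return {"type": "notifyFileReady",
            "objectInstanceId": vnfm_notification["objectInstanceId"]}

note = em_notify(vnfm_notify(vim_notify("vnf-1")))
print(note)  # the NM may now retrieve the VR PM data from a repository
```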
In some embodiments, the EM notification 222 further comprises an object instance identifier, for example, objectInstanceId, that identifies the VNF instance for which the VR PM data is available. [0064] Table 11 below indicates an example use case of a VR PM data available notification, according to one embodiment of the disclosure. Table 11: Use Case of VR PM data available notification [0065] Table 12 below indicates an example use case of a VR PM data available notification, according to another embodiment of the disclosure. Table 12: Use Case of VR PM data available notification [0066] In some embodiments, the VNF related VR PM data associated with the PM job is stored in repositories associated with the NFV network 200 (e.g., the repository 112a, the repository 110a or the repository 106a in Fig. 1). In some embodiments, once the NM 202 receives the EM notification 222 indicating the availability of the VR PM data, the NM 202 is configured to retrieve the VNF related VR PM data directly or indirectly from the PM data repository 232, the PM data repository 230 or the PM data repository 228. [0067] Referring to FIG. 4, illustrated is a block diagram of an apparatus 400 included within a Network Manager (NM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein. In various aspects, apparatus 400 can be included within a NM of a communications network. The apparatus 400 is explained herein with respect to the NM 202 in Fig. 2. The apparatus 400 includes a processor 410, optional network interface controller (NIC) circuitry 420 (which can facilitate communication of data via one or more networks in some aspects), and a memory 430 (which can comprise any of a variety of storage mediums and can store instructions and/or data associated with at least one of the processor 410 or NIC circuitry 420).
In some embodiments, the apparatus 400 includes all the features and the functionalities of the components illustrated in Fig. 3b. In some aspects, the processor 410, the NIC circuitry 420, and the memory 430 can be included in a single device, while in other aspects, they can be included in different devices, such as part of a distributed architecture. In some embodiments, the processor 410 can include one or more processors. As described in greater detail below, apparatus 400 can facilitate performing various PM job operations that include PM job creation, PM job deletion, PM job suspension, PM job resume, PM job listing and PM availability notification. [0068] In some example embodiments, the processor 410 is able to read instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the PM job operations discussed herein. In order to perform a PM job operation, the processor 410 is configured to output an NM request (e.g., the NM request 210 in Fig. 2) comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM), for example, the EM 204 in Fig. 2, via the NIC circuitry 420. In some embodiments, the NM request is generated at the processor 410, based on instructions stored in the memory 430. In some embodiments, the NM request comprises a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list one or more PM jobs. In some embodiments, the processor 410 is further configured to process an EM response (e.g., the EM response 220 in Fig. 2), received from the EM (e.g., the EM 204 in Fig. 2), via the NIC circuitry 420, in response to the NM request.
In some embodiments, the EM response comprises information on a result of the one or more PM jobs. [0069] For PM job creation, in some embodiments, the NM request includes an instance identifier, iOCInstanceList, that identifies the VNF instance for which the PM job is to be created, and the EM response comprises a PM job identifier, jobId, for the PM job created and a parameter, status, indicating a result of the PM job creation as indicated above in Fig. 2. For PM job deletion, the NM request includes a PM job identifier, jobId, that identifies the PM job to be stopped/deleted, and the EM response comprises a parameter, status, indicating a result of stopping/deleting the PM job. For PM job suspension, the NM request includes a PM job identifier, jobId, that identifies the PM job to be suspended, and the EM response comprises a parameter, status, indicating a result of the PM job suspension. For PM job resumption, the NM request includes a PM job identifier, jobId, that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the EM response comprises a parameter, status, indicating a result of the PM job resumption. For PM job listing, the NM request includes a criteria to list the PM jobs, jobIdList, and the EM response comprises information on PM jobs that match the criteria, jobInfoList, and a parameter, status, indicating a result of the PM job listing. [0070] In some embodiments, the processor 410 is further configured to process an EM notification (e.g., the EM notification 222 in Fig. 2) received from the EM (e.g., the EM 204 in Fig. 2), via the NIC circuitry 420, when the one or more PM jobs is in operation. In some embodiments, the EM notification indicates an availability of the VR PM data associated with VNF instances of the respective PM jobs.
Upon receiving the EM notification, in some embodiments, the processor 410 is further configured to retrieve the VR PM data from a PM data repository (e.g., the PM data repository 228, 230 or 232 in Fig. 2) associated with the NFV network. [0071] Referring to FIG. 5, illustrated is a block diagram of an apparatus 500 included within an Element Manager (EM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein. In various aspects, apparatus 500 can be included within an EM of a communications network. The apparatus 500 is explained herein with respect to the EM 204 in Fig. 2. The apparatus 500 includes a processor 510, optional network interface controller (NIC) circuitry 520 (which can facilitate communication of data via one or more networks in some aspects), and a memory 530 (which can comprise any of a variety of storage mediums and can store instructions and/or data associated with at least one of the processor 510 or NIC circuitry 520). In some embodiments, the apparatus 500 includes all the features and the functionalities of the components illustrated in Fig. 3b. In some aspects, the processor 510, the NIC circuitry 520, and the memory 530 can be included in a single device, while in other aspects, they can be included in different devices, such as part of a distributed architecture. In some embodiments, the processor 510 can include one or more processors. As described in greater detail below, apparatus 500 can facilitate performing various PM job operations that include PM job creation, PM job deletion, PM job suspension, PM job resume, PM job listing and PM availability notification. [0072] In some example embodiments, the processor 510 is able to read instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the PM job operations discussed herein.
In some embodiments, the processor 510 is configured to process a network manager (NM) request (e.g., the NM request 210 in Fig. 2) comprising a request associated with one or more performance measurement (PM) jobs, received from a network manager (NM) (e.g., the NM 202 in Fig. 2), via the NIC circuitry 520. In some embodiments, the NM request comprises a request to perform a PM job operation, for example, a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs. Upon receiving the NM request, the processor 510 is configured to generate an EM request (e.g., the EM request 212 in Fig. 2), based on the received NM request and output or provide the generated EM request to a virtualized network function manager (VNFM) (e.g., the VNFM 206 in Fig. 2) associated therewith, via the NIC circuitry 520. In some embodiments, the EM request is generated at the processor 510, in accordance with the instructions stored in the memory 530.[0073] In some embodiments, the processor 510 is further configured to process a VNFM response (e.g., the VNFM response 218 in Fig. 2), received from the VNFM (e.g., the VNFM 206 in Fig. 2), via the NIC circuitry 520, in response to the EM request. Upon receiving the VNFM response, the processor 510 is configured to generate an EM response (e.g., the EM response 220) and output the EM response to the NM (e.g., the NM 202 in Fig. 2), via the NIC circuitry 520. 
In some embodiments, the EM response is generated based on the received VNFM response, and the EM response comprises information associated with a result of the PM jobs. [0074] For example, for PM job creation, the NM request and the EM request to create a PM job include a respective instance identifier that identifies the VNF instance for which the PM job is to be created, the VNFM response comprises a PM job identifier that identifies the PM job created, and the EM response comprises a PM job identifier for the PM job created and a parameter, status, indicating a result of the PM job creation as indicated above with respect to Fig. 2. In some embodiments, the NM request, the EM request, the VNFM response and the EM response can further comprise other parameters as indicated above with respect to Fig. 2. For PM job deletion, the NM request and the EM request include a PM job identifier that identifies the PM job to be deleted, the VNFM response comprises an identifier for the PM job deleted and the EM response comprises a parameter, status, indicating a result of the PM job deletion as indicated above with respect to Fig. 2. For PM job suspension, the NM request and the EM request include a PM job identifier that identifies the PM job to be suspended, the VNFM response comprises an identifier for the PM job suspended and the EM response comprises a parameter, status, indicating a result of the PM job suspension as indicated above with respect to Fig. 2. [0075] For PM job resumption, the NM request and the EM request include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended as indicated above with respect to Fig. 2. Further, the VNFM response comprises an identifier for the PM job resumed and the EM response comprises a parameter, status, indicating a result of the PM job resumption as indicated above with respect to Fig. 2.
For PM job listing, the NM request and the EM request include criteria to list one or more PM jobs meeting the criteria, and the VNFM response and the EM response comprise the list of the one or more PM jobs that meet the criteria, as indicated above with respect to Fig. 2.

[0076] In some embodiments, the processor 510 is further configured to process a VNFM notification (e.g., the VNFM notification 224 in Fig. 2) received from the VNFM (e.g., the VNFM 206 in Fig. 2), via the NIC circuitry 520, when the one or more PM jobs are in operation. In some embodiments, the VNFM notification indicates an availability of the VR PM data associated with VNF instances of the PM jobs. Upon receiving the VNFM notification, in some embodiments, the processor 510 is further configured to generate an EM notification (e.g., the EM notification 222) based on the received VNFM notification and provide the generated EM notification to the NM (e.g., the NM 202), via the NIC circuitry 520. In some embodiments, the EM notification indicates an availability of the VR PM data associated with VNF instances of the PM jobs.

[0077] Referring to FIG. 6, illustrated is a block diagram of an apparatus 600 included within a Virtualized Network Function Manager (VNFM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein. In various aspects, the apparatus 600 can be included within a VNFM of a communications network. The apparatus 600 is explained herein with respect to the VNFM 206 in Fig. 2. The apparatus 600 includes a processor 610, optional network interface controller (NIC) circuitry 620 (which can facilitate communication of data via one or more networks in some aspects), and a memory 630 (which can comprise any of a variety of storage mediums and can store instructions and/or data associated with at least one of the processor 610 or NIC circuitry 620).
In some embodiments, the apparatus 600 includes all the features and the functionalities of the components illustrated in Fig. 3b. In some aspects, the processor 610, the NIC circuitry 620, and the memory 630 can be included in a single device, while in other aspects, they can be included in different devices, such as part of a distributed architecture. In some embodiments, the processor 610 can include one or more processors. As described in greater detail below, the apparatus 600 can facilitate performing various PM job operations that include PM job creation, PM job deletion, PM job suspension, PM job resumption, PM job listing and PM availability notification.

[0078] In some example embodiments, the processor 610 is able to read instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the PM job operations discussed herein. In some embodiments, the processor 610 is configured to process an element manager (EM) request (e.g., the EM request 212 in Fig. 2) comprising a request associated with one or more performance measurement (PM) jobs, received from an EM (e.g., the EM 204 in Fig. 2), via the NIC circuitry 620. In some embodiments, the EM request comprises a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. Upon receiving the EM request, the processor 610 is configured to generate a VNFM request (e.g., the VNFM request 214 in Fig. 2) based on the received EM request and provide/output the generated VNFM request to a virtualized infrastructure manager (VIM), for example, the VIM 208, associated therewith, via the NIC circuitry 620.

[0079] In some embodiments, the processor 610 is further configured to process a VIM response (e.g., the VIM response 216 in Fig. 2) received from the VIM (e.g., the VIM 208 in Fig. 2) via the NIC circuitry 620, in response to the VNFM request.
In some embodiments, the VIM response comprises information on identifiers that identify the one or more PM jobs, when the request associated with the one or more PM jobs is successfully completed. Upon receiving the VIM response, in some embodiments, the processor 610 is further configured to generate a VNFM response (e.g., the VNFM response 218 in Fig. 2) based on the received VIM response and output/provide the generated VNFM response to the EM (e.g., the EM 204 in Fig. 2). In some embodiments, the VNFM response comprises information on the identifiers that identify the one or more PM jobs.

[0080] For example, for PM job creation, the EM request and the VNFM request include an object instance identifier that identifies the VNF instance for which a PM job is to be created, and the VIM response and the VNFM response comprise a PM job identifier for the PM job created, as indicated above with respect to Fig. 2. For PM job deletion, the EM request and the VNFM request include a PM job identifier that identifies the PM job to be deleted, and the VIM response and the VNFM response comprise an identifier for the PM job deleted, as indicated above with respect to Fig. 2. For PM job suspension, the EM request and the VNFM request include a PM job identifier that identifies the PM job to be suspended, and the VIM response and the VNFM response comprise an identifier for the PM job suspended, as indicated above with respect to Fig. 2. For PM job resumption, the EM request and the VNFM request include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the VIM response and the VNFM response comprise an identifier for the PM job resumed, as indicated above with respect to Fig. 2.
For PM job listing, the EM request and the VNFM request include criteria to list the one or more PM jobs meeting the criteria, and the VIM response and the VNFM response comprise the list of the one or more PM jobs that meet the criteria, as indicated above with respect to Fig. 2.

[0081] In some embodiments, the processor 610 is further configured to process a VIM notification (e.g., the VIM notification 224 in Fig. 2) received from the VIM (e.g., the VIM 208 in Fig. 2), via the NIC circuitry 620, when the one or more PM jobs are in operation. In some embodiments, the VIM notification indicates an availability of the VR PM data associated with VNF instances of the PM jobs. Upon receiving the VIM notification, in some embodiments, the processor 610 is further configured to generate a VNFM notification (e.g., the VNFM notification 224) based on the received VIM notification and provide the generated VNFM notification to the EM (e.g., the EM 204), via the NIC circuitry 620. In some embodiments, the VNFM notification indicates an availability of the VR PM data associated with VNF instances of the PM jobs.

[0082] Referring to FIG. 7, illustrated is a block diagram of an apparatus 700 included within a Virtualized Infrastructure Manager (VIM) that facilitates performing performance measurement (PM) job operations associated with an NFV network, according to various aspects described herein. In various aspects, the apparatus 700 can be included within a VIM of a communications network. The apparatus 700 is explained herein with respect to the VIM 208 in Fig. 2.
The apparatus 700 includes a processor 710, optional network interface controller (NIC) circuitry 720 (which can facilitate communication of data via one or more networks in some aspects), and a memory 730 (which can comprise any of a variety of storage mediums and can store instructions and/or data associated with at least one of the processor 710 or NIC circuitry 720).

[0083] In some embodiments, the apparatus 700 includes all the features and the functionalities of the components illustrated in Fig. 3b. In some aspects, the processor 710, the NIC circuitry 720, and the memory 730 can be included in a single device, while in other aspects, they can be included in different devices, such as part of a distributed architecture. In some embodiments, the processor 710 can include one or more processors. As described in greater detail below, the apparatus 700 can facilitate performing various PM job operations that include PM job creation, PM job deletion, PM job suspension, PM job resumption, PM job listing and PM availability notification. In some example embodiments, the processor 710 is able to read instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the PM job operations discussed herein.

[0084] In some embodiments, the processor 710 is configured to process a virtualized network function manager (VNFM) request (e.g., the VNFM request 214 in Fig. 2) comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM (e.g., the VNFM 206 in Fig. 2), via the NIC circuitry 720. In some embodiments, the VNFM request comprises a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. Upon receiving the VNFM request, the processor 710 is configured to generate a VIM response (e.g., the VIM response 216 in Fig.
2) and provide/output the VIM response to the VNFM (e.g., the VNFM 206 in Fig. 2), via the NIC circuitry 720. In some embodiments, the VIM response comprises information on identifiers that identify the one or more PM jobs. In some embodiments, the processor 710 is configured to generate the VIM response based on instructions stored in the memory 730.

[0085] For example, for PM job creation, the VNFM request includes an object instance identifier that identifies the VNF instance for which the PM job is to be created, and the VIM response comprises a PM job identifier for the PM job created, as indicated above with respect to Fig. 2. For PM job deletion, the VNFM request includes a PM job identifier that identifies the PM job to be deleted, and the VIM response comprises an identifier for the PM job deleted, as indicated above with respect to Fig. 2. For PM job suspension, the VNFM request includes a PM job identifier that identifies the PM job to be suspended, and the VIM response comprises an identifier for the PM job suspended, as indicated above with respect to Fig. 2. For PM job resumption, the VNFM request includes a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the VIM response comprises an identifier for the PM job resumed, as indicated above with respect to Fig. 2.

[0086] For PM job listing, the VNFM request includes criteria to list one or more PM jobs meeting the criteria, and the VIM response comprises the list of the one or more PM jobs that meet the criteria, as indicated above with respect to Fig. 2. In some embodiments, the processor 710 is further configured to generate a VIM notification (e.g., the VIM notification 224 in Fig. 2) and provide/output the generated VIM notification to the VNFM (e.g., the VNFM 206 in Fig. 2), via the NIC circuitry 720, when the one or more PM jobs are in operation.
In some embodiments, the VIM notification indicates an availability of the VR PM data associated with VNF instances of the PM jobs.

[0087] Fig. 8 illustrates a flowchart of a method 800 for a network manager (NM) of a network function virtualization (NFV) network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure. The method 800 is described herein with reference to the apparatus 400 in Fig. 4 and the NFV network 200 in Fig. 2. In some embodiments, the apparatus 400 is included within the NM 202 in Fig. 2. At 802, an NM request comprising a request associated with one or more performance measurement (PM) jobs is generated at the processor 410 and provided to an element manager (EM) associated therewith, via the NIC circuitry 420. In some embodiments, the NM request comprises a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. At 804, an EM response received from the EM, via the NIC circuitry 420, in response to the NM request, is processed at the processor 410. In some embodiments, the EM response comprises information on identifiers that identify the one or more PM jobs and a status of the respective PM jobs.

[0088] Fig. 9 illustrates a flowchart of a method 900 for an element manager (EM) of a network function virtualization (NFV) network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure. The method 900 is described herein with reference to the apparatus 500 in Fig. 5 and the NFV network 200 in Fig. 2. In some embodiments, the apparatus 500 is included within the EM 204 in Fig. 2. At 902, a network manager (NM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an NM via the NIC circuitry 520, is processed at the processor 510.
In some embodiments, the NM request comprises a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. At 904, an EM request comprising a request associated with the one or more PM jobs is generated at the processor 510 and provided to a virtualized network function manager (VNFM) associated therewith, via the NIC circuitry 520. In some embodiments, the EM request is generated based on the received NM request.

[0089] At 906, a VNFM response received from the VNFM, via the NIC circuitry 520, in response to the EM request is processed at the processor 510. In some embodiments, the VNFM response comprises information on identifiers that identify the one or more PM jobs. At 908, an EM response is generated at the processor 510 based on the received VNFM response and provided to the NM, via the NIC circuitry 520. In some embodiments, the EM response comprises information on identifiers that identify the one or more PM jobs and a status of the respective PM jobs.

[0090] Fig. 10 illustrates a flowchart of a method 1000 for a virtualized network function manager (VNFM) of a network function virtualization (NFV) network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure. The method 1000 is described herein with reference to the apparatus 600 in Fig. 6 and the NFV network 200 in Fig. 2. In some embodiments, the apparatus 600 is included within the VNFM 206 in Fig. 2. At 1002, an element manager (EM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an EM via the NIC circuitry 620, is processed at the processor 610.
In some embodiments, the EM request comprises a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs.

[0091] At 1004, a VNFM request comprising a request associated with the one or more PM jobs is generated at the processor 610 and provided to a virtualized infrastructure manager (VIM), via the NIC circuitry 620. In some embodiments, the VNFM request is generated at the processor 610 based on the received EM request. At 1006, a VIM response received from the VIM via the NIC circuitry 620, in response to the VNFM request, is processed at the processor 610. In some embodiments, the VIM response comprises information on identifiers that identify the one or more PM jobs, when the request associated with the one or more PM jobs is successfully completed. At 1008, a VNFM response is generated at the processor 610 based on the received VIM response and provided to the EM, via the NIC circuitry 620. In some embodiments, the VNFM response comprises information on identifiers that identify the one or more PM jobs.

[0092] Fig. 11 illustrates a flowchart of a method 1100 for a virtualized infrastructure manager (VIM) of a network function virtualization (NFV) network that facilitates performing performance measurement (PM) job operations, according to various embodiments of the disclosure. The method 1100 is described herein with reference to the apparatus 700 in Fig. 7 and the NFV network 200 in Fig. 2. In some embodiments, the apparatus 700 is included within the VIM 208 in Fig. 2. At 1102, a virtualized network function manager (VNFM) request comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM via the NIC circuitry 720, is processed at the processor 710.
In some embodiments, the VNFM request comprises a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job, or a request to list PM jobs. At 1104, a VIM response is generated at the processor 710, in response to the VNFM request, and provided to the VNFM via the NIC circuitry 720. In some embodiments, the VIM response comprises information on identifiers that identify the one or more PM jobs.

[0093] While the methods are illustrated and described above as a series of acts or events, it will be appreciated that the illustrated ordering of such acts or events is not to be interpreted in a limiting sense. For example, some acts may occur in different orders and/or concurrently with other acts or events apart from those illustrated and/or described herein. In addition, not all illustrated acts may be required to implement one or more aspects or embodiments of the disclosure herein. Also, one or more of the acts depicted herein may be carried out in one or more separate acts and/or phases.

[0094] While the apparatus has been illustrated and described with respect to one or more implementations, alterations and/or modifications may be made to the illustrated examples without departing from the spirit and scope of the appended claims.
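The end-to-end PM job creation flow of methods 800 through 1100 above (NM to EM to VNFM to VIM, with the PM job identifier propagating back) can be sketched as a chain of forwarding entities. This is a minimal illustration under assumed names (VIM, VNFM, EM, NM classes, pm_job_id keys); the specification does not prescribe this structure.

```python
import itertools

# Sketch of the request chain in methods 800-1100: the NM request is
# forwarded NM -> EM -> VNFM -> VIM, and the PM job identifier in the
# response propagates back VIM -> VNFM -> EM -> NM.
class VIM:
    def __init__(self):
        self._ids = itertools.count(1)
        self.jobs = {}

    def handle_request(self, request):
        # At 1104: generate a VIM response carrying the PM job identifier.
        job_id = f"pm-job-{next(self._ids)}"
        self.jobs[job_id] = request
        return {"pm_job_id": job_id, "status": "created"}

class VNFM:
    def __init__(self, vim):
        self.vim = vim

    def handle_request(self, request):
        # At 1004/1006/1008: forward to the VIM and relay its response.
        return self.vim.handle_request(request)

class EM:
    def __init__(self, vnfm):
        self.vnfm = vnfm

    def handle_request(self, request):
        # At 904/906/908: forward to the VNFM and relay its response.
        return self.vnfm.handle_request(request)

class NM:
    def __init__(self, em):
        self.em = em

    def create_pm_job(self, vnf_instance):
        # At 802/804: issue the NM request and process the EM response.
        return self.em.handle_request({"operation": "create",
                                       "vnf_instance": vnf_instance})

nm = NM(EM(VNFM(VIM())))
response = nm.create_pm_job("vnf-instance-7")
```

Only the VIM allocates identifiers and holds job state in this sketch; the EM and VNFM act as relays, consistent with the response paths described in the method flowcharts.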
In particular regard to the various functions performed by the above described components or structures (assemblies, devices, circuits, systems, etc.), the terms (including a reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component or structure which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure.

[0095] In addition, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.

[0096] Examples can include subject matter such as a method, means for performing acts or blocks of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method or of an apparatus or system for concurrent communication using multiple communication technologies according to embodiments and examples described herein.
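The examples that follow name several PM job parameters (iOCInstanceList, Jobld, jobldList, joblnfoList, and status). Their roles can be sketched as simple message containers; the field names mirror the text, while the types and class names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative containers for the parameters named in the examples below;
# the field names mirror the text, the types are assumptions.
@dataclass
class CreatePmJobRequest:
    iOCInstanceList: List[str]          # VNF instances for which PM jobs are created

@dataclass
class PmJobInfo:
    Jobld: str                          # identifier of a PM job
    state: str = "active"

@dataclass
class ListPmJobsRequest:
    jobldList: List[str] = field(default_factory=list)   # listing criteria

@dataclass
class ListPmJobsResponse:
    joblnfoList: List[PmJobInfo] = field(default_factory=list)
    status: str = "success"             # result of the PM job listing

req = CreatePmJobRequest(iOCInstanceList=["vnf-a", "vnf-b"])
resp = ListPmJobsResponse(joblnfoList=[PmJobInfo(Jobld="job-1")])
```

The status parameter appears in each EM response in the examples, so it is modeled here as a field common to response messages rather than to requests.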
[0097] Example 1 is a computer-readable medium storing executable instructions that, in response to execution, cause one or more processors of a network manager (NM) to perform operations comprising outputting an NM request comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM) in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and processing an EM response received from the EM, in response to the NM request, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[0098] Example 2 is a computer-readable medium, including the subject matter of example 1, wherein the instructions further cause the one or more processors to perform operations comprising processing an EM notification received from the EM, when the one or more PM jobs are in operation, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[0099] Example 3 is a computer-readable medium, including the subject matter of examples 1-2, including or omitting elements, wherein the instructions further cause the one or more processors to perform operations comprising retrieving the VR PM data from a PM data repository associated with the NFV network, upon receiving the EM notification.

[00100] Example 4 is a computer-readable medium, including the subject matter of examples 1-3, including or omitting elements, wherein the request to create a PM job includes an instance identifier, iOCInstanceList, that identifies the VNF instance for which the PM job is to
be created, and the EM response comprises a PM job identifier, Jobld, for the PM job created and a parameter, status, indicating a result of the PM job creation.

[00101] Example 5 is a computer-readable medium, including the subject matter of examples 1-4, including or omitting elements, wherein the request to stop the PM job includes a PM job identifier, Jobld, that identifies the PM job to be stopped, and the EM response comprises a parameter, status, indicating a result of stopping the PM job.

[00102] Example 6 is a computer-readable medium, including the subject matter of examples 1-5, including or omitting elements, wherein the request to suspend the PM job includes a PM job identifier, Jobld, that identifies the PM job to be suspended, and the EM response comprises a parameter, status, indicating a result of the PM job suspension.

[00103] Example 7 is a computer-readable medium, including the subject matter of examples 1-6, including or omitting elements, wherein the request to resume the PM job includes a PM job identifier, Jobld, that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the EM response comprises a parameter, status, indicating a result of the PM job resumption.

[00104] Example 8 is a computer-readable medium, including the subject matter of examples 1-7, including or omitting elements, wherein the request to list one or more PM jobs includes criteria to list the PM jobs, jobldList, and the EM response comprises information on PM jobs that match the criteria, joblnfoList, and a parameter, status, indicating a result of the PM job listing.

[00105] Example 9 is a computer-readable medium storing executable instructions that, in response to execution, cause one or more processors of an element manager (EM) to perform operations comprising processing a network manager (NM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an NM in a network function
virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; outputting an EM request comprising a request associated with the one or more PM jobs to a virtualized network function manager (VNFM), wherein the EM request is generated based on the received NM request; processing a VNFM response received from the VNFM in response to the EM request, wherein the VNFM response comprises information on respective identifiers that identify the one or more PM jobs; and outputting an EM response generated based on the received VNFM response, to the NM, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.
[00106] Example 10 is a computer-readable medium, including the subject matter of example 9, wherein the instructions further cause the one or more processors to perform operations comprising processing a VNFM notification received from the VNFM in response to the EM request when the one or more PM jobs are in operation, wherein the VNFM notification indicates an availability of the VR PM data associated with the respective VNF instances; and outputting an EM notification generated based on the received VNFM notification, to the NM, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00107] Example 11 is a computer-readable medium, including the subject matter of examples 9-10, including or omitting elements, wherein the NM request and the EM request to create a PM job include a respective instance identifier that identifies the VNF instance for which the PM job is to be created, wherein the VNFM response comprises a PM job identifier for the PM job created, and wherein the EM response comprises a PM job identifier for the PM job created and a parameter, status, indicating a result of the PM job creation.

[00108] Example 12 is a computer-readable medium, including the subject matter of examples 9-11, including or omitting elements, wherein the NM request and the EM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, wherein the VNFM response comprises an identifier for the PM job deleted, and wherein the EM response comprises a parameter, status, indicating a result of the PM job deletion.

[00109] Example 13 is a computer-readable medium, including the subject matter of examples 9-12, including or omitting elements, wherein the NM request and the EM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, wherein the VNFM response comprises an identifier for the PM job suspended, and wherein the EM response comprises a parameter,
status, indicating a result of the PM job suspension.

[00110] Example 14 is a computer-readable medium, including the subject matter of examples 9-13, including or omitting elements, wherein the NM request and the EM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VNFM response comprises an identifier for the PM job resumed and the EM response comprises a parameter, status, indicating a result of the PM job resumption.

[00111] Example 15 is a computer-readable medium, including the subject matter of examples 9-14, wherein the NM request and the EM request to list PM jobs include criteria to list one or more PM jobs meeting the criteria, and wherein the VNFM response and the EM response comprise the list of the one or more PM jobs that meet the criteria.

[00112] Example 16 is a computer-readable medium storing executable instructions that, in response to execution, cause one or more processors of a virtualized network function manager (VNFM) to perform operations comprising processing an element manager (EM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an EM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and outputting a VNFM response to the EM, in response to the EM request, wherein the VNFM response comprises information on the respective identifiers that identify the one or more PM jobs.

[00113] Example 17 is a computer-readable
medium, including the subject matter of example 16, wherein, prior to generating the VNFM response, the instructions further cause the one or more processors to perform operations comprising outputting a VNFM request comprising a request associated with the one or more PM jobs to a virtualized infrastructure manager (VIM), wherein the VNFM request is generated based on the received EM request; and processing a VIM response received from the VIM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs, when the request associated with the one or more PM jobs is successfully completed.

[00114] Example 18 is a computer-readable medium, including the subject matter of examples 16-17, including or omitting elements, wherein the instructions further cause the one or more processors to perform operations comprising processing a VIM notification received from the VIM in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances; and outputting a VNFM notification generated based on the received VIM notification, to the EM, wherein the VNFM notification indicates an availability of the VR PM data associated with the VNF instance.

[00115] Example 19 is a computer-readable medium, including the subject matter of examples 16-18, including or omitting elements, wherein the EM request and the VNFM request to create a PM job include an object instance identifier that identifies the VNF instance for which a PM job is to be created, and wherein the VIM response and the VNFM response comprise a PM job identifier for the PM job created.

[00116] Example 20 is a computer-readable medium, including the subject matter of examples 16-19, including or omitting elements, wherein the EM request and the VNFM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, and wherein the VIM
response and the VNFM response comprise an identifier for the PM job deleted.

[00117] Example 21 is a computer-readable medium, including the subject matter of examples 16-20, including or omitting elements, wherein the EM request and the VNFM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job suspended.

[00118] Example 22 is a computer-readable medium, including the subject matter of examples 16-21, including or omitting elements, wherein the EM request and the VNFM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job resumed.

[00119] Example 23 is a computer-readable medium, including the subject matter of examples 16-22, including or omitting elements, wherein the EM request and the VNFM request to list one or more PM jobs include criteria to list the one or more PM jobs meeting the criteria, and wherein the VIM response and the VNFM response comprise the list of the one or more PM jobs that meet the criteria.

[00120] Example 24 is a computer-readable medium storing executable instructions that, in response to execution, cause one or more processors of a virtualized infrastructure manager (VIM) to perform operations comprising processing a virtualized network function manager (VNFM) request comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM in a network function virtualization (NFV) network, wherein each of the one or more PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and
wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and outputting a VIM response to the VNFM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs.

[00121] Example 25 is a computer-readable medium, including the subject matter of example 24, further causing the one or more processors to perform operations comprising outputting a VIM notification in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00122] Example 26 is a computer-readable medium, including the subject matter of examples 24-25, including or omitting elements, wherein the VNFM request to create a PM job includes an object instance identifier that identifies the VNF instance for which the PM job is to be created, and wherein the VIM response comprises a PM job identifier for the PM job created.

[00123] Example 27 is a computer-readable medium, including the subject matter of examples 24-26, including or omitting elements, wherein the VNFM request to delete a PM job includes a PM job identifier that identifies the PM job to be deleted, and wherein the VIM response comprises an identifier for the PM job deleted.

[00124] Example 28 is a computer-readable medium, including the subject matter of examples 24-27, including or omitting elements, wherein the VNFM request to suspend a PM job includes a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response comprises an identifier for the PM job suspended.

[00125] Example 29 is a computer-readable medium, including the subject matter of examples 24-28, including or omitting elements, wherein the VNFM request to resume a PM job includes a PM job identifier that identifies the PM 
job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response comprises an identifier for the PM job resumed.

[00126] Example 30 is a computer-readable medium, including the subject matter of examples 24-29, including or omitting elements, wherein the VNFM request to list one or more PM jobs includes criteria to list one or more PM jobs meeting the criteria, and wherein the VIM response comprises the list of the one or more PM jobs that meets the criteria.

[00127] Example 31 is an apparatus for use in a network manager (NM) comprising means for outputting an NM request comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM) in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and means for processing an EM response received from the EM, in response to the NM request, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[00128] Example 32 is an apparatus including the subject matter of example 31, further comprising means for processing an EM notification received from the EM, when the one or more PM jobs are in operation, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00129] Example 33 is an apparatus including the subject matter of examples 31-32, including or omitting elements, wherein the request to create a PM job includes an instance identifier, iOCInstanceList, that identifies the VNF instance for which the PM 
job is to be created, and the EM response comprises a PM job identifier, JobId, for the PM job created and a parameter, status, indicating a result of the PM job creation.

[00130] Example 34 is an apparatus including the subject matter of examples 31-33, including or omitting elements, wherein the request to stop the PM job includes a PM job identifier, JobId, that identifies the PM job to be stopped, and the EM response comprises a parameter, status, indicating a result of stopping the PM job.

[00131] Example 35 is an apparatus including the subject matter of examples 31-34, including or omitting elements, wherein the request to suspend the PM job includes a PM job identifier, JobId, that identifies the PM job to be suspended, and the EM response comprises a parameter, status, indicating a result of the PM job suspension.

[00132] Example 36 is an apparatus including the subject matter of examples 31-35, including or omitting elements, wherein the request to resume the PM job includes a PM job identifier, JobId, that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the EM response comprises a parameter, status, indicating a result of the PM job resumption.

[00133] Example 37 is an apparatus including the subject matter of examples 31-36, including or omitting elements, wherein the request to list one or more PM jobs includes criteria to list the PM jobs, jobIdList, and the EM response comprises information on PM jobs that match the criteria, jobInfoList, and a parameter, status, indicating a result of the PM job listing.

[00134] Example 38 is an apparatus for use in an element manager (EM) comprising means for processing a network manager (NM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an NM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network 
function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; means for outputting an EM request comprising a request associated with the one or more PM jobs to a virtualized network function manager (VNFM), wherein the EM request is generated based on the received NM request; means for processing a VNFM response received from the VNFM in response to the EM request, wherein the VNFM response comprises information on respective identifiers that identify the one or more PM jobs; and means for outputting an EM response generated based on the received VNFM response, to the NM, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[00135] Example 39 is an apparatus including the subject matter of example 38, further comprising means for processing a VNFM notification received from the VNFM in response to the EM request when the one or more PM jobs are in operation, wherein the VNFM notification indicates an availability of the VR PM data associated with the respective VNF instances; and means for outputting an EM notification generated based on the received VNFM notification, to the NM, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00136] Example 40 is an apparatus including the subject matter of examples 38-39, including or omitting elements, wherein the NM request and the EM request to create a PM job include a respective instance identifier that identifies the VNF instance for which the PM job is to be created, wherein the VNFM response comprises a PM job identifier for the PM job created, and wherein the EM response comprises a PM job identifier for the PM job created and a 
parameter, status, indicating a result of the PM job creation.

[00137] Example 41 is an apparatus including the subject matter of examples 38-40, including or omitting elements, wherein the NM request and the EM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, wherein the VNFM response comprises an identifier for the PM job deleted, and wherein the EM response comprises a parameter, status, indicating a result of the PM job deletion.

[00138] Example 42 is an apparatus including the subject matter of examples 38-41, including or omitting elements, wherein the NM request and the EM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, wherein the VNFM response comprises an identifier for the PM job suspended, and wherein the EM response comprises a parameter, status, indicating a result of the PM job suspension.

[00139] Example 43 is an apparatus including the subject matter of examples 38-42, including or omitting elements, wherein the NM request and the EM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VNFM response comprises an identifier for the PM job resumed and the EM response comprises a parameter, status, indicating a result of the PM job resumption.

[00140] Example 44 is an apparatus including the subject matter of examples 38-43, including or omitting elements, wherein the NM request and the EM request to list PM jobs include criteria to list one or more PM jobs meeting the criteria, and wherein the VNFM response and the EM response comprise the list of the one or more PM jobs that meets the criteria.

[00141] Example 45 is an apparatus for use in a virtualized network function manager (VNFM) comprising means for processing an element manager (EM) request comprising a request associated with one or more performance measurement (PM) jobs, received 
from an EM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and means for outputting a VNFM response to the EM, in response to the EM request, wherein the VNFM response comprises information on the respective identifiers that identify the one or more PM jobs.

[00142] Example 46 is an apparatus including the subject matter of example 45, further comprising means for outputting a VNFM request comprising a request associated with the one or more PM jobs to a virtualized infrastructure manager (VIM), wherein the VNFM request is generated based on the received EM request; and means for processing a VIM response received from the VIM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs, when the request associated with the one or more PM jobs is successfully completed, prior to generating the VNFM response.

[00143] Example 47 is an apparatus including the subject matter of examples 45-46, including or omitting elements, further comprising means for processing a VIM notification received from the VIM in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances; and means for outputting a VNFM notification generated based on the received VIM notification, to the EM, wherein the VNFM notification indicates an availability of the VR PM data associated with the VNF instance.

[00144] Example 48 is an 
apparatus including the subject matter of examples 45-47, including or omitting elements, wherein the EM request and the VNFM request to create a PM job include an object instance identifier that identifies the VNF instance for which a PM job is to be created, and wherein the VIM response and the VNFM response comprise a PM job identifier for the PM job created.

[00145] Example 49 is an apparatus including the subject matter of examples 45-48, including or omitting elements, wherein the EM request and the VNFM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, and wherein the VIM response and the VNFM response comprise an identifier for the PM job deleted.

[00146] Example 50 is an apparatus including the subject matter of examples 45-49, including or omitting elements, wherein the EM request and the VNFM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job suspended.

[00147] Example 51 is an apparatus including the subject matter of examples 45-50, including or omitting elements, wherein the EM request and the VNFM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job resumed.

[00148] Example 52 is an apparatus including the subject matter of examples 45-51, including or omitting elements, wherein the EM request and the VNFM request to list one or more PM jobs include criteria to list the one or more PM jobs meeting the criteria, and wherein the VIM response and the VNFM response comprise the list of the one or more PM jobs that meets the criteria.

[00149] Example 53 is an apparatus for use in a virtualized infrastructure manager (VIM) comprising means for processing a virtualized network function 
manager (VNFM) request comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM in a network function virtualization (NFV) network, wherein each of the one or more PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and means for outputting a VIM response to the VNFM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs.

[00150] Example 54 is an apparatus including the subject matter of example 53, further comprising means for outputting a VIM notification in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00151] Example 55 is an apparatus including the subject matter of examples 53-54, including or omitting elements, wherein the VNFM request to create a PM job includes an object instance identifier that identifies the VNF instance for which the PM job is to be created, and wherein the VIM response comprises a PM job identifier for the PM job created.

[00152] Example 56 is an apparatus including the subject matter of examples 53-55, including or omitting elements, wherein the VNFM request to delete a PM job includes a PM job identifier that identifies the PM job to be deleted, and wherein the VIM response comprises an identifier for the PM job deleted.

[00153] Example 57 is an apparatus including the subject matter of examples 53-56, including or omitting elements, wherein the VNFM request to suspend a PM job 
includes a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response comprises an identifier for the PM job suspended.

[00154] Example 58 is an apparatus including the subject matter of examples 53-57, including or omitting elements, wherein the VNFM request to resume a PM job includes a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response comprises an identifier for the PM job resumed.

[00155] Example 59 is an apparatus including the subject matter of examples 53-58, including or omitting elements, wherein the VNFM request to list one or more PM jobs includes criteria to list one or more PM jobs meeting the criteria, and wherein the VIM response comprises the list of the one or more PM jobs that meets the criteria.

[00156] Example 60 is an apparatus including the subject matter of examples 31-32, including or omitting elements, further comprising means for retrieving the VR PM data from a PM data repository associated with the NFV network, upon receiving the EM notification.

[00157] Example 61 is an apparatus configured to be employed within a Network Manager (NM), comprising one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to output an NM request comprising a request associated with one or more performance measurement (PM) jobs to an element manager (EM) in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and process an EM 
response received from the EM, in response to the NM request, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[00158] Example 62 is an apparatus including the subject matter of example 61, wherein the instructions comprise further operations, for execution via the one or more processors, to process an EM notification received from the EM, when the one or more PM jobs are in operation, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00159] Example 63 is an apparatus including the subject matter of examples 61-62, including or omitting elements, wherein the request to create a PM job includes an instance identifier, iOCInstanceList, that identifies the VNF instance for which the PM job is to be created and the EM response comprises a PM job identifier, JobId, for the PM job created and a parameter, status, indicating a result of the PM job creation.

[00160] Example 64 is an apparatus including the subject matter of examples 61-63, including or omitting elements, wherein the request to stop the PM job includes a PM job identifier, JobId, that identifies the PM job to be stopped, and the EM response comprises a parameter, status, indicating a result of stopping the PM job.

[00161] Example 65 is an apparatus including the subject matter of examples 61-64, including or omitting elements, wherein the request to suspend the PM job includes a PM job identifier, JobId, that identifies the PM job to be suspended, and the EM response comprises a parameter, status, indicating a result of the PM job suspension.

[00162] Example 66 is an apparatus including the subject matter of examples 61-65, including or omitting elements, wherein the request to resume the PM job includes a PM job identifier, JobId, that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and the EM response comprises a parameter, status, indicating a result 
of the PM job resumption.

[00163] Example 67 is an apparatus including the subject matter of examples 61-66, including or omitting elements, wherein the request to list one or more PM jobs includes criteria to list the PM jobs, jobIdList, and the EM response comprises information on PM jobs that match the criteria, jobInfoList, and a parameter, status, indicating a result of the PM job listing.

[00164] Example 68 is an apparatus configured to be employed within an Element Manager (EM), comprising one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to process a network manager (NM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an NM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to stop or delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; output an EM request comprising a request associated with the one or more PM jobs to a virtualized network function manager (VNFM), wherein the EM request is generated based on the received NM request; process a VNFM response received from the VNFM in response to the EM request, wherein the VNFM response comprises information on respective identifiers that identify the one or more PM jobs; and output an EM response generated based on the received VNFM response, to the NM, wherein the EM response comprises a parameter, status, indicating a result of the respective PM jobs.

[00165] Example 69 is an apparatus including the subject matter of example 68, wherein the instructions comprise further 
operations, for execution via the one or more processors, to process a VNFM notification received from the VNFM in response to the EM request when the one or more PM jobs are in operation, wherein the VNFM notification indicates an availability of the VR PM data associated with the respective VNF instances; and output an EM notification generated based on the received VNFM notification, to the NM, wherein the EM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00166] Example 70 is an apparatus including the subject matter of examples 68-69, including or omitting elements, wherein the NM request and the EM request to create a PM job include a respective instance identifier that identifies the VNF instance for which the PM job is to be created, wherein the VNFM response comprises a PM job identifier for the PM job created, and wherein the EM response comprises a PM job identifier for the PM job created and a parameter, status, indicating a result of the PM job creation.

[00167] Example 71 is an apparatus including the subject matter of examples 68-70, including or omitting elements, wherein the NM request and the EM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, wherein the VNFM response comprises an identifier for the PM job deleted, and wherein the EM response comprises a parameter, status, indicating a result of the PM job deletion.

[00168] Example 72 is an apparatus including the subject matter of examples 68-71, including or omitting elements, wherein the NM request and the EM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, wherein the VNFM response comprises an identifier for the PM job suspended, and wherein the EM response comprises a parameter, status, indicating a result of the PM job suspension.

[00169] Example 73 is an apparatus including the subject matter of examples 68-72, including or omitting 
elements, wherein the NM request and the EM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VNFM response comprises an identifier for the PM job resumed and the EM response comprises a parameter, status, indicating a result of the PM job resumption.

[00170] Example 74 is an apparatus including the subject matter of examples 68-73, including or omitting elements, wherein the NM request and the EM request to list PM jobs include criteria to list one or more PM jobs meeting the criteria, and wherein the VNFM response and the EM response comprise the list of the one or more PM jobs that meets the criteria.

[00171] Example 75 is an apparatus configured to be employed within a virtualized network function manager (VNFM), comprising one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to process an element manager (EM) request comprising a request associated with one or more performance measurement (PM) jobs, received from an EM in a network function virtualization (NFV) network, wherein each of the PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and output a VNFM response to the EM, in response to the EM request, wherein the VNFM response comprises information on the respective identifiers that identify the one or more PM jobs.

[00172] Example 76 is an apparatus including the subject matter of example 75, wherein the instructions comprise further operations, for execution via the one or more processors, 
to output a VNFM request comprising a request associated with the one or more PM jobs to a virtualized infrastructure manager (VIM), wherein the VNFM request is generated based on the received EM request; and process a VIM response received from the VIM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs, when the request associated with the one or more PM jobs is successfully completed, prior to outputting the VNFM response.

[00173] Example 77 is an apparatus including the subject matter of examples 75-76, including or omitting elements, wherein the instructions comprise further operations, for execution via the one or more processors, to process a VIM notification received from the VIM in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances; and output a VNFM notification generated based on the received VIM notification, to the EM, wherein the VNFM notification indicates an availability of the VR PM data associated with the VNF instance.

[00174] Example 78 is an apparatus including the subject matter of examples 75-77, including or omitting elements, wherein the EM request and the VNFM request to create a PM job include an object instance identifier that identifies the VNF instance for which a PM job is to be created, and wherein the VIM response and the VNFM response comprise a PM job identifier for the PM job created.

[00175] Example 79 is an apparatus including the subject matter of examples 75-78, including or omitting elements, wherein the EM request and the VNFM request to delete a PM job include a PM job identifier that identifies the PM job to be deleted, and wherein the VIM response and the VNFM response comprise an identifier for the PM job deleted.

[00176] Example 80 is an apparatus including the subject matter of examples 
75-79, including or omitting elements, wherein the EM request and the VNFM request to suspend a PM job include a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job suspended.

[00177] Example 81 is an apparatus including the subject matter of examples 75-80, including or omitting elements, wherein the EM request and the VNFM request to resume a PM job include a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response and the VNFM response comprise an identifier for the PM job resumed.

[00178] Example 82 is an apparatus including the subject matter of examples 75-81, including or omitting elements, wherein the EM request and the VNFM request to list one or more PM jobs include criteria to list the one or more PM jobs meeting the criteria, and wherein the VIM response and the VNFM response comprise the list of the one or more PM jobs that meets the criteria.

[00179] Example 83 is an apparatus configured to be employed within a virtualized infrastructure manager (VIM) comprising one or more processors; and a memory including instructions comprising operations, for execution via the one or more processors, to process a virtualized network function manager (VNFM) request comprising a request associated with one or more performance measurement (PM) jobs, received from a VNFM in a network function virtualization (NFV) network, wherein each of the one or more PM jobs is configured to collect virtualization resource (VR) PM data associated with a virtual network function (VNF) instance of the NFV network, wherein the VNF instance implements a network function associated with an evolved packet core (EPC) network, and wherein the request is a request to create a PM job, a request to delete a PM job, a request to suspend a PM job, a request to resume a PM job or a request to list PM jobs; and 
output a VIM response to the VNFM, in response to the VNFM request, wherein the VIM response comprises information on respective identifiers that identify the one or more PM jobs.

[00180] Example 84 is an apparatus including the subject matter of example 83, wherein the instructions comprise further operations, for execution via the one or more processors, to output a VIM notification in response to the VNFM request when the one or more PM jobs are in operation, wherein the VIM notification indicates an availability of the VR PM data associated with the respective VNF instances.

[00181] Example 85 is an apparatus including the subject matter of examples 83-84, including or omitting elements, wherein the VNFM request to create a PM job includes an object instance identifier that identifies the VNF instance for which the PM job is to be created, and wherein the VIM response comprises a PM job identifier for the PM job created.

[00182] Example 86 is an apparatus including the subject matter of examples 83-85, including or omitting elements, wherein the VNFM request to delete a PM job includes a PM job identifier that identifies the PM job to be deleted, and wherein the VIM response comprises an identifier for the PM job deleted.

[00183] Example 87 is an apparatus including the subject matter of examples 83-86, including or omitting elements, wherein the VNFM request to suspend a PM job includes a PM job identifier that identifies the PM job to be suspended, and wherein the VIM response comprises an identifier for the PM job suspended.

[00184] Example 88 is an apparatus including the subject matter of examples 83-87, including or omitting elements, wherein the VNFM request to resume a PM job includes a PM job identifier that identifies the PM job to be resumed, wherein the PM job has been previously suspended, and wherein the VIM response comprises an identifier for the PM job resumed.

[00185] Example 89 is an apparatus including the subject matter of examples 83-88, 
including or omitting elements, wherein the VNFM request to list one or more PM jobs includes criteria to list one or more PM jobs meeting the criteria, and wherein the VIM response comprises the list of the one or more PM jobs that meets the criteria.

[00186] Various illustrative logics, logical blocks, modules, and circuits described in connection with aspects disclosed herein can be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor can be a microprocessor, but, in the alternative, the processor can be any conventional processor, controller, microcontroller, or state machine.

[00187] The above description of illustrated embodiments of the subject disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described herein for illustrative purposes, various modifications are possible that are considered within the scope of such embodiments and examples, as those skilled in the relevant art can recognize.

[00188] In this regard, while the disclosed subject matter has been described in connection with various embodiments and corresponding Figures, where applicable, it is to be understood that other similar embodiments can be used or modifications and additions can be made to the described embodiments for performing the same, similar, alternative, or substitute function of the disclosed subject matter without deviating therefrom. 
Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, but rather should be construed in breadth and scope in accordance with the appended claims below.

[00189] In particular regard to the various functions performed by the above described components (assemblies, devices, circuits, systems, etc.), the terms (including a reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component or structure which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
A computer system includes a memory hub (130) for coupling a processor (104) to a plurality of synchronous dynamic random access memory ("SDRAM") devices (140a-c). The memory hub (130) includes a processor interface (150) coupled to the processor (104) and a plurality of memory interfaces (170a-c) coupled to respective SDRAM devices (140a-c). The processor interface (150) is coupled to the memory interfaces (170a-c) by a switch (160). Each of the memory interfaces (170a-c) includes a memory controller (180), a cache memory (184), and a prediction unit (190). The cache memory (184) stores data recently read from or written to the respective SDRAM device (140a-c) so that it can be subsequently read by the processor (104) with relatively little latency. The prediction unit (190) prefetches data from an address from which a read access is likely based on a previously accessed address.
CLAIMS

1. A memory hub, comprising: a memory access device interface structured to interface with a memory access device; a plurality of memory interfaces structured to interface with respective memory devices, each of the memory interfaces including a memory controller and a memory cache; and a switch coupling the memory access device interface to each of the memory interfaces.

2. The memory hub of claim 1 wherein the memory access device interface comprises a processor interface structured to interface with a processor.

3. The memory hub of claim 1 wherein each of the memory interfaces further comprises a prediction unit structured to predict an address from which data are likely to be read based on an address from a prior memory access and to cause the memory controller in the respective memory interface to output signals indicative of a memory read operation from the predicted address.

4. The memory hub of claim 3 wherein the prediction unit is further structured to cause the memory interface to store in the cache memory read data received responsive to the signals indicative of a memory read operation.

5. The memory hub of claim 1 wherein each of the memory interfaces operates at the same clock speed.

6. The memory hub of claim 1 wherein the switch comprises a cross-bar switch.

7. The memory hub of claim 1 wherein the switch comprises a multiplexer switch.

8. The memory hub of claim 1 wherein the cache memory comprises dynamic random access memory.

9.
A memory hub, comprising: a memory access device interface structured to interface with a memory access device; a plurality of memory interfaces structured to interface with respective memory devices, each of the memory interfaces including a memory controller and a prediction unit structured to predict an address from which data are likely to be read based on an address from a prior memory access and to cause the memory controller in the respective memory interface to output signals indicative of a memory read operation from the predicted address; and a switch coupling the memory access device interface to the memory interfaces.

10. The memory hub of claim 9 wherein the memory access device interface comprises a processor interface.

11. The memory hub of claim 9 wherein each of the memory interfaces operates at the same clock speed.

12. The memory hub of claim 9 wherein the switch comprises a cross-bar switch.

13. The memory hub of claim 9 wherein the switch comprises a multiplexer switch.

14. A computer system, comprising: a processing unit operable to perform computing functions; a system controller coupled to the processing unit; at least one input device coupled to the processing unit through the system controller; at least one output device coupled to the processing unit through the system controller; at least one data storage device coupled to the processing unit through the system controller; a plurality of memory devices; and a memory hub comprising: a processor interface coupled to the processor; a plurality of memory interfaces coupled to respective ones of the memory devices, each of the memory interfaces including a memory controller and a memory cache; and a switch coupling the processor interface to each of the memory interfaces.

15. The computer system of claim 14 wherein the memory hub is physically included in the system controller.

16.
The computer system of claim 14 wherein the plurality of memory devices are physically packaged in a memory module, and wherein the memory hub is physically included in the memory module.

17. The computer system of claim 14 wherein each of the memory interfaces further comprises a prediction unit structured to predict an address from which data are likely to be read based on an address from a prior memory access and to cause the memory controller in the respective memory interface to apply to the memory device to which the memory interface is coupled output signals indicative of a memory read operation from the predicted address.

18. The computer system of claim 17 wherein the prediction unit is further structured to cause the memory interface to store in the cache memory read data received from the respective memory device responsive to the signals indicative of a memory read operation.

19. The computer system of claim 14 wherein each of the memory interfaces operates at the same clock speed.

20. The computer system of claim 14 wherein the switch comprises a cross-bar switch.

21. The computer system of claim 14 wherein the switch comprises a multiplexer switch.

22. The computer system of claim 14 wherein the cache memory comprises dynamic random access memory.

23. The computer system of claim 14 wherein each of the memory devices comprises a dynamic random access memory device.

24. The computer system of claim 23 wherein each of the dynamic random access memory devices comprises a synchronous dynamic random access memory device.

25.
A computer system, comprising: a processing unit operable to perform computing functions; a system controller coupled to the processing unit; at least one input device coupled to the processing unit through the system controller; at least one output device coupled to the processing unit through the system controller; at least one data storage device coupled to the processing unit through the system controller; a plurality of memory devices; and a memory hub comprising: a processor interface coupled to the processor; a plurality of memory interfaces coupled to respective ones of the memory devices, each of the memory interfaces including a memory controller and a prediction unit structured to predict an address from which data are likely to be read based on an address from a prior memory access and to cause the memory controller in the respective memory interface to output to the memory device to which the memory interface is coupled signals indicative of a memory read operation from the predicted address; and a switch coupling the processor interface to each of the memory interfaces.

26. The computer system of claim 25 wherein the memory hub is physically included in the system controller.

27. The computer system of claim 25 wherein the plurality of memory devices are physically packaged in a memory module, and wherein the memory hub is physically included in the memory module.

28. The computer system of claim 25 wherein each of the memory interfaces operates at the same clock speed.

29. The computer system of claim 25 wherein the switch comprises a cross-bar switch.

30. The computer system of claim 25 wherein the switch comprises a multiplexer switch.

31. The computer system of claim 25 wherein each of the memory devices comprises a dynamic random access memory device.

32.
A method of accessing a plurality of memory devices, comprising: directing a memory access request to a first of a plurality of memory devices coupled to a memory hub; storing data read from or written to the first memory device in a cache memory located in the memory hub; subsequently directing a memory read request to the first memory device; in response to the memory read request, detecting whether the data corresponding to the memory read request are stored in the cache memory located in the memory hub; if the data corresponding to the memory read request are determined to be stored in the cache memory located in the memory hub, providing the read data from the cache memory; and if the data corresponding to the memory read request are determined to be not stored in the cache memory located in the memory hub, providing the read data from the first memory device.

33. The method of claim 32 further comprising: predicting an address from which data are likely to be read from the first memory device based on an address from a prior memory access to the first memory device; providing read data from the predicted address in the first memory device; and storing the read data from the predicted address in the cache memory in the memory hub.

34. The method of claim 32 wherein the act of storing data read from or written to the first memory device in a cache memory in the memory hub comprises storing the data read from or written to the first memory device in a cache memory dedicated to the first memory device.

35. The method of claim 33 wherein the memory access request on which the prediction was based comprises a read memory access.

36. The method of claim 33 wherein the memory access request on which the prediction was based comprises a write memory access.

37.
A method of accessing a plurality of memory devices, comprising: directing memory access requests to respective addresses in a plurality of memory devices coupled to a memory hub; within the memory hub, predicting at least one address from which data are likely to be read from the first memory device based on the addresses to which the memory access requests were directed; and providing respective read data from the predicted addresses in the memory devices prior to receiving memory read requests directed to the predicted addresses.

38. The method of claim 37 wherein the memory access requests on which the predictions were based comprise read memory requests.

39. The method of claim 37 wherein the memory access requests on which the predictions were based comprise write memory requests.
MEMORY HUB WITH INTERNAL CACHE AND/OR MEMORY ACCESS PREDICTION

TECHNICAL FIELD

This invention relates to computer systems, and, more particularly, to a computer system having a memory hub coupling several memory devices to a processor or other memory access device.

BACKGROUND OF THE INVENTION

Computer systems use memory devices, such as dynamic random access memory ("DRAM") devices, to store instructions and data that are accessed by a processor. In a typical computer system, the processor communicates with the system memory through a processor bus and a memory controller. The processor issues a command, such as a read command, and an address designating the location from which data or instructions are to be read. The memory controller uses the command and address to generate appropriate command signals as well as row and column addresses, which are applied to the system memory. In response to the commands and addresses, data are transferred between the system memory and the processor. The memory controller is often part of a system controller, which also includes bus bridge circuitry for coupling the processor bus to an expansion bus, such as a PCI bus.

Although the operating speed of memory devices has continuously increased, this increase in operating speed has not kept pace with increases in the operating speed of processors. Even slower has been the increase in operating speed of memory controllers coupling processors to memory devices. The relatively low speed of memory controllers and memory devices limits the communication bandwidth between the processor and the memory devices. In addition to the limited bandwidth between processors and memory devices, the performance of computer systems is also limited by latency problems that increase the time required to read data from system memory devices.
More specifically, when a memory device read command is coupled to a system memory device, such as a synchronous DRAM ("SDRAM") device, the read data is output from the SDRAM device only after a delay of several clock periods. Therefore, although SDRAM devices can synchronously output burst data at a high data rate, the delay in initially providing the data can significantly slow the operating speed of a computer system using such SDRAM devices.

One approach to alleviating the memory latency problem is to use multiple memory devices coupled to the processor through a memory hub. Computer systems employing this architecture can have a higher bandwidth because a processor can access one memory device while another memory device is responding to a prior memory access. For example, the processor can output write data to one of the memory devices in the system while another memory device in the system is preparing to provide read data to the processor. However, although computer systems using memory hubs may provide superior performance, they nevertheless often fail to operate at optimum speed. One of the reasons such computer systems fail to operate at optimum speed is that conventional memory hubs are essentially single channel systems since all control, address and data signals must pass through common memory hub circuitry. As a result, when the memory hub circuitry is busy communicating with one memory device, it is not free to communicate with another memory device. Furthermore, although computer systems using memory hubs can provide a greater memory bandwidth, they still suffer from latency problems of the type described above. More specifically, although the processor may communicate with one memory device while another memory device is preparing to transfer data, it is sometimes necessary to receive data from one memory device before the data from another memory device can be used.
In the event data must be received from one memory device before data received from another memory device can be used, the latency problem continues to slow the operating speed of such computer systems. There is therefore a need for a computer architecture that provides the advantages of a memory hub architecture and also minimizes the latency problems common in such systems, thereby providing memory devices with high bandwidth and low latency.

SUMMARY OF THE INVENTION

A memory hub that may be used in a computer system includes a memory access device interface coupled to a processor or other memory access device, and a plurality of memory interfaces each of which is coupled to a respective memory device. Each of the memory interfaces includes a memory controller and, according to one aspect of the invention, a memory cache. Each of the memory interfaces is coupled to the memory access device interface by a switch. In operation, data read from or written to a memory device coupled to one of the memory interfaces are stored in the cache memory for the memory interface. In response to a subsequent memory read request, the cache memory is checked to determine whether the data corresponding to the memory read request are stored in the cache memory. In the event of a cache hit, the requested data are provided from the cache memory. Otherwise, the requested data are provided by the memory device. According to another aspect of the invention, each memory interface includes a memory controller and a prediction unit. The prediction unit predicts an address from which data are likely to be read based on an address from a prior memory access. The prediction unit then causes the memory controller in the respective memory interface to read data from the predicted address. The memory hub may be physically included in a system controller, a memory module, or some other component of a computer system or other electronic system using memory devices.
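The hit/miss flow described in this summary can be sketched as a toy model. This is an illustration only, not the patented circuit; the class and method names (MemoryInterface, read, write) and the use of a dict in place of an SDRAM device are all invented for the sketch.

```python
class MemoryInterface:
    """Toy model of one memory interface: a memory controller paired
    with its own cache memory. A dict stands in for the memory device
    behind the interface."""

    def __init__(self, device):
        self.device = device   # address -> data; models the memory device
        self.cache = {}        # per-interface cache memory

    def read(self, address):
        # Cache hit: the requested data are provided from the cache.
        if address in self.cache:
            return self.cache[address], "hit"
        # Cache miss: the requested data are provided by the device,
        # then cached so a subsequent read has relatively little latency.
        data = self.device[address]
        self.cache[address] = data
        return data, "miss"

    def write(self, address, data):
        # Write data are cached as well as forwarded to the device.
        self.cache[address] = data
        self.device[address] = data
```

In this model, a first read of an address misses and falls through to the device, while a repeated read, or a read of recently written data, is served from the cache.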
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of a computer system according to one embodiment of the invention in which the memory hub is included in a system controller.

Figure 2 is a block diagram of a computer system according to another embodiment of the invention in which the memory hub is included in a memory module.

Figure 3 is a block diagram of a memory hub used in the computer systems of Figures 1 and 2.

DETAILED DESCRIPTION OF THE INVENTION

A computer system 100 according to one embodiment of the invention is shown in Figure 1. The computer system 100 includes a processor 104 for performing various computing functions, such as executing specific software to perform specific calculations or tasks. The processor 104 includes a processor bus 108 that normally includes an address bus, a control bus, and a data bus. In addition, the computer system 100 includes one or more input devices 108, such as a keyboard or a mouse, coupled to the processor 104 through a system controller 110 to allow an operator to interface with the computer system 100. Typically, the computer system 100 also includes one or more output devices 114 coupled to the processor 104 through the system controller 110, such output devices typically being a printer or a video terminal. One or more data storage devices 120 are also typically coupled to the processor 104 through the system controller 110 to allow the processor 104 to store data or retrieve data from internal or external storage media (not shown). Examples of typical storage devices 120 include hard and floppy disks, tape cassettes, and compact disk read-only memories (CD-ROMs). The processor 104 is also typically coupled to cache memory 124, which is usually static random access memory ("SRAM").
The system controller 110 also includes a memory hub 130 for controlling access to several system memory devices 140a-d, each of which may be a synchronous dynamic random access memory ("SDRAM"). The memory hub 130 allows the processor 104 to write data to and read data from each of the system memory devices 140a-d. The memory hub 130 is coupled to each of the system memory devices 140a-d through a bus system 142, which normally includes a control bus, an address bus and a data bus. Although the memory hub 130 is shown in Figure 1 coupled to the processor 104, it will be understood that the memory hub 130 may also be coupled to other components in a computer system chipset (not shown), and may also allow other devices (not shown) to write data to and read data from the system memory devices 140a-d in a direct memory operation, as is well known in the art. Also, the memory hub 130 may be physically included as a part of components of an electronic system other than the system controller 110. For example, a computer system 144 shown in Figure 2 uses most of the same components that are used in the computer system 100 of Figure 1. In the interest of brevity, such common components have been provided with the same reference numerals, and an explanation of their operation will not be repeated. The computer system 144 differs from the computer system 100 shown in Figure 1 in that the memory hub 130 is not included in the system controller 110. Instead, the system controller 110 is coupled to a plurality of memory modules 146, such as dual in-line memory modules ("DIMMs"). Each of the memory modules 146 includes the memory hub 130 and a plurality of memory devices 148, which may be SDRAM or some other type of memory device. The memory hub 130 operates in essentially the same manner explained above with reference to Figure 1 to cache data stored in the memory modules 146.
Although Figures 1 and 2 show the memory hub 130 included in the system controller 110 and the memory modules 146, respectively, it will be understood that the memory hub 130 may be a stand-alone unit or it may be included in other components of a computer system or other system using memory devices.

One embodiment of the memory hub 130 is shown in Figure 3 in which the memory hub 130 is coupled to the processor 104 and three memory devices 140a-c, which, in the example illustrated in Figure 3, are SDRAM devices. The memory hub 130 is shown coupled to the processor 104 in a point-to-point arrangement in which there are no other devices coupled to the connection between the processor 104 and the memory hub 130. This type of interconnection provides better signal coupling between the processor 104 and the memory hub 130 for several reasons, including relatively low capacitance, relatively few line discontinuities to reflect signals and relatively short signal paths. However, a multi-drop interconnection may alternatively be used in which other devices (not shown) are coupled to the interconnection between the processor 104 and the memory hub 130.

The memory hub 130 includes a processor interface 150 that is coupled to the processor 104 through a plurality of bus and signal lines, as is well known in the art. The processor interface 150 is, in turn, coupled to a switch 160 through a plurality of bus and signal lines, including a write data bus 154 and a read data bus 156, although a single bi-directional data bus may alternatively be provided to couple data in both directions between the processor interface 150 and the switch 160. The processor interface 150 is also coupled to the switch 160 through a request line 164 and a snoop line 168. A snoop signal coupled from the switch 160 to the processor interface 150 through the snoop line 168 is used to maintain cache consistency, as will be described in greater detail below.
A request signal coupled from the processor interface 150 to the switch 160 through the request line 164 provides the switch 160 with information corresponding to a request to transfer data through the switch 160. It will be understood, however, that the processor interface 150 may be coupled to the switch 160 with a greater or lesser number of buses and signal lines, or buses and signal lines different from those illustrated in Figure 3.

The switch 160 is also coupled to three memory interfaces 170a-c which are, in turn, coupled to the system memory devices 140a-c, respectively. By providing a separate and independent memory interface 170a-c for each system memory device 140a-c, respectively, the memory hub 130 avoids bus or memory bank conflicts that typically occur with single channel memory architectures. The switch 160 is coupled to each memory interface through a plurality of bus and signal lines, including a write data bus 174, a read data bus 176 and a request line 178. However, it will be understood that a single bi-directional data bus may alternatively be used instead of a separate write data bus 174 and read data bus 176. Significantly, each memory interface 170a-c is specially adapted to the system memory devices 140a-c to which it is coupled. More specifically, each memory interface 170a-c is specially adapted to provide and receive the specific signals received and generated, respectively, by the system memory device 140a-c to which it is coupled. Also, the memory interfaces 170a-c are capable of operating with system memory devices 140a-c operating at different clock frequencies. As a result, the memory interfaces 170a-c isolate the processor 104 from changes that may occur at the interface between the memory hub 130 and the memory devices 140a-c coupled to the hub 130, and provide a more controlled environment to which the memory devices 140a-c may interface.
The switch 160 coupling the processor interface 150 to the memory interfaces 170a-c can be any of a variety of conventional or hereinafter developed switches. For example, the switch 160 may be a cross-bar switch that can simultaneously couple the processor interface 150 and the memory interfaces 170a-c to each other. The switch 160 can also be a set of multiplexers that do not provide the same level of connectivity as a cross-bar switch but nevertheless can couple the processor interface 150 to each of the memory interfaces 170a-c. The switch 160 may also include arbitration logic (not shown) to determine which memory accesses should receive priority over other memory accesses. Bus arbitration performing this function is well known to one skilled in the art.

With further reference to Figure 3, each of the memory interfaces 170a-c includes a respective memory controller 180 and a respective cache memory unit 184. The memory controller 180 performs the same functions as a conventional memory controller by providing control, address and data signals to the system memory device 140a-c to which it is coupled and receiving data signals from the system memory device 140a-c to which it is coupled. The cache memory unit 184 includes the normal components of a cache memory, including a tag memory, a data memory and a comparator, as is well known in the art. The memory devices used in the cache memory unit 184 may be either DRAM devices, static random access memory ("SRAM") devices, other types of memory devices, or a combination of all three. Furthermore, any or all of these memory devices as well as the other components used in the cache memory unit 184 may be either embedded or stand-alone devices.
The use of the cache memory unit 184 in each memory interface 170a-c allows the processor 104 to receive data responsive to a read command directed to a respective system memory device 140a-c without waiting for the memory device 140a-c to provide such data in the event that the data was recently read from or written to that memory device 140a-c. The cache memory unit 184 thus reduces the read latency of the system memory devices 140a-c to maximize the memory bandwidth of the computer system. Similarly, the processor 104 can store write data in the cache memory unit 184 and then perform other functions while the memory controller 180 in the same memory interface 170a-c transfers the write data from the cache memory unit 184 to the system memory device 140a-c to which it is coupled.

To further reduce the memory access latency provided by the memory hub 130, each memory interface 170a-c may be provided with a prefetch unit 190. The prefetch unit 190 is able to predict the likely address of a subsequent memory read request using conventional algorithms. The memory controller 180 in the same memory interface 170a-c can then perform the memory access in the background while the processor 104 is either accessing a different system memory device 140 or performing other functions. When the processor 104 subsequently provides a command to the memory hub 130 to read data from the predicted address, the read data will already be present in the cache memory unit 184 and can thus be quickly provided to the processor 104.

From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
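One of the "conventional algorithms" a prefetch unit of this kind might use is a simple stride predictor. The sketch below is an assumption for illustration only (the description does not specify the algorithm), and the names StridePrefetcher and observe are invented.

```python
class StridePrefetcher:
    """Guess the next read address from the stride between the two most
    recent accesses, so a memory controller could fetch it into the
    cache in the background before the processor asks for it."""

    def __init__(self):
        self.last_addr = None   # most recently observed address
        self.stride = None      # difference between the last two addresses

    def observe(self, address):
        """Record an access; return a predicted next address, or None
        when there is not yet enough history to predict."""
        if self.last_addr is not None:
            self.stride = address - self.last_addr
        self.last_addr = address
        if self.stride:
            return address + self.stride
        return None
```

For example, after observing reads at 0x100 and 0x140 the predictor guesses 0x180, which the controller could read into the cache memory unit ahead of the processor's next request.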
Various embodiments of methods and systems for energy efficiency aware thermal management in a portable computing device that contains a heterogeneous, multi-processor system on a chip ("SoC") are disclosed. Because individual processing components in a heterogeneous, multi-processor SoC may exhibit different processing efficiencies at a given temperature, energy efficiency aware thermal management techniques that compare performance data of the individual processing components at their measured operating temperatures can be leveraged to optimize quality of service ("QoS") by adjusting the power supplies to, reallocating workloads away from, or transitioning the power mode of, the least energy efficient processing components. In these ways, embodiments of the solution optimize the average amount of power consumed across the SoC to process a MIPS of workload.
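The select-and-throttle step described above can be sketched as follows. The watts-per-MIPS lookup table keyed by measured temperature and the fixed 10% voltage/frequency step are invented placeholders; a real implementation would use the SoC's characterized performance data and its DCVS framework.

```python
def least_efficient(temps, perf_data):
    """temps: {component: measured temperature in C};
    perf_data: {(component, temp): watts consumed per MIPS at that temp}.
    Returns the component that burns the most power per unit of work,
    i.e. the least energy efficient one at current temperatures."""
    return max(temps, key=lambda c: perf_data[(c, temps[c])])

def throttle(dvfs, component, step=0.9):
    """Reduce the component's dedicated supply voltage and clock
    frequency by a fixed step (10% here, an arbitrary choice)."""
    volts, mhz = dvfs[component]
    dvfs[component] = (volts * step, mhz * step)
    return dvfs[component]
```

On a thermal alarm, the manager would sample temperatures, call `least_efficient` to pick the worst power-per-work offender, and `throttle` it, repeating until the alarm clears, which matches the intent of optimizing average power consumed per MIPS across the SoC.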
1. A method for managing thermal energy generation in a portable computing device having a heterogeneous multiprocessor system on a chip ("SoC"), the method comprising: monitoring temperature readings uniquely associated with each of a plurality of individual processing components of the multiprocessor SoC, wherein each processing component is associated with a dedicated supply voltage and clock generator frequency; monitoring a thermal parameter; receiving an alarm indicating that a threshold associated with the thermal parameter has been exceeded; sampling the monitored temperature readings uniquely associated with each of the processing components; querying performance data for each processing component based on the sampled temperature readings, wherein the performance data represent a relationship between power consumption and workload processing capability when a given individual processing component operates at a given temperature; comparing the performance data for each processing component to identify a least energy efficient processing component; and adjusting the dedicated supply voltage and clock generator frequency of the least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the least energy efficient processing component.

2. The method of claim 1, wherein the least energy efficient processing component is a processing component that consumes the most power per unit of workload processed.

3. The method of claim 1, further comprising: determining that the alarm has not been cleared; re-sampling the monitored temperature readings uniquely associated with each of the processing components; re-querying performance data for each processing component based on the re-sampled temperature readings; comparing the re-queried performance data for each processing component to identify a new least energy efficient processing component; and adjusting the dedicated supply voltage and clock generator frequency of the new least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the new least energy efficient processing component.

4. The method of claim 1, further comprising: determining that the alarm has been cleared; and permitting an increase in the supply voltage and clock generator frequency for the least energy efficient processing component.

5. The method of claim 1, further comprising: transitioning the power mode of the least energy efficient processing component from an active mode to an idle mode.

6. The method of claim 5, further comprising: determining that the alarm has been cleared; and permitting the least energy efficient processing component to return to an active power mode.

7. The method of claim 1, wherein the thermal parameter is associated with one of: skin temperature, PoP memory temperature, junction temperature, and battery capacity.

8. The method of claim 1, wherein the portable computing device is in the form of a wireless telephone.

9. A computer system for managing thermal energy generation in a portable computing device having a heterogeneous multiprocessor system on a chip ("SoC"), the system comprising: a monitoring module for: monitoring temperature readings uniquely associated with each of a plurality of individual processing components of the multiprocessor SoC, wherein each processing component is associated with a dedicated supply voltage and clock generator frequency; monitoring a thermal parameter; receiving an alarm indicating that a threshold associated with the thermal parameter has been exceeded; and sampling the monitored temperature readings uniquely associated with each of the processing components; an efficiency manager ("EM") module for: querying performance data for each processing component based on the sampled temperature readings, wherein the performance data represent a relationship between power
consumption and workload processing capability when a given independent processing component operates at a given temperature; and
compare the performance data for each processing component to identify a least energy efficient processing component; and
a dynamic clock and voltage scaling ("DCVS") module configured to:
adjust the dedicated supply voltage and clock generator frequency of the least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the least energy efficient processing component.

10. The computer system of claim 9, wherein the least energy efficient processing component is the processing component that consumes the greatest amount of power per workload processed.

11. The computer system of claim 9, wherein:
the monitoring module is further configured to:
determine that the alarm has not been cleared; and
resample the monitored temperature readings uniquely associated with each of the processing components;
the EM module is further configured to:
requery performance data for each processing component based on the resampled temperature readings; and
compare the requeried performance data for each processing component to identify a new least energy efficient processing component; and
the DCVS module is further configured to:
adjust the dedicated supply voltage and clock generator frequency of the new least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the new least energy efficient processing component.

12. The computer system of claim 9, wherein:
the monitoring module is further configured to determine that the alarm has been cleared; and
the EM module is further configured to permit an increase of the supply voltage and clock generator frequency for the least energy efficient processing component.

13. The computer system of claim 9, wherein:
the monitoring module is further
configured to transition a power mode of the least energy efficient processing component from an active mode to an idle mode.

14. The computer system of claim 13, wherein:
the monitoring module is further configured to determine that the alarm has been cleared; and
the EM module is further configured to permit the least energy efficient processing component to return to the active power mode.

15. The computer system of claim 9, wherein the thermal parameter is associated with one of: a skin temperature, a PoP memory temperature, a junction temperature, and a battery capacity.

16. A computer system for managing thermal energy generation in a portable computing device having an asynchronous multiprocessor system on a chip ("SoC"), the system comprising:
means for monitoring temperature readings uniquely associated with each of a plurality of individual processing components of the multiprocessor SoC, wherein each processing component is associated with a dedicated supply voltage and clock generator frequency;
means for monitoring a thermal parameter;
means for receiving an alarm indicating that a threshold associated with the thermal parameter has been exceeded;
means for sampling the monitored temperature readings uniquely associated with each of the processing components;
means for querying performance data for each processing component based on the sampled temperature readings, wherein the performance data represents a relationship between power consumption and workload processing capability when a given independent processing component operates at a given temperature;
means for comparing the performance data for each processing component to identify a least energy efficient processing component; and
means for adjusting the dedicated supply voltage and clock generator frequency of the least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the least energy efficient processing component's
power consumption.

17. The computer system of claim 16, wherein the least energy efficient processing component is the processing component that consumes the greatest amount of power per workload processed.

18. The computer system of claim 16, further comprising:
means for determining that the alarm has not been cleared;
means for resampling the monitored temperature readings uniquely associated with each of the processing components;
means for requerying performance data for each processing component based on the resampled temperature readings;
means for comparing the requeried performance data for each processing component to identify a new least energy efficient processing component; and
means for adjusting the dedicated supply voltage and clock generator frequency of the new least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the new least energy efficient processing component.

19. The computer system of claim 16, further comprising:
means for determining that the alarm has been cleared; and
means for permitting an increase of the supply voltage and clock generator frequency for the least energy efficient processing component.

20. The computer system of claim 16, further comprising:
means for transitioning a power mode of the least energy efficient processing component from an active mode to an idle mode.

21. The computer system of claim 20, further comprising:
means for determining that the alarm has been cleared; and
means for permitting the least energy efficient processing component to return to the active power mode.

22. The computer system of claim 16, wherein the thermal parameter is associated with one of: a skin temperature, a PoP memory temperature, a junction temperature, and a battery capacity.

23. The computer system of claim 16, wherein the portable computing device is in the form of a wireless telephone.

24. A computer program product comprising a computer usable
medium having computer readable program code embodied therein, the computer readable program code adapted to be executed to implement a method for managing thermal energy generation in a portable computing device having an asynchronous multiprocessor system on a chip ("SoC"), the method comprising:
monitoring temperature readings uniquely associated with each of a plurality of individual processing components of the multiprocessor SoC, wherein each processing component is associated with a dedicated supply voltage and clock generator frequency;
monitoring a thermal parameter;
receiving an alarm indicating that a threshold associated with the thermal parameter has been exceeded;
sampling the monitored temperature readings uniquely associated with each of the processing components;
querying performance data for each processing component based on the sampled temperature readings, wherein the performance data represents a relationship between power consumption and workload processing capability when a given independent processing component operates at a given temperature;
comparing the performance data for each processing component to identify a least energy efficient processing component; and
adjusting the dedicated supply voltage and clock generator frequency of the least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the least energy efficient processing component.

25. The computer program product of claim 24, wherein the least energy efficient processing component is the processing component that consumes the greatest amount of power per workload processed.

26. The computer program product of claim 24, further comprising:
determining that the alarm has not been cleared;
resampling the monitored temperature readings uniquely associated with each of the processing components;
requerying performance data for each processing component based
on the resampled temperature readings;
comparing the requeried performance data for each processing component to identify a new least energy efficient processing component; and
adjusting the dedicated supply voltage and clock generator frequency of the new least energy efficient processing component, wherein adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the new least energy efficient processing component.

27. The computer program product of claim 24, further comprising:
determining that the alarm has been cleared; and
permitting an increase of the supply voltage and clock generator frequency for the least energy efficient processing component.

28. The computer program product of claim 24, further comprising:
transitioning a power mode of the least energy efficient processing component from an active mode to an idle mode.

29. The computer program product of claim 28, further comprising:
determining that the alarm has been cleared; and
permitting the least energy efficient processing component to return to the active power mode.

30. The computer program product of claim 24, wherein the thermal parameter is associated with one of: a skin temperature, a PoP memory temperature, a junction temperature, and a battery capacity.
Energy-Efficient Thermal Management in a Multiprocessor System on a Chip

Statement Regarding Related Applications

Under 35 U.S.C. § 119, the present application claims priority to U.S. Provisional Patent Application No. 61/977,013, filed on April 8, 2014, and entitled "SYSTEM AND METHOD FOR THERMAL MITIGATION IN A SYSTEM ON A CHIP," the entire contents of which are hereby incorporated by reference. In addition, under 35 U.S.C. § 119, the present application also claims priority to U.S. Provisional Patent Application No. 61/981,714, filed on April 18, 2014, and entitled "ENERGY EFFICIENCY AWARE THERMAL MANAGEMENT IN A HETEROGENEOUS MULTI-PROCESSOR SYSTEM ON A CHIP," the entire contents of which are hereby incorporated by reference. This application is related to two non-provisional applications filed with the U.S. Patent and Trademark Office on May 18, 2014, each entitled "ENERGY EFFICIENCY AWARE THERMAL MANAGEMENT IN A MULTI-PROCESSOR SYSTEM ON A CHIP" and assigned numbers 141,627 U2 and 141,627 U3, respectively; the entire contents of both applications are hereby incorporated by reference.

Background

Portable computing devices ("PCDs") are becoming necessities for people at both personal and professional levels. These devices may include cellular telephones, portable digital assistants ("PDAs"), portable game consoles, palmtop computers, and other portable electronic devices.

PCDs are typically limited in size, and therefore the space available for components within a PCD is often at a premium. Consequently, a typical PCD rarely offers engineers and designers enough room to mitigate thermal degradation or thermal failure of components through clever spatial arrangements or the placement of passive heat-sinking components.
Thermal energy generation in a PCD is therefore typically managed through the application of various thermal management techniques, which may include throttling or shutting down electronics at the expense of performance.

Thermal management techniques are employed within a PCD in an effort to strike a balance between mitigating thermal energy generation and impacting the quality of service ("QoS") provided by the PCD. In a PCD with heterogeneous processing components, the balancing tradeoff can be difficult to manage because the various processing components within the PCD are not created equal. Consequently, thermal mitigation measures known in the art, which uniformly limit the power frequencies of all processing components in response to a thermal trigger, or which simply limit the supply voltage and clock generator frequency of the hottest processing component, generally fail to trade a reduced rate of thermal energy generation for an optimized QoS level. Because the various processing components in a system on a chip ("SoC"), whether homogeneous or heterogeneous in design, inevitably vary in performance capability, the hottest processing component does not always offer the greatest potential reduction in thermal energy when measured against the effect on QoS.

Accordingly, there is a need in the art for methods and systems for energy-efficiency-aware thermal mitigation. Further, there is a need in the art for systems and methods that compare processing components and identify the least efficient and most efficient processing components.

Summary

Various embodiments of methods and systems for energy-efficiency-aware thermal management in a portable computing device containing an asynchronous multiprocessor system on a chip ("SoC") are disclosed herein.
Because the individual processing components of a multiprocessor SoC may exhibit different processing efficiencies at a given temperature, whether from deliberate design differences or from manufacturing variation, energy-efficiency-aware thermal management techniques that compare the performance and power consumption data of the various processing components at their measured operating temperatures can maximize performance under power and thermal constraints by adjusting the operating frequencies and power supplies of the least energy efficient processing components. In this manner, embodiments of the solution optimize the average amount of power consumed across the SoC to process a known workload.

One such method involves monitoring temperature readings uniquely associated with each of a plurality of individual processing components in a multiprocessor SoC. Because the SoC has an asynchronous architecture, each processing component is associated with a dedicated power supply and clock generator. A thermal parameter is monitored, and an alarm is received indicating that a threshold associated with the thermal parameter has been exceeded. The monitored temperature readings uniquely associated with each processing component are then sampled. Performance data for each processing component is queried based on the sampled temperature readings. The performance data represents the relationship between power consumption and workload processing capability when a given individual processing component operates at a given temperature. The performance data for the processing components are then compared to identify the least energy efficient processing component. Once the least energy efficient processing component is identified, its dedicated power supply and clock generator are adjusted.
Advantageously, adjusting the dedicated supply voltage and clock generator frequency serves to reduce the power consumption of the least energy efficient processing component, thereby optimizing the overall processing efficiency and performance of the SoC.

Brief Description of the Drawings

In the drawings, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.

FIG. 1A is a graph illustrating a pair of performance curves for an exemplary processing component operating under different thermal conditions;

FIG. 1B is a graph illustrating a pair of performance curves for each of two exemplary processing components, a "low performance" CPU processing component and a "high performance" GPU processing component, operating under different thermal conditions;

FIG. 1C is a graph illustrating a pair of performance curves for a pair of exemplary cores;

FIG. 1D is a graph illustrating a different pair of performance curves for the pair of exemplary cores depicted in the graph of FIG.
1C;

FIG. 2A is a functional block diagram illustrating aspects of an asynchronous architecture in an on-chip system containing multiple processing components;

FIG. 2B is a functional block diagram illustrating aspects of a synchronous architecture in an on-chip system containing multiple processing components;

FIG. 3 is a functional block diagram illustrating an embodiment of an on-chip system for energy-efficiency-aware thermal management in a portable computing device ("PCD");

FIG. 4 is a functional block diagram of an exemplary, non-limiting aspect of a PCD in the form of a wireless telephone for implementing methods and systems that monitor thermal conditions, compare performance data, set optimal power frequencies, and schedule workloads to the processing components best suited for efficient processing;

FIG. 5A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip illustrated in FIG. 4;

FIG. 5B is a schematic diagram illustrating an exemplary software architecture of the PCD of FIGS.
4 and 5A for supporting identification of thermal conditions and application of energy-efficiency-aware thermal management algorithms;

FIG. 6 is a logical flowchart illustrating an embodiment of a method for energy-efficiency-aware thermal management in an asynchronous on-chip system;

FIG. 7 is a logical flowchart illustrating an embodiment of a method 700 for energy-efficiency-aware thermal management via workload reallocation in a synchronous on-chip system;

FIG. 8 is a logical flowchart illustrating an embodiment of a method for energy-efficiency-aware thermal management via workload allocation in a synchronous on-chip system;

FIG. 9 is a logical flowchart illustrating an embodiment of a method for energy-efficiency-aware thermal management via power mode adjustment in a synchronous on-chip system;

FIG. 10 is a logical flowchart illustrating an embodiment of a method for energy-efficiency-aware thermal management via power mode duty cycle control in a synchronous on-chip system;

FIG. 11 is a logical flowchart illustrating an embodiment of a method for runtime verification of processing component energy efficiency ratings.

Detailed Description

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as exclusive, preferred, or advantageous over other aspects.

In this specification, the term "application" may also include files having executable content, such as object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that need to be opened or other data files that need to be accessed.

As used in this specification, the terms "component," "database," "module," "system," "thermal energy generating component," "processing component," "thermal aggressor," "processing engine," and the like
are intended to refer to a computer-related entity, whether hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself may be components. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet with other systems by way of the signal).

In this specification, the terms "central processing unit ("CPU")," "digital signal processor ("DSP")," and "chip" are non-limiting examples of processing components that may reside in a PCD and are used interchangeably except where otherwise indicated. Moreover, as distinguished in this specification, a CPU, DSP, or chip may be comprised of one or more distinct processing components, generally referred to herein as "cores" and "sub-cores."

In this specification, "heterogeneous components" include components designed with different intents as well as components of homogeneous design (designed in the same manner) that nonetheless have different electrical characteristics due to production variations, temperatures during operation, and the components' positions on the silicon die.
It will be understood by those of ordinary skill in the art that, even where processing components are homogeneous in design, the electrical characteristics of the individual processing components on a SoC will vary relative to one another due to one or more of the following factors: silicon leakage production variations, switching-rate production variations, dynamic temperature changes of the various components during operation, and the locations of the components on the silicon die. Accordingly, one of ordinary skill in the art will recognize that components on a SoC may not be entirely homogeneous and identical from a power and performance perspective.

In this specification, it will be understood that the terms "thermal" and "thermal energy" may be used in association with a device or component capable of generating or dissipating energy that can be measured in units of "temperature." Consequently, it will further be understood that the term "temperature," with reference to some standard value, envisions any measurement that may be indicative of the relative warmth, or absence of heat, of a "thermal energy" generating device or component. For example, the "temperature" of two components is the same when the two components are in "thermal" equilibrium.

In this specification, the terms "workload," "process load," "process workload," and "block of code" are used interchangeably and generally refer to the processing burden, or percentage of processing burden, that is associated with, or may be assigned to, a given processing component in a given embodiment.
Further to the above, a "processing component" or "thermal energy generating component" or "thermal aggressor" may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, etc., or any component residing within, or external to, an integrated circuit within a portable computing device. Moreover, to the extent that the terms "thermal load," "thermal distribution," "thermal signature," "thermal processing load," and the like are indicative of workload burdens that may run on a processing component, one of ordinary skill in the art will acknowledge that use of these "thermal" terms in the present disclosure may be related to process load distributions, workload burdens, and power consumption.

In this specification, the terms "thermal mitigation technique," "thermal policy," "thermal management," and "thermal mitigation measure" are used interchangeably.

One of ordinary skill in the art will recognize that the term "DMIPS" represents the number of Dhrystone iterations required to process a given number of millions of instructions per second. In this specification, the term is used as a general unit of measure to indicate relative levels of processor performance in the exemplary embodiments, and it should not be construed to suggest that any given embodiment falling within the scope of this disclosure must, or must not, include a processor having any specific Dhrystone rating.

In this specification, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power source, such as a battery. Although battery-operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") and fourth generation ("4G") wireless technology have enabled numerous PCDs with multiple capabilities.
Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, and the like.

Managing performance for QoS optimization in a PCD with heterogeneous processing components may be accomplished by leveraging the different performance characteristics of the individual processing engines. Even where processing components are homogeneous in design, the electrical characteristics of the individual processing components on a SoC may vary relative to one another due to any number of factors, including, but not limited to, silicon leakage production variations, switching-rate production variations, dynamic temperature changes of the various components during operation, and the locations of the components on the silicon die. Accordingly, one of ordinary skill in the art will recognize that components on a SoC may not be entirely homogeneous and identical from a power and performance perspective. Thus, in the present disclosure, it will be understood that references to "heterogeneous components" also envision components of homogeneous design (designed in the same manner) that have different electrical characteristics due to production variations, temperatures during operation, and the components' positions on the silicon die. Regarding the different performance characteristics of the individual processing engines that may be included in a heterogeneous processing component, one of ordinary skill in the art will recognize that such differences may result from any number of causes, including, but not limited to, differing levels of silicon quality, design variations, and the like.
Moreover, one of ordinary skill in the art will recognize that the performance characteristics associated with any given processing component may vary in relation to the operating temperature of that processing component, the power provided to that processing component, and the like.

For instance, consider an exemplary heterogeneous multi-core processor that includes a plurality of different processing cores generally ranging in performance capability from low to high (notably, one of ordinary skill in the art will recognize that an exemplary heterogeneous multiprocessor system on a chip ("SoC") containing a plurality of different processing components, each of which contains one or more cores, is also envisioned). As would be understood by one of ordinary skill in the art, a low to medium performance processing core within the heterogeneous processor will exhibit a lower power leakage rate at a given workload capacity, and consequently a lower rate of thermal energy generation, than a processing core having relatively high performance capability. The higher capability core may process a given workload in a shorter amount of time than the lower capability core. Similarly, a high capability core that has had its processing speed throttled may exhibit a lower power leakage rate at a given workload capacity, and consequently a lower rate of thermal energy generation, than it would exhibit at its full, unthrottled capability.

Even so, depending on the thermal operating states of the cores, the lower performance core may be more, or less, efficient (in terms of power consumption) at processing a given workload than the high performance core. Moreover, while the "hottest" core may be the least energy efficient core at any given time, it is not necessarily so in all scenarios.
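The point that the "hottest" core is not necessarily the least energy efficient one can be illustrated with a small sketch. The core names, temperatures, and performance figures below are hypothetical assumptions, not values from this disclosure; efficiency is taken as workload per milliwatt (MIPS/mW), consistent with the metric discussed herein.

```python
# Hypothetical illustration (names and figures are assumptions, not taken
# from the specification): the hottest core need not be the least efficient.
cores = {
    # name: (operating_temp_C, workload_MIPS, power_mW)
    "high_perf_core": (85, 4000, 1000),  # hottest core: 4.0 MIPS/mW
    "low_perf_core":  (70, 2400, 1000),  # cooler core: only 2.4 MIPS/mW
}

hottest = max(cores, key=lambda name: cores[name][0])
least_efficient = min(cores, key=lambda name: cores[name][1] / cores[name][2])
# hottest is "high_perf_core", yet least_efficient is "low_perf_core"
```

Here a temperature-based policy would throttle the hot, high-performance core, while an efficiency-based policy would instead target the cooler core that delivers fewer MIPS per milliwatt.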
Consequently, to realize the most thermal energy mitigation in return for a sacrifice in processing capability, embodiments of energy-efficiency-aware thermal management solutions make thermal management decisions in view of the relative processing efficiencies among the various processing components, as opposed to their relative temperatures.

Notably, the processing efficiency of a given processing component may be viewed as a power efficiency ratio expressed as operating frequency per unit of power consumed (frequency/power). Alternatively, power efficiency may be viewed in terms of a known workload (e.g., DMIPS) and expressed as known workload per unit of power consumed (workload/power). Once the processing efficiencies of the various processing components are determined, a dynamic clock and voltage scaling ("DCVS") algorithm, as is generally understood in the art, may be leveraged to adjust the power frequency supplied to the least energy efficient processing component, such that thermal energy mitigation across the SoC may be realized with minimal impact on the overall workload that can be processed.

By considering the various performance characteristics (or indicators of performance characteristics, such as, but not limited to, current draw) of the different cores in a heterogeneous processor, which can be used to deduce the power consumed by a given core to process a given workload at a given operating temperature, an energy-efficiency-aware thermal management algorithm may dictate that the least energy efficient core, and not necessarily the "hottest" core, be "dialed down" so that thermal energy generation is reduced with minimal impact on performance. Similarly, in response to a need to reduce thermal energy generation, an energy-efficiency-aware thermal management algorithm may cause an active workload to be reallocated from a less efficient core to a more efficient core, or may dictate that a queued workload be allocated to the more efficient core.
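The selection step described above can be sketched as a lookup of workload/power data per component at its sampled temperature, with the lowest DMIPS/mW ratio marking the DCVS throttling target. The component names and table values below are illustrative assumptions (the 3500 DMIPS versus 620 mW and 1000 mW pairs echo the FIG. 1A discussion elsewhere in this disclosure).

```python
# A minimal sketch of identifying the least energy efficient component:
# tabulate workload/power per (component, temperature) and take the minimum.
PERF_TABLE = {
    # (component, temp_C): (workload_DMIPS, power_mW) -- illustrative values
    ("cpu0", 50): (3500, 620),
    ("cpu0", 85): (3500, 1000),
    ("gpu0", 50): (5000, 800),
    ("gpu0", 85): (5000, 1400),
}

def efficiency(component, temp_c):
    """Power efficiency expressed as known workload / power consumption."""
    dmips, mw = PERF_TABLE[(component, temp_c)]
    return dmips / mw

def least_efficient_component(sampled_temps):
    """Return the component with the lowest DMIPS/mW at its sampled temp."""
    return min(sampled_temps,
               key=lambda c: efficiency(c, sampled_temps[c]))

target = least_efficient_component({"cpu0": 85, "gpu0": 50})
# cpu0 at 85 C yields 3.5 DMIPS/mW; gpu0 at 50 C yields 6.25 DMIPS/mW,
# so the DCVS throttling target here is cpu0.
```

The returned component is the one whose supply voltage and clock generator frequency a DCVS algorithm would then reduce.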
Notably, embodiments of the solution do not simply seek to preclude workloads from running on less energy efficient cores. That is, in some embodiments, when a workload is launched, the system may consider the energy efficiency of each component and place the workload on the most efficient CPU for which the workload is suited. If the most efficient core is already heavily utilized, for example, the next most efficient core may be selected. In these and other ways, embodiments of an energy-efficiency-aware thermal management solution may manage thermal energy generation in a PCD while optimizing the overall QoS level experienced by the user.

As a non-limiting example, a monitored thermal threshold in the PCD may be exceeded, triggering a thermal alarm. Thermal thresholds may be associated with, but are not limited to, a "skin" temperature of the PCD, a temperature of a package-on-package ("PoP") memory device, a junction temperature of a core, power supply and clock generator capacities, use case scenarios, and the like. Recognizing that the thermal threshold has been exceeded, an efficiency manager module configured to facilitate an energy-efficiency-aware thermal management policy may seek to reduce the power consumption of one or more processing components. Advantageously, by reducing power consumption, thermal energy generation may be mitigated and the thermal alarm cleared. Certain embodiments of an energy-efficiency-aware thermal management solution may, after the thermal alarm has been cleared, permit an increase in the supply voltage and clock generator frequency of a less efficient processing component that was previously the recipient of a reduced supply voltage and clock generator frequency.
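The alarm-driven behavior described above, reducing the least efficient component's clock while the alarm is active and then permitting a return to the nominal frequency once the alarm clears, might be sketched as follows. All names, the step size, and the assumed 20% power reduction per step are hypothetical; a real implementation would also rescale the component's workload capability as its clock drops.

```python
# A hedged sketch (hypothetical fields and constants) of the alarm-driven
# mitigation loop: throttle the least efficient component until the alarm
# clears, then permit a return to the nominal frequency.
def mitigate(components, alarm_active, step_mhz=100):
    """While the thermal alarm is active, repeatedly identify the least
    energy efficient component (lowest DMIPS/mW) and dial down its clock;
    once the alarm clears, restore every component's nominal frequency."""
    throttled = []
    while alarm_active():
        worst = min(components, key=lambda c: c["dmips"] / c["power_mw"])
        worst["freq_mhz"] -= step_mhz      # reduce dedicated clock frequency
        worst["power_mw"] *= 0.8           # assumed effect on power draw
        throttled.append(worst["name"])
    for c in components:                   # alarm cleared: permit increase
        c["freq_mhz"] = c["nominal_mhz"]
    return throttled
```

A caller would supply `alarm_active` as a callable that reflects whether the monitored thermal threshold is still exceeded.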
Similarly, certain embodiments of an energy-efficiency-aware thermal management solution may, after the thermal alarm has been cleared, permit a less efficient processing component that was previously the recipient of a power mode transition to return to an active power mode.

The efficiency manager module may query performance data associated with the various processing components, or receive measurements indicative of processor performance, and determine which one or more of the active, thermally aggressing processing components is least energy efficient when processing workloads. That is, the efficiency manager module may determine which processing component consumes the most power per known workload processed. Based on that determination, the efficiency manager module may then reduce the power supplied to the least energy efficient processing component, thereby mitigating the collective thermal energy generation of the plurality of processing components without unnecessarily sacrificing the average amount of workload processed per milliwatt ("mW") of power consumed. In this way, the efficiency manager module may optimize QoS while satisfying the need to reduce thermal energy generation.

As another non-limiting example, a certain block of code may be processed by either of a central processing unit ("CPU") or a graphical processing unit ("GPU") in an exemplary PCD. The particular block of code may, for example, be allocated for processing by the CPU. The efficiency manager module of an exemplary embodiment of the solution, however, may determine that the GPU is positioned to process the block of code more efficiently, in response to which the block of code is reallocated from the CPU to the GPU.
In this way, the amount of energy required to process the code block can be minimized, with the result that the overall thermal energy generation of the SoC is minimized.

As another non-limiting example, a particular block of code may be processed by either a central processing unit ("CPU") or a graphics processing unit ("GPU") in an exemplary PCD. Advantageously, instead of predetermining that the particular code block is to be processed by one of the CPU or the GPU, an exemplary embodiment may select, at the time the code block needs to be processed, which processing component is assigned the task of processing it. That is, a "snapshot" of the CPU and GPU performance curves may be compared in order to distribute the workload to the processor best suited for efficient processing of the code block. Notably, it should be understood that as code blocks exit the dispatch queue, subsequent processor selections for the assignment of subsequent workloads may be made in real time or near real time. In this manner, the efficiency manager module can take into account the operating temperatures associated with the various cores in a heterogeneous processor to optimize QoS by selecting the processing core prior to workload distribution.

FIG. 1A is a graph 300 showing a pair of performance curves (core 85 °C, core 50 °C) of an exemplary processing component operating under different thermal conditions. The processing component may be a core in a heterogeneous multi-core processor, and may be a high efficiency, medium efficiency, or low efficiency core. Notably, as will be recognized by one of ordinary skill in the art, the processing component may be any processing engine capable of processing a given block of code, including but not limited to: a CPU, GPU, DSP, programmable array, video encoder/decoder, system bus, camera subsystem (image processor), MDP, and the like.
Moreover, as noted above, the exemplary processing engine may be a core or sub-core in a CPU, GPU, or the like. Notably, energy efficiency may be defined as the processing performance or speed that a component delivers at a particular power consumption level. For example, energy efficiency may be expressed in MIPS/mW (how many millions of instructions per second are processed per mW of power consumed) or MHz/mW (how many megahertz of operating clock frequency per mW).

As can be observed from the graph of FIG. 1A, at a workload of 3500 MIPS an exemplary core operating at 50 °C consumes approximately 620 mW of power (point 315), but at the same 3500 MIPS workload, when operating at 85 °C, the power consumption of the core increases to almost 1000 mW (point 310). Thus, the exemplary processing component processes the workload more efficiently when operating at a temperature of 50 °C, at approximately 5.6 MIPS/mW, compared to 3.5 MIPS/mW when operating at 85 °C. In addition, for a given operating temperature, the processing efficiency of the core decreases as the workload increases. For example, referring to the core 50 °C curve, when the workload increases from 3500 MIPS to approximately 4300 MIPS, the power consumption increases to almost 1000 mW (point 305).

It can therefore be observed from the graph of FIG. 1A that, for a given processing component, the efficiency of the processing component decreases, from a power consumption perspective, as the operating temperature increases (i.e., as the operating temperature of the processing component rises, the number of MIPS that it can process at a given operating frequency decreases).
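The MIPS/mW figures above follow directly from the cited points of FIG. 1A; a minimal sketch of the computation:

```python
# Energy efficiency expressed in MIPS/mW, using the example points of FIG. 1A:
# the same 3500 MIPS workload costs ~620 mW at 50 degrees C (point 315) but
# ~1000 mW at 85 degrees C (point 310).

def efficiency_mips_per_mw(workload_mips, power_mw):
    return workload_mips / power_mw

cool = efficiency_mips_per_mw(3500, 620)    # core operating at 50 degrees C
hot = efficiency_mips_per_mw(3500, 1000)    # same core at 85 degrees C
print(f"50C: {cool:.1f} MIPS/mW, 85C: {hot:.1f} MIPS/mW")
```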
It should be noted that one of ordinary skill in the art will recognize that an increase in the operating temperature of an exemplary processing component may be attributable to any number of factors or combinations of factors including, but not limited to: increased power leakage in the processing component associated with higher clock speeds, a thermal aggressor adjacent to the processing component, a failing component adjacent to the processing component, changes in the surrounding environment, and the like. Moreover, one of ordinary skill in the art will recognize that, as a result of the increase in power leakage rate that accompanies increased power consumption, an increase in the workload of a processing component may itself cause the operating temperature associated with that processing component to rise at the time of workload allocation. Regardless of the cause of a rise or fall in the operating temperature of a processing component, it is important to note that, per the graph of FIG. 1A, the processing efficiency of a given processing component generally decreases as its operating temperature increases.

Turning now to FIG. 1B, a graph 400 depicts a pair of performance curves for each of two exemplary processing components (a "low performance" CPU processing component and a "high performance" GPU processing component) operating under different thermal conditions (GPU 105 °C, GPU 95 °C; CPU 105 °C, CPU 95 °C). Essentially, the graph 400 of FIG. 1B depicts performance curves for two different exemplary processing components, each of which could individually be represented by a graph such as that of FIG. 1A. Moreover, those of ordinary skill in the art will recognize that the two exemplary processors, the GPU and the CPU, represented by the pairs of performance curves in FIG. 1B may be included on a common heterogeneous multiprocessor system-on-a-chip ("SoC").
It is worth noting that, by superimposing the performance curves of the exemplary engines (the GPU and the CPU), various transition or crossover points 405, 410, 415 are defined at the intersections of the various curves. These crossover points represent thresholds; different engines are most efficient above or below these thresholds.

For example, when the GPU and the CPU each operate at 95 °C, a comparative analysis of the performance curves of the exemplary GPU and CPU processors shows that the processing efficiency of the two processors is substantially equivalent at a workload of approximately 3700 DMIPS (point 410). However, it can also be observed from this comparative analysis that the CPU processing component is more efficient below point 410, i.e., when the workload is below 3700 DMIPS, the CPU processing component consumes less power for each DMIPS of workload. Conversely, above point 410 the GPU core is more efficient, i.e., when the workload exceeds 3700 DMIPS, the GPU core consumes less power for each DMIPS of workload.

Therefore, based on an exemplary comparative analysis of a CPU running at 105 °C and a GPU running at a lower 95 °C, an efficiency manager module applying the energy efficiency aware thermal management strategy may, in response to a trigger to reduce overall thermal energy generation, target the less efficient GPU for a power reduction when the workload is below point 405 (even though the CPU temperature is higher).

Moreover, it should be understood that different processors and/or cores in a heterogeneous multiprocessor SoC may operate under different thermal conditions due to any number of factors. For example, in the graph of FIG. 1B, transition point 405 represents the intersection of the performance curve of an exemplary CPU processing component operating at 105 °C and that of a GPU processing component operating at 95 °C.
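A comparative analysis of this kind might be sketched as follows, where two sampled power curves are interpolated and the engine requiring less power at the target workload is selected. The sample points are invented to mimic the FIG. 1B crossover near 3700 DMIPS and are not measured data.

```python
# Illustrative sketch: given sampled power curves (workload -> mW) for two
# engines at their current temperatures, choose the engine that needs less
# power at the target workload. The sample values are assumptions invented
# to mimic FIG. 1B's crossover near 3700 DMIPS.

def power_at(curve, workload):
    """Linear interpolation over a sorted list of (workload, mW) samples."""
    pts = sorted(curve)
    for (w0, p0), (w1, p1) in zip(pts, pts[1:]):
        if w0 <= workload <= w1:
            t = (workload - w0) / (w1 - w0)
            return p0 + t * (p1 - p0)
    raise ValueError("workload outside sampled range")

def pick_engine(curves, workload):
    # min() over the dict iterates engine names; the key function ranks them
    # by interpolated power draw at the requested workload.
    return min(curves, key=lambda name: power_at(curves[name], workload))

curves = {
    "cpu_95C": [(1000, 150), (3700, 700), (5000, 1400)],
    "gpu_95C": [(1000, 300), (3700, 700), (5000, 1000)],
}
# Below the ~3700 DMIPS crossover the CPU wins; above it, the GPU wins.
print(pick_engine(curves, 2000), pick_engine(curves, 4500))
```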
Thus, by recognizing that these exemplary processors operate at different temperatures, an embodiment may use a comparative analysis to determine, before a workload is allocated, which of the processors is best suited to efficiently process a given block of code (similar to the exemplary scenario described above). For example, workloads below 2400 DMIPS may be assigned to the CPU processing component, and workloads above 2400 DMIPS may be assigned to the GPU processing component, to ensure that workloads are processed under the most efficient conditions. Moreover, it is contemplated that embodiments of the energy efficiency aware thermal management solution may redistribute workloads among the processing components to optimize the overall average efficiency of the collective processing engines. To this end, some embodiments do not simply seek to prevent workloads from running on less energy efficient cores. That is, in some embodiments, when a workload is launched, the system may consider the energy efficiency of each component and place the workload on the most efficient core for which the workload is suitable. For example, if the most efficient core is already heavily utilized, the next most efficient core may be chosen.

It is worth noting that it is contemplated that, when allocating the next code block, certain embodiments of the energy efficiency aware thermal management algorithm may be applied to optimize overall workload processing efficiency. For example, referring back to the exemplary GPU 95 °C and CPU 105 °C curves of FIG. 1B, and assuming that the GPU and CPU associated with these curves are each currently processing at a rate of 2000 DMIPS, the efficiency manager module may seek to determine which of these two exemplary processors is best suited to efficiently process an additional 1000 DMIPS workload.
Starting from the 2000 DMIPS workload currently being processed by each engine, the energy efficiency aware thermal management algorithm can compare the curves at a total workload of 3000 DMIPS per processing component (the 2000 DMIPS previously allocated to each engine plus the additional 1000 DMIPS to be allocated to one of them). For this non-limiting example, based on the exemplary GPU 95 °C and CPU 105 °C curves of the graph of FIG. 1B, the thermal aware scheduling module may select the more efficient GPU: whereas the CPU would consume more than 500 mW of power when processing at 3000 DMIPS, the GPU would consume less than 400 mW at the same workload.

Extending the above example, after allocating the additional 1000 DMIPS to the GPU, the efficiency manager module may move to reassign the 2000 DMIPS workload running on the CPU to the GPU as well, further increasing the GPU workload from 3000 DMIPS to 5000 DMIPS. Advantageously, the GPU would consume 1000 mW of power, or 5 mW/DMIPS, to process the workload at 5000 DMIPS, compared to the approximately 8 mW/DMIPS consumed if the CPU's 2000 DMIPS workload were not reassigned. Furthermore, with the workload completely removed from the CPU in this example, it is contemplated that the efficiency manager module may transition the CPU to a hold state or even a power collapsed state, thereby further saving energy and mitigating thermal energy generation.

Other embodiments of the energy efficiency aware thermal management algorithm may compare performance curves based on the predicted shift of the curves if the additional workload were allocated. For example, referring back to the example of the GPU and CPU processing at rates of 2000 DMIPS at operating temperatures of 95 °C and 105 °C, respectively, embodiments of the efficiency manager module may predict the shift of the performance curves that would result from the allocation of an additional 1000 DMIPS workload.
Notably, since the additional 1000 DMIPS workload may cause the processing component to which it is allocated to consume more power, the efficiency manager module may take into account that, as a result of the additional workload, the operating temperature currently associated with the processing component will rise, and may therefore seek to compare the performance curves associated with the predicted temperature rise.

Returning to this example, the additional 1000 DMIPS workload might cause the GPU operating temperature to increase from 95 °C to 100 °C and, similarly, the operating temperature of the CPU to increase from 105 °C to 110 °C. Thus, embodiments of the efficiency manager module may query and compare performance data associated with the GPU and CPU cores operating at the predicted temperatures of 100 °C and 110 °C, respectively (the GPU 100 °C and CPU 110 °C performance curves are not shown in FIG. 1B).

FIG. 1C is a graph 500 showing a pair of performance curves for an exemplary pair of cores (core 1 and core 2). Core 2 may be considered a "faster" core than core 1, which may be considered a "slower" core. Notably, one of ordinary skill in the art will recognize that core 2 serves as the faster core of this exemplary pair because core 2 is capable of processing at a higher maximum frequency (approximately 2500 MHz) than core 1 (approximately 2100 MHz). Therefore, since operating frequency is related to MIPS, one of ordinary skill in the art will also recognize that core 2 is capable of processing more MIPS than core 1.

Point 510 in the graph of FIG. 1C represents a frequency (~1600 MHz) above which core 2 is more efficient at processing a MIPS workload than core 1 [at 2000 MHz, core 2 consumes only ~800 mW of power (point 515), while at the same 2000 MHz operating frequency, core 1 consumes ~1100 mW of power (point 520)].
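The predicted-shift comparison described above might be sketched as follows: estimate each engine's post-allocation temperature, then compare power at the new total workload using curves for the predicted temperatures. The lookup-table entries and the assumed 5 °C rise per additional 1000 DMIPS are illustrative assumptions, not disclosed data.

```python
# Illustrative sketch of the predicted-shift comparison. The table entries
# and the +5 degrees C rise per 1000 DMIPS are invented assumptions.

# (engine, temperature in degrees C) -> {total workload in DMIPS: power in mW}
PERF_TABLE = {
    ("gpu", 100): {3000: 420},
    ("cpu", 110): {3000: 560},
}

def predicted_temp(current_temp_c, extra_dmips, rise_per_1000=5.0):
    # Assumed linear model: each extra 1000 DMIPS raises temperature ~5 C.
    return current_temp_c + rise_per_1000 * extra_dmips / 1000.0

def choose_engine(engines, extra_dmips):
    """engines: dict name -> (current_temp_c, current_dmips).
    Returns the engine predicted to need the least power after allocation."""
    best, best_power = None, float("inf")
    for name, (temp, load) in engines.items():
        t = round(predicted_temp(temp, extra_dmips))
        power = PERF_TABLE[(name, t)][load + extra_dmips]
        if power < best_power:
            best, best_power = name, power
    return best

# GPU at 95 C and CPU at 105 C, each running 2000 DMIPS; +1000 DMIPS pending.
print(choose_engine({"gpu": (95, 2000), "cpu": (105, 2000)}, 1000))
```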
However, it is worth noting that below point 510 the exemplary core 1 is the more efficient processor of the two. Thus, if both core 1 and core 2 are operating at frequencies below 1600 MHz when the efficiency manager module recognizes a trigger for reducing thermal energy generation (e.g., a thermal alarm for exceeding a skin temperature threshold), the efficiency manager module may seek to reduce the frequency provided to core 2, regardless of whether core 2 is "hotter" than core 1 at the time of the trigger. In this way, thermal energy generation can be mitigated while the overall efficiency of processing a given MIPS workload is optimized.

FIG. 1D is a graph 600 showing a different pair of performance curves for the exemplary pair of cores (core 1 and core 2) depicted in the graph of FIG. 1C. In FIG. 1D, each performance curve plots the energy efficiency of a core against the frequency provided to that core. Notably, points 610 and 615 in the graph of FIG. 1D correspond to points 510 and 515/520, respectively, in the graph of FIG. 1C. Similarly, the maximum operating frequencies depicted in the FIG. 1D graph correspond to the maximum operating frequencies depicted in the FIG. 1C graph.

Using the performance data represented by the FIG. 1D graph, an embodiment of the energy efficiency aware thermal management solution may respond to a trigger that occurs while both cores are at or near their maximum operating frequencies by first reducing the frequency provided to core 1. It is worth noting that, at the maximum operating frequency of 2000 MHz, core 1 processes less MIPS of workload for each mW of power consumed than core 2. Therefore, an exemplary energy efficiency aware thermal management solution may reduce the frequency of core 1 by one step in an attempt to mitigate thermal energy generation and clear the thermal alarm.
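The one-step frequency reduction just described can be iterated, re-ranking the cores after each step, until the alarm clears. A non-limiting sketch follows; the frequency table, the efficiency model, and the alarm condition are all invented assumptions.

```python
# Illustrative sketch of step-down mitigation: while the thermal alarm
# persists, lower the clock of whichever core is currently least energy
# efficient by one step, re-ranking after each step. All values are invented.

FREQ_STEPS_MHZ = [2500, 2000, 1600, 1200, 800]

def least_efficient(cores, efficiency_at):
    return min(cores, key=lambda c: efficiency_at(c, cores[c]))

def mitigate(cores, efficiency_at, alarm_active, max_iters=20):
    """cores: name -> current frequency (MHz). Steps frequencies down until
    alarm_active(cores) is False or no further reduction is possible."""
    for _ in range(max_iters):
        if not alarm_active(cores):
            return cores
        victim = least_efficient(cores, efficiency_at)
        idx = FREQ_STEPS_MHZ.index(cores[victim])
        if idx + 1 < len(FREQ_STEPS_MHZ):
            cores[victim] = FREQ_STEPS_MHZ[idx + 1]
        else:
            break  # victim already at its floor; give up (or escalate)
    return cores

def eff(name, f):
    # Toy efficiency model mirroring FIG. 1C: above ~1600 MHz core 1 is the
    # less efficient core; below that threshold, core 2 is.
    if f > 1600:
        return f / (1100 if name == "core1" else 800)
    return f / (700 if name == "core1" else 900)

# Toy alarm: clears once the total frequency budget drops below 3700 MHz.
alarm = lambda cores: sum(cores.values()) >= 3700
print(mitigate({"core1": 2000, "core2": 2000}, eff, alarm))
```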
If the frequency reduction on the less efficient core 1 does not clear the alarm, the exemplary energy efficiency aware thermal management solution may re-evaluate which of the cores is less efficient after the first step reduction, and then apply a second frequency step reduction to the least energy efficient core. The process may continue in this step-by-step manner, each time reducing the frequency provided to the least energy efficient core, until the thermal alarm is cleared or some other goal is achieved.

FIG. 2A is a functional block diagram depicting aspects of an asynchronous architecture in a system on chip 102A that includes heterogeneous processing components. Certain embodiments of the energy efficiency aware thermal management solution may be adapted to manage the thermal energy generation of these processing components without unnecessarily degrading workload processing efficiency.

The system on chip 102A is depicted as including a series of processing components PC 0, PC 1, PC 2, and the like. It is worth noting that, as will be understood by those of ordinary skill in the art, because the architecture of system on chip 102A is asynchronous, each of these processing components is associated with a dedicated clock source (for example, a phase locked loop ("PLL")) for controlling its supply voltage and clock generator frequency. In this diagram, clock 0 is uniquely associated with the power supply and clock generator for PC 0, clock 1 is uniquely associated with the power supply and clock generator for PC 1, clock 2 is uniquely associated with the power supply and clock generator for PC 2, and so on.

Advantageously, since each processing component in the asynchronous system on chip has a dedicated clock source, embodiments of the energy efficiency aware thermal management solution can use the DCVS module to reduce the power targeted at the least energy efficient processing component when thermal energy generation has exceeded a threshold.
FIG. 2B is a functional block diagram depicting aspects of a synchronous architecture in a system on chip 102B that includes heterogeneous processing components. Certain embodiments of the energy efficiency aware thermal management solution may be adapted to manage the thermal energy generation of these processing components without unnecessarily degrading workload processing efficiency.

The system on chip 102B is depicted as including a series of processing components PC 0, PC 1, PC 2, and the like. It is worth noting that, because the architecture of system on chip 102B is synchronous, each of these processing components is associated with a single clock source and power supply common to all of the processing components. Advantageously, since the processing components in the synchronous system on chip share a single clock source, embodiments of the energy efficiency aware thermal management solution can, when thermal energy generation has exceeded a threshold, allocate workloads away from less efficient processing components, or redistribute workloads to more efficient processing components, to optimize processing efficiency.

It is worth noting that an embodiment of the energy efficiency aware thermal management solution may, for example, direct the power state of a less efficient processing component that no longer has a workload to transition from an active state to a hold state, or from a hold state to a power collapsed state. Advantageously, embodiments of these solutions can optimize the amount of power required to process a given workload by allocating new workloads to more efficient processors, and/or by reallocating active workloads from less efficient processors to more efficient processors.
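The power state progression for a drained component (active to hold, hold to power collapse) might be sketched as follows; the state names follow the description above, while the transition rule itself is an illustrative simplification.

```python
# Illustrative sketch of the idle power-state policy: once a less efficient
# component has no workload left, step it from active to a hold (retention)
# state, and from hold to power collapse. The transition rule is an
# assumption made for this example.

TRANSITIONS = {"active": "hold", "hold": "collapsed"}

def next_power_state(state, load_dmips):
    if load_dmips > 0:
        return "active"          # any remaining work keeps the component online
    return TRANSITIONS.get(state, "collapsed")

s = "active"
for load in (0, 0, 0):           # component drained of work for several ticks
    s = next_power_state(s, load)
print(s)
```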
Furthermore, by transitioning less efficient processing components in the synchronous SoC 102B from an active state to an idle state with latency parameters that allow those components to be brought back online in the event that the more efficient processing components are unable to maintain an acceptable QoS, an embodiment of the energy efficiency aware thermal management solution can also optimize overall power efficiency.

FIG. 3 is a functional block diagram depicting an embodiment of a system on chip 102 for energy efficiency aware thermal management in a portable computing device ("PCD") 100. It is worth noting that it is contemplated that the system on chip 102 may be either synchronous or asynchronous in architecture. As explained above in connection with FIGS. 1A-1D, the targeted reduction of supply voltage and clock generator frequency, and/or the distribution of workloads across processing components, may be based on a comparative analysis of performance data uniquely associated with the individual cores or processors 222, 224, 226, 228. It should be noted that, as will be appreciated by those of ordinary skill in the art, the processing component 110 is depicted as a group of heterogeneous processing engines for illustrative purposes only; it may represent a single processing component having multiple, heterogeneous cores 222, 224, 226, 228, or multiple heterogeneous processors 222, 224, 226, 228, each of which may or may not include multiple cores and/or sub-cores.
Accordingly, the processing engines 222, 224, 226, and 228 are referred to herein as "cores," a usage that is to be understood as illustrative in nature and not limiting of the scope of the disclosure.

The system on chip 102 may monitor temperature sensors 157 respectively associated with the cores 222, 224, 226, 228 through a monitoring module 114, which communicates with an efficiency manager ("EM") module 101, a DCVS module 26, and a scheduler module 207. In addition, the monitoring module 114 may also monitor any number of indicators that thermal energy has exceeded a threshold, such as, but not limited to, a skin temperature sensor, a PoP memory temperature sensor, a junction temperature sensor, a current sensor for the power rail of a processing component, a current sensor associated with a power supply, a power supply capacity sensor, and the like.

In the event that a thermal limit is exceeded, as recognized by the monitoring module 114, the EM module 101 may be triggered to take measures to mitigate thermal energy generation in an energy efficient manner. The EM module 101 may receive from the monitoring module 114 indications of one or more monitored parameters associated with the energy efficiency of the processing components, and then use those indications to determine which of the processing components is least energy efficient. In some embodiments, the EM module 101 may receive temperature measurements from the monitoring module 114 and use these measurements to query performance data from a core performance data store 24.
Based on the performance data, the EM module 101 can determine an ordering of the cores 222, 224, 226, 228 by workload processing efficiency.

Subsequently, in the asynchronous system 102A, the EM module 101 may determine to reduce the supply voltage and clock generator frequency for the less efficient cores, while in the synchronous system 102B, the EM module 101 may cause workloads to be reassigned from a less efficient core to a more efficient core, or schedule queued workloads onto a more efficient core. A dynamic DCVS adjustment strategy indicated by the EM module 101 may set the processor clock rate of a less efficient processing component to a reduced level, transition the power states of certain less efficient processors from active to idle, and so on. In some embodiments, the workload allocation and/or reallocation indicated by the EM module 101 may be implemented via instructions to the scheduler 207. It is worth noting that, by applying the energy efficiency aware thermal management strategy, the EM module 101 can reduce or mitigate excessive power consumption without unnecessarily sacrificing QoS.

As will be appreciated by those of ordinary skill in the art, the operating temperatures of one or more of the processing cores 222, 224, 226, 228 may fluctuate as workloads are processed, as ambient conditions change, as adjacent thermal aggressors dissipate energy, and so on. Thus, as the operating temperatures of the various processing cores 222, 224, 226, 228 fluctuate, the relevant performance data associated with these engines 222, 224, 226, 228 may also fluctuate. As the operating temperature associated with each of the cores 222, 224, 226, 228 changes, the monitoring module 114 recognizes the change and may send temperature data indicative of the change to the EM module 101.
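The architecture-dependent choice of mitigation described above might be sketched as follows; the action and field names are illustrative only.

```python
# Illustrative sketch of the EM module's choice of mitigation: on an
# asynchronous SoC each core has its own clock and rail, so the least
# efficient core can be individually slowed; on a synchronous SoC all cores
# share one clock, so work is instead migrated toward more efficient cores.
# Action and field names are assumptions made for this example.

def mitigate_thermal(soc_arch, ranking):
    """ranking: core names ordered from most to least energy efficient."""
    least, most = ranking[-1], ranking[0]
    if soc_arch == "asynchronous":
        return {"action": "dcvs_reduce", "target": least}
    if soc_arch == "synchronous":
        return {"action": "reassign_workload", "from": least, "to": most}
    raise ValueError("unknown architecture")

print(mitigate_thermal("asynchronous", ["core0", "core1", "core2"]))
```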
A measured change in operating temperature may trigger the EM module 101 to reference the core performance ("CP") data store 24 to query one or more performance curves for the cores 222, 224, 226, 228 based on the measured operating temperatures. Subsequently, the EM module 101 may identify a different core 222, 224, 226, 228 as the least energy efficient core and adjust the power and frequency provided to it (via the DCVS module 26) to mitigate the thermal energy generated while maintaining the most efficient processing of the workload per milliwatt of power consumed. The EM module 101 may also compare the identified performance curves and select the core 222, 224, 226, 228 best suited at the time of the comparison to efficiently process a queued code block, or a code block reassigned from a less efficient core.

The exemplary EM module 101 is configured to use a comparative analysis of one or more performance curves associated with the respective processing components 222, 224, 226, 228 to instruct the DCVS module 26 to adjust power, and/or to instruct the scheduler module 207 to allocate or reallocate a workload to the processing component best suited to process it efficiently. It should be noted that one of ordinary skill in the art will recognize that, as the operating temperatures of the processing components 222, 224, 226, 228 change, the performance curves queried and compared by the EM module 101 will also change. Thus, at different times, the EM module 101 may select different processing engines 222, 224, 226, 228 when applying the energy efficiency aware thermal management strategy. In this manner, certain embodiments advantageously allow for more efficient processing by ensuring that workloads are assigned to the most efficient processing components available at the time of allocation, and/or by reducing the power consumption of the least energy efficient processing components supporting the active workload.
In this way, the EM module 101 optimizes QoS while managing thermal energy generation.

FIG. 4 is a functional block diagram of an exemplary, non-limiting aspect of a PCD 100, in the form of a wireless telephone, that implements methods and systems for monitoring thermal conditions, comparing performance data, setting optimal power frequencies, and scheduling workloads onto the processing components best suited to process them efficiently. As shown, the PCD 100 includes a system on chip 102 that includes a heterogeneous multi-core central processing unit ("CPU") 110 and an analog signal processor 126 coupled together. As understood by one of ordinary skill in the art, the CPU 110 may include a zeroth core 222, a first core 224, and an Nth core 230. Moreover, as will be understood by those of ordinary skill in the art, a digital signal processor ("DSP") may be used in place of the CPU 110. Further, as understood by those skilled in the art of heterogeneous multi-core processors, each of the cores 222, 224, 230 may process workloads at a different efficiency under similar operating conditions.

In general, the efficiency manager module 101 may receive temperature data from the monitoring module 114, use the temperature data to query or derive performance data associated with the cores 222, 224, 230, determine the relative processing efficiencies of the cores 222, 224, 230, and work with the DCVS module 26 and/or the scheduler 207 to adjust power, transition power states, and/or schedule code blocks onto the cores 222, 224, 230.

The monitoring module 114 communicates with a plurality of operational sensors (e.g., thermal sensors 157) distributed throughout the system on chip 102, with the CPU 110 of the PCD 100, and with the EM module 101.
The EM module 101 may work with the monitoring module 114 to query processor performance curves corresponding to the temperatures monitored by the monitoring module 114, compare the curves, set power frequencies to the most efficient levels, and select the most efficient processor available and capable of processing a given code block.

As shown in FIG. 4, a display controller 128 and a touch screen controller 130 are coupled to the digital signal processor 110. A touch screen display 132 external to the system on chip 102 is coupled to the display controller 128 and the touch screen controller 130.

In addition, the PCD 100 may include a video decoder 134, such as a phase alternating line ("PAL") decoder, a sequential color with memory ("SECAM") decoder, a National Television System Committee ("NTSC") decoder, or any other type of video decoder 134. The video decoder 134 is coupled to the multi-core central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video decoder 134 and the touch screen display 132. A video port 138 is coupled to the video amplifier 136. As shown in FIG. 4, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Additionally, a USB port 142 is coupled to the USB controller 140. A memory 112 and a subscriber identity module ("SIM") card 146 may also be coupled to the CPU 110. Further, as shown in FIG. 4, a digital camera 148 may be coupled to the CPU 110. In an exemplary aspect, the digital camera 148 is a charge coupled device ("CCD") camera or a complementary metal oxide semiconductor ("CMOS") camera.

As further shown in FIG. 4, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Additionally, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 4 shows that a microphone amplifier 158 may also be coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158.
In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. In addition, an FM antenna 164 is coupled to the FM radio tuner 162. Additionally, a stereo headset 166 may be coupled to the stereo audio CODEC 150.

FIG. 4 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 4, a keyboard 174 may be coupled to the analog signal processor 126. Additionally, a mono headset 176 having a microphone may be coupled to the analog signal processor 126. Additionally, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 4 also shows that a power supply 180 (e.g., a battery) is coupled to the system on chip 102. In a particular aspect, the power supply includes a rechargeable DC battery or a DC power supply derived from an alternating current ("AC") to DC converter connected to an AC power source.

In addition, the CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A, and one or more external, off-chip thermal sensors 157B. The on-chip thermal sensors 157A may include one or more proportional-to-absolute-temperature ("PTAT") temperature sensors, which are based on a vertical PNP structure and are typically dedicated to complementary metal oxide semiconductor ("CMOS") very large scale integration ("VLSI") circuits. The off-chip thermal sensors 157B may include one or more thermistors. The thermal sensors 157 may generate a voltage drop that is converted to a digital signal by an analog to digital converter ("ADC") controller 103. However, other types of thermal sensors 157 may be used without departing from the scope of the invention.

In addition to being controlled and monitored by the ADC controller 103, the thermal sensors 157 may also be controlled and monitored by one or more EM modules 101.
The EM module 101 may comprise software executed by the CPU 110. However, the EM module 101 may also be formed from hardware and/or firmware without departing from the scope of the invention. The EM module 101 may be responsible for querying processor performance data and/or receiving indications of processor performance and, based on an analysis of that data, adjusting the power frequency of the least energy efficient processor and/or allocating or reallocating code blocks to the processor that will process the code most efficiently at the time of workload allocation.

Returning to FIG. 4, the touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headset 166, the RF switch 170, the RF antenna 172, the keyboard 174, the mono headset 176, the vibrator 178, the thermal sensors 157B, and the power supply 180 are external to the system on chip 102. However, it should be understood that the monitoring module 114 may also receive one or more indications or signals from one or more of these external devices, by way of the analog signal processor 126 and the CPU 110, to facilitate real-time management of the resources operating on the PCD 100.

In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters, stored in the memory 112, that form the one or more EM modules 101. In addition to the ADC controller 103, these instructions forming the EM module 101 may be executed by the CPU 110, the analog signal processor 126, or another processor to perform the methods described herein. Moreover, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof, may serve as a means for performing one or more of the method steps described herein.

FIG. 5A is a functional block diagram depicting an exemplary spatial arrangement of hardware for the chip 102 shown in FIG. 4.
According to this exemplary embodiment, the applications CPU 110 is positioned on a far left side region of the chip 102 while the modem CPU 168, 126 is positioned on a far right side region of the chip 102. The applications CPU 110 may comprise a heterogeneous multi-core processor that includes a zeroth core 222, a first core 224, and an Nth core 230. The applications CPU 110 may be executing an EM module 101A (when embodied in software) or it may include an EM module 101A (when embodied in hardware). The applications CPU 110 is further illustrated to include an operating system ("O/S") module 208 and a monitoring module 114.

The applications CPU 110 may be coupled to one or more phase locked loops ("PLLs") 209A, 209B, which are positioned adjacent to the applications CPU 110 and in the left side region of the chip 102. Adjacent to the PLLs 209A, 209B and below the applications CPU 110 may be an analog-to-digital ("ADC") controller 103 that may include its own EM module 101B that works in conjunction with the main EM module 101A of the applications CPU 110.

The EM module 101B of the ADC controller 103 may be responsible, in combination with the monitoring module 114, for monitoring and tracking multiple thermal sensors 157 that may be provided "on-chip" 102 and "off-chip" 102. The on-chip or internal thermal sensors 157A may be positioned at various locations.

As a non-limiting example, a first internal thermal sensor 157A1 may be positioned in a top center region of the chip 102 between the applications CPU 110 and the modem CPU 168, 126 and adjacent to an internal memory 112. A second internal thermal sensor 157A2 may be positioned below the modem CPU 168, 126 on a right side region of the chip 102. This second internal thermal sensor 157A2 may also be positioned between an advanced reduced instruction set computer ("RISC") instruction set machine ("ARM") 177 and a first graphics processor 135A.
A digital-to-analog controller ("DAC") 173 may be positioned between the second internal thermal sensor 157A2 and the modem CPU 168, 126.

A third internal thermal sensor 157A3 may be positioned between a second graphics processor 135B and a third graphics processor 135C in a far right region of the chip 102. A fourth internal thermal sensor 157A4 may be positioned in a far right region of the chip 102 and beneath a fourth graphics processor 135D. And a fifth internal thermal sensor 157A5 may be positioned in a far left region of the chip 102 and adjacent to the PLLs 209 and the ADC controller 103.

One or more external thermal sensors 157B may also be coupled to the ADC controller 103. A first external thermal sensor 157B1 may be positioned off-chip and adjacent to a top right quadrant of the chip 102 that may include the modem CPU 168, 126, the ARM 177, and the DAC 173. A second external thermal sensor 157B2 may be positioned off-chip and adjacent to a lower right quadrant of the chip 102 that may include the third and fourth graphics processors 135C, 135D.

One of ordinary skill in the art will recognize that various other spatial arrangements of the hardware illustrated in FIG. 5A may be provided without departing from the scope of the invention. FIG. 5A illustrates one exemplary spatial arrangement and how the main EM module 101A and the ADC controller 103, with its EM module 101B, may work with the monitoring module 114 to recognize thermal conditions that are a function of the exemplary spatial arrangement illustrated in FIG. 5A, compare processing efficiency data, and allocate workloads or adjust power supplies to manage thermal conditions without unnecessarily impacting QoS.

FIG. 5B is a schematic diagram illustrating an exemplary software architecture 200 of the PCD 100 of FIG. 4 and FIG. 5A for supporting identification of thermal conditions and application of energy efficiency aware thermal management algorithms.
Any number of algorithms may form, or be part of, at least one energy efficiency aware thermal management technique that may be applied by the EM module 101 when certain thermal conditions are met.

As illustrated in FIG. 5B, the CPU or digital signal processor 110 is coupled to the memory 112 via a bus 211. As noted above, the CPU 110 is a multiple-core heterogeneous processor having N core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230. As is known to one of ordinary skill in the art, each of the first core 222, the second core 224, and the Nth core 230 are available for supporting a dedicated application or program and, as part of a heterogeneous processor, may provide differing levels of performance under similar thermal operating conditions. Alternatively, one or more applications or programs may be distributed for processing across two or more of the available heterogeneous cores.

The CPU 110 may receive commands from the EM module(s) 101, which may comprise software and/or hardware. If embodied as software, the EM module 101 comprises instructions that are executed by the CPU 110, which issues commands to other application programs being executed by the CPU 110 and other processors.

The first core 222, the second core 224, through the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224, through the Nth core 230 via one or more shared caches, and they may implement message or instruction passing via network topologies such as bus, ring, mesh, and crossbar topologies.

The bus 211 may include multiple communication paths via one or more wired or wireless connections, as is known in the art.
The bus 211 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the bus 211 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.

When the logic used by the PCD 100 is implemented in software, as is shown in FIG. 5B, it should be noted that one or more of startup logic 250, management logic 260, energy efficiency aware thermal management interface logic 270, applications in application store 280, and portions of the file system 290 may be stored on any computer-readable medium for use by, or in connection with, any computer-related system or method.

In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.

The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

In an alternative embodiment, where one or more of the startup logic 250, management logic 260, and perhaps the energy efficiency aware thermal management interface logic 270 are implemented in hardware, the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.

The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor (or additional processor cores).

The startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for energy efficiency aware analysis and identification of one or more of the available cores to which an energy efficiency aware thermal management strategy may be applied.

The management logic 260 includes one or more executable instructions for terminating an energy efficiency aware thermal management program, as well as selectively identifying, loading, and executing a more suitable replacement program for energy efficiency aware comparative analysis, selection of adjusted power supplies, and/or allocation of workloads to one or more of the available cores. The management logic 260 is arranged to perform these functions at run time, or while the PCD 100 is powered and in use by an operator of the device. A replacement program can be found in the program store 296 of the embedded file system 290.

The replacement program, when executed by one or more of the core processors in the digital signal processor, may operate in accordance with one or more signals provided by the EM module 101 and the monitoring module 114. In this regard, the monitoring module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, temperature, etc. in response to control signals originating from the EM module 101.

The interface logic 270 includes one or more executable instructions for presenting, managing, and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296. Moreover, the inputs may identify one or more changes to, or entire replacements of, one or both of the startup logic 250 and the management logic 260.
For example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to reallocate a workload from a less efficient core to a more efficient core when temperature measurements associated with skin temperature exceed a certain identified threshold. By way of further example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to reduce the power supplied to the least energy efficient processing core by an increment when the battery charge reaches a certain level.

The interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100. When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280, or information in the embedded file system 290 may be edited, replaced, or otherwise modified. In some embodiments, the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify, or replace the startup logic 250, the management logic 260, applications in the application store 280, and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time.

The embedded file system 290 includes a hierarchically arranged core performance ("CP") data store 24. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information associated with the performance curves of the various cores 222, 224, 226, 228 at various operating temperatures.

FIG. 6 is a logical flowchart illustrating an embodiment of a method 600 for energy efficiency aware thermal management in an asynchronous system on a chip. In the embodiment of FIG.
6, the performance curves for each of the processing cores 222, 224, 226, 228 may be empirically determined from actual performance data collected by the monitoring module 114 or, in some embodiments, the performance curves may be a priori curves driven by the performance specifications of the individual cores.

In some embodiments, to empirically determine the performance curves of the various processing cores 222, 224, 226, 228, the monitoring module 114 may communicate with temperature sensors 157 as well as other voltage or current sensors useful for monitoring the power consumption of the cores 222, 224, 226, 228. In such an embodiment, one of ordinary skill in the art will recognize that the data collected by the monitoring module 114 may be associated with previous workload allocations and compiled into empirical performance curves. The empirical performance curves may be stored in the CP data store 24 and leveraged by an energy efficiency aware thermal management algorithm.

Beginning at block 605, the monitoring module 114 may recognize a thermal event, such as a temperature reading that exceeds a predetermined temperature threshold, as a trigger for mitigating thermal energy generation. As previously described, the monitoring module 114 may provide the thermal alarm information to the EM module 101 for application of an energy efficiency aware thermal management solution.

At block 610, the EM module 101 may query performance data associated with the various heterogeneous processing components in the SoC. The relevant performance data may be queried based on the operating temperatures provided to the EM module 101 by the monitoring module 114. Using the performance data, the EM module 101 may determine a ranking of the processing components based on their relative abilities to efficiently process workloads.

At block 615, the EM module 101 may reduce the frequency of the power supplied to one or more of the less efficient processing cores by a predetermined increment.
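The ranking and throttling of blocks 610 and 615 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the class and function names (`Core`, `rank_by_efficiency`, `reduce_least_efficient`), the `FREQ_STEP_MHZ` increment, and the efficiency figures are all assumptions made for the example; real values would come from the CP data store 24.

```python
# Illustrative sketch of blocks 610-615: rank heterogeneous cores by energy
# efficiency (MIPS/mW) at the reported operating temperature, then step down
# the power frequency of the least efficient core by a fixed increment.
from dataclasses import dataclass, field

FREQ_STEP_MHZ = 100  # predetermined increment (assumed value)

@dataclass
class Core:
    name: str
    freq_mhz: int
    # performance curve: operating temperature (C) -> efficiency in MIPS/mW
    perf_curve: dict = field(default_factory=dict)

    def efficiency(self, temp_c: int) -> float:
        # query the stored curve at the nearest characterized temperature
        nearest = min(self.perf_curve, key=lambda t: abs(t - temp_c))
        return self.perf_curve[nearest]

def rank_by_efficiency(cores, temp_c):
    """Block 610: order cores from least to most efficient at temp_c."""
    return sorted(cores, key=lambda c: c.efficiency(temp_c))

def reduce_least_efficient(cores, temp_c):
    """Block 615: cut the least efficient core's frequency by one increment."""
    worst = rank_by_efficiency(cores, temp_c)[0]
    worst.freq_mhz = max(worst.freq_mhz - FREQ_STEP_MHZ, 0)
    return worst

cores = [
    Core("big0", 2000, {40: 9.0, 80: 6.5}),
    Core("little0", 1200, {40: 14.0, 80: 11.0}),
]
throttled = reduce_least_efficient(cores, temp_c=80)
print(throttled.name, throttled.freq_mhz)  # big0 1900
```

Repeating `reduce_least_efficient` until the thermal alarm clears mirrors the loop of blocks 610 through 620 described below.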
Notably, one of ordinary skill in the art will recognize that a reduction in frequency correlates directly with a reduction in the amount of workload processed by a processing component.

Next, at decision block 620, the EM module 101 may work with the monitoring module 114 to determine whether the thermal alarm that triggered blocks 610 and 615 has cleared. If the alarm has cleared, i.e., the reduction in frequency effected at block 615 caused a reduction in thermal energy generation sufficient to clear the alarm, the "yes" branch is followed and the method 600 returns. The EM module 101 may then permit an increase in the frequency supplied to the less efficient processing core. If, however, the alarm has not cleared as a result of the actions taken at block 615, the "no" branch is followed back to block 610 and the "new" least energy efficient processing component is identified for an incremental reduction in power frequency. Notably, it is envisioned that the "new" least energy efficient processing component may be the same processing component previously identified as the least energy efficient processing component. Continuing through the loop of blocks 610 through 620, the frequency of the power supplied to the least energy efficient processing component is incrementally reduced until the thermal alarm clears.

FIG. 7 is a logical flowchart illustrating an embodiment of a method 700 for energy efficiency aware thermal management via workload reallocation in a synchronous system on a chip. In the FIG. 7 embodiment, the performance curves for each of the processing cores 222, 224, 226, 228 may be empirically determined from actual performance data collected by the monitoring module 114.
In some embodiments, the performance curves may be a priori curves driven by the performance specifications of the individual cores.

In some embodiments, to empirically determine the performance curves of the various processing cores 222, 224, 226, 228, the monitoring module 114 may communicate with temperature sensors 157 as well as other voltage or current sensors useful for monitoring the power consumption of the cores 222, 224, 226, 228. In such an embodiment, one of ordinary skill in the art will recognize that the data collected by the monitoring module 114 may be associated with previous workload allocations and compiled into empirical performance curves. The empirical performance curves may be stored in the CP data store 24 and leveraged by an energy efficiency aware thermal management algorithm.

Beginning at block 705, the monitoring module 114 may recognize a thermal event, such as a temperature reading that exceeds a predetermined temperature threshold, as a trigger for mitigating thermal energy generation. As previously described, the monitoring module 114 may provide the thermal alarm information to the EM module 101 for application of an energy efficiency aware thermal management solution.

At block 710, the EM module 101 may query performance data associated with the various heterogeneous processing components in the SoC. The relevant performance data may be queried based on the operating temperatures provided to the EM module 101 by the monitoring module 114. Using the performance data, the EM module 101 may determine a ranking of the processing components based on their relative abilities to efficiently process workloads.

At block 715, the EM module 101 may reallocate active workloads being processed by less efficient processors to more efficient processors.
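The reallocation step of block 715 can be sketched as below. This is an illustrative sketch only: the function name `reallocate`, the dictionary-based bookkeeping, and the workload and efficiency values are assumptions for the example, not details from the patent.

```python
# Illustrative sketch of block 715: migrate active workloads from the least
# efficient processor to the most efficient one. Real efficiency values would
# come from the queried performance curves at the current operating temperature.

def reallocate(workloads, efficiency):
    """workloads: {core: [workload, ...]}; efficiency: {core: MIPS/mW}."""
    worst = min(efficiency, key=efficiency.get)
    best = max(efficiency, key=efficiency.get)
    moved = []
    if worst != best and workloads.get(worst):
        # move the least efficient core's active work onto the most efficient core
        moved = workloads[worst]
        workloads.setdefault(best, []).extend(moved)
        workloads[worst] = []
    return worst, best, moved

loads = {"big0": ["video_decode"], "little0": ["ui_render"]}
eff = {"big0": 6.5, "little0": 11.0}
worst, best, moved = reallocate(loads, eff)
print(worst, best, moved)  # big0 little0 ['video_decode']
```

Even when both cores share a common power supply and clock generator, processing the moved workload on the more efficient core reduces the energy spent per instruction.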
Notably, one of ordinary skill in the art will recognize that moving a workload from a less efficient processor to a more efficient processor enables the workload to be processed with a reduced amount of power, even when the less efficient and more efficient processors share a common power supply and clock generator.

Next, at decision block 720, the EM module 101 may work with the monitoring module 114 to determine whether the thermal alarm that triggered blocks 710 and 715 has cleared. If the alarm has cleared, i.e., the reallocation of workload from the less efficient processor to the more efficient processor caused a reduction in thermal energy generation sufficient to clear the alarm, the "yes" branch is followed and the method 700 returns. The EM module 101 may then permit future allocation or reallocation of workloads to the less efficient processing core. If, however, the alarm has not cleared as a result of the actions taken at block 715, the "no" branch is followed back to block 710 and the "new" least energy efficient processing component is identified for workload mitigation. Notably, it is envisioned that the "new" least energy efficient processing component may be the same processing component previously identified as the least energy efficient processing component. Continuing through the loop of blocks 710 through 720, workload is removed from the least energy efficient processing component until the thermal alarm clears.

FIG. 8 is a logical flowchart illustrating an embodiment of a method 800 for energy efficiency aware thermal management via allocation of queued workloads in a synchronous system on a chip. In the FIG. 8 embodiment, the performance curves for each of the processing cores 222, 224, 226, 228 may be empirically determined from actual performance data collected by the monitoring module 114.
In some embodiments, the performance curves may be a priori curves driven by the performance specifications of the individual cores.

In some embodiments, to empirically determine the performance curves of the various processing cores 222, 224, 226, 228, the monitoring module 114 may communicate with temperature sensors 157 as well as other voltage or current sensors useful for monitoring the power consumption of the cores 222, 224, 226, 228. In such an embodiment, one of ordinary skill in the art will recognize that the data collected by the monitoring module 114 may be associated with previous workload allocations and compiled into empirical performance curves. The empirical performance curves may be stored in the CP data store 24 and leveraged by an energy efficiency aware thermal management algorithm.

Beginning at block 805, the monitoring module 114 may recognize a thermal event, such as a temperature reading that exceeds a predetermined temperature threshold, as a trigger for mitigating thermal energy generation. As previously described, the monitoring module 114 may provide the thermal alarm information to the EM module 101 for application of an energy efficiency aware thermal management solution.

At block 810, the EM module 101 may query performance data associated with the various heterogeneous processing components in the SoC. The relevant performance data may be queried based on the operating temperatures provided to the EM module 101 by the monitoring module 114. Using the performance data, the EM module 101 may determine a ranking of the processing components based on their relative abilities to efficiently process workloads.

At block 815, the EM module 101 may instruct the scheduler 207 to allocate queued workloads to the processors best positioned to process those workloads most efficiently.
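The queued-workload allocation of block 815 can be sketched as follows. The sketch is illustrative only: the helper names (`pick_most_efficient`, `schedule_queued`) and the workload and efficiency values are assumptions for the example; the patent's scheduler 207 is only named, not specified at this level.

```python
# Illustrative sketch of block 815: drain a queue of pending workloads onto
# whichever processor is currently ranked most efficient, so new work consumes
# the least power when processed.
from collections import deque

def pick_most_efficient(efficiency):
    """efficiency: {core: MIPS/mW at the current operating temperature}."""
    return max(efficiency, key=efficiency.get)

def schedule_queued(queue, efficiency):
    """Assign every queued workload to the most efficient core; return the plan."""
    target = pick_most_efficient(efficiency)
    plan = []
    while queue:
        plan.append((queue.popleft(), target))
    return plan

pending = deque(["audio_mix", "gps_fix"])
eff = {"big0": 6.5, "little0": 11.0}
plan = schedule_queued(pending, eff)
print(plan)  # [('audio_mix', 'little0'), ('gps_fix', 'little0')]
```

Because the ranking is re-queried as operating temperatures change, the "most efficient" target may differ from one scheduling pass to the next.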
Notably, one of ordinary skill in the art will recognize that allocating a workload to a more efficient processor enables the workload to be processed with a reduced amount of power.

The EM module 101 may continue working with the monitoring module 114 to identify a "new" more efficient processing component for workload allocation. Notably, it is envisioned that a "new" more efficient processing component may be the same processing component previously identified as a more efficient processing component. In this way, the energy efficiency aware thermal management solution of the method 800 may consistently ensure that new workloads are scheduled to the most appropriate processing components, i.e., the components that will consume the least amount of power in the processing.

FIG. 9 is a logical flowchart illustrating an embodiment of a method 900 for energy efficiency aware thermal management via power mode adjustment in a synchronous system on a chip. In the FIG. 9 embodiment, the performance curves for each of the processing cores 222, 224, 226, 228 may be empirically determined from actual performance data collected by the monitoring module 114 or, in some embodiments, the performance curves may be a priori curves driven by the performance specifications of the individual cores.

In some embodiments, to empirically determine the performance curves of the various processing cores 222, 224, 226, 228, the monitoring module 114 may communicate with temperature sensors 157 as well as other voltage or current sensors useful for monitoring the power consumption of the cores 222, 224, 226, 228. In such an embodiment, one of ordinary skill in the art will recognize that the data collected by the monitoring module 114 may be associated with previous workload allocations and compiled into empirical performance curves.
The empirical performance curves may be stored in the CP data store 24 and leveraged by an energy efficiency aware thermal management algorithm.

Beginning at block 905, the monitoring module 114 may recognize a thermal event, such as a temperature reading that exceeds a predetermined temperature threshold, as a trigger for mitigating thermal energy generation. As previously described, the monitoring module 114 may provide the thermal alarm information to the EM module 101 for application of an energy efficiency aware thermal management solution.

At block 910, the EM module 101 may query performance data associated with the various heterogeneous processing components in the SoC. The relevant performance data may be queried based on the operating temperatures provided to the EM module 101 by the monitoring module 114. Using the performance data, the EM module 101 may determine a ranking of the processing components based on their relative abilities to efficiently process workloads.

At block 915, the EM module 101 may adjust the power mode of a less efficient processor in an effort to mitigate unnecessary power consumption. It is envisioned that the EM module 101 may identify the processor best suited for adjustment of its power mode based on various parameters including, but not limited to, the latency associated with transitioning the processor from a given idle power mode back to an active power mode. Notably, one of ordinary skill in the art will recognize that adjusting a processor's power mode from an active mode to a retention mode or a power collapse mode may produce an average savings in power consumption across the SoC.

Next, at decision block 920, the EM module 101 may work with the monitoring module 114 to determine whether the thermal alarm that triggered blocks 910 and 915 has cleared.
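The power mode adjustment of block 915 might look like the following. This is an illustrative sketch under stated assumptions: the mode names, the wake-latency figures, and the selection rule (prefer the candidate with the lowest idle-to-active latency) are inventions for the example; the patent names latency only as one of several possible parameters.

```python
# Illustrative sketch of block 915: among the less efficient processors, pick
# the one whose idle-to-active wake latency is lowest, then drop it into the
# next deeper power mode (active -> retention -> power_collapse).

POWER_MODES = ["active", "retention", "power_collapse"]

def adjust_power_mode(cores):
    """cores: {name: {"efficiency": MIPS/mW, "mode": str, "wake_us": int}}."""
    # candidates: everything except the most efficient core
    best = max(cores, key=lambda c: cores[c]["efficiency"])
    candidates = [c for c in cores if c != best]
    # prefer the candidate that can return to active mode the fastest
    victim = min(candidates, key=lambda c: cores[c]["wake_us"])
    mode_idx = POWER_MODES.index(cores[victim]["mode"])
    if mode_idx < len(POWER_MODES) - 1:
        cores[victim]["mode"] = POWER_MODES[mode_idx + 1]
    return victim

cores = {
    "big0": {"efficiency": 6.5, "mode": "active", "wake_us": 40},
    "big1": {"efficiency": 7.0, "mode": "active", "wake_us": 90},
    "little0": {"efficiency": 11.0, "mode": "active", "wake_us": 25},
}
victim = adjust_power_mode(cores)
print(victim, cores[victim]["mode"])  # big0 retention
```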
If the alarm has cleared, i.e., the adjustment of the power mode of the least energy efficient processor caused a reduction in thermal energy generation sufficient to clear the alarm, the "yes" branch is followed and the method 900 returns. The EM module 101 may then permit the less efficient processing core to return to a higher power consumption mode. If, however, the alarm has not cleared as a result of the actions taken at block 915, the "no" branch is followed back to block 910 and the "new" least energy efficient processing component is identified for a power mode transition. Notably, it is envisioned that the "new" least energy efficient processing component may be the same processing component previously identified as the least energy efficient processing component.

FIG. 10 is a logical flowchart illustrating an embodiment of a method 1000 for energy efficiency aware thermal management via power mode duty cycling in a synchronous system on a chip. In the FIG. 10 embodiment, the performance curves for each of the processing cores 222, 224, 226, 228 may be empirically determined from actual performance data collected by the monitoring module 114 or, in some embodiments, the performance curves may be a priori curves driven by the performance specifications of the individual cores.

In some embodiments, to empirically determine the performance curves of the various processing cores 222, 224, 226, 228, the monitoring module 114 may communicate with temperature sensors 157 as well as other voltage or current sensors useful for monitoring the power consumption of the cores 222, 224, 226, 228. In such an embodiment, one of ordinary skill in the art will recognize that the data collected by the monitoring module 114 may be associated with previous workload allocations and compiled into empirical performance curves.
The empirical performance curves may be stored in the CP data store 24 and leveraged by an energy efficiency aware thermal management algorithm.

Beginning at block 1005, the monitoring module 114 may recognize a thermal event, such as a temperature reading that exceeds a predetermined temperature threshold, as a trigger for mitigating thermal energy generation. As previously described, the monitoring module 114 may provide the thermal alarm information to the EM module 101 for application of an energy efficiency aware thermal management solution.

At block 1010, the EM module 101 may query performance data associated with the various heterogeneous processing components in the SoC. The relevant performance data may be queried based on the operating temperatures provided to the EM module 101 by the monitoring module 114. Using the performance data, the EM module 101 may determine a ranking of the processing components based on their relative abilities to efficiently process workloads.

At block 1015, the EM module 101 may duty cycle the power mode of a less efficient processor in an effort to mitigate unnecessary power consumption. By duty cycling the power mode, the processor may be transitioned through its various power modes such as, for example, toggled between a retention state and an active state. The average power consumption of the processing component may be optimized by dwelling for periods of time in each of multiple power modes. It is envisioned that the EM module 101 may identify the processor best suited for duty cycling of its power modes based on various parameters including, but not limited to, the latency associated with transitioning the processor from a given power mode to another power mode.
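The effect of dwelling in each power mode for a share of the time, as described for block 1015, can be worked through with a short calculation. The per-mode power figures and dwell fractions below are invented for illustration; only the averaging arithmetic is the point.

```python
# Illustrative sketch of block 1015: duty cycling a processor between power
# modes yields an average power equal to the dwell-time-weighted sum of the
# per-mode power draws.

def duty_cycle_power_mw(power_by_mode, dwell_fraction):
    """Average power (mW) when dwelling in each mode for the given time fraction."""
    assert abs(sum(dwell_fraction.values()) - 1.0) < 1e-9
    return sum(power_by_mode[m] * f for m, f in dwell_fraction.items())

power = {"active": 750.0, "retention": 40.0}
# toggle between retention and active: 30% active, 70% retention
avg = duty_cycle_power_mw(power, {"active": 0.3, "retention": 0.7})
print(round(avg, 1))  # 253.0
```

The example shows why duty cycling saves power without entirely sacrificing processing capacity: the core still spends 30% of its time active, yet its average draw falls from 750 mW to 253 mW.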
Notably, one of ordinary skill in the art will recognize that duty cycling a processor through its power modes may produce an average savings in power consumption across the SoC without entirely sacrificing the processing capacity of the cycled processor.

Next, at decision block 1020, the EM module 101 may work with the monitoring module 114 to determine whether the thermal alarm that triggered blocks 1010 and 1015 has cleared. If the alarm has cleared, i.e., duty cycling the power mode of the least energy efficient processor caused a reduction in thermal energy generation sufficient to clear the alarm, the "yes" branch is followed and the method 1000 returns. The EM module 101 may then permit interruption of the duty cycling of the power modes of the less efficient processing core. If, however, the alarm has not cleared as a result of the actions taken at block 1015, the "no" branch is followed back to block 1010 and the "new" least energy efficient processing component is identified for power mode duty cycling. Notably, it is envisioned that the "new" least energy efficient processing component may be the same processing component previously identified as the least energy efficient processing component.

FIG. 11 is a logical flowchart illustrating an embodiment of a method 1100 for run time verification of processing component energy efficiency ratings. As explained above relative to exemplary embodiments of energy efficiency aware thermal management solutions, performance data associated with each of the heterogeneous processing components may be used to determine which of the processing components is the least energy efficient at processing a workload (assuming that the power frequency supplied to a processing component correlates directly with the processing capacity of the component, energy efficiency may be measured in units of MIPS/mW or MHz/mW).
Performance data may be experimentally determined based on actual performance data collected by the monitoring module 114, or, in some embodiments, these performance curves may be a priori curves driven by the performance specifications of the individual cores. Notably, a priori performance data derived from the performance specifications of a processing component may be inaccurate, or may lose accuracy as the processing component degrades over time. Accordingly, embodiments of method 1100 seek to verify the validity of the performance data associated with a given processing core before the energy efficient thermal management solution relies on that performance data to make decisions regarding energy efficiency.

Because the monitoring module 114 and/or the EM module 101 can access current sensors and temperature sensors around the chip 102, and because the EM module 101 can query previously characterized performance data stored in the CP data store 24, embodiments of method 1100 can verify the accuracy of the stored performance data associated with a processing component and update it as needed. The associated current and temperature measurements can be sampled and compared to the stored performance data to determine whether the processing component or subsystem is actively exhibiting the expected current leakage characteristics. At block 1105, the monitoring module 114 can monitor the current, operating temperature, and operating point (e.g., the power frequency setting in MHz) associated with a particular processing component. At block 1110, the EM module 101 can query the stored performance data based on the operating temperature and operating point measured by the monitoring module 114 at block 1105.
The stored performance data may include an expected current level in view of the performance specifications, operating point, and operating temperature of the processing component. Next, at block 1115, the EM module 101 can compare the expected current leakage to the measured current leakage. At decision block 1120, if the expected current leakage is greater than the actually measured leakage, then the "yes" branch is followed to block 1140 and the processing component is designated as a lower leakage processor relative to its initial classification. The expected current leakage associated with the processing component in the CP data store 24 can be updated to reflect the actually measured current leakage. Method 1100 returns.

Returning to decision block 1120, if the expected current leakage is not greater than the actually measured current leakage, then the "no" branch is followed to decision block 1125. At decision block 1125, if the expected current leakage is substantially equivalent to the actually measured current leakage (within some statistically acceptable range), then the "yes" branch is followed to block 1135 and the classification of the processing component is maintained. Method 1100 returns. If, at decision block 1125, the expected current leakage is not substantially equal to the actually measured current leakage (i.e., the expected current leakage is lower than the actually measured current leakage), then the "no" branch is followed to block 1130. At block 1130, the processing component is designated as a higher leakage processor relative to its initial classification. The expected current leakage associated with the processing component in the CP data store 24 can be updated to reflect the actually measured current leakage.
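The reclassification logic of blocks 1115 through 1140 can be sketched as follows (a hypothetical illustration; the tolerance value and the function name are assumptions standing in for the "statistically acceptable range" of the disclosure):

```python
def verify_leakage(expected_ma, measured_ma, tolerance_ma=1.0):
    """Compare expected vs. measured current leakage (blocks 1115-1140)
    and return the updated relative classification of the component.

    Returns "lower", "unchanged", or "higher", mirroring blocks 1140,
    1135, and 1130 respectively.
    """
    if expected_ma > measured_ma + tolerance_ma:
        # Block 1140: the component leaks less than its stored data
        # predicts; reclassify it as a lower leakage processor.
        return "lower"
    if abs(expected_ma - measured_ma) <= tolerance_ma:
        # Block 1135: within the statistically acceptable range;
        # maintain the existing classification.
        return "unchanged"
    # Block 1130: the component leaks more than its stored data
    # predicts; reclassify it as a higher leakage processor.
    return "higher"
```

In either reclassification case, the stored expected leakage in the CP data store would then be overwritten with the measured value.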
Method 1100 returns.

Advantageously, by verifying and updating the relative leakage classifications of the processing components, the energy efficient thermal management solution can make a more accurate evaluation of which processing components are more efficient, or less efficient, than other processing components prior to applying thermal mitigation measures.

Certain steps in the processes or process flows described in this specification naturally precede others in order for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it should be appreciated that some steps may be performed before, after, or in parallel (substantially concurrently) with other steps without departing from the scope or spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "subsequently", "following" and the like are not intended to limit the order of the steps. These terms are merely used to guide the reader through the description of the exemplary method.

In addition, one of ordinary skill in the programming arts can write computer code, or identify appropriate hardware and/or circuitry, to implement the disclosed invention without difficulty, for example, based on the flowcharts and associated descriptions in this specification. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for a full understanding of how to make and use the invention. The inventive functionality of the claimed computer-implemented processes is explained in more detail in the above description and in conjunction with the drawings, which depict various process flows.

In one or more exemplary aspects, the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
When implemented in software, the functions may be stored on, or transmitted as, one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.

Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.

Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the appended claims.
Generally, this disclosure describes devices, methods and systems for securely updating software on a mobile platform using trusted hardware based authentication. The device may include an image update module configured to receive a software update image from an update server, the image update module executing at an operating system (OS) level; a critical component database configured to identify critical software components associated with the secure operation of the device; a secure update application module configured to verify the inclusion of the critical software components in the software update image prior to installation of the software update image on the device; and a trusted execution environment (TEE) configured to restrict control access and data access to the secure update application module and the critical component database, the restriction enforced against the OS and against modules executing at the OS level.
CLAIMS What is claimed is: 1. A communication device comprising: an image update module configured to receive a software update image from an update server, said image update module executing at an operating system (OS) level; a critical component database configured to identify critical software components associated with secure operation of said device; a secure update application module configured to verify the inclusion of said critical software components in said software update image prior to installation of said software update image on said device; and a trusted execution environment (TEE) operating on said device configured to restrict control access and data access to said secure update application module and said critical component database, said restriction enforced against said OS and against modules executing at said OS level. 2. The device of claim 1, further comprising a trusted user authentication module (TUAM) configured to authenticate a user of said device based on authentication information maintained in said TEE. 3. The device of claim 2, wherein said authentication information is a password. 4. The device of claim 2, wherein said authentication is to be performed prior to said installation of said software update image. 5. The device of any of claims 1 or 2, wherein said image update module is further configured to report a failure of said inclusion verification to said update server. 6. The device of any of claims 1 or 2, wherein said secure update application module is further configured to verify a digital signature associated with said software update image. 7. The device of any of claims 1 or 2, wherein said image update module is further configured to report the identity of components included in said software update image in response to receiving a query. 8. 
A method for securely updating a software image for a communication device, said method comprising: receiving said software image from an update server, wherein said software image comprises one or more downloaded software components; providing a critical component database configured to identify critical software components associated with the secure operation of said device, said database maintained in a trusted execution environment (TEE), wherein said TEE is configured to enforce access restrictions against software running at an operating system level on said device; matching said downloaded software components to said critical software components, said matching performed in said TEE; and installing said software image on said device based on the results of said matching. 9. The method of claim 8, further comprising rejecting said software image update in response to determining that said device is in a locked state. 10. The method of any of claims 8 or 9, further comprising rejecting said software image update in response to a failure to verify a digital signature associated with said software image. 11. The method of any of claims 8 or 9, further comprising rejecting said software image update in response to a failure to authenticate a user of said device based on authentication information maintained in said TEE. 12. The method of any of claims 8 or 9, further comprising reporting to said update server a failure of said matching. 13. The method of any of claims 8 or 9, further comprising reporting the identity of said downloaded software components included in said software image in response to receiving a query. 14.
A computer-readable storage medium having instructions stored thereon which when executed by a processor result in the following operations for securely updating a software image for a communication device, said operations comprising: receiving said software image from an update server, wherein said software image comprises one or more downloaded software components; providing a critical component database configured to identify critical software components associated with the secure operation of said device, said database maintained in a trusted execution environment (TEE), wherein said TEE is configured to enforce access restrictions against software running at an operating system level on said device; matching said downloaded software components to said critical software components, said matching performed in said TEE; and installing said software image on said device based on the results of said matching. 15. The computer-readable storage medium of claim 14, wherein said operations further comprise rejecting said software image update in response to determining that said device is in a locked state. 16. The computer-readable storage medium of any of claims 14 or 15, wherein said operations further comprise rejecting said software image update in response to a failure to verify a digital signature associated with said software image. 17. The computer-readable storage medium of any of claims 14 or 15, wherein said operations further comprise rejecting said software image update in response to a failure to authenticate a user of said device based on authentication information maintained in said TEE. 18. The computer-readable storage medium of any of claims 14 or 15, wherein said operations further comprise reporting to said update server a failure of said matching. 19. 
The computer-readable storage medium of any of claims 14 or 15, wherein said operations further comprise reporting the identity of said downloaded software components included in said software image in response to receiving a query. 20. A mobile communication platform comprising: a processor; a memory coupled to said processor; an input/output (I/O) system coupled to said processor; a user interface coupled to said I/O system; an image update module configured to receive a software update image from an update server, said image update module executing at an operating system (OS) level; a critical component database configured to identify critical software components associated with the secure operation of said platform; a secure update application module configured to verify the inclusion of said critical software components in said software update image prior to installation of said software update image on said platform; and a trusted execution environment (TEE) operating on said platform configured to restrict control access and data access to said secure update application module and said critical component database, said restriction enforced against said OS and against modules executing at said OS level. 21. The mobile communication platform of claim 20, further comprising a trusted user authentication module (TUAM) configured to authenticate a user of said platform based on authentication information maintained in said TEE. 22. The mobile communication platform of claim 21, wherein said authentication information is a password. 23. The mobile communication platform of claim 21 , wherein said trusted user authentication module is further configured to perform authentication prior to said installation of said software update image. 24. The mobile communication platform of any of claims 20 or 21, wherein said image update module is further configured to report a failure of said inclusion verification to said update server. 25. 
The mobile communication platform of any of claims 20 or 21, wherein said secure update application module is further configured to verify a digital signature associated with said software update image. 26. The mobile communication platform of any of claims 20 or 21, wherein said image update module is further configured to report the identity of components included in said software update image in response to receiving a query. 27. The mobile communication platform of any of claims 20 or 21, wherein said platform is one of a smartphone, a laptop computing device or a tablet. 28. The mobile communication platform of any of claims 20 or 21, further comprising a plurality of said platforms, each configured to communicate over a wireless network. 29. The mobile communication platform of any of claims 20 or 21, wherein said user interface is a touchscreen.
MOBILE PLATFORM SOFTWARE UPDATE WITH SECURE AUTHENTICATION FIELD The present disclosure relates to secure operating system and firmware updates for mobile platforms, and more particularly, to secure operating system and firmware updates for mobile platforms with trusted hardware based authentication. BACKGROUND Mobile devices and platforms, such as, for example, smartphones, typically provide the capability for operating system (OS) and firmware (FW) updates or re-installations with reduced user involvement. The user involvement may often be limited to clicking an icon or accepting an agreement. While this reduced level of involvement may provide convenience and an improved user experience, it fails to address the issue of secure user authentication. A stolen phone, for example, can be re-flashed with a new OS or FW image, allowing the unauthorized user to bypass the OS login screen or other methods of user authentication. An additional problem with automatic wireless (or Over-The-Air) software updates is the lack of a mechanism by which the user, or a remote authorized administrator, can verify that the new OS/FW image includes all the required software components necessary to meet the needs of the enterprise and that the update did not roll back any previously made changes.
BRIEF DESCRIPTION OF THE DRAWINGS Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which: Figure 1 illustrates a top level system diagram of one exemplary embodiment consistent with the present disclosure; Figure 2 illustrates a block diagram of one exemplary embodiment consistent with the present disclosure; Figure 3 illustrates a data structure consistent with an exemplary embodiment of the present disclosure; Figure 4 illustrates a flowchart of operations of one exemplary embodiment consistent with the present disclosure; Figure 5 illustrates a system diagram showing platforms consistent with an exemplary embodiment of the present disclosure in a network; and Figure 6 illustrates a flowchart of operations of another exemplary embodiment consistent with the present disclosure. Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. DETAILED DESCRIPTION Generally, this disclosure provides devices, systems and methods for securely updating software, including operating system (OS) and/or firmware (FW), on a mobile platform or device using trusted hardware based authentication. A trusted execution environment (TEE) on the device hosts a database of critical software components and a secure update application module. The TEE may restrict control access and data access to both the database and the secure update application module from entities outside of the TEE, including the OS and other modules executing at the OS level. 
The critical software components identified in the database may be those components that are recognized as necessary for the secure operation of the device, and the secure update application module may ensure that software update images include all of these components before allowing installation of the update. The secure update application module may also maintain user authentication information used to verify the identity and/or authority of the user to install the update. The system may also provide the capability for a local user, or a remote administrator, to query the device regarding the identity of the software components included in the update image and to verify that the device is properly configured. The term access point (AP), as used herein, is defined as any entity that has station (STA) functionality and provides access to the distribution services, via the wireless medium (WM), for associated STAs. The term Personal basic service set Control Point (PCP), as used herein, is defined as a STA that operates as a control point of the millimeter-wave (mm-wave) network. The term wireless network controller, as used herein, is defined as a station that operates as a PCP and/or as an AP of the wireless network. The terms "traffic" and/or "traffic stream(s)", as used herein, are defined as a data flow and/or stream between wireless devices such as STAs. The term "session", as used herein, is defined as state information kept or stored in a pair of stations that have established a direct physical link (e.g., excludes forwarding); the state information may describe or define the session. The term "wireless device", as used herein, includes, for example, a device capable of wireless communication, a communication device capable of wireless communication, a communication station capable of wireless communication, a portable or non-portable device capable of wireless communication, or the like.
In some embodiments, a wireless device may be or may include a peripheral device that is integrated with a computer, or a peripheral device that is attached to a computer. In some embodiments, the term "wireless device" may optionally include a wireless service. It should be understood that the present invention may be used in a variety of applications. Although the present invention is not limited in this respect, the circuits and techniques disclosed herein may be used in many apparatuses such as stations of a radio system. Stations intended to be included within the scope of the present invention include, by way of example only, wireless local area network (WLAN) stations, wireless personal area network (WPAN) stations, and the like. Some embodiments may be used in conjunction with various devices and systems, for example, a video device, an audio device, an audio-video (A/V) device, a Set-Top-Box (STB), a Blu-ray disc (BD) player, a BD recorder, a Digital Video Disc (DVD) player, a High Definition (HD) DVD player, a DVD recorder, an HD DVD recorder, a Personal Video Recorder (PVR), a broadcast HD receiver, a video source, an audio source, a video sink, an audio sink, a stereo tuner, a broadcast radio receiver, a display, a flat panel display, a Personal Media Player (PMP), a digital video camera (DVC), a digital audio player, a speaker, an audio receiver, an audio amplifier, a data source, a data sink, a Digital Still Camera (DSC), a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a smartphone, a digital television, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a consumer device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless AP,
a wired or wireless router, a wired or wireless modem, a wired or wireless network, a wireless area network, a Wireless Video Area Network (WVAN), a Local Area Network (LAN), a WLAN, a PAN, a WPAN, devices and/or networks operating in accordance with existing WirelessHD™ and/or Wireless-Gigabit-Alliance (WGA) specifications and/or future versions and/or derivatives thereof, devices and/or networks operating in accordance with existing IEEE 802.11 (IEEE 802.11-2007: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications) standards and amendments ("the IEEE 802.11 standards"), IEEE 802.16 standards for Worldwide Interoperability for Microwave Access (WiMAX), Third Generation Partnership Project (3GPP) including Long Term Evolution (LTE) and Long Term Evolution Advanced (LTE-A), and/or future versions and/or derivatives thereof, units and/or devices which are part of the above networks, one way and/or two-way radio communication systems, cellular radio-telephone communication systems, a Wireless-Display (WiDi) device, a cellular telephone, a wireless telephone, a Personal Communication Systems (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable Global Positioning System (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a Multiple Input Multiple Output (MIMO) transceiver or device, a Single Input Multiple Output (SIMO) transceiver or device, a Multiple Input Single Output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, Digital Video Broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device (e.g., BlackBerry, Palm Treo), a Wireless Application Protocol (WAP) device, or the like.
Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems, for example, Radio Frequency (RF), Infra Red (IR), Frequency-Division Multiplexing (FDM), Orthogonal FDM (OFDM), Time-Division Multiplexing (TDM), Time-Division Multiple Access (TDMA), Extended TDMA (E-TDMA), General Packet Radio Service (GPRS), extended GPRS, Code-Division Multiple Access (CDMA), Wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, multi-carrier CDMA, Multi-Carrier Modulation (MDM), Discrete Multi-Tone (DMT), Bluetooth®, Global Positioning System (GPS), Wi-Fi, Wi-Max, Wireless Metropolitan Area Networks (WMAN), Wireless Wide Area Networks (WWAN), ZigBee™, Ultra-Wideband (UWB), Global System for Mobile communication (GSM), 2G, 2.5G, 3G, 3.5G, Enhanced Data rates for GSM Evolution (EDGE), or the like. Other embodiments may be used in various other devices, systems and/or networks. Some embodiments may be used in conjunction with suitable limited-range or short-range wireless communication networks, for example, "piconets", e.g., a wireless area network, a WVAN, a WPAN, and the like. Figure 1 illustrates a top level system diagram 100 of one exemplary embodiment consistent with the present disclosure. An update server 102, which may be a trusted or secure server, may provide software updates to a mobile platform 104 with secure authentication and update capability, as will be described in more detail below. Mobile platform 104 may be any type of mobile or wireless communication device, such as, for example, a smartphone, a laptop or a tablet. The software updates may be provided wirelessly to any number of mobile platforms 104. In some embodiments the updates may be provided as a response to a request from the platform 104 or as a "push" from the server 102, i.e., the server schedules or initiates the update transmission. The platform 104 may send a response to the server 102.
The response may verify that the update was successful or it may indicate a problem. Problems may include, for example, that critical software components were missing from the update image or that a user identity could not be verified. Figure 2 illustrates a block diagram 200 of one exemplary embodiment consistent with the present disclosure. Update server 102 and mobile platform 104 are shown in greater detail. Mobile platform 104 is shown to include a trusted user authentication module (TUAM) 210, an image update and query module 212, a user interface 208, a trusted execution environment (TEE) 214, a secure update application module 216 and a software critical component database (SCCD) 218. The TEE 214 provides a secure environment within which the secure update application module 216 and the software critical component database (SCCD) 218 may reside and operate. Other secure application modules 222, unrelated to software updates, may also reside in the TEE. Additionally, the TEE 214 may handle at least portions of encryption, decryption and authentication operations. In some embodiments, the TEE 214 may be considered to reside in a FW layer. The TEE 214 provides security and isolation from other entities that are outside the TEE, such as, for example, the OS and other non-trusted applications operating at the OS level or layer. The OS level may generally be considered to be a less secure and more easily modified level of software in a multi-layer abstraction model of software and generally resides between the lower level (more secure) firmware and the higher level (less secure) user applications. The isolation may prevent external entities from exercising control over the secure processing modules 216, 222 or obtaining access to data stored in the SCCD 218. In some embodiments, the TEE 214 may comprise separate physical hardware, for example an integrated circuit (IC) that is separate from an IC associated with the mobile platform 104.
In some embodiments, the TEE 214 may comprise a separate controller or processor within an IC that is shared with the mobile platform 104. In some embodiments, the TEE 214 may comprise a separate domain within a controller or processor that is shared with the mobile platform 104. Various techniques may be employed to securely isolate the TEE 214 including situations where hardware is being shared between the TEE 214 and the mobile platform 104. These techniques may include privileged execution modes associated with a shared processor and access protection mechanisms associated with a shared memory. The software critical component database (SCCD) 218 may be provided to identify those software components that are recognized as necessary for the secure operation of the device, and the secure update application module 216 may ensure that software update images include all of these components before allowing installation of the update. This may be accomplished by checking information contained in headers associated with the images against information in the SCCD 218 as will be described in greater detail below. The secure update application module may also maintain user authentication information that is employed to verify the identity and/or authority of the user to install the update. The user authentication information may include, for example, passwords or any other suitable type of authenticating information. The trusted user authentication module (TUAM) 210, which may execute at the OS layer, is provided to authenticate the identity of the user based on authentication information maintained in the TEE 214 by the secure update application module 216 against credentials supplied by the user through user interface 208. User authentication may be required prior to the installation of the software update. 
In some embodiments, the device may be disabled or allowed to operate for limited durations or with reduced capabilities if user authentication is not performed within a pre-determined time following the software update. Image update and query module 212, which may also execute at the OS layer, is provided to interface with the update server 102 to receive software update images 220 and provide responses 224. Communication between the mobile platform 104 and the update server 102 may be accomplished wirelessly. The image update and query module 212 may also provide the capability for a local user, or a remote authorized administrator, to query the device regarding the identity of the software components included in the update image and to verify that the device is properly configured. Update server 102 is shown to include a secure update server agent 202 and a secure update server engine 204. In some embodiments, secure update server engine 204 may include a library of software subroutines or functions that may be made available and employed in the construction of software update images for mobile platforms or devices in general. This library may thus provide advantages associated with standardization. In contrast, secure update server agent 202 may be provided, developed or maintained by 3rd party application developers and may be configured to produce software update images that are configured for specific mobile platforms. Figure 3 illustrates a data structure 300 consistent with an exemplary embodiment of the present disclosure. A software update image 220 is shown to include a number of software components, some or all of which may be critical components. The components comprise a FW or SW image 302 which may be a binary executable, a header 304 and a digital signature 306. The digital signature 306, which may be an encrypted signature, is used to verify the integrity of the component and that the component is provided by a trusted source.
In some embodiments, the header 304 may be omitted for non-critical components. The header 304 is shown to include an application ID 308 which identifies the component, a presence flag 310 to indicate the presence or absence of that component, and, optionally, an area for application specific data 312 associated with the component. The ID 308 and presence flag 310 information in the header may be matched against information in the SCCD 218 by the secure update application module 216 to ensure that software update images include all of the critical components before allowing installation of the update. Figure 4 illustrates a flowchart of operations 400 of one exemplary embodiment consistent with the present disclosure. At operation 402, a software update image is downloaded. The download may be accomplished wirelessly from an update server to the mobile platform. At operation 404, a check is performed to determine if the platform or device is locked and, if so, the image update is rejected or postponed at operation 414. At operation 406, the digital signature of the software update image or components included in the image is verified. If the verification fails, the image update is rejected or postponed at operation 414. At operation 408, headers within the software update image are checked to verify the presence of all critical software components. The check may be performed as a match against a database that identifies critical components for the device. If the check fails, the image update is rejected or postponed at operation 414, otherwise the image update is allowed at operation 412. Figure 5 illustrates a system diagram 500 showing platforms consistent with an exemplary embodiment of the present disclosure in a network. 
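The checks of operations 402-414 above can be modeled as a single function. Here the `Component` fields mirror the header fields (application ID 308, presence flag 310), and `signature_ok` stands in for the digital-signature verification of operation 406; all names and types are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    app_id: str          # application ID 308
    present: bool        # presence flag 310
    signature_ok: bool   # stand-in for digital-signature verification

@dataclass
class UpdateImage:
    components: list[Component] = field(default_factory=list)

def check_update(image: UpdateImage, sccd: set[str], device_locked: bool) -> bool:
    """Mirror operations 404-412: reject if the device is locked, if any
    signature fails, or if a critical component named in the SCCD is missing."""
    if device_locked:
        return False
    if not all(c.signature_ok for c in image.components):
        return False
    supplied = {c.app_id for c in image.components if c.present}
    return sccd <= supplied          # every critical component must be present

sccd = {"kernel", "bootloader"}
good = UpdateImage([Component("kernel", True, True),
                    Component("bootloader", True, True)])
bad = UpdateImage([Component("kernel", True, True)])   # bootloader missing
assert check_update(good, sccd, device_locked=False)
assert not check_update(bad, sccd, device_locked=False)
assert not check_update(good, sccd, device_locked=True)
```

In the disclosure this matching runs inside the TEE, so OS-level code cannot tamper with the SCCD contents that drive the decision.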
A platform 104 may be a mobile communication device with secure authentication and update capability, such as, for example, a smartphone, a tablet, a laptop computing device or any other device configured to transmit or receive wireless signals. In some embodiments, platform 104 may comprise a processor 508, memory 510, an input/output (I/O) system 512, a display/keyboard or other type of user interface (UI) 514 such as, for example, a touchscreen. Platform 104 may also comprise a TUAM 210, an image update module 212 and a TEE 214 as described previously. Any number of platforms 104 may transmit or receive signals over a network 506, which may be a wireless network, to an update server 102. Figure 6 illustrates a flowchart of operations 600 of another exemplary embodiment consistent with the present disclosure. At operation 610, a software image is received from an update server. The software image includes one or more downloaded software components. At operation 620, a critical component database is provided. The database is configured to identify critical software components associated with the secure operation of the device. The database is maintained in a TEE that is configured to enforce access restrictions against software running at an operating system level on the device. At operation 630, the downloaded software components are matched to the critical software components in the database. The matching is performed in the TEE. At operation 640, the software image is installed on the device based on the results of the matching. Embodiments of the methods described herein may be implemented in a system that includes one or more storage media having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a system CPU (e.g., core processor) and/or programmable circuitry. 
Thus, it is intended that operations according to the methods described herein may be distributed across a plurality of physical devices, such as processing structures at several different physical locations. Also, it is intended that the method operations may be performed individually or in a subcombination, as would be understood by one skilled in the art. Thus, not all of the operations of each of the flow charts need to be performed, and the present disclosure expressly intends that all subcombinations of such operations are enabled as would be understood by one of ordinary skill in the art. The storage medium may include any type of tangible medium, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), digital versatile disks (DVDs) and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions. "Circuitry", as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. An app may be embodied as code or instructions which may be executed on programmable circuitry such as a host processor or other programmable circuitry. A module, as used in any embodiment herein, may be embodied as circuitry. The circuitry may be embodied as an integrated circuit, such as an integrated circuit chip. Thus, the present disclosure provides a device, method, system and computer readable storage media for secure operating system and firmware updates for mobile platforms with trusted hardware based authentication. 
The following additional example embodiments may be provided. Example 1 is a device that may include an image update module configured to receive a software update image from an update server, the image update module executing at an OS level. The device of this example may also include a critical component database configured to identify critical software components associated with the secure operation of the device. The device of this example may further include a secure update application module configured to verify the inclusion of the critical software components in the software update image prior to installation of the software update image on the device. The device of this example may further include a TEE configured to restrict control access and data access to the secure update application module and the critical component database, the restriction enforced against the OS and against modules executing at the OS level. Example 2 includes the subject matter of example 1 and also includes the foregoing components and a TUAM configured to authenticate a user of the device based on authentication information maintained in the TEE. Example 3 is another example device including the subject matter of either of examples 1 or 2, and also wherein the authentication information is a password and/or the authentication is performed prior to the installation of the software update image. Example 4 is another example device including the subject matter of either of examples 1 or 2, and also wherein the image update module is further configured to report a failure of the inclusion verification to the update server and/or to report the identity of components included in the software update image in response to receiving a query. Example 5 is another example device including the subject matter of either example 1 or 2, and also wherein the secure update application module is further configured to verify a digital signature associated with the software update image. 
Example 6 is a method that may include receiving the software image from an update server, and the software image includes one or more downloaded software components. The method of this example may also include providing a critical component database configured to identify critical software components associated with the secure operation of the device, the database maintained in a TEE, and the TEE is configured to enforce access restrictions against software running at an operating system level on the device. The method of this example may further include matching the downloaded software components to the critical software components, the matching performed in the TEE. The method of this example may further include installing the software image on the device based on the results of the matching. Example 7 includes the subject matter of example 6 and also includes the foregoing operations and further includes rejecting the software image update in response to determining that the device is in a locked state. Example 8 is another example method including the subject matter of either of examples 6 or 7, and includes rejecting the software image update in response to a failure to verify a digital signature associated with the software image and/or rejecting the software image update in response to a failure to authenticate a user of the device based on authentication information maintained in the TEE. Example 9 is another example method including the subject matter of any of examples 6 through 8 and further includes reporting to the update server a failure of the matching and/or reporting the identity of the downloaded software components included in the software image in response to receiving a query. Example 10 is at least one computer-readable storage medium having instructions stored thereon which when executed by a processor, cause the processor to perform the steps of the method as described in examples 6 through 9. Example 11 is a mobile communication platform. 
The platform may include a processor, a memory coupled to the processor, an I/O system coupled to the processor and a user interface coupled to the I/O system. The platform of this example may also include an image update module configured to receive a software update image from an update server, the image update module executing at an OS level. The platform of this example may further include a critical component database configured to identify critical software components associated with the secure operation of the platform. The platform of this example may further include a secure update application module configured to verify the inclusion of the critical software components in the software update image prior to installation of the software update image on the platform. The platform of this example may further include a TEE configured to restrict control access and data access to the secure update application module and the critical component database, the restriction enforced against the OS and against modules executing at the OS level. Example 12 includes the subject matter of example 11 and also includes the foregoing components and a TUAM configured to authenticate a user of the platform based on authentication information maintained in the TEE. Example 13 is another example platform including the subject matter of either of examples 11 or 12, and also wherein the authentication information is a password and/or the authentication is performed prior to the installation of the software update image. Example 14 is another example platform including the subject matter of either of examples 11 or 12, and also wherein the image update module is further configured to report a failure of the inclusion verification to the update server and/or to report the identity of components included in the software update image in response to receiving a query. 
Example 15 is another example platform including the subject matter of either example 11 or 12, and also wherein the secure update application module is further configured to verify a digital signature associated with the software update image. Example 16 is another example platform including the subject matter of either example 11 or 12, and the platform is one of a smartphone, a laptop computing device or a tablet, and the user interface is a touchscreen. A plurality of platforms may be included, each configured to communicate over a wireless network. The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
A processor may include an address monitor table and an atomic update table to support speculative threading. The processor may also include one or more registers to maintain state associated with execution of speculative threads. The processor may support one or more of the following primitives: an instruction to write to a register of the state, an instruction to trigger the committing of buffered memory updates, an instruction to read a status register of the state, and/or an instruction to clear one of the state bits associated with trap/exception/interrupt handling. Other embodiments are also described and claimed.
What is claimed is: 1. An apparatus, comprising: a plurality of thread units to concurrently execute a plurality of threads; and a memory buffer storage area to store data for a memory write instruction encountered during execution of an atomic block of instructions for a particular one of the plurality of threads; wherein the memory buffer storage area is part of a persistent state such that precise architected state is defined at the retirement boundary of each instruction of the atomic block. 2. The apparatus of Claim 1, further comprising: a control storage area whose contents may be updated responsive to a user-level programming instruction in the particular thread. 3. The apparatus of Claim 2, wherein: the contents of the control storage area are to control whether the memory-write data is to be stored in the memory address storage area. 4. The apparatus of Claim 2, wherein: said control storage area is a register that includes one or more fields to hold a state value; said state value to indicate one or more of the following states: (a) whether to store the memory-write data in the memory buffer storage area, (b) whether to reset the memory buffer storage area, and (c) whether to bypass the memory buffer storage area and instead write directly to a memory. 5. The apparatus of Claim 1, further comprising: a memory address storage area to maintain the address of a memory read instruction encountered during execution of the atomic block. 6. The apparatus of Claim 1, further comprising: logic to perform an atomic update from the memory buffer storage area to a memory. 7. The apparatus of Claim 6, wherein said logic to perform an atomic update is further to: perform an atomic update only if the atomic block has completed execution successfully. 8. The apparatus of Claim 1, further comprising: a user-visible status storage area whose contents reflect whether the atomic block has failed to be successfully executed. 9. 
A method, comprising: executing a selected instruction during execution by a processor of a transactional block of instructions in a speculative thread; and maintaining precise architected state of the processor at the execution boundary of the selected instruction. 10. The method of claim 9, further comprising: servicing a trap or exception while maintaining precise architected state for the transactional block. 11. The method of claim 9, further comprising: performing single-stepping of the transactional block instructions while maintaining precise architected state for the transactional block. 12. A method, comprising: buffering local memory writes during execution of an atomic block, where said buffering is performed responsive to a first user-level programming instruction; monitoring for a failure during execution of the atomic block; taking, as a non-failure condition, a trap or exception during execution of the atomic block; maintaining the buffered local memory writes as persistent state during handling of the trap or exception; resuming execution of the atomic block after handling the exception or interrupt; and selectively performing an atomic memory update of the buffered memory writes, based on whether the failure has occurred. 13. The method of Claim 12, wherein: said monitoring is performed responsive to a user-level programming instruction that indicates a trigger scenario and a handler address for an interrupt. 14. The method of Claim 13, wherein: said trigger scenario further comprises a change in the value of one or more status bits. 15. The method of Claim 14, wherein: said one or more status bits are specified by a mask associated with the user-level programming instruction. 16. 
A method, comprising: concurrently executing a plurality of cooperative threads; suspending execution of all but a first one of the cooperative threads in order to allow the first thread to execute a block of instructions atomically; wherein said suspending is triggered by action of the first thread to invoke a hardware mechanism; and resuming the other cooperative threads after the first thread has completed atomic execution of the block of instructions. 17. The method of Claim 16, wherein: said action of a first thread to invoke a hardware mechanism further comprises writing a pre-defined value to a specified memory location. 18. The method of Claim 17, wherein: said suspending is further triggered by an interrupt generated as a result of said action of the first thread, such that said suspending is achieved without polling, by the other cooperative threads, of the specified memory location. 19. The method of Claim 16, wherein: said method is performed by a multi-threaded processor that includes hardware to support transactional execution. 20. The method of Claim 19, wherein: said hardware includes a storage area to buffer memory writes of an atomic block. 21. The method of Claim 19, wherein: said hardware includes a storage area to maintain addresses of memory reads of an atomic block. 22. The apparatus of Claim 1, wherein: each said thread unit further comprises decode logic to receive and decode a user-level transactional execution instruction. 23. The apparatus of Claim 22, wherein: said decode logic is further to receive and decode a user-level atomic demarcation instruction. 24. The apparatus of Claim 22, wherein: said decode logic is further to receive and decode a user-level instruction to read a transaction status. 25. The apparatus of Claim 22, wherein: said decode logic is further to receive and decode a user-level instruction to enable traps during transactional execution. 26. 
The apparatus of Claim 22, wherein: said decode logic is further to receive and decode a user-level instruction to perform an atomic memory update. 27. The apparatus of Claim 5, further comprising: logic to determine whether another of the plurality of threads has written to the address of the memory read instruction during the particular thread's execution of the atomic block. 28. The apparatus of Claim 5, further comprising: a user-visible mechanism to control whether the memory-read address is to be stored in the second storage area. 29. The apparatus of Claim 28, wherein said user-visible mechanism further comprises: a storage area whose contents may be updated responsive to a user-level programming instruction. 30. The apparatus of Claim 1, wherein said plurality of thread units further comprise: a plurality of processor cores. 31. The apparatus of Claim 1, wherein said plurality of thread units further comprise: a plurality of logical processors associated with a single processor core. 32. The method of Claim 16, wherein: said suspending is initiated in response to a user-level software instruction. 33. A system, comprising: a memory to store software instructions for a plurality of threads; a plurality of thread units to concurrently execute the plurality of threads; and a memory buffer storage area to store data for a memory write instruction encountered during execution of an atomic block of instructions for a particular one of the plurality of threads; wherein the memory buffer storage area is part of a persistent state such that precise architected state is defined at the retirement boundary of each instruction of the atomic block. 34. The system of Claim 33, wherein: said memory is a DRAM. 35. The system of Claim 33, further comprising: a memory address storage area to maintain the address of a memory read instruction encountered during execution of the atomic block. 36. 
The system of Claim 33, further comprising: a control storage area whose contents may be updated responsive to a user-level programming instruction in the particular thread. 37. The system of Claim 33, wherein: the contents of the control storage area are to control whether a trap may be taken as a non-failure condition during execution of the atomic block. 38. The system of Claim 33, wherein: each said thread unit further comprises decode logic to receive and decode a user-level transactional execution instruction.
PRIMITIVES TO ENHANCE THREAD-LEVEL SPECULATION
Background
Technical Field
[0001] The present disclosure relates generally to information processing systems and, more specifically, to support for thread-level speculation.
Background Art
[0002] Increasingly, multithreading is supported in hardware. For instance, in one approach, processors in a multi-processor system, such as a chip multiprocessor ("CMP") system, may each act on one of the multiple software threads concurrently. In another approach, referred to as simultaneous multithreading ("SMT"), a single physical processor is made to appear as multiple logical processors to operating systems and user programs. For SMT, multiple software threads can be active and execute simultaneously on a single processor without switching. That is, each logical processor maintains a complete set of the architecture state, but many other resources of the physical processor, such as caches, execution units, branch predictors, control logic and buses are shared. For SMT, the instructions from multiple software threads thus execute concurrently on each logical processor.[0003] For a system that supports concurrent execution of software threads, such as SMT and/or CMP systems, an application may be parallelized into multi-threaded code to exploit the system's concurrent-execution potential. The threads of a multi-threaded application may need to communicate and synchronize, and this is often done through shared memory. An otherwise single-threaded program may also be parallelized into multithreaded code by organizing the program into multiple threads and then concurrently running the threads, each thread on a separate thread unit. 
When certain assumptions regarding dependencies are made during the parallelization process for an otherwise single-threaded program, the technique is sometimes referred to as speculative multithreading.[0004] To increase the performance of, and/or to make it easier to write, multithreaded programs, thread-level speculation can be used. Thread-level speculation refers to a thread's performance of a block of instructions speculatively. That is, the thread executes the instructions but other threads are not allowed to see the result of the instructions until the thread makes a decision to commit or discard (also known as abort) the work done speculatively.[0005] Processors can make thread-level speculation more efficient by providing the ability to buffer and contain memory updates done as part of a speculative block of instructions. The memory updates may be buffered until directed to perform or discard the speculative memory updates.[0006] One of the things that a program may want to speculate on is whether a block of code is dependent on other code running concurrently on other threads. Processors can make this more efficient by providing support for detecting dependencies. For example, a processor may provide support to detect whether a speculative block of code reads any memory locations that are subsequently modified by another concurrent thread.
Brief Description of the Drawings
[0007] Embodiments of the present invention may be understood with reference to the following drawings in which like elements are indicated by like numbers. These drawings are not intended to be limiting but are instead provided to illustrate selected embodiments of systems, methods and mechanisms to provide speculative multithreading with transactional execution support. [0008] Fig. 1 is a block diagram presenting a graphic representation of a general parallel programming approach.[0009] Fig. 
2 is a block diagram illustrating selected features of a processor according to at least one embodiment of the present invention.[00010] Figs. 3, 4 and 5 are flowcharts illustrating data and control flow for at least one embodiment of a method for performing speculative multithreading with transactional execution support.[00011] Fig. 6 is a data flow diagram illustrating at least one embodiment of a mechanism to determine that execution of a transactional block has failed.[00012] Fig. 7 is a block diagram illustrating at least one embodiment of a system capable of performing disclosed techniques. [00013] Fig. 8 is a block diagram illustrating at least one embodiment of a processor that includes an address monitor table and an atomic update table to support transactional execution.
Detailed Description
[00014] The following discussion describes selected embodiments of methods, systems and mechanisms to provide hardware support for thread-level speculation. The apparatus, system and method embodiments described herein may be utilized with single-core or multi-core multithreading systems. In the following description, numerous specific details such as processor types, multithreading environments, system configurations, data structures, and instruction mnemonics and semantics have been set forth to provide a more thorough understanding of embodiments of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring the present invention.[00015] For multi-threaded workloads that exploit thread-level speculation, at least some, if not all, of the concurrently executing threads may share the same memory space. As used herein, the term "cooperative threads" describes a group of threads that share the same memory space. 
Because the cooperative threads share memory space, they may read and/or write to the same memory items. Accordingly, concurrently-executed cooperative threads should be synchronized with each other in order to do correct, meaningful work.[00016] Fig. 1 is a block diagram illustrating, in graphical form, two cooperative threads 125, 126 that share a common logical view of memory. Such a shared-memory multiprocessing paradigm may be used in an approach referred to as parallel programming. According to this approach, an application programmer may split a software program, sometimes referred to as an "application" or "process," into multiple threads to be run concurrently in order to express parallelism for the software program. That is, an otherwise single-threaded program, or "process" 120, may be broken up into two threads 126, 125 that may execute concurrently. [00017] Fig. 1 illustrates that each thread 125, 126 has its own application and system state 202a, 202b, respectively. A particular logical view 204 of memory is shared by the cooperative threads 125, 126 associated with a particular process 120. Accordingly, for at least one embodiment, the cooperative threads 125, 126 may each share the same view of virtual memory that is constructed by the operating system for the process 120 and may have visibility to each other's virtual address space. [00018] Fig. 1 illustrates, for simplicity of discussion, only two threads 125, 126 for a process 120. However, such example illustration should not be taken to be limiting. The number of cooperative threads associated with a process 120 may be more than two. The upper bound on the number of threads associated with a process 120 may be limited by an OS program (not shown). [00019] Various approaches have been devised to deal with synchronization of memory accesses for cooperative threads. A common approach for dealing with the synchronization of cooperative threads is the use of memory locks. 
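The memory-lock approach just mentioned serializes critical sections: a thread acquires the lock, executes its critical section, and releases the lock. A minimal software sketch of this discipline (a Python analogue for illustration, not part of the disclosure):

```python
import threading

counter = 0                       # shared data item
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:                # acquire, execute critical section, release
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 40_000          # serialization makes the result deterministic
```

Note that the lock serializes every increment even when the threads would not actually have conflicted — the pessimism the surrounding discussion is concerned with.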
Memory locks may be used to guarantee that a particular thread has exclusive access to shared data for a particular section of code. In traditional multi-threaded algorithms, locks may be used around any critical section of code that may ever cause incorrect behavior if multiple threads execute critical sections concurrently. For such approach, a thread may acquire the lock, execute its critical section, and then release the lock. Performance can be degraded by locks because they can inhibit multiple threads from running concurrently. Performance can be further degraded if, "just to be safe", locks are held more than necessary. That is, locks may often be used rather pessimistically.[00020] To minimize the performance impact of locks, multiple different locks can be used by an application so that locking is done at a fine level of granularity associated with either different sections of code or with different blocks of code. Fine-grain locking may be cumbersome to implement in code, and may be prone to deadlock when a single thread must acquire ownership of multiple locks.[00021] For a variety of reasons, concurrent accesses to a set of shared data structures by multiple threads within critical sections may, in fact, not conflict for any specific occurrence. For such cases, the serialization provided by locks is not necessary in order to ensure correct execution. Pessimistic use of locks in such cases may prohibit full realization of the benefits of parallelism because one thread will wait for a free lock, and then acquire the lock in a serialized manner, even if such serialization is not required to maintain accurate memory values. [00022] Accordingly, one approach to avoiding unnecessary serialization is known as Speculative Lock Elision ("SPE"). The SPE approach may reduce the cost of locks. Such approach is described in "Speculative Lock Elision: Enabling Highly Concurrent Multithreaded Execution," Rajwar et al., Proc. 30th ACM/IEEE Int'l. Symp. 
on Microarchitecture, 2001. For such approach, it is recognized that some synchronization instructions have been used pessimistically and are not necessary. For SPE, some synchronization instructions are predicted as unnecessary and are elided. In cases where two concurrently-executing critical sections do not touch any of the same memory locations, then the artificial serialization of locks is avoided. [00023] As an alternative approach to the locking and SPE schemes discussed above, transactional execution has emerged. Under a transactional execution approach, a block of instructions may be demarcated as an atomic block and may be executed atomically without the need for a lock. (As used herein, the terms "atomic block" and "transactional block" may be used interchangeably.) Semantics may be provided such that either the net effects of each of the demarcated instructions are all seen and committed to the processor state, or else none of the effects of any of the demarcated instructions are seen or committed. This provides an alternative form of synchronization for accessing shared memory, with a number of benefits in terms of concurrency and also in the reasoning that needs to be done by the program writer. [00024] Speculative Lock Elision and Transactional Execution can both be achieved with thread-level speculation support. In both cases, the semantics require a block of code to be executed speculatively while monitoring for data dependencies. The required support includes some way to buffer memory updates performed within the speculative region and then commit or discard the updates. The required support also includes some way to detect if a memory read within the speculative region observed a value that was modified by another thread during the period of speculation. 
This requires some way to remember all the memory addresses read within a speculative region and monitor them for updates by other threads.

[00025] Speculative multi-threading is another approach to multi-threading a program and using thread-level speculation. For speculative multi-threading, a sequential program is partitioned into sequential tasks, or blocks of code, that are then run in parallel. The tasks are ensured to commit their updates in program order so as to preserve the original sequential semantics. The tasks also monitor whether any updates by previous tasks change the values they observed, in which case they need to discard their speculative updates and redo the work. The hardware support for this is fundamentally the same thread-level speculation support discussed above.

[00026] There have been many different proposals on how to build the hardware support for thread-level speculation, as well as how to provide the software interface. Most of these approaches have provided the same basic functionality through varying interfaces. Effectively, they checkpoint some of the architected state. Then they continue execution, buffering memory updates and monitoring memory locations that are read for foreign writes.

[00027] During execution of an atomic block of a cooperative thread, for at least one known transactional execution approach, the memory state created by the thread is speculative because it is not known whether the atomic block of instructions will successfully complete execution. That is, a second cooperative thread might contend for the same data, and then it is known that the atomic block of the first cooperative thread cannot be performed atomically. That is, it is known that there has been a misspeculation regarding the first and/or second cooperative thread. To account for possible misspeculation, the processor state is not updated during execution of the instructions of the atomic block, according to at least some proposed transactional execution approaches.
Instead, processor state is maintained as an undefined intermediate state until the atomic block completes execution.

[00028] For such approaches, the state of the processor at each instruction of the atomic block depends on whether or not the state of the atomic block will ultimately be committed. Thus, during execution of the atomic block the intermediate state is 1) a first state if the state is ultimately committed (analogous to the state that would be maintained in a speculative memory buffer, discussed above) and 2) a second state if the state is not ultimately committed.

[00029] Accordingly, for some common transactional execution approaches, the intermediate state for an atomic block is not defined. This makes certain operations, such as precise trap-handling and single-step debugging, infeasible for instructions inside an atomic block. However, Fig. 2 illustrates at least one embodiment of a thread execution unit that supports speculative threading and transactional execution, and that also provides a precise architected state at the boundary (such as retirement) of every instruction in an atomic block.

[00030] Fig. 2 is a block diagram illustrating a multi-threaded processor 200 that provides the ability to implement transactional execution while providing precise architected state at the boundary of every instruction, including instructions within a transactional block. The processor 200 supports concurrent execution of more than one thread at a time. As used herein, the term "thread" includes, at least, the concept of independent execution of a stream of instructions that may be executed concurrently with other threads of a process. The "thread" term encompasses the idea, therefore, of execution of a software instruction stream along with the associated processor state.

[00031] For at least one embodiment, the processor 200 may execute a portion of an application's code that has been parallelized through the use of cooperative threads.
For example, a speculative thread, referred to as the spawnee thread, may run on the processor 200 to execute instructions that are ahead, in program order, of the code being executed, on the processor 200, by the thread that performed the spawn. The thread that performed the spawn is referred to as the spawner thread.

[00032] Fig. 2 illustrates at least one CMP embodiment, where each of multiple thread units 104 is a processor core, with the multiple cores 104a-104n residing in a single chip package 103. Each core 104 may be either a single-threaded or multi-threaded processor. For at least one embodiment, a CMP core (such as, e.g., 104a) separate from the core executing the spawner thread (such as, e.g., 104c) executes the spawnee thread.

[00033] For at least one alternative embodiment, the processor 200 may be a single-core processor that supports concurrent multithreading. For such embodiment, each thread unit 104 is a logical processor having its own next-instruction pointer and fetch logic, although the same processor core executes all thread instructions. (The terms "thread unit" and "sequencer" may be used interchangeably herein). For such embodiment, the logical processor 104 maintains its own version of the architecture state, although execution resources of the single processor core are shared among all threads.

[00034] For such alternative embodiment, the spawnee thread is executed in a single-core simultaneous multithreading system that supports speculative multithreading. For such embodiment, the spawnee thread is executed by a second SMT logical processor (such as, e.g., 104a) on the same physical processor 200 as the spawner thread, while the spawner thread is executed by another SMT logical processor (such as, e.g., 104n).
One skilled in the art will recognize that the transactional execution embodiments discussed herein may be utilized in any multithreading approach, including SMT, CMP multithreading or other multiprocessor multithreading, or any other known multithreading approach.

[00035] While the CMP embodiments of processor 200 discussed herein refer to only a single thread per processor core 104, it should not be assumed that the disclosures herein are limited to single-threaded processors. The techniques discussed herein may be employed in any CMP system, including those that include multiple multi-threaded processor cores in a single chip package 103.

[00036] Accordingly, Fig. 2 illustrates that the processor 200 includes two or more thread units 104a-104n. For purposes of discussion, the number of thread units is referred to as "N." The optional nature of thread units 104 in excess of two such thread units is denoted by dotted lines and ellipses in Fig. 2. That is, Fig. 2 illustrates N ≥ 2. For simplicity of discussion, a CMP embodiment is discussed in further detail herein. Each thread unit 104 may be representative of 32-bit and/or 64-bit processors such as Pentium(R), Pentium(R) Pro, Pentium(R) II, Pentium(R) III, Pentium(R) 4, and Itanium(R) and Itanium(R) 2 microprocessors. Such partial listing should not, however, be taken to be limiting.

[00037] The embodiment of a processor 200 illustrated in Fig. 2 is designed to provide certain semantics in support of speculative multithreading. (Each is discussed in further detail below).
While certain specific implementations of such features are discussed below, it should be understood that such implementation details are provided for purposes of example only and should not be taken to be limiting.

[00038] First, the processor 200 provides some way to demarcate the beginning and end of a set of instructions (referred to interchangeably herein as an "atomic block" or "transactional block") that includes a memory operation for shared data.

[00039] Second, the processor 200 includes hardware that monitors load (memory read) addresses in order to detect contention among cooperative threads.

[00040] Third, the processor 200 includes hardware (a "store buffer") to buffer store (memory write) operations.

[00041] Fourth, the processor 200 is designed to perform atomic updates of memory from the store buffer (if no contention is perceived during execution of the atomic block).

[00042] Finally, the processor 200 is designed to discard the memory updates of the store buffer and to signal a failure if contention is detected during execution of the atomic block. Such general capabilities are provided by at least one embodiment of the processor 200.

[00043] Regarding the demarcation of an atomic block, the processor 200 may provide such support in any of several manners. For at least one embodiment, a programmer may indicate that a read or write instruction is part of an atomic block by setting particular bits in the instruction opcode itself. For example, an "atomic" indicator may be part of the instruction opcode, or may be indicated by a particular prefix for the load or store instructions.

[00044] For at least one other embodiment, an instruction set supported by the processor 200 may include explicit architectural demarcation instructions. That is, the instruction set for the processor 200 may include a "begin monitor" instruction that may be placed by the programmer at the beginning of the atomic block.
Similarly, the instruction set for the processor 200 may also include a "stop monitor" instruction that may be placed by the programmer after the last instruction of the atomic block. For at least one embodiment, a single instruction may be used to manipulate a control register to perform both the "begin monitor" and "stop monitor" functions. Further discussion of at least one embodiment of such instruction and control register is set forth below in connection with Fig. 7.

[00045] As is stated above, an embodiment of a processor 200 that supports speculative multithreading and transactional execution may provide hardware-based monitoring of load (memory read) addresses in order to detect contention among cooperative threads. Fig. 2 illustrates that each thread unit 104 may include a table 106 to store one or more addresses to be monitored for external updates. Such table 106 may be referred to as an address monitor table ("AMT"). The logical concept of the AMT 106 may be architecturally defined for the thread unit 104 but does not necessarily need to be implemented as a discrete hardware table structure.

[00046] The AMT 106 may be useful because, as is stated above, the potential dependencies and/or shared data contention within an atomic block may be ambiguous. If the programmer had known that another thread would try to write, during execution of the atomic block, to an address used in the atomic block, the programmer would presumably not have tried to read the location during concurrent execution. In other words, if the programmer had known that the contention/dependency existed in the original program, an attempt to parallelize the code in this manner would not have been made; the code would have been permitted to execute the contentious instructions sequentially, as originally written. The AMT 106 thus may be useful in identifying misspeculations.

[00047] In addition, Fig.
2 illustrates that each thread unit 104 may also include a table 108 to buffer memory updates that may be performed later, if it is determined that the thread performing the updates was not misspeculated. Such table 108 may be referred to as an atomic update table ("AUT"). (For an SMT embodiment, a single AMT 106 and AUT 108 may be shared among logical processors, with different portions of the tables being allocated to each logical processor). The AUT 108 may buffer memory writes performed during an atomic block. Such approach prevents other threads from observing the intermediate state of the atomic block.

[00048] When it is finally determined that the atomic block has been able to complete execution without unresolved dependencies or contention with another thread, the memory updates buffered in the AUT 108 may be performed atomically. If, however, the transaction fails (that is, if the atomic block is unable to complete execution due to contention or unresolved data dependence), then the AUT 108 may be cleared and the buffered updates are not performed. In this manner, already-performed memory writes need not be rolled back responsive to a determination that a misspeculation has occurred.

[00049] At least one embodiment of the processor 200 illustrated in Fig. 2 provides a precise architected state at the boundary (such as retirement) of every instruction in an atomic block in the following manner. Certain user-controllable state in the processor 200 may be set to indicate that a transaction failure should not occur if a trap or exception occurs during execution of the instructions of an atomic block. Instead, the contents of the AMT 106 and AUT 108 are preserved while the exception/trap is handled. After such handling, execution of the atomic block may continue.
In this manner, a precise state is maintained so that execution of the atomic block may be resumed after the trap or exception is handled.

[00050] Although the AMT 106 and AUT 108 are illustrated as discrete blocks in Fig. 2, such illustration is meant to convey that such tables are logically distinct structures. Although such tables 106, 108 may be architecturally explicit, their specific organization and physical structure is a matter of design choice, and the exact manner of physical implementation should not be taken to be limited to any particular structure or organization. Generally, the information of the AMT 106 and AUT 108 may be maintained in any storage area. For example, the logical "tables" 106, 108 may be a collection of bits or may be extensions to other existing hardware structures.

[00051] Regardless of the specific manner of implementing the AMT 106 and AUT 108, the tables 106, 108 may generally be implemented in one or more physical storage area(s) as a finite logical construct. The finite nature of the tables 106, 108 necessarily restricts the number of instructions that can be successfully executed as a transaction. Accordingly, one or more memory tables in a backstore 160 may be used to extend the size of the AMT 106 and/or AUT 108.

[00052] Fig. 2 illustrates that at least one embodiment of the processor 200 may be coupled to a memory 150, where a portion of the memory 150 may be utilized by software to maintain a backstore 160 for the AMT 106 and/or the AUT 108. Software may control spilling of overflow entries from the tables 106, 108 to the backstore 160.

[00053] For at least one embodiment, the AMT 106 may be implemented as a structure that is parallel to a load buffer. Similarly, the AUT 108 may be implemented as a structure that is parallel to a store buffer. One possible configuration for such embodiment is illustrated in Fig. 8.

[00054] Fig.
8 is a block diagram illustrating in further detail at least one embodiment of a processor 1004 that includes an AMT 106 and AUT 108, as well as load request buffers 440 and store request buffers 450. One or more of the AMT 106, AUT 108, store request buffers 450 and/or load request buffers 440 may be part of a memory ordering buffer (MOB) 223. The processor 1004 may also include a decoder 1022 to receive and decode instructions of an instruction set; the instructions to be decoded by the decoder 1022 may include one or more instructions to perform the operations described below in connection with Table 1.

[00055] Fig. 8 illustrates a processor 1004 that implements a non-blocking cache memory subsystem (the cache memory subsystem will sometimes be referred to herein by the shorthand terminology "cache system"). The cache system includes an L0 cache 460 and an L1 cache 410. For at least one embodiment, the L0 cache 460 and L1 cache 410 are on-die caches. The processor 1004 may also retrieve data from a main memory 102. The main memory 102, L1 cache 410, and L0 cache 460 together form a memory hierarchy 240.

[00056] The memory order buffer ("MOB") 223 may temporarily hold the state of outstanding load and store instructions from dispatch to completion.
For at least one embodiment, this state information for store instructions may be maintained in store request buffers 450 and this state information for load instructions may be maintained in load request buffers 440.

[00057] For at least one embodiment, tracking of load instructions may optionally be handled via the AMT 106, which may be utilized along with the load request buffers 440 during transactional execution.

[00058] For at least one embodiment, the state information for outstanding store instructions may be maintained in store request buffers 450 for normal operation or, instead, may be maintained in the AUT 108 during transactional execution.

[00059] Fig. 8 illustrates that each store buffer entry 450a-450n may include a control portion 515. Although logically associated with each other as illustrated in Fig. 8, one skilled in the art will recognize that the control portion 515 and the data portion 480 of a store request buffer entry 450a-450n need not necessarily physically reside in contiguous storage areas of a storage device, nor even reside in the same storage device. For instance, Fig. 8 illustrates that the control portion 515 of the store buffers 450 may be included in the MOB 223 while the data portion 480 may reside in an on-die cache 410.

[00060] For at least one embodiment, the MOB 223 includes control logic 475. Control logic 475 includes selection logic 236 to determine whether store data should be buffered in the store request buffers 450 or in the AUT 108. For at least one embodiment, the selection logic 236 may direct that a store should be recorded in only one of the store request buffers 450 or the AUT 108. That is, determination of where to hold store data may be an "exclusive-OR" operation. The selection logic 236 may indicate that, when atomic execution is not being performed, store state may be buffered in the store request buffers 450.
However, during atomic execution, the selection logic 236 may instead cause the store state to be buffered in the AUT 108.

[00061] For at least one embodiment, the selection logic 236 is also to determine whether the memory address for load data, which has been read from memory, should be entered into the AMT 106. Such entry may be made, during atomic execution, along with the normal operation of pulling memory read data into the load request buffers 440. That is, determination of whether to monitor load addresses in the AMT 106 may be a selective operation, such that monitoring is performed in addition to normal load request buffer 440 operation.

[00062] The use of the AMT 106 and AUT 108 allows speculative multithreading of code that would otherwise be hard to parallelize because of ambiguous data dependencies or data contention. Through the use of the logical address monitor table 106 and the logical atomic update table 108, the processor 200 may detect that certain potential data dependencies or contention, which appear ambiguous before execution, may indeed exist between threads during execution. As is explained above, the tables 106, 108 thus support monitoring of load (memory read) operations and buffering of store (memory write) operations, respectively.

[00063] Fig. 3 is a flow diagram illustrating data and control flow for at least one embodiment of a method 300 for performing speculative multithreading with transactional execution support using the AMT 106 and AUT 108. Generally, the method 300 executes instructions of an atomic block but buffers updates to memory. Also, the method 300 generally provides for monitoring memory addresses that are read during execution of the atomic block, in order to determine if another thread attempts to perform a write to the same address.
If so, there is contention for that memory address during the execution of the atomic block, and transactional execution of the block fails due to the contention for the memory address.

[00064] Fig. 3 illustrates that the method 300 begins at block 302. It is assumed that the method 300 is performed on a block that has been demarcated as a transactional block. As is mentioned above, it is therefore assumed for at least one embodiment that a "begin monitor" instruction has been executed prior to beginning the method 300. For such embodiment, it is also assumed that execution of a "stop monitor" instruction will cause the determination at block 314 to evaluate to a false value.

[00065] Alternatively, the demarcation may be denoted by marking each load and store instruction within the atomic block with a prefix, opcode field, or other individualized indicator that the instruction is to be performed as part of an atomic block. For such embodiment, the optional blocks 308 and 312 (denoted as optional by the use of broken lines) are performed to determine whether the instruction is part of an atomic block.

[00066] It is assumed that, for at least one embodiment, the method 300 is performed by a thread execution unit (see, e.g., 104 of Fig. 2) of a processor that includes an AMT 106 and an AUT 108 (see Fig. 2). Accordingly, it will be understood by one of skill in the art that the determination of whether an instruction is part of an atomic block is also an indication that any memory writes performed during normal execution of the demarcated instructions should be buffered in the AUT 108 and that the address for any memory reads performed during normal execution of the demarcated instructions should be maintained in the AMT 106.

[00067] Fig. 3 illustrates that, at any time during execution of the atomic block according to the method 300 shown, a trap, exception or interrupt may be taken. If such event is taken, precise architected state may be maintained.
In other words, the contents of the AMT 106 and AUT 108 may be maintained during the handling of the exception/interrupt/trap event. Such event is not treated as a condition that causes a failure. Instead, execution of the atomic block according to the method 300 illustrated in Fig. 3 may be resumed after handling of the event. Fig. 3 illustrates at block 390 that such an event is not a failure condition for at least one embodiment of the method 300.

[00068] Fig. 3 illustrates that processing for the method 300 proceeds from block 302 to block 304. At block 304, the next instruction of a thread is fetched and decoded. Processing then proceeds to block 306. At block 306, it is determined whether the instruction fetched and decoded at block 304 is a memory read instruction (such as, for example, a load instruction). If so, then processing proceeds to optional block 308. Otherwise, processing proceeds to block 310.

[00069] Optional block 308 determines whether the instruction is part of an atomic block. The manner of such determination may differ across various implementations. For an implementation that does not embed such information in the memory read instruction itself, but instead uses a "begin monitor" instruction, such determination 308 need not be performed for each memory read instruction. Instead, it is assumed that a "begin monitor" instruction has been executed prior to beginning execution of the method 300 at block 302 and that the method 300 is aware of this during execution. For at least one embodiment, for example, such information may be maintained in a control register, such as the transaction control register ("TCR") discussed below. For such embodiments, processing proceeds from block 306 to connector "A", and does not perform optional block 308. The processing associated with connector "A" is described in further detail in connection with Fig. 4.
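For illustration only, the "begin monitor"/"stop monitor" style of demarcation described above, with atomic-block membership tracked as scoped control-register state rather than per-instruction checks, can be sketched in software. The names `TxnState`, `begin_monitor`, `stop_monitor`, and `atomic_block` are hypothetical and are not part of any actual instruction set; this is a minimal model of the idea, not an implementation.

```python
from contextlib import contextmanager

class TxnState:
    """Hypothetical stand-in for the control-register state that
    records whether an atomic block is currently being monitored."""
    def __init__(self):
        self.monitoring = False

state = TxnState()

def begin_monitor():
    # Analogous to a "begin monitor" instruction: subsequent memory
    # reads/writes are treated as part of an atomic block.
    state.monitoring = True

def stop_monitor():
    # Analogous to a "stop monitor" instruction: ends the atomic block.
    state.monitoring = False

@contextmanager
def atomic_block():
    # Scoped demarcation: everything inside the block runs with
    # monitoring enabled, so no per-instruction membership check
    # (blocks 308/312) is needed.
    begin_monitor()
    try:
        yield
    finally:
        stop_monitor()
```

Code inside a `with atomic_block():` region then corresponds to the demarcated instructions of the method 300, with the flag playing the role the TCR plays in the hardware description.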
[00070] At block 310, it is determined whether the instruction fetched and decoded at block 304 is a memory write instruction (such as, for example, a store instruction). If so, then processing proceeds to optional block 312. Otherwise, processing proceeds to block 311.

[00071] Optional block 312 determines whether a memory write instruction is part of an atomic block. Again, the manner of such determination may differ across various implementation embodiments. For an implementation that does not embed such information in the memory write instruction itself, but instead uses a "begin monitor" instruction, such determination 312 need not be performed for each memory write instruction. Instead, as is explained above, it is assumed that a "begin monitor" instruction has been executed prior to beginning execution of the method 300. Again, such information may be stored in a control register. For such embodiments, processing proceeds from block 310 to connector "B", and does not perform optional block 312. The processing associated with connector "B" is described in further detail in connection with Fig. 5.

[00072] If the current instruction that has been fetched at block 304 is neither a memory read instruction nor a memory write instruction, processing falls through to block 311. The instruction is executed at block 311. Processing then proceeds to block 314.

[00073] Block 314 is performed for embodiments that utilize a "begin monitor" and "stop monitor" instruction. For such embodiments, the determination at block 314 evaluates to "true" if no "stop monitor" instruction has been encountered.

[00074] Block 314 is also performed for embodiments that do not utilize a "begin monitor" demarcation instruction and that instead associate an atomic block indicator with individual memory instructions. For such embodiments, the determination at block 314 determines whether some kind of termination indicator has been reached.
For at least one embodiment, the termination indicator may be an instruction, or opcode bits or prefix for an instruction, that indicates that the buffered updates in the AUT (see 108, Fig. 2) should be committed to memory. For such embodiment, the determination at block 314 evaluates to "true" if the termination indicator has not been encountered.

[00075] Processing loops back to block 304 in order to fetch the next instruction if the determination at block 314 evaluates to "true." Otherwise, processing may end at block 318 or may optionally proceed to block 316.

[00076] If the method 300 reaches block 316 without suffering a transaction failure interrupt, the atomic block has successfully completed execution without contention. Accordingly, the memory updates that have been buffered during execution of the atomic block may be committed 316 to memory. At block 316, the buffered memory updates from the AUT 108 are thus committed to memory atomically. The entries of the AUT 108 may then be cleared. The atomic update that commits the entries of the AUT 108 to memory at block 316 may be performed responsive to an instruction (placed, for example, by the programmer after the last instruction of the atomic block). An example embodiment of such instruction, a speculative execution commit instruction, is discussed in greater detail below in connection with Table 1.

[00077] For at least one embodiment, other actions may also be performed at block 316. For example, now that the atomic block has completed execution, actions may be taken to disable updating of the AMT 106 for subsequent memory reads. Buffering of subsequent memory writes in the AUT table 108 may also be disabled at block 316. Processing for the method 300 then ends at block 318.

[00078] Fig. 4 is a block diagram illustrating at least one embodiment of processing "A" that is performed if the determination at block 306 (and optional block 308, when appropriate) of Fig.
3 indicates that the current instruction is a memory read instruction of an atomic block. For such case, processing proceeds to block 402. At block 402, the instruction is executed in order to read the indicated memory address. Processing then proceeds to block 404.

[00079] At block 404, the indicated memory address is added to the address monitor table ("AMT") 106. Again, it should be noted that the AMT 106 is a logical construct, and the processing of block 404 may be handled differently for different embodiments. For example, rather than actually modifying an entry of a discrete AMT table to include the designated memory address, block 404 may be accomplished in another manner. As just one example, a status bit associated with an on-chip data cache may be toggled to indicate that a memory address in the cache line is to be monitored for foreign writes. After the AMT 106 is updated at block 404, processing returns to block 314 of Fig. 3.

[00080] Fig. 5 is a block diagram illustrating at least one embodiment of processing "B" that is performed if the determination at block 310 (and optional block 312, when appropriate) of Fig. 3 indicates that the current instruction is a memory write instruction of an atomic block. For such case, processing proceeds to block 502. At block 502, the memory write instruction is executed. However, the memory write instruction updates an entry of the AUT 108 rather than updating memory. In this manner, memory writes performed during an atomic block are buffered in the AUT 108.

[00081] Again, the AUT 108 is a logical construct and may be implemented in hardware in various manners. For at least one example embodiment, for instance, the AUT 108 may be implemented as a gated store queue. After the AUT 108 is updated at block 502, processing then proceeds to block 314 of Fig. 3.

[00082] The discussion above illustrates that the use of the AMT 106 and AUT 108, along with some form of demarcation for atomic blocks, supports hardware thread speculation.
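As a rough software sketch of the discipline described in connection with Figs. 3-5 (reads recorded in the AMT at block 404, writes buffered in the AUT at block 502, and an atomic commit-or-discard at block 316), consider the following model. The `Transaction` class and its method names are hypothetical; real AMT/AUT structures are hardware constructs, not Python dictionaries, and this sketch only illustrates the logical behavior.

```python
class Transaction:
    """Hypothetical software model of the logical AMT (monitored read
    addresses) and AUT (buffered memory writes)."""
    def __init__(self, memory):
        self.memory = memory   # shared store: address -> value
        self.amt = set()       # read-set, monitored for foreign writes
        self.aut = {}          # buffered writes: address -> value
        self.failed = False

    def read(self, addr):
        # Monitor the address (block 404); read-after-write bypassing
        # sees our own buffered update before falling back to memory.
        self.amt.add(addr)
        return self.aut.get(addr, self.memory.get(addr))

    def write(self, addr, value):
        # Buffer the update in the AUT instead of memory (block 502).
        self.aut[addr] = value

    def foreign_write(self, addr):
        # Another thread wrote addr: the transaction fails if that
        # address was in our monitored read-set.
        if addr in self.amt:
            self.failed = True

    def commit(self):
        # Atomically perform buffered updates, or discard them on
        # failure (block 316 vs. transaction failure).
        if self.failed:
            self.aut.clear()
            return False
        self.memory.update(self.aut)
        self.aut.clear()
        return True
```

Note that on failure the already-buffered writes are simply discarded, mirroring the point above that already-performed memory writes never need to be rolled back.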
In addition, certain instructions and state may also be integrated into such a scheme. Together, such elements may allow efficient execution of speculative threads to enable a broad range of speculative threading models.

[00083] Fig. 7 is a block diagram illustrating at least one embodiment of a thread unit 904 that includes the logical AMT 106 and AUT 108 tables, as well as certain transactional execution state 950. In addition, the thread unit 904 may be capable of executing certain instructions such that transactional execution of an atomic block may be supported in a manner that provides precise state at the boundary of each instruction of the atomic block.

[00084] The transaction state 950 illustrated in Fig. 7 is optional, as denoted by broken lines. That is, the state may be maintained in memory, via message-passing through a specified memory address, rather than being maintained as hardware state in the execution core 930. For at least one embodiment, however, the transaction state 950 is maintained in one or more hardware registers.

[00085] For at least one embodiment, registers to maintain the transaction state 950 include a transaction control register 951 (referred to herein as "TCR") and a transaction status register 952 (referred to herein as "TSR"). The transaction control register controls updates to the AMT 106 and AUT 108. The transaction status register may report the state of the AMT and AUT and may also indicate transaction failure.

[00086] The transaction control register 951 may include various bits that, when set, cause various types of behavior related to the AMT and AUT tables 106, 108. The transaction control register 951 may control whether memory updates are buffered and whether memory references are monitored for dependency checking.
For example, the transaction control register may include one or more bits to denote each of the following behaviors:

- Force reset of the AUT
- Force reset of the AMT
- Direct update of the AMT
- Direct buffering of memory writes (updates to the AUT)

For at least one embodiment, multiple behaviors may be indicated by a single bit. For example, a single bit in the transaction control register 951 may denote that both the AUT and AMT should be reset.

[00087] For one specific embodiment, the transaction control register 951 ("TCR") includes fields that may, depending on the value stored in a field at any given time, determine the behavior of the AMT and AUT and/or may affect the execution of each instruction. Of course, other embodiments may utilize more or fewer bits. For an embodiment of the transaction control register 951, the fields may be defined as follows. Specific bit numbers are provided for illustrative purposes only and should not be taken to be limiting. For an embodiment that implements the bit fields described below in a register that is of any arbitrary length, additional fields not described below may be "reserved". Such reserved bits may be implemented as write-ignore, read-zero.

[00088] TCR Reset Bits. Two one-bit fields of the TCR 951 may be write-only bits that are used to reset and clear the AMT and the AUT:

[00089] AMT clear bit (TCR bit 0, write-only): controls the resetting of the AMT. If a '1' is written to this bit position the AMT is cleared so that there are no valid entries. The AMT clear bit reads as zero.

[00090] AUT clear bit (TCR bit 1, write-only): controls the resetting of the AUT. If a '1' is written to this bit position the buffered speculative memory updates are discarded. The AUT clear bit reads as zero.

[00091] TCR Update Bits. Two one-bit fields of the TCR 951 may be used to control the behavior of instruction execution with respect to updating the AMT or AUT:

[00092] AMT update bit (TCR bit 2): controls the updating of the AMT.
If the AMT update bit is set (value of '1') then the AMT is updated for every memory location read by an instruction. If it is not set (value of '0') the AMT is not updated when an instruction is executed. Software can toggle the state of this bit to enable mixing monitored and unmonitored memory references. If the AMT update bit is set and the transaction has failed (see status bits) the AMT need not be updated.[00093] AUT update bit (TCR bit 3): controls the buffering of memory updates at ring-level 3 (user mode). If the AUT update bit is set (value of '1') then memory updates done at ring-level 3 by instructions are buffered and not performed to memory until a transaction commit operation. If the bit is not set (value of '0') then memory updates by instructions are not buffered and are directly performed to memory as usual. If the AUT update bit is set and the transaction has failed (see status bits) the memory updates done at ring-level 3 need not be buffered and can be simply discarded. [00094] Optional TCR bits. Alternative implementations may provide for one or more of the following fields to be defined in the TCR:[00095] AUT No bypass bit (TCR bit 6): causes memory reads by instructions to see the value of that memory location without checking the AUT for read-after-write bypassing as would normally be performed when the AUT is enabled. If the bit is not supported in an implementation then an attempt to set the bit (write the value '1') causes the mode not supported bit to be set in the Transaction Status Register; this forces the failure of the active transaction.[00096] AUT update in handler (TCR bit 7): affects memory updates at ring-levels lower than 3. If the AUT enable bit and this bit are both set (value of '1') then memory updates at any ring-level will be buffered in the AUT. Updates to this bit at ring-level 3 are ignored (value is unchanged).
This bit may be automatically cleared to zero on the transition from ring-level 3 to a lower ring-level (on a trap/exception/interrupt). If this bit is not implemented then an attempt to update it at ring-level less than 3 may cause the trap force failure bit to be set in the Transaction Status Register; this may force the failure of the active transaction.[00097] The transaction status register 952 may include one or more bits to reflect certain status states related to execution of an atomic block. The contents of the transaction status register 952 may indicate the status of a transaction and may indicate a transaction failure. For example, the transaction status register 952 may include one or more bits to denote the following status states:
- Whether a transaction failure has occurred
- Reason for a transaction failure (if one has occurred). Values for this field may include overflow, collision, etc.
- AMT state. Values for this field may include full, not full, empty, not empty, or the like
- AUT state. Values for this field may include full, not full, empty, not empty, or the like
- Whether a trap has been taken during execution of an atomic block
- Whether a trap has caused a transaction failure
[00098] For one specific embodiment, the transaction status register 952 ("TSR") is a read-only register that includes fields that may, depending on the value stored in a field at any given time, provide status information about the state of the AMT, AUT and the current transaction in general. Of course, other embodiments may utilize more or fewer bits. For an embodiment of the transaction status register 952, the fields may be defined as follows. Specific bit numbers are provided for illustrative purposes only and should not be taken to be limiting. For an embodiment that implements the bit fields described below in a register that is of an arbitrary size, additional fields not described below may be "reserved".
Such reserved bits may be implemented as write ignore, read zero.[00099] For at least one embodiment, the first bit of the TSR 952 indicates if the current transaction has failed. The next 4 bits are informational bits about the state of the AMT and AUT. The sixth bit indicates that a trap/exception/interrupt occurred while there was an active transaction (the AMT and/or the AUT is non-empty). The final set of bits may be used to indicate that the current transaction has failed and provide information as to why.[000100] Each of the bits of the TSR may be set by the hardware in specific situations. Each bit can be affected by one or more events. If multiple events occur simultaneously, events that clear a bit may have precedence over events that set bits. [000101] Transaction Fail Bit. The first bit of the Transaction Status Register is set if the current transaction has failed (any of the last eight status bits, bits 6 through 13, are set).[000102] Transaction Fail Bit (TSR bit 0): indicates that the current transaction has failed. If this bit is set then at least one of bits 6 through 13 is also set to indicate the cause of failure.[000103] Information Bits. The next 4 bits of the TSR are informational bits about the status of the AMT and AUT. A transaction is considered active if either the AUT or the AMT, or both, are non-empty; this is indicated by the non-empty bits defined below. The bits are: [000104] AMT non-empty bit (TSR bit 1): indicates that the AMT has at least one valid entry. [000105] AMT full bit (TSR bit 2): indicates that the AMT is full or nearly full (the precise definition is implementation dependent). This bit indicates that subsequent updates to the AMT will likely cause the structure to overflow (if it has not already overflowed).
[000106] AUT non-empty bit (TSR bit 3): indicates that the AUT has at least one buffered memory update.[000107] AUT full bit (TSR bit 4): indicates that the AUT is full or nearly full (the precise definition is implementation dependent). This bit indicates that subsequent updates to the AUT will likely cause the structure to overflow (if it has not already overflowed).[000108] Trap Bit. The 5th bit of the TSR 952 may be used as a Trap bit to indicate that a trap/exception/interrupt has occurred when the AMT 106 or AUT 108 is non-empty. This bit can be cleared by a transaction clear trap bit instruction (see, e.g., the TRNXOK instruction in Table 1, below). If this bit is still set when a trap handler returns or when a subsequent trap/exception/interrupt occurs, it may result in the Trap Force Fail bit being set and the transaction failing:[000109] Trap bit (TSR bit 5): may be automatically set by hardware on a trap/exception/interrupt if either the AMT or AUT is non-empty. The bit may not be set for user-level handlers. Transaction-aware handlers that know they are transaction-safe may clear this bit on entry to the handler with the transaction clear trap bit instruction (see, e.g., the TRNXOK instruction in Table 1, below). In this manner, a trap or exception may be handled as a non-failure condition, such that execution of an atomic block that was being performed when the trap/exception/interrupt was taken may be resumed after handling the event.[000110] Transaction Failure Bits. The next 8 bits of the TSR 952 may be used as fields to indicate that a transaction has failed.
If there is a transaction active (either or both the AMT 106 and AUT 108 are non-empty) and any of the following 8 bits become set, then a transaction is considered to have failed: [000111] AMT overflow bit (TSR bit 6): indicates that the AMT has overflowed and at least one memory location read by the transaction has not been logged in the AMT for monitoring.[000112] AMT coherency collision bit (TSR bit 7): indicates that the AMT has had a collision, or possible collision (conservative approximations are allowed), between an entry and a foreign update to memory. [000113] AUT overflow bit (TSR bit 8): indicates that the AUT has overflowed and at least one memory update that was supposed to be buffered has been dropped.[000114] AUT coherency collision bit (TSR bit 9): indicates that the AUT has observed a coherency event that will not allow it to complete the buffered updates. [000115] AUT buffer bypass not allowed bit (TSR bit 10): this bit may be set by hardware if the AUT update bit is cleared while the AUT is enabled and non-empty, if the processor does not support direct memory updates bypassing buffered updates in the AUT.[000116] AUT failed RAW bit (TSR bit 11): indicates that a load performed may have seen an inconsistent value because it failed to get a value bypassed from the AUT to provide correct read-after-write semantics or there was ambiguity with respect to updates in the AUT and the correct value of the load could not be determined. If there is ambiguity the value returned for the load will be the value from memory and not a value from the AUT. [000117] Trap Force Failure bit (TSR bit 12): indicates that a failure has been forced by a trap/exception/interrupt while the AMT or the AUT was non-empty. This indicates that some action by the act of transitioning to a lower ring-level or by an action within a lower ring-level caused the active transaction to be failed.
This bit can be set because the AUT was not empty and the processor does not support memory updates bypassing buffered updates. This bit may also be set by hardware when a trap/exception/interrupt occurs, or a return from trap/exception/interrupt occurs, and the Trap bit (TSR bit 5) is currently set.[000118] Unsupported Mode bit (TSR bit 13): is automatically set by hardware if a write to the Transaction Control Register attempts to put the processor in a mode that is not supported. [000119] All TSR 952 status bits associated with the AMT 106 may be automatically cleared to zero when the AMT 106 is cleared. Such clearing may occur, for example, responsive to a write of '1' to the AMT clear bit of the TCR 951 or by a transaction commit instruction. [000120] Similarly, all TSR 952 status bits associated with the AUT 108 may be automatically cleared to zero when the AUT 108 is cleared. Such clearing may occur, for example, responsive to a write of '1' to the AUT clear bit of the TCR 951 or by a transaction commit instruction.[000121] All the remaining bits of the TSR 952 (i.e., those not directly associated with the AMT 106 or AUT 108) may be automatically cleared to zero when both the AMT 106 and AUT 108 are cleared simultaneously or when either the AMT 106 or AUT 108 is cleared and the other structure is empty. The clearing can be done by a write to the AMT 106 clear bit and/or the AUT 108 clear bit of the TCR 951 or by a transaction commit instruction.[000122] Fig. 7 illustrates that a user program 960 stored in a memory system 902 may include instructions that are useful in implementing any of several multithreading paradigms. Using such instructions, for example, a programmer may implement transactional execution, SPE, lock elision, and/or other multi-threaded programming paradigms. [000123] Fig. 7 illustrates, via broken lines, that the use of any or all such instructions is optional.
A thread unit 904 according to at least one embodiment of the present invention may decode and execute one or more of the instructions, or "primitives", described below in Table 1. Generally, the instructions may include one or more of the following: an instruction to write the transaction control register, an instruction to trigger the committing of buffered memory updates, an instruction to read the transaction status register, and/or an instruction to clear one of the transaction status register bits associated with trap/exception/interrupt handling. Of course, alternative embodiments may use more or fewer instructions than those shown in Table 1, in order to implement the functionality described.[000124] Table 1 [000125] The TRNXSET instruction writes, for at least one embodiment, values into the transaction control register (TCR) 951. Execution of the TRNXSET instruction may cause a transaction to start, or to fail. The instruction may also be used to temporarily disable monitoring of memory read (load) addresses.[000126] The TRNXSET instruction can be used to demarcate the beginning of transactions by setting bits in the TCR 951 that will cause clearing of the AMT 106 and AUT 108, and by setting bits in the TCR 951 that will cause updating and checking of the tables 106, 108 based on memory instructions in the atomic block. The value written into the TCR 951 as a result of execution of the TRNXSET instruction may be based on a value in a source register. A portion of the bits of the source register may be used as the value to be written into the TCR. Another portion of the bits of the source register may be used as a preserve mask (inverse of an update mask). Each TCR bit whose preserve-mask bit is zero is updated with the corresponding bit of the update value, while each TCR bit whose preserve-mask bit is one retains its previous value.
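The preserve-mask rule described above amounts to a simple bit operation. The sketch below is illustrative only: the helper name is hypothetical and a 32-bit TCR width is assumed, since the text leaves the register width unspecified.

```c
#include <stdint.h>

/* Hypothetical helper sketching the TRNXSET update rule: TCR bits whose
 * preserve-mask bit is 1 keep their previous value, while bits whose
 * preserve-mask bit is 0 are taken from the update value. */
static uint32_t trnxset_new_tcr(uint32_t old_tcr,
                                uint32_t update_value,
                                uint32_t preserve_mask)
{
    return (old_tcr & preserve_mask) | (update_value & ~preserve_mask);
}
```

For example, a preserve mask of all ones leaves the TCR unchanged, while a preserve mask of zero replaces the entire TCR with the update value.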
The TRNXSET instruction can be executed at any privilege level (but it is assumed it will commonly only be used at CPL3 or in specific trap handlers that are transaction-aware).[000127] The TRNXSET instruction can also be used to turn off address monitoring (turn off AMT updating) within a transaction, and later to turn monitoring back on, to allow specific memory addresses to be accessed without having the AMT monitor the address. This is important for implementing speculative multi-threading (multiscalar) execution so that the concept of the head token can be passed into a speculative block without leading to the block's failure.[000128] The TRNXSET instruction can also be used to force the failure of a transaction by clearing the AMT and AUT and setting the TCR 951 to "disabled" and "no update".[000129] For at least one embodiment, execution of the TRNXCMT instruction may cause the processor 904 to check the value of the transaction status register 952 (TSR). If the transaction fail bit is not set, then execution of the TRNXCMT instruction may cause the processor 904 to attempt to perform buffered memory updates from the AUT 108 to memory 902 such that they appear to be performed atomically.[000130] Execution of the TRNXCMT instruction may cause the processor 904 to clear the AMT 106 and the AUT 108. Such execution may also clear the Transaction Control register TCR 951 to a value of all zeros. The TRNXCMT instruction may return in the source register a value to indicate if it successfully performed the buffered updates from the AUT 108. If the updates could not be performed, and the updates were instead discarded, then the processor 904 may update the source register with a value of zero. If the updates were performed, then the processor 904 may update the source register with a non-zero value.
If the AUT 108 is empty, the commit may be considered successful, for at least one embodiment, and a non-zero value may be returned in the source register.[000131] Execution of the TRNXRD instruction may cause the processor 904 to read the value of the transaction control register (TCR) 951 and the transaction status register (TSR) 952 into a destination register. For at least one embodiment, the value of the transaction control register 951 is shifted left by some fixed amount and ORed with the value of the transaction status register 952 to generate a value that is written into the destination register.[000132] For at least one embodiment, execution of the TRNXOK instruction causes the processor 904 to write a value of zero to the Transaction Trap Bit (bit 5) of the Transaction Status Register. When the transaction trap bit is set, a trap handler may avoid forcing an error if a trap is taken during execution of an atomic block. [000133] That is, a programmer may, by using the TRNXOK instruction and by setting certain bits in the TCR 951, explicitly control whether or not to update the AUT/AMT during trap handling. By default, the processor 904 may be designed such that taking a trap turns off updates to the AMT 106 and AUT 108 tables. For such default operation, a trap taken during an atomic block terminates the transaction and causes a reset of the AMT 106 and AUT 108. When the trap returns, the transaction will have failed, causing the intermediate state to be discarded. However, such default operation may be overridden by the TRNXOK instruction, which allows a trap handler to avoid forcing a transaction failure when a trap is taken during execution of an atomic block and allows the state of the AMT 106 and AUT 108 to be persistent through the handling of a trap or exception that occurs during execution of the atomic block.
For such embodiment, the transaction will not have failed when the trap returns, and execution of the atomic block may be resumed with the precise processor state that existed at the time the trap or exception occurred. [000134] For at least one embodiment, the operation of the TRNXOK instruction allows a trap handler to perform work as part of the transaction. The AMT 106 and AUT 108 tables may be updated during trap handling, if so indicated by the current value of the TCR 951. Thus, for at least one embodiment, at least some classes of traps and exceptions may be serviced from within an atomic block. [000135] At least one embodiment of a processor 904 may allow single-stepping through an atomic block. This allows running a single-step debugger from outside the atomic block, while maintaining the value of the AMT 106 and AUT 108. The effect is that a programmer may, according to at least one embodiment of the present invention, single-step through an atomic block and see the architected state at the end of each instruction. Such approach allows for traditional approaches for software debugging to be employed within an atomic block.[000136] This feature is in contrast to other schemes where the intermediate state is undefined during execution of the instructions of an atomic block. For such schemes, the intermediate state is either committed or discarded before a trap may be serviced or single-stepping may be performed.[000137] For at least one other embodiment, the TRNXOK instruction may allow a trap handler to perform work as part of the transaction, but the trap handler, from outside the atomic block, may read and write directly from/to memory, bypassing the AMT 106 and AUT 108 tables. Whether or not the AMT 106 and AUT 108 are to be bypassed may be indicated by the value of the TCR 951. Such approach allows the trap handler to execute while outside the atomic block.
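The commit decision described for TRNXCMT — perform the buffered updates only when the transaction fail bit is clear, otherwise discard them — can be modeled in a small sketch. The bit positions follow the illustrative TSR layout given earlier (fail bit at bit 0, failure-cause bits at bits 6 through 13); the function names and the 32-bit width are assumptions, not part of the described hardware.

```c
#include <stdint.h>

/* Illustrative TSR bit positions from the layout described above. */
#define TSR_FAIL_BIT    (1u << 0)
#define TSR_FAIL_CAUSES (0xFFu << 6)  /* bits 6..13: failure-cause bits */

/* The fail bit is set exactly when at least one cause bit is set. */
static uint32_t tsr_with_fail_bit(uint32_t tsr)
{
    return (tsr & TSR_FAIL_CAUSES) ? (tsr | TSR_FAIL_BIT)
                                   : (tsr & ~TSR_FAIL_BIT);
}

/* Sketch of the TRNXCMT result value: non-zero if the buffered updates
 * may be committed (fail bit clear), zero if they are discarded. */
static int trnxcmt_result(uint32_t tsr)
{
    return (tsr & TSR_FAIL_BIT) ? 0 : 1;
}
```

A runtime layer could poll such a result to decide whether to retry the atomic block or fall back to another mechanism (such as the stop-the-world approach described later).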
[000138] In sum, the instructions enumerated in Table 1 may be implemented as a set of instruction set extensions that allows one to demarcate a block of instructions in a speculative thread as a transactional block and have hardware execute them such that updates are buffered and are either later discarded or are later performed atomically. The extensions may also provide that memory addresses read are monitored for foreign updates in order to detect memory dependencies. These extensions may thus allow software to attempt to execute speculative threads. This hardware provides support to allow efficient execution of speculative threads to enable a broad range of speculative threading models.[000139] A processor, such as processor 904 shown in Fig. 7, which supports such instructions, is not necessarily required to provide any guarantee that speculative transactions will complete successfully. Instead, hardware may fail a transaction as long as it correctly notifies software of the failure.[000140] Fig. 6 is a data flow diagram illustrating at least one embodiment of a mechanism to determine that execution of a transactional block has failed. Fig. 6 illustrates two cooperative threads, Thread A 125 and Thread B 126. Of course, one of skill in the art will recognize that the mechanism illustrated in Fig. 6 may be employed for any number (Y) of cooperative threads, where Y ≥ 2.[000141] Fig. 6 illustrates that at a first time, time t1, the first cooperative thread 125 begins execution of an atomic block 602. During execution of the atomic block 602, at time t2, the first thread 125 executes a memory read instruction for a particular address. The address, illustrated in Fig. 6 as memory address "000A", is entered into the AMT 106 at the time the instruction is executed (t2).[000142] Fig. 6 illustrates that, at time t3, a second thread, Thread B 126, executes a write to the memory address that was read by the first thread.
The update by the second thread occurs after the first thread has read the memory address and before the first thread, Thread A 125, has completed execution. This attempt to write, by a second cooperative thread, to an address that has already been read by a first thread during execution of an atomic block, is noted by the hardware because it is recognized as a "foreign" write to one of the addresses in the AMT 106. Such event may trigger an asynchronous event to indicate that the execution of atomic block 602 has failed.[000143] The data flow diagram illustrated in Fig. 6 shows just one instance of a failure during execution of an atomic block. Other events besides a foreign write to a previously-read memory address may cause execution of an atomic block to fail. One such event, for example, is a "table full" or overflow of the AMT 106 or AUT 108. Another such event, for example, is a read-after-write ("RAW") violation in the AUT 108. Other failure conditions may also be implemented, such as coherency collision, etc. [000144] Various mechanisms may be utilized to inform software that execution of an atomic block has failed. For any of the events that may cause a failure of an atomic block, such events may be reported by the thread unit (such as, e.g., thread unit 104 of Fig. 2) as an asynchronous yield event, such as an interrupt. For at least one embodiment, the failure events may trigger a user-level interrupt. [000145] One manner of implementing a user-level interrupt to indicate failure of transactional execution is referred to herein as user-level fly-weight interrupt handling. Such mechanism may include a channel in which certain triggering events may be indicated. The triggering event may be referred to as a "scenario." The triggering scenario may be an architecturally-defined set of one or more events. Alternatively, the triggering scenario may be a user-defined set of one or more events.
Upon detection of the triggering scenario specified in the channel, control may be transferred to a user-level handler routine. Further description for at least one embodiment of such user-level flyweight interrupt handling mechanism may be found in co-pending patent application Ser. No. (Atty Docket Number P14912X), entitled "A Programmable Event Driven Yield Mechanism Which May Activate Service Threads".[000146] For the processor embodiment 904 illustrated in Fig. 7, one or more scenarios may be defined to support one or more embodiments of the transactional execution scheme discussed herein. For at least one embodiment, a scenario (referred to as a "status-update scenario") may be defined such that an interrupt is generated when the contents of the TSR 952 change. That is, when the contents of the TSR 952 are updated in a particular manner, an interrupt may be generated. [000147] The status-update scenario may be implemented by monitoring the transaction status register (TSR) with a mask. The status-update scenario thus may be associated with a mask that is applied to the TSR. If the ANDing of the mask and the TSR results in a non-zero value and the processor is in ring-level 3, then the scenario may trigger a user-level event handler. The mask may be defined such that an interrupt based on the status-update scenario may be generated when the TSR 952 indicates that a transaction has failed.[000148] Accordingly, the discussion above indicates that a processor, such as processor 904 illustrated in Fig. 7, may fail a transaction as long as it correctly notifies software of the failure. That is, a programmer is not guaranteed, based on the hardware scheme discussed above, that an atomic block will successfully execute.
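The mask test for the status-update scenario reduces to a simple predicate. The sketch below assumes a 32-bit TSR and a hypothetical function name; ring-level 3 corresponds to user mode as described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the status-update scenario trigger condition: the scenario
 * fires when (mask AND TSR) is non-zero and the processor is executing
 * at ring-level 3 (user mode). */
static bool status_update_scenario_fires(uint32_t tsr,
                                         uint32_t mask,
                                         unsigned ring_level)
{
    return ((tsr & mask) != 0u) && (ring_level == 3u);
}
```

A mask covering just the transaction fail bit (bit 0 in the illustrative layout) would generate the user-level interrupt exactly when a transaction has failed.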
For at least one embodiment, an additional capability is provided by the processor 904 so that a programmer may, when desired, employ one or more user-level software instructions to be assured that an atomic block will complete execution without contention from other cooperative threads. Such additional capability is referred to herein as "stop-the-world" capability.[000149] Stop-the-world capability may be utilized to ensure atomic block execution even if the atomic block includes a large number of memory instructions (e.g., load and/or store instructions). Stop-the-world capability may also be utilized to ensure atomic block execution even if the atomic block always collides with other transactions. In general, stop-the-world may be utilized in software when that software has attempted to execute a speculative block of code using the instructions discussed above (see Table 1) and has determined that success is unlikely because of finite resource limits or because of repetitive memory dependency violations. The software may initiate a stop-the-world programming abstraction without using the speculative threading hardware.[000150] Rather than relying on logical structures such as the AMT 106 and AUT 108, stop-the-world may be provided by a software layer (such as a library or runtime routine) that utilizes user-level interrupts to ensure atomicity by suspending all other cooperative threads during execution of an atomic block. Stop-the-world capability may utilize two interrupt scenarios that are supported by the processor 904.[000151] The first scenario that may be utilized to implement stop-the-world is a foreign update scenario. That is, the scenario provides the ability to monitor a certain memory address for a foreign update, and to generate an interrupt if such update occurs. A "foreign" update may be understood to mean that the value at a memory address has been written by another cooperative thread.
The foreign-update scenario may thus provide a mechanism for one thread to interrupt all other cooperative threads in order to synchronize on implementing the underlying programming model. This same scenario may also be used so that a speculative task can be informed when all earlier speculative tasks have completed and the speculative task can transition to become non-speculative.[000152] The second scenario that may be utilized to implement stop-the-world is a return-from-privilege scenario. The second scenario is to invoke a user-handler when control returns to user code from a trap/exception/interrupt handler. The scenario detects when a transition to ring-level 3 occurs and invokes a user-level handler. Such scenario basically allows for a user-level handler to be invoked whenever control returns to ring- level 3 from a trap/exception/interrupt handler. This scenario allows a thread to check if its cooperative threads are currently running software that this thread should be synchronized with. This could happen if the cooperative threads had synchronized while a thread was in a handler or was not actively scheduled.[000153] Utilizing these two scenarios, a programmer may suspend all other cooperative threads so that a particular cooperative thread may execute an atomic block without contention from the suspended cooperative threads. [000154] It will be noted that stop-the-world is an alternative manner of implementing transactional execution without the additional hardware structures 106, 108, state 951, 952 and instructions (see Table 1) discussed above (referred to herein as the "hardware" embodiment). Both approaches may be used together. It may be a very desirable programming model to let software program to the concept that there is a block of code that executes with the semantics that it is atomic. 
Stop-the-world may be used to preserve the programming semantics when transactional execution fails according to an embodiment of the hardware scheme described above.[000155] The foreign update and return-from-handler scenarios discussed above can be used to implement stop-the-world behavior. To do so, the thread wishing to execute an atomic block may perform a swap to an agreed memory location used for synchronization. The swap may write a "busy" value to the memory location, and may check that the previous value was an "idle" value. If the previous value was not "idle," the thread may repeat until an "idle" value is detected.[000156] All cooperative threads may have a scheme to monitor this synchronization location. For at least one embodiment, each cooperative thread may have a "foreign update" scenario active in a channel, so that an interrupt will be generated responsive to the conditions of the scenario being met. For at least one embodiment, if the "busy" value is written to the synchronization location, then the scenario has been satisfied, and a user-level interrupt is generated for all the other cooperative threads accordingly. (It should be noted that, for an alternative embodiment, similar functionality could be implemented via message passing through a memory interface, rather than via a user-level interrupt mechanism.) The associated event handlers for each of the cooperative threads may cause the cooperative threads to go into a spin lock, or other waiting mode, until the value at the synchronization location is set back to the "idle" value.[000157] All cooperative threads may also have a "return-to ring-level 3" scenario active in a channel. The cooperative threads may thus be disrupted, and an interrupt handler invoked, when control returns to a user-privilege level from a trap/exception/interrupt handler. Upon satisfaction of the scenario, an interrupt may be generated.
The associated interrupt handler may cause cooperative threads to check the synchronization location and spin-lock, or wait in another waiting mode, if the value at the synchronization location is not "idle".[000158] After waiting a bounded time, to allow time for all other cooperative threads to observe the synchronization event and stall execution, the thread that initiated the stop-the-world can then execute the atomic block. At the end of the atomic block, the thread may write the idle value to the synchronization location so that all cooperative threads can continue execution.[000159] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs executing on programmable systems comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. [000160] The programs may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The programs may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language.
In any case, the language may be a compiled or interpreted language.[000161] The programs may be stored on a storage media or device (e.g., hard disk drive, floppy disk drive, read only memory (ROM), CD-ROM device, flash memory device, digital versatile disk (DVD), or other storage device) readable by a general or special purpose programmable processing system. The instructions, accessible to a processor in a processing system, provide for configuring and operating the processing system when the storage media or device is read by the processing system to perform the procedures described herein. Embodiments of the invention may also be considered to be implemented as a machine-readable storage medium, configured for use with a processing system, where the storage medium so configured causes the processing system to operate in a specific and predefined manner to perform the functions described herein. [000162] An example of one such type of processing system is shown in Fig. 7. System 900 is representative of processing systems based on the Pentium(R), Pentium(R) Pro, Pentium(R) II, Pentium(R) III, Pentium(R) 4, and Itanium(R) and Itanium(R) II microprocessors available from Intel Corporation, although other systems (including personal computers (PCs) having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 900 may be executing a version of the WINDOWS(R) operating system available from Microsoft Corporation, although other operating systems and graphical user interfaces, for example, may also be used.[000163] Fig. 7 illustrates that a processing system 900 capable of performing disclosed techniques may include a memory system 940 and a processor 904. Memory system 940 may include a memory 902 as well as one or more on- or off-chip caches.
For example, memory system 940 may include a data cache 942 and/or an instruction cache 944. [000164] Memory system 940 may store instructions 910 and/or data 912 for controlling the operation of the processor 904. The instructions 910 and/or data 912 may include code for performing any or all of the techniques discussed herein. Memory system 940 is intended as a generalized representation of memory and may include a variety of forms of memory, such as a hard drive, CD-ROM, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), etc., as well as related circuitry. [000165] Fig. 7 illustrates that the processor 904 may include a front end 920. The front end 920 may include fetch and decode logic. For instance, the front end 920 may include logically independent next-instruction-pointer and fetch logic 320 to fetch instructions for each thread context, even though the multiple logical sequencers may be implemented in a single physical fetch/decode unit 322. The fetch/decode unit 322 may include decoder logic to decode instructions, such as the TRNXSET, TRNXCMT, TRNXRD and/or TRNXOK instructions described above. Responsive to receiving one of the instructions, the decode logic may send one or more signals to an execution core 930 that causes the execution core 930 to perform the desired operation. (Operations associated with at least one embodiment of each such instruction are set forth above in connection with the discussion of Table 1.) [000166] For an SMT embodiment of the multi-sequencer system 900 illustrated in Fig. 7, the term "sequencer" encompasses at least the next-instruction-pointer and fetch logic 320 for a thread context, along with at least some of the associated architecture state for that thread context. It should be noted that the sequencers of an SMT system 900 need not be symmetric.
For example, two SMT sequencers for the same physical core 904 may differ in the amount of architectural state information that they each maintain. [000167] Thus, for at least one embodiment, the multi-sequencer system 900 is a single-core processor 904 that supports concurrent multithreading. For such an embodiment, each sequencer is a logical processor having its own next-instruction-pointer and fetch logic 320 and its own architectural state information, although the same physical processor core 304 executes all thread instructions. For such an embodiment, the logical processor maintains its own version of the architecture state, although execution resources of the single processor core may be shared among concurrently-executing threads. [000168] At least one alternative embodiment of the system 900 illustrated in Fig. 7 may be based on a multi-core processor (see, e.g., processor 200 illustrated in Fig. 2). Such a system may include two or more separate physical processors (see, e.g., 104 - 104n of Fig. 2) that are each capable of executing a different thread such that execution of at least portions of the different threads may be ongoing at the same time. Each processor includes a physically independent fetch unit 322 to fetch instruction information for its respective thread. In an embodiment where each processor executes a single thread, the fetch/decode unit 322 implements a single next-instruction-pointer and fetch logic 320. However, in an embodiment where each processor supports multiple thread contexts, the fetch/decode unit 322 implements distinct next-instruction-pointer and fetch logic 320 for each supported thread context. The optional nature of additional next-instruction-pointer and fetch logic 320 in a processor 904 is denoted by dotted lines in Fig. 7.
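As a rough software analogy of the sequencer arrangement described above — the class and field names below are illustrative, not drawn from the specification — each thread context keeps its own next-instruction pointer and its own copy of architectural state, while one physical fetch/decode unit serves all of them:

```python
from dataclasses import dataclass, field

@dataclass
class Sequencer:
    """One logical processor: its own next-instruction pointer and
    its own architectural state."""
    next_ip: int
    arch_state: dict = field(default_factory=dict)

class SMTCore:
    """A single physical core whose one fetch/decode unit serves every
    logical sequencer; execution resources are shared among them."""
    def __init__(self, sequencers):
        self.sequencers = sequencers

    def fetch(self, ctx):
        # Fetch for a given thread context from that context's own
        # instruction pointer, then advance that pointer.
        seq = self.sequencers[ctx]
        ip = seq.next_ip
        seq.next_ip += 1
        return ip
```

The model also shows why SMT sequencers need not be symmetric: each `Sequencer` may carry a different amount of `arch_state`.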
[000169] While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that changes and modifications can be made without departing from the scope of the appended claims. Accordingly, one of skill in the art will recognize that changes and modifications can be made without departing from the present invention in its broader aspects. The appended claims are to encompass within their scope all such changes and modifications that fall within the true scope of the present invention.
A memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor and a capacitor. The capacitor comprises a first electrode electrically coupled to a source/drain region of the transistor. The first electrode comprises an annulus in a straight-line horizontal cross-section and a capacitor insulator radially inward of the first electrode annulus. A second electrode is radially inward of the capacitor insulator. A capacitor-electrode structure extends elevationally through the vertically-alternating tiers. Individual of the second electrodes of individual of the capacitors are electrically coupled to the elevationally-extending capacitor-electrode structure. A sense line is electrically coupled to another source/drain region of multiple of the transistors that are in different memory-cell tiers. Additional embodiments and aspects are disclosed, including methods.
CLAIMS:

1. A memory array comprising vertically-alternating tiers of insulative material and memory cells, the memory cells individually comprising a transistor and a capacitor, the capacitor comprising: a first electrode electrically coupled to a source/drain region of the transistor, the first electrode comprising an annulus in a straight-line horizontal cross-section; a capacitor insulator radially inward of the first electrode annulus; and a second electrode radially inward of the capacitor insulator; and a capacitor-electrode structure extending elevationally through the vertically-alternating tiers, individual of the second electrodes of individual of the capacitors being electrically coupled to the elevationally-extending capacitor-electrode structure; and a sense line electrically coupled to another source/drain region of multiple of the transistors that are in different memory-cell tiers.

2. The array of claim 1 wherein the capacitor-electrode structure comprises a pillar.

3. The array of claim 1 wherein the capacitor insulator extends elevationally through the vertically-alternating tiers.

4. The array of claim 1 wherein the capacitor insulator comprises an annulus in a straight-line horizontal cross-section.

5. The array of claim 4 wherein the capacitor insulator extends elevationally through the vertically-alternating tiers.

6. The array of claim 1 wherein the second electrode is not annular in any straight-line horizontal cross-section.

7. The array of claim 1 wherein the capacitor-electrode structure is not annular in any straight-line horizontal cross-section.

8. The array of claim 1 wherein the capacitor-electrode structure extends vertically or within 10° of vertical.

9. The array of claim 1 wherein the sense line comprises a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of individual of the transistors that are in different memory-cell tiers being electrically coupled to the elevationally-extending sense-line structure.

10. The array of claim 9 wherein the sense-line structure comprises a pillar.

11. The array of claim 9 wherein the sense line comprises a horizontal longitudinally-elongated conductive line that is above or below the vertically-alternating tiers and is directly electrically coupled to the sense-line structure.

12. The array of claim 1 wherein the capacitor-electrode structure is directly electrically coupled to a horizontally-elongated capacitor-electrode construction that is above or below the vertically-alternating tiers.

13. The array of claim 1 wherein: the sense line comprises a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of individual of the transistors being electrically coupled to the elevationally-extending sense-line structure; the sense line comprises a horizontal longitudinally-elongated conductive line that is above the vertically-alternating tiers and is directly electrically coupled to the sense-line structure; and the capacitor-electrode structure is directly electrically coupled to a horizontally-elongated capacitor-electrode construction that is above the vertically-alternating tiers and below the horizontal longitudinally-elongated conductive line.

14. A memory array, comprising: vertically-alternating tiers of insulative material and memory cells, the memory cells individually comprising: a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region, at least a portion of the channel region being horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions; and a capacitor comprising: a first electrode electrically coupled to the first source/drain region of the transistor, the first electrode comprising an annulus in a straight-line horizontal cross-section; a capacitor insulator radially inward of the first electrode annulus; and a second electrode radially inward of the capacitor insulator; a capacitor-electrode structure extending elevationally through the vertically-alternating tiers, the second electrode of individual of the capacitors comprising a portion of the elevationally-extending capacitor-electrode structure; and a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of individual of the transistors that are in different memory-cell tiers being electrically coupled to the elevationally-extending sense-line structure.

15. The array of claim 14 wherein all of the channel region is horizontally-oriented for horizontal current flow there-through.

16. The array of claim 14 wherein the first electrode is directly electrically coupled to the first source/drain region.

17. The array of claim 14 wherein the sense-line structure is directly electrically coupled to a horizontal longitudinally-elongated sense line that is above or below the vertically-alternating tiers.

18. The array of claim 14 wherein the capacitor-electrode structure is directly electrically coupled to a horizontally-elongated capacitor-electrode construction that is above or below the vertically-alternating tiers.

19. A method of forming a memory array, the memory array comprising memory cells individually comprising a transistor and a capacitor, the method comprising: forming vertically-alternating tiers of insulative material and transistors, the tiers of transistors comprising horizontally-alternating lines of active area and insulating material, the transistors individually comprising: first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region, the gate comprising a portion of a horizontal longitudinally-elongated access line that interconnects multiple of the gates along that access line; individual of the active-area lines comprising the first source/drain region, the second source/drain region, and the channel region; forming capacitors individually comprising first and second electrodes having a capacitor insulator there-between, the first electrode being electrically coupled to individual of the first source/drain regions of individual of the transistors, the second capacitor electrodes of multiple of the capacitors in the array being electrically coupled with one another; and forming a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of the individual transistors that are in different transistor tiers being electrically coupled to the elevationally-extending sense-line structure.

20. The method of claim 19 comprising using masking steps to pattern individual of the transistor tiers before forming the insulative material tier that is immediately-vertically thereover, said masking steps totaling two and only two for each transistor tier.

21. A method of forming a memory array, the memory array comprising memory cells individually comprising a transistor and a capacitor, the method comprising: forming vertically-alternating tiers of insulative material and transistors, the tiers of transistors comprising horizontally-alternating lines of active area and insulating material, the transistors individually comprising: first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region, the gate comprising a portion of a horizontal longitudinally-elongated access line that interconnects multiple of the gates along that access line; individual of the active-area lines comprising the first source/drain region, the second source/drain region, and the channel region; forming capacitors of individual memory cells, comprising: forming an opening extending elevationally through multiple of the vertically-alternating tiers; within the opening, forming a first electrode electrically coupled to the first source/drain region of individual of the transistors, the first electrode comprising an annulus within the opening; forming a capacitor insulator within the opening radially inward of the first electrode annulus; and forming a capacitor-electrode structure within the opening radially inward of the capacitor insulator and extending elevationally through the multiple vertically-alternating tiers, the elevationally-extending capacitor-electrode structure comprising a second electrode of individual of the capacitors; and forming a sense line electrically coupled to multiple of the second source/drain regions of the individual transistors that are in different transistor tiers.

22. The method of claim 21 comprising using masking steps to pattern individual of the transistor tiers before forming the insulative material tier that is immediately-vertically thereover, said masking steps totaling two and only two for each transistor tier.

23. The method of claim 21 comprising forming the opening after forming the access line.

24. The method of claim 21 wherein forming the opening comprises widening the opening at least within the transistor tiers before forming the first electrodes.

25. The method of claim 21 wherein forming the first electrode comprises: within the opening, etching material of the transistor tiers selectively relative to the insulative-material tiers to radially expand the opening and form annular void spaces in the transistor tiers, individual of the annular void spaces extending radially to individual of the first source/drain regions; forming conductive material in the opening along sidewalls of the opening and in the annular void spaces; and removing the conductive material from the opening sidewalls and leaving the conductive material in the annular void spaces to form the individual first electrodes.

26. The method of claim 21 comprising forming the capacitor insulator to extend elevationally through the vertically-alternating tiers.

27. The method of claim 21 wherein forming the sense line comprises forming a sense-line structure extending elevationally through the vertically-alternating tiers, individual of the second source/drain regions of the individual transistors being electrically coupled to the elevationally-extending sense-line structure.
DESCRIPTION

MEMORY ARRAYS COMPRISING VERTICALLY-ALTERNATING TIERS OF INSULATIVE MATERIAL AND MEMORY CELLS AND METHODS OF FORMING A MEMORY ARRAY

TECHNICAL FIELD

Embodiments disclosed herein pertain to memory arrays comprising vertically-alternating tiers of insulative material and memory cells and to methods of forming a memory array.

BACKGROUND

Memory is one type of integrated circuitry, and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bit lines, data lines, or sense lines) and access lines (which may also be referred to as word lines). The sense lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a sense line and an access line. Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates, and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information. A capacitor is one type of electronic component that may be used in a memory cell. A capacitor has two electrical conductors separated by electrically insulating material. Energy as an electric field may be electrostatically stored within such material.
Depending on composition of the insulator material, that stored field will be volatile or non-volatile. For example, a capacitor insulator material including only SiO2 will be volatile. One type of non-volatile capacitor is a ferroelectric capacitor which has ferroelectric material as at least part of the insulating material. Ferroelectric materials are characterized by having two stable polarized states and thereby can comprise programmable material of a capacitor and/or memory cell. The polarization state of the ferroelectric material can be changed by application of suitable programming voltages, and remains after removal of the programming voltage (at least for a time). Each polarization state has a different charge-stored capacitance from the other, and which ideally can be used to write (i.e., store) and read a memory state without reversing the polarization state until such is desired to be reversed. Less desirably, in some memory having ferroelectric capacitors the act of reading the memory state can reverse the polarization. Accordingly, upon determining the polarization state, a re-write of the memory cell is conducted to put the memory cell into the pre-read state immediately after its determination. Regardless, a memory cell incorporating a ferroelectric capacitor ideally is non-volatile due to the bi-stable characteristics of the ferroelectric material that forms a part of the capacitor. Programmable materials other than ferroelectric materials may be used as a capacitor insulator to render capacitors non-volatile. A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region there-between. A conductive gate is adjacent the channel region and separated there-from by a thin gate insulator.
Application of a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example reversibly programmable charge storage/trap regions as part of the gate construction between the gate insulator and the conductive gate. One type of transistor is a ferroelectric field effect transistor (FeFET) wherein at least some portion of the gate construction (e.g., the gate insulator) comprises ferroelectric material. The two different polarized states of the ferroelectric material in field effect transistors may be characterized by different threshold voltage (Vt) for the transistor or by different channel conductivity for a selected operating voltage. Again, polarization state of the ferroelectric material can be changed by application of suitable programming voltages, which results in one of high channel conductance or low channel conductance. The high and low conductance, invoked by the ferroelectric polarization state, remains after removal of the gate programming voltage (at least for a time). The status of the channel can be read by applying a small drain voltage which does not disturb the ferroelectric polarization. Programmable materials other than ferroelectric materials may be used as a gate insulator to render a transistor non-volatile.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a diagrammatic sectional view of a substrate fragment comprising a memory array in accordance with an embodiment of the invention.
Fig. 2 is a sectional view taken through line 2-2 in Fig. 1.
Fig. 3 is a sectional view taken through line 3-3 in Fig. 1.
Fig. 4 is a diagrammatic perspective view of a predecessor substrate to that shown by Figs. 1-3.
Fig. 5 is a sectional view of the Fig. 4 substrate at a processing step subsequent to that shown by Fig. 4.
Fig. 6 is a sectional view taken through line 6-6 in Fig. 5.
Fig. 7 is a sectional view taken through line 7-7 in Fig. 5.
Fig. 8 is a sectional view of the Fig. 5 substrate at a processing step subsequent to that shown by Fig. 5.
Fig. 9 is a sectional view taken through line 9-9 in Fig. 8.
Fig. 10 is a sectional view taken through line 10-10 in Fig. 8.
Fig. 11 is a sectional view of the Fig. 8 substrate at a processing step subsequent to that shown by Fig. 8.
Fig. 12 is a sectional view taken through line 12-12 in Fig. 11.
Fig. 13 is a sectional view taken through line 13-13 in Fig. 11.
Fig. 14 is a sectional view of the Fig. 11 substrate at a processing step subsequent to that shown by Fig. 11.
Fig. 15 is a sectional view taken through line 15-15 in Fig. 14.
Fig. 16 is a sectional view taken through line 16-16 in Fig. 14.
Fig. 17 is a sectional view of the Fig. 14 substrate at a processing step subsequent to that shown by Fig. 14.
Fig. 18 is a sectional view taken through line 18-18 in Fig. 17.
Fig. 19 is a sectional view taken through line 19-19 in Fig. 17.
Fig. 20 is a sectional view of the Fig. 17 substrate at a processing step subsequent to that shown by Fig. 17.
Fig. 21 is a sectional view taken through line 21-21 in Fig. 20.
Fig. 22 is a sectional view taken through line 22-22 in Fig. 20.
Fig. 23 is a sectional view of the Fig. 20 substrate at a processing step subsequent to that shown by Fig. 20.
Fig. 24 is a sectional view taken through line 24-24 in Fig. 23.
Fig. 25 is a sectional view taken through line 25-25 in Fig. 23.
Fig. 26 is a sectional view of the Fig. 23 substrate at a processing step subsequent to that shown by Fig. 23.
Fig. 27 is a sectional view taken through line 27-27 in Fig. 26.
Fig. 28 is a sectional view taken through line 28-28 in Fig. 26.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Embodiments of the invention encompass memory arrays and methods of forming memory arrays.
A first example structure embodiment of an example memory array is shown in and described with reference to Figs. 1-3. Such includes a substrate structure or construction 8 comprising a memory array 10 fabricated relative to a base substrate 11. Example base substrate 11 may comprise any one or more of conductive/conductor/conducting (i.e., electrically herein), semiconductive/semiconductor/semiconducting, and insulative/insulator/insulating (i.e., electrically herein) materials. Various materials have been formed elevationally over base substrate 11. Materials may be aside, elevationally inward, or elevationally outward of the Figs. 1-3-depicted materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within a memory array may also be fabricated, and may or may not be wholly or partially within a memory array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative to one another. As used in this document, a "sub-array" may also be considered as an array. Construction 8 includes vertically-alternating tiers 12 and 14 of insulative material 16 (e.g., comprising, consisting essentially of, or consisting of silicon nitride and/or doped or undoped silicon dioxide of a thickness of 200 Angstroms to 600 Angstroms) and memory cells 19, respectively. In some embodiments, tiers 14 may be considered as transistor tiers 14. Memory-cell tiers 14 may be of the same or different thickness as that of insulative material tiers 12, with different and greater thickness being shown (e.g., 500 Angstroms to 1,500 Angstroms). Construction 8 is shown as having eight vertically-alternating tiers 12 and 14, although fewer or likely many more (e.g., dozens, hundreds, etc.)
may be formed. Accordingly, more tiers 12 and 14 may be below the depicted tiers and above base substrate 11 and/or more tiers 12 and 14 may be above the depicted tiers. Tiers 14 comprise horizontally-alternating lines 7 and 9 of active area (variously appropriately doped semiconductor material) and insulating material 13 (e.g., the other of silicon nitride or silicon dioxide where insulative material 16 is one of silicon nitride or silicon dioxide), respectively. Memory cells 19 individually comprise a transistor 25 and a capacitor 34. Transistor 25 comprises a first source/drain region 20 and a second source/drain region 22 (e.g., conductively-doped semiconductor material such as polysilicon or semiconductively-doped semiconductor material such as polysilicon for each) having a channel region 24 there-between (e.g., doped semiconductor material, such as polysilicon, but not to be intrinsically conductive). In some embodiments (but not shown), a conductively-doped semiconductor region and/or another semiconductive region (e.g., LDD and/or halo regions) may be between channel region 24 and one or both of source/drain regions 20 and 22. In the example embodiment, individual active-area lines 7 comprise first source/drain region 20, second source/drain region 22, and channel region 24. A gate 26 (e.g., one or more of elemental metal, a mixture or alloy of two or more elemental metals, conductive metal compounds, and conductively-doped semiconductive materials) is operatively proximate channel region 24. Specifically, in the depicted example, a gate insulator material 28 (e.g., silicon dioxide, silicon nitride, hafnium oxide, other high-k insulator material, and/or ferroelectric material) is between gate 26 and channel region 24.
Gate 26 as shown may comprise a portion of a horizontal longitudinally-elongated access line 27 that interconnects multiple of gates 26 along that access line. At least a portion of channel region 24 is horizontally-oriented for horizontal current flow in the portion between first source/drain region 20 and second source/drain region 22. In the depicted example embodiment, all of channel region 24 is horizontally-oriented for horizontal current flow there-through. Regardless, when suitable voltage is applied to gate 26, a conductive channel can form within channel region 24 proximate gate insulator material 28 such that current is capable of flowing between source/drain regions 20 and 22. Capacitor 34 comprises a pair of electrodes, for example a first electrode 46 and a second electrode 48 (e.g., conductively-doped semiconductive material and/or metal material for each), having a capacitor insulator 50 there-between (e.g., silicon dioxide, silicon nitride, hafnium oxide, other high-k insulator material and/or ferroelectric material). First electrode 46 is electrically coupled, in one embodiment directly electrically coupled, to first source/drain region 20 of transistor 25. Additionally, in one embodiment, first electrode 46 comprises an annulus 41 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 2). Capacitor insulator 50 is radially inward of first-electrode annulus 41, in one embodiment extends elevationally through vertically-alternating tiers 12 and 14, and regardless in one embodiment comprises an annulus 43 in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 2). Second electrode 48 is radially inward of capacitor insulator 50, and in one embodiment as shown is not annular in any straight-line horizontal cross-section. A capacitor-electrode structure 52 (e.g., a solid or hollow pillar, a solid or hollow wall, etc.)
extends elevationally through vertically-alternating tiers 12 and 14, with individual second electrodes 48 of individual capacitors 34 that are in different memory-cell tiers 14 being electrically coupled, in one embodiment directly electrically coupled, to elevationally-extending capacitor-electrode structure 52. In one embodiment and as shown, second electrode 48 of individual capacitors 34 comprises a portion of elevationally-extending capacitor-electrode structure 52. In one embodiment and as shown, capacitor-electrode structure 52 is not annular in any straight-line horizontal cross-section, and in one embodiment extends vertically or within 10° of vertical. Example materials for capacitor-electrode structure 52 are metal materials and conductively-doped semiconductor materials. In one embodiment and as shown, capacitor-electrode structure 52 comprises a pillar 55, with capacitor insulator 50 being received circumferentially about structure 52/pillar 55. Such, in one embodiment and by way of example only, is one example of how second capacitor electrodes 48 of multiple of capacitors 34 that are in different memory-cell tiers 14 in the array may be electrically coupled with one another. In one embodiment and as shown, capacitor-electrode structure 52 is directly electrically coupled to a horizontally-elongated capacitor-electrode construction 29 (e.g., a line or a plate) that is above or below (above being shown) vertically-alternating tiers 12 and 14. Construction(s) 29 may, in one embodiment, directly electrically couple together all second electrodes 48 within the array. A sense line is electrically coupled, in one embodiment directly electrically coupled, to multiple of the second source/drain regions of individual of the transistors that are in different memory-cell tiers 14. In one embodiment and as shown, a sense-line structure 56 (e.g., a solid or hollow pillar, a solid or hollow wall, etc.)
extends elevationally through vertically-alternating tiers 12 and 14, with individual second source/drain regions 22 of individual transistors 25 that are in different memory-cell tiers 14 being electrically coupled, in one embodiment directly electrically coupled, thereto. In one embodiment and as shown, sense-line structure 56 extends vertically or within 10° of vertical. In one embodiment and as shown, sense-line structure 56 comprises a pillar 59. In one embodiment and as shown, sense-line structure 56 comprises a peripheral conductively-doped semiconductive material 58 (e.g., polysilicon) and a central metal material core 60 (e.g., TiN and/or W). In one embodiment, sense-line structure 56 is directly electrically coupled to a horizontal longitudinally-elongated sense line 57 that is above or below (above being shown) vertically-alternating tiers 12 and 14. In the example embodiment, construction 29 and sense line 57 are both above tiers 12 and 14. This may be reversed, or one may be above and the other below tiers 12 and 14. Some embodiments of the invention comprise a memory array (e.g., 10) comprising vertically-alternating tiers (e.g., 12, 14) of insulative material (e.g., 16) and memory cells (e.g., 19), respectively. The memory cells individually comprise a transistor (e.g., 25) and a capacitor (e.g., 34). The capacitor comprises a first electrode (e.g., 46) electrically coupled to a source/drain region (e.g., 20) of the transistor. The first electrode comprises an annulus (e.g., 41) in a straight-line horizontal cross-section (e.g., the cross-section shown by Fig. 2). A capacitor insulator (e.g., 50) is radially inward of the first electrode annulus. A second electrode (e.g., 48) is radially inward of the capacitor insulator.
A capacitor-electrode structure (e.g., 52) extends elevationally through the vertically-alternating tiers. Individual of the second electrodes of individual of the capacitors are electrically coupled to the elevationally-extending capacitor-electrode structure. A sense line (e.g., 56) is electrically coupled to another source/drain region (e.g., 22) of multiple of the transistors that are in different memory-cell tiers.

The above example structures may be manufactured by any existing or yet-to-be-developed techniques. Further, embodiments of the invention encompass methods of forming a memory array comprising memory cells individually comprising a transistor and a capacitor. Such methods may have or use any of the structural attributes described and shown above with respect to the largely finished circuitry construction of Figs. 1-3, or may not. Additionally, aspects of the invention include a memory array comprising vertically-alternating tiers of insulative material and memory cells as herein disclosed and described independent of method of manufacture. Regardless, one example technique of manufacturing the embodiment shown by Figs. 1-3 and a method embodiment of the invention are described with reference to Figs. 4-28. Like numerals from the above-described embodiments have been used for predecessor construction(s), regions, and like/predecessor materials thereof.

Referring to Figs. 4-7, an example method comprises forming vertically-alternating tiers (e.g., 12, 14) of insulative material (e.g., 16) and transistors (e.g., 25), respectively. Tiers 14 of transistors 25 comprise horizontally-alternating active-area lines 7 and insulating-material lines 9. Transistors 25 individually comprise first source/drain regions (e.g., 20) and second source/drain regions (e.g., 22) having a channel region (e.g., 24) there-between. A gate (e.g., 26) is operatively proximate the channel region.
The gate comprises a portion of a horizontal longitudinally-elongated access line (e.g., 27) that interconnects multiple of the gates along that access line. Individual active-area lines 7 comprise the first source/drain region, the second source/drain region, and the channel region.

Fig. 4 shows a single transistor tier 14 absent insulative material 16 for clarity. In one embodiment, masking steps (e.g., photolithography and/or e-beam lithography, followed by sacrificial etching) are used to pattern individual transistor tiers 14 before forming insulative material 16 of the insulative material tier 12 that is immediately-vertically thereover. Pitch multiplication may be used. Regardless, in one embodiment the number of masking steps used per fabrication of individual transistor tiers 14 totals two, and only two, for each transistor tier 14. Specifically, one masking step would be used for forming active-area lines 7 and insulating-material lines 9. Such, by way of example, may include using one masking step in which semiconductor material is subtractively patterned to leave active-area lines 7, followed by deposition and planarizing back of insulating material 13 there-between, which thereby forms insulating-material lines 9 in a self-aligned manner. The other masking step would then be used for formation of access lines 27 (e.g., subtractively, and regardless of whether gate insulator 28 is also patterned when patterning access lines 27). Suitable doping of one or more of regions 20, 22, and 24 may have occurred previously, may occur at this point in the method, or may occur subsequently.

Capacitors are formed that individually comprise first and second electrodes having a capacitor insulator there-between. The first electrode is electrically coupled, in one embodiment directly electrically coupled, to individual of the first source/drain regions of individual of the transistors.
The second capacitor electrodes of multiple of the capacitors in the array are electrically coupled, in one embodiment directly electrically coupled, with one another. One example such embodiment is described with reference to Figs. 8-25.

Referring to Figs. 8-10, openings 91 have been formed to extend elevationally through multiple of tiers 12 and 14 as shown. As an example, such may be formed using a suitable masking step, and with or without pitch multiplication. While multiple openings 91 are shown, the discussion largely proceeds relative to fabrication associated with a single opening 91. Regardless, in one embodiment and as shown, opening 91 is formed after forming access lines 27.

Referring to Figs. 11-13, and in one embodiment, within opening 91, insulating material 13 and material of regions 20 of transistor tiers 14 have been etched selectively relative to insulative material 16 of insulative tiers 12 to widen and/or radially expand openings 91 and form annular void spaces 92 in transistor tiers 14. Individual annular void spaces 92 extend radially to individual first source/drain regions 20. Such may be conducted in a single etching step or in more than one etching step. An example etching chemistry that may be used where insulating material 13 is silicon dioxide is dilute HF, and an example that may be used where material of region 20 comprises elemental-form silicon is tetramethylammonium hydroxide.

Referring to Figs. 14-16, conductive material 46 (e.g., metal material, such as TiN) has been formed in opening 91 along sidewalls of such opening and in annular void spaces 92, for example to fill and essentially overfill such void spaces as shown.

Referring to Figs. 17-19, conductive material 46 has been removed from the sidewalls of opening 91 to leave conductive material 46 in annular void spaces 92 to form individual annular first electrodes 46 (e.g., individually comprising an annulus 41).
Such may be conducted, for example, by a suitable dry anisotropic etch or by a timed wet etch of conductive material 46 selectively relative to other exposed materials. Such comprises but one example of, within opening 91, forming a first electrode 46 electrically coupled, in one embodiment directly electrically coupled, to first source/drain region 20 of individual transistors 25, and wherein first electrode 46 comprises an annulus 41 within widened opening 91 in individual transistor tiers 14.

Referring to Figs. 20-22, a capacitor insulator (e.g., 50) is formed within opening 91 radially inward of first electrode annulus 41. In one embodiment and as shown, the capacitor insulator is formed to extend elevationally through vertically-alternating tiers 12 and 14.

In one embodiment, a capacitor-electrode structure (e.g., 52 in Figs. 23-25) is formed within opening 91 radially inward of capacitor insulator 50, and to extend elevationally through multiple vertically-alternating tiers 12 and 14. Elevationally-extending capacitor-electrode structure 52 comprises a second electrode 48 of individual capacitors 34. In one embodiment and as shown, capacitor-electrode structure 52 is directly electrically coupled to a horizontally-elongated capacitor-electrode structure (e.g., 29 as a line or a plate) that is formed above or below (above being shown) vertically-alternating tiers 12 and 14.

A sense line is formed that is electrically coupled, in one embodiment directly electrically coupled, to multiple of the second source/drain regions of the individual transistors that are in different memory-cell tiers. In one embodiment, a sense-line structure (e.g., 56) is formed to extend elevationally through the vertically-alternating tiers. Individual of the second source/drain regions of individual transistors that are in different memory-cell tiers are electrically coupled, in one embodiment directly electrically coupled, to the elevationally-extending sense-line structure.
For example, Figs. 26-28 show deposition of more insulative material 16 and formation of openings 93 through alternating tiers 12 and 14, including through second source/drain regions 22. Subsequent processing may be conducted to produce, for example, the structure of Fig. 1 to include sense-line structure 56 and a horizontally longitudinally-elongated sense line 57 that is above vertically-alternating tiers 12 and 14 and above horizontally-elongated capacitor-electrode construction 29.

Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

In this document unless otherwise indicated, "elevational", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" are generally with reference to the vertical direction. "Horizontal" refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and vertical is a direction generally orthogonal thereto. Reference to "exactly horizontal" is the direction along the primary substrate surface (i.e., no degrees there-from) and may be relative to which the substrate is processed during fabrication. Further, "vertical" and "horizontal" as used herein are generally perpendicular directions relative one another and independent of orientation of the substrate in three-dimensional space. Additionally, "elevationally-extending" and "extending elevationally" refer to a direction that is angled away by at least 45° from exactly horizontal. Further, "extend(ing) elevationally" and "elevationally-extending" with respect to a field effect transistor are with reference to orientation of the transistor's channel length along which current flows in operation between the source/drain regions.
For bipolar junction transistors, "extend(ing) elevationally" and "elevationally-extending" are with reference to orientation of the base length along which current flows in operation between the emitter and collector.

Further, "directly above" and "directly under" require at least some lateral overlap (i.e., horizontally) of two stated regions/materials/components relative one another. Also, use of "above" not preceded by "directly" only requires that some portion of the stated region/material/component that is above the other be elevationally outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of "under" not preceded by "directly" only requires that some portion of the stated region/material/component that is under the other be elevationally inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components).

Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Further, unless otherwise stated, each material may be formed using any suitable existing or yet-to-be-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

Additionally, "thickness" by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately-adjacent material of different composition or of an immediately-adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses.
If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is "directly against" another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as construction where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another.

Herein, regions-materials-components are "electrically coupled" relative one another if in normal operation electric current is capable of continuously flowing from one to the other, and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions-materials-components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.)
is between the directly electrically coupled regions-materials-components.

Additionally, "metal material" is any one or combination of an elemental metal, a mixture or an alloy of two or more elemental metals, and any conductive metal compound.

In this document, a selective etch or removal is an etch or removal where one material is removed relative to another stated material or materials at a rate of at least 2.0:1. Further, selectively growing or selectively forming is growing or forming one material relative to another stated material or materials at a rate of at least 2.0:1 for at least the first 100 Angstroms of growing or forming.

Further, a "self-aligned manner" means a technique whereby at least a lateral surface of a structure is defined by deposition of material against a sidewall of a previously-patterned structure.

CONCLUSION

In some embodiments, a memory array comprises vertically-alternating tiers of insulative material and memory cells. The memory cells individually comprise a transistor and a capacitor. The capacitor comprises a first electrode electrically coupled to a source/drain region of the transistor. The first electrode comprises an annulus in a straight-line horizontal cross-section. A capacitor insulator is radially inward of the first electrode annulus. A second electrode is radially inward of the capacitor insulator. A capacitor-electrode structure extends elevationally through the vertically-alternating tiers. Individual of the second electrodes of individual of the capacitors are electrically coupled to the elevationally-extending capacitor-electrode structure. A sense line is electrically coupled to another source/drain region of multiple of the transistors that are in different memory-cell tiers.

In some embodiments, a memory array comprises vertically-alternating tiers of insulative material and memory cells.
The memory cells individually comprise a transistor comprising first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region. At least a portion of the channel region is horizontally-oriented for horizontal current flow in the portion between the first and second source/drain regions. The memory cells individually include a capacitor comprising a first electrode electrically coupled to the first source/drain region of the transistor. The first electrode comprises an annulus in a straight-line horizontal cross-section. A capacitor insulator is radially inward of the first electrode annulus. A second electrode is radially inward of the capacitor insulator. A capacitor-electrode structure extends elevationally through the vertically-alternating tiers. The second electrode of individual of the capacitors comprises a portion of the elevationally-extending capacitor-electrode structure. A sense-line structure extends elevationally through the vertically-alternating tiers. Individual of the second source/drain regions of individual of the transistors that are in different memory-cell tiers are electrically coupled to the elevationally-extending sense-line structure.

In some embodiments, a method of forming a memory array comprising memory cells individually comprising a transistor and a capacitor includes forming vertically-alternating tiers of insulative material and transistors. The tiers of transistors comprise horizontally-alternating lines of active area and insulating material. The transistors individually comprise first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region. The gate comprises a portion of a horizontal longitudinally-elongated access line that interconnects multiple of the gates along that access line.
Individual of the active-area lines comprise the first source/drain region, the second source/drain region, and the channel region. Capacitors are formed that individually comprise first and second electrodes having a capacitor insulator there-between. The first electrode is electrically coupled to individual of the first source/drain regions of individual of the transistors. The second capacitor electrodes of multiple of the capacitors in the array are electrically coupled with one another. A sense-line structure is formed that extends elevationally through the vertically-alternating tiers. Individual of the second source/drain regions of the individual transistors that are in different transistor tiers are electrically coupled to the elevationally-extending sense-line structure.

In some embodiments, a method of forming a memory array comprising memory cells individually comprising a transistor and a capacitor includes forming vertically-alternating tiers of insulative material and transistors. The tiers of transistors comprise horizontally-alternating lines of active area and insulating material. The transistors individually comprise first and second source/drain regions having a channel region there-between and a gate operatively proximate the channel region. The gate comprises a portion of a horizontal longitudinally-elongated access line that interconnects multiple of the gates along that access line. Individual of the active-area lines comprise the first source/drain region, the second source/drain region, and the channel region. Capacitors of individual memory cells are formed; such forming comprises forming an opening extending elevationally through multiple of the tiers. Within the opening, a first electrode is formed that is electrically coupled to the first source/drain region of individual of the transistors. The first electrode comprises an annulus within the opening.
A capacitor insulator is formed within the opening radially inward of the first electrode annulus. A capacitor-electrode structure is formed within the opening radially inward of the capacitor insulator and extends elevationally through the multiple vertically-alternating tiers. The elevationally-extending capacitor-electrode structure comprises a second electrode of individual of the capacitors. A sense line is formed that is electrically coupled to multiple of the second source/drain regions of the individual transistors that are in different transistor tiers.
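As an illustrative aside (not part of the disclosure above): the recited capacitor geometry — an annular first electrode, a capacitor insulator radially inward of it, and a second electrode radially inward of that — is electrically a coaxial capacitor, so its per-cell capacitance can be estimated with the textbook coaxial formula. The radii, tier height, and permittivity below are my own assumed example values, not dimensions from the disclosure.

```python
import math

def annular_capacitance(r_inner_m, r_outer_m, height_m, rel_permittivity):
    """Capacitance of a coaxial (annular) capacitor:
    C = 2*pi*eps0*eps_r*L / ln(r_outer/r_inner).
    The inner (second) electrode sits at r_inner, the annular first
    electrode at r_outer, with the capacitor insulator between them."""
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    return (2 * math.pi * eps0 * rel_permittivity * height_m
            / math.log(r_outer_m / r_inner_m))

# Assumed illustrative numbers: 20 nm inner radius, 25 nm outer radius,
# 30 nm tier height, high-k insulator with eps_r = 20.
c = annular_capacitance(20e-9, 25e-9, 30e-9, 20.0)
```

With these assumed dimensions the formula yields a capacitance on the order of a tenth of a femtofarad per tier, which is why the logarithmic dependence on the radius ratio (rather than on absolute size) makes the annular geometry attractive at small pitches.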
Some embodiments include an integrated assembly having a semiconductor structure extending from a first wiring to a second wiring. A ferroelectric transistor includes a first transistor gate adjacent a first region of the semiconductor structure. A first non-ferroelectric transistor includes a second transistor gate adjacent a second region of the semiconductor structure. The second region of the semiconductor structure is between the first region of the semiconductor structure and the first wiring. A second non-ferroelectric transistor includes a third transistor gate adjacent a third region of the semiconductor structure. The third region of the semiconductor structure is between the first region of the semiconductor structure and the second wiring.
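As an illustrative aid only (the abstract above describes structure, not operation, so the boolean on/off abstraction and the function name here are my own), the assembly can be viewed as three transistors in series between the two wirings: a conductive path between the wirings exists only when every gated region of the shared semiconductor structure conducts.

```python
def string_conducts(fe_on: bool, first_nonfe_on: bool, second_nonfe_on: bool) -> bool:
    """Model the semiconductor structure as a series chain: first wiring ->
    first non-ferroelectric transistor -> ferroelectric transistor ->
    second non-ferroelectric transistor -> second wiring.  A series path
    conducts only if every gated region is conductive."""
    return first_nonfe_on and fe_on and second_nonfe_on

# Turning off either non-ferroelectric transistor isolates the
# ferroelectric transistor from its wiring, regardless of the
# ferroelectric transistor's own state.
assert string_conducts(True, True, True) is True
assert string_conducts(True, False, True) is False
assert string_conducts(False, True, True) is False
```

This series-isolation view is consistent with the stated placement of the second and third regions between the first region and each respective wiring.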
CLAIMS

I/we claim:

1. An integrated assembly, comprising:
a semiconductor structure coupled with a conductive structure;
a ferroelectric transistor comprising a first transistor gate adjacent a first region of the semiconductor structure; and
a non-ferroelectric transistor comprising a second transistor gate adjacent a second region of the semiconductor structure; the second region of the semiconductor structure being between the first region of the semiconductor structure and the conductive structure.

2. The integrated assembly of claim 1 wherein the first region is a same composition as the second region.

3. The integrated assembly of claim 1 wherein the first region is a different composition relative to the second region.

4. The integrated assembly of claim 1 wherein the first and second transistor gates are vertically spaced from one another.

5. The integrated assembly of claim 4 wherein the first and second transistor gates are coupled to different voltage sources relative to one another.

6. The integrated assembly of claim 4 wherein the first and second transistor gates are coupled to a common voltage source.

7. The integrated assembly of claim 1 wherein a single conductive structure comprises the first and second transistor gates.

8. The integrated assembly of claim 7 wherein said single conductive structure has a region of the first transistor gate which is different in composition relative to a region of the second transistor gate directly against said region of the first transistor gate.

9.
An integrated assembly, comprising:
a semiconductor structure extending from a first wiring to a second wiring;
a ferroelectric transistor comprising a first transistor gate adjacent a first region of the semiconductor structure;
a first non-ferroelectric transistor comprising a second transistor gate adjacent a second region of the semiconductor structure; the second region of the semiconductor structure being between the first region of the semiconductor structure and the first wiring; and
a second non-ferroelectric transistor comprising a third transistor gate adjacent a third region of the semiconductor structure; the third region of the semiconductor structure being between the first region of the semiconductor structure and the second wiring.

10. The integrated assembly of claim 9 wherein the first, second and third transistor gates are coupled with a common voltage source.

11. The integrated assembly of claim 9 wherein the second and third transistor gates are coupled with a common voltage source, and wherein the first transistor gate is coupled with another voltage source which is different from said common voltage source.

12. The integrated assembly of claim 9 wherein the first and second wirings are first and second digit lines which are comparatively coupled to one another through a sense amplifier.

13. The integrated assembly of claim 12 being one of many substantially identical memory cells within a memory array.

14. The integrated assembly of claim 9 wherein the second region is over the first region, which in turn is over the third region.

15. The integrated assembly of claim 9 wherein the second and third regions are laterally spaced from one another.

16. The integrated assembly of claim 15 wherein the first region is shaped as an upwardly-opening container with one side of the container being directly under the second region, and with another side of the container being directly under the third region.

17.
The integrated assembly of claim 9 wherein the first region is directly against the second region, and is directly against the third region.

18. The integrated assembly of claim 9 wherein:
the first region of the semiconductor structure is between first and second source/drain sections;
the second region of the semiconductor structure is between third and fourth source/drain sections;
the third region of the semiconductor structure is between fifth and sixth source/drain sections;
the first source/drain section is coupled with the fourth source/drain section;
the second source/drain section is coupled with the fifth source/drain section;
the third source/drain section is coupled with the first wiring; and
the sixth source/drain section is coupled with the second wiring.

19. The integrated assembly of claim 9 wherein:
the first transistor gate is spaced from the first region by an intervening region comprising ferroelectric material;
the second transistor gate is spaced from the second region by an intervening region comprising first insulative material;
the third transistor gate is spaced from the third region by an intervening region comprising second insulative material;
an upper portion of the ferroelectric material abuts directly against the first insulative material; and
a lower portion of the ferroelectric material abuts directly against the second insulative material.

20. The integrated assembly of claim 9 wherein the first, second and third regions of the semiconductor structure comprise a same composition as one another.

21. The integrated assembly of claim 9 wherein at least one of the first, second and third regions of the semiconductor structure comprises a different composition relative to another of the first, second and third regions of the semiconductor structure.

22.
An integrated assembly, comprising:
a first comparative digit line over a second comparative digit line;
a semiconductor pillar extending from the first comparative digit line to the second comparative digit line;
a first non-ferroelectric transistor under the first comparative digit line and gating an upper region of the semiconductor pillar;
a ferroelectric transistor under the first non-ferroelectric transistor and gating a middle region of the semiconductor pillar; and
a second non-ferroelectric transistor under the ferroelectric transistor and gating a lower region of the semiconductor pillar.

23. The integrated assembly of claim 22 wherein:
the first non-ferroelectric transistor comprises a first transistor gate;
the second non-ferroelectric transistor comprises a second transistor gate;
the ferroelectric transistor comprises a third transistor gate; and
the first, second and third transistor gates are vertically spaced from one another.

24. The integrated assembly of claim 22 wherein:
the first non-ferroelectric transistor comprises a first transistor gate;
the second non-ferroelectric transistor comprises a second transistor gate;
the ferroelectric transistor comprises a third transistor gate; and
a single conductive structure comprises the first, second and third transistor gates.

25. The integrated assembly of claim 24 wherein said single conductive structure has a region of the third transistor gate which is different in composition relative to regions of the first and second transistor gates directly against said region of the third transistor gate.

26. The integrated assembly of claim 22 wherein:
the first non-ferroelectric transistor comprises a first transistor gate;
the second non-ferroelectric transistor comprises a second transistor gate;
the ferroelectric transistor comprises a third transistor gate; and
the first, second and third transistor gates are coupled with a common voltage source.

27.
The integrated assembly of claim 22 wherein:
the first non-ferroelectric transistor comprises a first transistor gate;
the second non-ferroelectric transistor comprises a second transistor gate;
the ferroelectric transistor comprises a third transistor gate;
the first and second transistor gates are coupled with a common voltage source; and
the third transistor gate is coupled with another voltage source which is different from said common voltage source.

28. The integrated assembly of claim 27 configured to refresh the first and second non-ferroelectric transistors to remove excess charge buildup along channel regions associated with the first and second non-ferroelectric transistors.

29. The integrated assembly of claim 22 being configured as one of many substantially identical memory cells within a memory array.

30. An integrated assembly, comprising:
a first comparative digit line laterally offset from a second comparative digit line;
a semiconductor structure extending from the first comparative digit line to the second comparative digit line; the semiconductor structure having a first stem extending downwardly from the first comparative digit line, a second stem extending downwardly from the second comparative digit line, and a segment extending from the first stem to the second stem; a trough being defined between the first and second stems, and over the segment;
a first non-ferroelectric transistor under the first comparative digit line and gating an upper region of the first stem;
a second non-ferroelectric transistor under the second comparative digit line and gating an upper region of the second stem; and
a ferroelectric configuration under the first and second non-ferroelectric transistors and gatedly coupling lower regions of the first and second stems to one another through a body region that extends along the segment.

31.
The integrated assembly of claim 30 wherein:
the first and second non-ferroelectric transistors share a first transistor gate which is within an upper region of the trough; and
the ferroelectric transistor configuration comprises a second transistor gate which is within a lower region of the trough.

32. The integrated assembly of claim 31 wherein the first and second transistor gates are vertically spaced from one another.

33. The integrated assembly of claim 31 wherein a single conductive structure comprises the first and second transistor gates.

34. The integrated assembly of claim 33 wherein a region of the second transistor gate is different in composition relative to a region of the first transistor gate which is directly against said region of the second transistor gate.

35. The integrated assembly of claim 30 wherein:
the segment extends along a first direction;
the first and second non-ferroelectric transistors comprise first and second transistor gates, respectively;
the first and second transistor gates are along a first conductive line which extends along the first direction; and
the ferroelectric transistor configuration comprises a third transistor gate, with the third transistor gate being along a second conductive line which extends along the first direction.

36. The integrated assembly of claim 35 wherein the first and second conductive lines are vertically spaced from one another.

37. The integrated assembly of claim 35 wherein a single conductive structure comprises the first and second conductive lines.

38. The integrated assembly of claim 30 being configured as one of many substantially identical memory cells within a memory array.
INTEGRATED ASSEMBLIES COMPRISING FERROELECTRIC TRANSISTORS AND NON-FERROELECTRIC TRANSISTORS

RELATED PATENT APPLICATION DATA

This application claims priority to U.S. Patent Application Serial No. 16/046,803, which was filed July 26, 2018, and which is hereby incorporated by reference herein.

TECHNICAL FIELD

Integrated assemblies comprising ferroelectric transistors and non-ferroelectric transistors.

BACKGROUND

Memory is one type of integrated circuitry, and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bitlines, data lines, sense lines, or data/sense lines) and access lines (which may also be referred to as wordlines). The digit lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array.

Memory cells may be volatile or nonvolatile. Nonvolatile memory cells can store data for extended periods of time, including when the computer is turned off. Volatile memory dissipates and therefore requires being refreshed/rewritten, in many instances multiple times per second. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information.

Ferroelectric field effect transistors (FeFETs) may be utilized as memory cells. Specifically, the FeFETs may have two selectable memory states corresponding to two different polarization modes of ferroelectric material within the FeFETs. The different polarization modes may be characterized by, for example, different threshold voltages (Vt) or by different channel conductivities for a selected operating voltage.
The ferroelectric polarization mode of a FeFET may remain in the absence of power (at least for a measurable duration).

One type of ferroelectric transistor is a metal-ferroelectric-metal-insulator-semiconductor (MFMIS) transistor. Such has a gate dielectric (insulator, I) between metal (M) and a semiconductor substrate (S). Such also has ferroelectric (F) material over the metal, and has a gate (typically comprising metal, M) over the ferroelectric material. In operation, an electric field across the ferroelectric material is used to switch the ferroelectric material from one polarization mode to another. The ferroelectric transistor comprises a pair of source/drain regions, and a channel region between the source/drain regions. Conductivity across the channel region is influenced by the polarization mode of the ferroelectric material. Another type of ferroelectric transistor is metal-ferroelectric-insulator-semiconductor (MFIS), in which ferroelectric material directly touches the insulator (i.e., in which there is no intervening metal between the ferroelectric material and the insulator).

The channel region may be considered to be contained within a body region of the ferroelectric transistor. During programming operations, carriers (holes and electrons) migrate into and out of the body region.

It can be difficult to incorporate ferroelectric-transistor-based memory cells into memory arrays. For instance, the operation of a first memory cell may adversely impact the memory state of a second memory cell (e.g., the memory state of the second memory cell may be disturbed when voltage is applied along wiring common to both the first memory cell and the second memory cell).

It is desired to develop ferroelectric-transistor-based memory cells suitable for incorporation into memory arrays.
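The two-state behavior described above can be sketched behaviorally. The following is a minimal illustrative model, not part of the disclosure; the class name and the specific threshold-voltage values are assumptions chosen only to show how a gate voltage between the two thresholds distinguishes the polarization modes.

```python
# Minimal behavioral model of a FeFET memory state (illustrative only).
# The two polarization modes are represented by two threshold voltages;
# the specific voltage values below are assumptions, not from the disclosure.

class FeFET:
    VT_LOW = 0.3   # threshold voltage in the "1" polarization mode (assumed)
    VT_HIGH = 1.2  # threshold voltage in the "0" polarization mode (assumed)

    def __init__(self):
        self.polarization = "0"  # retained even without power (nonvolatile)

    def write(self, state):
        # An electric field across the ferroelectric material switches
        # the polarization mode.
        assert state in ("0", "1")
        self.polarization = state

    def threshold(self):
        return self.VT_LOW if self.polarization == "1" else self.VT_HIGH

    def read(self, gate_voltage=0.7):
        # A gate voltage between the two thresholds conducts only in the
        # low-Vt mode, distinguishing the states without rewriting them.
        return "1" if gate_voltage > self.threshold() else "0"

cell = FeFET()
cell.write("1")
print(cell.read())  # the low-Vt mode conducts, so this prints 1
```

The read is nondestructive in this sketch because only the gate voltage, not the polarization, is exercised.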
It would be desirable for such ferroelectric-transistor-based memory cells to have configurations which are scalable to ever-increasing levels of integration.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1, 2, 4 and 6-13 are diagrammatic cross-sectional side views of regions of example integrated assemblies. FIGS. 12A and 13A are diagrammatic cross-sectional side views along the lines A-A of FIGS. 12 and 13, respectively.

FIGS. 3 and 5 are diagrammatic schematic views of regions of example memory arrays.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Some embodiments include assemblies in which ferroelectric transistors are provided along semiconductor structures between a pair of wirings (e.g., comparative digit lines). Field effect transistors are also provided along the semiconductor structures, and are utilized to selectively impede carrier flow between the wirings and the ferroelectric transistors. In some embodiments, the ferroelectric transistors and associated field effect transistors may be incorporated into memory cells, and the field effect transistors may be utilized to prevent an associated ferroelectric transistor of a first memory cell from being undesirably disturbed as a second memory cell is operated with a voltage applied to wiring shared by the first and second memory cells. Example embodiments are described with reference to FIGS. 1-13.

Referring first to FIG. 1, such illustrates an assembly 10 which includes a semiconductor structure 12 extending from a first conductive structure 14 to a second conductive structure 16.

The semiconductor structure 12 comprises semiconductor material 15. The semiconductor material 15 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc.
; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (with groups III and V being old nomenclature, and now being referred to as groups 13 and 15). In some example embodiments, the semiconductor material 15 may comprise, consist essentially of, or consist of silicon. The silicon may be in any suitable form, including, for example, monocrystalline, polycrystalline, amorphous, etc.

The semiconductor structure 12 is shown to be a vertically-extending pillar in the assembly 10 of FIG. 1. In other embodiments, the semiconductor structure 12 may have other configurations; with examples of such other configurations being described below with reference to FIGS. 10-13.

Referring still to FIG. 1, the conductive structures 14 and 16 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). The conductive structures 14 and 16 may comprise a same composition as one another in some embodiments, and in other embodiments may comprise different compositions relative to one another.

In some example embodiments, the conductive structures 14 and 16 correspond to wiring; such as, for example, conductive lines extending across a memory array. For instance, the semiconductor structure 12 may be comprised by a memory cell 18, and the conductive structures 14 and 16 may correspond to comparative digit lines (i.e., bitlines, sense lines, etc.) utilized for addressing such memory cell. The illustrated comparative digit lines are arranged in a paired set comprising a true digit line (DL-T) and a complementary digit line (DL-C).
The terms “true” and “complementary” are arbitrary. The electrical values of the true and complementary digit lines are utilized together during reading/writing operations conducted relative to the memory cell 18. In some embodiments, the memory cell 18 may be considered to be a representative memory cell of a plurality of substantially identical memory cells within a memory array (with the term “substantially identical” meaning identical to within reasonable tolerances of fabrication and measurement).

The comparative bitlines 14 and 16 are electrically coupled with a device 20. Such device 20 may be a sense amplifier utilized to compare properties of the true digit line (DL-T) with those of the complementary digit line (DL-C) during a READ operation relative to the memory cell 18. Alternatively, or additionally, the device 20 may be utilized to impart desired electrical properties to the true and complementary digit lines (DL-T and DL-C) during a programming (i.e., WRITE) operation.

The semiconductor structure 12 is shown to be subdivided amongst regions 22, 24 and 26. Such regions may be referred to as first, second and third regions in order to distinguish them from one another. For instance, the region 24 may be referred to as the first region, while the regions 22 and 26 are referred to as the second and third regions. As another example, the region 22 may be referred to as the first region, while the regions 24 and 26 are referred to as the second and third regions.

In some embodiments, the regions 22, 24 and 26 may all have a same composition as one another. In other embodiments, one of the regions 22, 24 and 26 may comprise a different composition relative to another of such regions.

The region 24 is incorporated into a ferroelectric field effect transistor (i.e., a FeFET) 28; and the regions 22 and 26 are incorporated into non-ferroelectric field effect transistors 30 and 32, respectively.
For purposes of understanding this disclosure and the claims that follow, the terms “non-ferroelectric transistor” and “non-ferroelectric FET” are utilized to refer to transistors which operate without the polarization modes of a ferroelectric transistor. In contrast, the terms “ferroelectric transistor”, “FeFET” and “ferroelectric FET” are utilized to refer to transistors having the polarization modes obtained through utilization of ferroelectric materials within the transistors.

In the shown embodiment, the non-ferroelectric transistor 30 is between the ferroelectric transistor 28 and the first wiring structure 14, and the non-ferroelectric transistor 32 is between the ferroelectric transistor 28 and the second wiring structure 16.

The ferroelectric transistor 28 includes a transistor gate 34 adjacent the region 24 of the semiconductor structure 12; and the non-ferroelectric transistors 30 and 32 comprise transistor gates 36 and 38, respectively, which are adjacent the regions 22 and 26 of the semiconductor structure 12. The transistor gates 34, 36 and 38 comprise conductive materials 40, 42 and 44, respectively. Such conductive materials may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the conductive materials 40, 42 and 44 may all be the same composition as one another. In other embodiments, one of the conductive materials 40, 42 and 44 may comprise a different composition relative to another of the conductive materials 40, 42 and 44.

In some embodiments, the conductive gates 34, 36 and 38 may be referred to as first, second and third conductive gates to distinguish them from one another.
For instance, the conductive gate 34 may be referred to as the first conductive gate, while the conductive gates 36 and 38 are referred to as the second and third conductive gates. As another example, the conductive gate 36 may be referred to as the first conductive gate, while the conductive gates 34 and 38 are referred to as the second and third conductive gates.

The transistor gate 34 of the ferroelectric transistor 28 is spaced from the region 24 of the semiconductor structure 12 by intervening regions 46 which comprise ferroelectric material 48. The ferroelectric material may be within an MFMIS configuration or an MFIS configuration; with example configurations being described in more detail below with reference to FIGS. 6-8.

The transistor gates 36 and 38 of the non-ferroelectric transistors 30 and 32 are spaced from the regions 22 and 26 of the semiconductor structure 12 by intervening regions 50 and 52, which comprise insulative materials 54 and 56, respectively. The insulative materials 54 and 56 may be referred to as first and second insulative materials to distinguish them from one another. The insulative materials 54 and 56 may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, silicon nitride, aluminum oxide, hafnium oxide, etc.
Further, the insulative materials 54 and 56 may comprise low concentrations of ferroelectric compositions, provided that the regions 50 and 52 operate as insulative regions of non-ferroelectric transistors rather than as ferroelectric regions of ferroelectric transistors (i.e., provided that the insulative regions 50 and 52 have such high switching voltage that for all practical purposes the regions 50 and 52 are traditional insulative regions of non-ferroelectric transistors rather than being ferroelectric regions of ferroelectric transistors).

The insulative materials 54 and 56 may comprise the same composition as one another, or may comprise different compositions relative to one another.

In the shown embodiment, the insulative material 54 of the non-ferroelectric transistor 30 is directly over the ferroelectric material 48 of the ferroelectric transistor 28, and abuts directly against an upper portion (i.e., upper surface) of such ferroelectric material. Similarly, the insulative material 56 is directly under the ferroelectric material 48 and abuts directly against a lower portion (i.e., lower surface) of the ferroelectric material.

The ferroelectric transistor 28 comprises a channel region (i.e., channel section) 60, and a pair of source/drain regions (i.e., source/drain sections) 62 and 64; with the channel region 60 being between the source/drain regions 62 and 64. The gate 34 of the ferroelectric transistor 28 may be utilized to gatedly couple the source/drain regions 62 and 64 to one another through the channel region 60.

The non-ferroelectric transistor 30 comprises a channel region (i.e., channel section) 66, and a pair of source/drain regions (i.e., source/drain sections) 68 and 70; with the channel region 66 being between the source/drain regions 68 and 70.
The gate 36 of the non-ferroelectric transistor 30 may be utilized to gatedly couple the source/drain regions 68 and 70 to one another through the channel region 66.

The non-ferroelectric transistor 32 comprises a channel region (i.e., channel section) 72, and a pair of source/drain regions (i.e., source/drain sections) 74 and 76; with the channel region 72 being between the source/drain regions 74 and 76. The gate 38 of the non-ferroelectric transistor 32 may be utilized to gatedly couple the source/drain regions 74 and 76 to one another through the channel region 72.

In some embodiments, the channel regions 60, 66 and 72 may be referred to as first, second and third regions (or sections) of the semiconductor structure 12. In such embodiments, the source/drain regions 62 and 64 may be referred to as first and second source/drain regions (or sections), the source/drain regions 68 and 70 may be referred to as third and fourth source/drain regions (or sections), and the source/drain regions 74 and 76 may be referred to as fifth and sixth source/drain regions (or sections). In the shown embodiment, the first source/drain region 62 is coupled with the fourth source/drain region 70, the second source/drain region 64 is coupled with the fifth source/drain region 74, the third source/drain region 68 is coupled with the first wiring 14, and the sixth source/drain region 76 is coupled with the second wiring 16.

The gate 34 of the ferroelectric transistor 28 is coupled with a wordline WL; and the gates 36 and 38 of the non-ferroelectric transistors 30 and 32 are coupled with first and second voltage sources V1 and V2. The voltage sources V1 and V2 may be operated independently of one another, or may be coupled together (as discussed below with reference to, for example, FIGS. 2 and 3). The wordline WL may be operated independently of the voltage sources V1 and V2, or may be coupled together with such voltage sources (as discussed below with reference to, for example, FIGS.
4 and 5).

In some embodiments, the source/drain regions 62, 64, 68, 70, 74 and 76 may be n-type doped regions. For instance, such regions may be doped to a concentration of at least about 10²¹ atoms/cm³ with n-type conductivity-enhancing dopant (e.g., phosphorus and/or arsenic). In such embodiments, the memory cell 18 may be programmed into a first memory state (a so-called "1" state) by operating the wordline WL and the comparative digit lines 14 and 16 to form electrons within the channel region 60. During such programming, the non-ferroelectric transistors 30 and 32 are maintained in an ON state by providing suitable voltage from the voltage sources V1 and V2. The memory cell 18 may be programmed into a second memory state (a so-called "0" state) by operating the wordline WL and the comparative digit lines 14 and 16 to form holes within the channel region 60. The programming operations may be referred to as WRITE operations.

The memory cell 18 may be read by providing appropriate voltages along the wordline and the comparative digit lines 14 and 16. During the READ operation, the non-ferroelectric transistors 30 and 32 are maintained in the ON state.

In between the READ and WRITE operations, the memory cell 18 is in a RESTING state (i.e., is not being addressed for a READ or WRITE operation). The non-ferroelectric transistors 30 and 32 may be utilized to impede carrier flow between the ferroelectric transistor 28 and the comparative digit lines 14 and 16 when the memory cell 18 is in the RESTING state by maintaining the non-ferroelectric transistors 30 and 32 in an OFF state. Such may be particularly advantageous if one or both of the comparative digit lines 14 and 16 is being utilized to address another memory cell while memory cell 18 is in the RESTING state.

It is noted that the electrons provided to the channel region 60 during the above-discussed programming operations may originate from the n-type doped source/drain regions 62 and 64.
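The READ/WRITE/RESTING sequencing described above can be summarized with a small behavioral sketch. This is illustrative only and not from the disclosure; the class and method names are assumptions, and the on/off flag stands in for the voltages supplied by V1 and V2.

```python
# Illustrative sketch (not from the disclosure): the two non-ferroelectric
# "choke" transistors 30 and 32 must be ON for the digit lines to reach the
# ferroelectric transistor 28, and are held OFF in the RESTING state.

class MemoryCell:
    def __init__(self):
        self.state = "0"        # polarization state of the ferroelectric transistor
        self.chokes_on = False  # non-ferroelectric transistors 30 and 32

    def write(self, state):
        self.chokes_on = True   # V1/V2 hold the chokes ON during WRITE
        self.state = state
        self.chokes_on = False  # return to the RESTING state

    def disturb_attempt(self, state):
        # Digit-line activity aimed at another cell sharing the wiring:
        # with the chokes OFF, carriers cannot reach this cell's FeFET.
        if self.chokes_on:
            self.state = state

cell = MemoryCell()
cell.write("1")
cell.disturb_attempt("0")  # RESTING cell: chokes OFF, so nothing changes
print(cell.state)          # state preserved; prints 1
```

The point of the sketch is the conditional in disturb_attempt: shared-wiring activity only matters when the chokes are conducting.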
It is also noted that the holes provided to the channel region 60 during the above-discussed programming operations may be transferred to the channel region 60 through appropriate body contacts (not shown) and/or through gate-induced drain leakage (GIDL). Also, although the above-discussed programming operations were discussed relative to a configuration having n-type source/drain regions, it is to be understood that analogous programming operations may be conducted relative to a configuration having p-type source/drain regions.

FIG. 2 shows an integrated assembly 10a analogous to the assembly 10 described above with reference to FIG. 1, and comprising a memory cell 18a analogous to the memory cell 18 described above with reference to FIG. 1. The memory cell 18a includes the semiconductor structure 12 configured as a semiconductor pillar extending vertically between the first comparative digit line 14 and the second comparative digit line 16. The non-ferroelectric transistor 30, ferroelectric transistor 28 and non-ferroelectric transistor 32 are along the semiconductor pillar 12. The non-ferroelectric transistor 30 gates an upper region of the semiconductor pillar 12, the ferroelectric transistor 28 gates a middle region of the semiconductor pillar, and the non-ferroelectric transistor 32 gates a lower region of the semiconductor pillar. The channel regions and source/drain regions are not shown in FIG. 2 in order to simplify the drawing, but such may be analogous to the channel regions and source/drain regions shown in FIG. 1.

The transistors 30, 28 and 32 comprise the gates 36, 34 and 38, respectively; and such gates are vertically spaced from one another.

In the shown embodiment, the upper non-ferroelectric transistor 30 and the lower non-ferroelectric transistor 32 are both spaced from the ferroelectric transistor 28 by about a same distance as one another. In other embodiments, such spacing may vary.
Also, although two non-ferroelectric transistors are shown, in other embodiments there may be additional non-ferroelectric transistors incorporated into the memory cell 18a. Also, it is to be understood that the memory cell 18a is one example embodiment for utilizing a non-ferroelectric transistor between a conductive structure and a ferroelectric transistor. Other embodiments (besides those specifically illustrated herein) may utilize other conductive structures besides comparative bitlines. Such other embodiments may have only one conductive structure (e.g., wiring) coupled with the ferroelectric transistor through a semiconductor structure; and in such other embodiments there may be only a single non-ferroelectric transistor utilized together with the ferroelectric transistor.

The ferroelectric transistor 28 is referred to as gating a “middle” region of the semiconductor pillar 12. In the context of such discussion, the term “middle” simply means that the ferroelectric transistor is gating a region between the upper region gated by the non-ferroelectric transistor 30 and the lower region gated by the non-ferroelectric transistor 32. The “middle” region may or may not be a region which is about halfway between the upper and lower regions; and may or may not be a region which is about halfway along the vertical semiconductor pillar 12.

In the embodiment of FIG. 2, the gates 36 and 38 of the non-ferroelectric transistors 30 and 32 are coupled with a common voltage source V, and the transistor gate 34 of the ferroelectric transistor 28 is coupled with a wordline WL. The wordline WL may be considered to correspond to another voltage source different from the common voltage source.

The memory cell 18a may be considered to be a representative memory cell of a plurality of substantially identical memory cells within a memory array. FIG. 3 schematically illustrates a region of a memory array 80 comprising a plurality of substantially identical memory cells 18a.
Each memory cell comprises a ferroelectric transistor 28, and a pair of non-ferroelectric transistors 30 and 32. The illustrated region of the memory array comprises a first pair of comparative digit lines (DL-1T, DL-1C), a second pair of comparative digit lines (DL-2T, DL-2C), and a pair of wordlines (WL-1, WL-2). The wordlines may be considered to extend along rows of the memory array, and the comparative digit lines may be considered to extend along columns of the memory array. The non-ferroelectric transistors along a first row of the memory array (i.e., the row comprising the wordline WL-1) are coupled with a first voltage source V-1, and the non-ferroelectric transistors along a second row of the memory array (i.e., the row comprising the wordline WL-2) are coupled with a second voltage source V-2. Such enables the non-ferroelectric transistors along the first row to be controlled independently of the non-ferroelectric transistors along the second row.

The coupling of the non-ferroelectric transistors 30 and 32 to a separate voltage source than the wordline (as shown in FIGS. 2 and 3) may be advantageous in some embodiments. Specifically, charge may accumulate within the channel region of one or both of the first and second non-ferroelectric transistors 30 and 32 over time, and the voltage source coupled with the non-ferroelectric transistors 30 and 32 may be utilized to discharge such charge accumulation without disturbing a memory state retained on the ferroelectric transistor 28.

FIG. 4 shows an integrated assembly 10b analogous to the assembly 10a described above with reference to FIG. 2, and comprising a memory cell 18b analogous to the memory cell 18a described above with reference to FIG. 2. The memory cell 18b includes the semiconductor structure 12 configured as the semiconductor pillar extending vertically between the first comparative digit line 14 and the second comparative digit line 16.
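The advantage of per-row voltage sources can be sketched behaviorally. The following is an illustrative model only, not from the disclosure; the class name and the representation of accumulated channel charge as a counter are assumptions made for the example.

```python
# Illustrative sketch (not from the disclosure): per-row voltage sources
# V-1 and V-2 let the chokes of one row be pulsed to discharge accumulated
# channel charge while that row's wordline stays OFF, so the stored
# ferroelectric states are untouched.

class Row:
    def __init__(self):
        self.states = ["0", "0"]  # FeFET states for two columns of the row
        self.choke_charge = 0     # accumulated channel charge (arbitrary units)

    def accumulate(self, amount):
        self.choke_charge += amount

    def discharge_chokes(self):
        # Driven by the row's own voltage source; the wordline stays OFF,
        # so the ferroelectric states are not disturbed.
        self.choke_charge = 0

rows = [Row(), Row()]
rows[0].states = ["1", "0"]
rows[0].accumulate(5)
rows[0].discharge_chokes()  # only row 0's voltage source is pulsed
print(rows[0].choke_charge, rows[0].states)  # prints: 0 ['1', '0']
```

Because each row has its own source, discharging one row's chokes has no effect on the other row, which is the independence the paragraph above describes.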
The non-ferroelectric transistor 30, ferroelectric transistor 28 and non-ferroelectric transistor 32 are along the semiconductor pillar 12.

The memory cell 18b of FIG. 4 differs from the memory cell 18a of FIG. 2 in that the gates 36 and 38 of the non-ferroelectric transistors 30 and 32 are coupled with the wordline WL. The wordline WL may be considered to correspond to a common voltage source coupled with all of the transistor gates 34, 36 and 38.

FIG. 4 also shows the semiconductor pillar 12 comprising two different compositions 15 and 17; with the composition 17 being associated with the non-ferroelectric transistors 30 and 32, and the composition 15 being associated with the ferroelectric transistor 28. The utilization of two different semiconductor compositions 15 and 17 may enable the performance of the ferroelectric transistor 28 to be tailored relative to performances of the non-ferroelectric transistors 30 and 32. The semiconductor compositions 15 and 17 may comprise any suitable compositions, including any of silicon, germanium, III/V semiconductor material, semiconductor oxide, etc. For instance, in some embodiments both of the materials 15 and 17 may comprise silicon, and one of the materials 15 and 17 may also include germanium.

Although FIG. 4 is shown comprising a different composition of the semiconductor material within the ferroelectric transistor 28 as compared to the non-ferroelectric transistors 30 and 32, it is to be understood that the invention also includes embodiments analogous to FIG. 4 in which the semiconductor pillar 12 comprises a single uniform semiconductor composition extending across all of the ferroelectric transistor and the non-ferroelectric transistors (i.e., embodiments having the semiconductor pillar of FIG. 4 being identical to the pillar shown relative to the assembly 10a of FIG. 2).
Also, it is to be understood that any of the embodiments described herein may have a different composition of semiconductor material associated with a ferroelectric transistor relative to an adjacent non-ferroelectric transistor; and such is not limited simply to the embodiment of FIG. 4.

The memory cell 18b of FIG. 4 may be considered to be a representative memory cell of a plurality of substantially identical memory cells within a memory array. FIG. 5 schematically illustrates a region of a memory array 82 comprising a plurality of substantially identical memory cells 18b. Each memory cell comprises a ferroelectric transistor 28, and a pair of non-ferroelectric transistors 30 and 32. The illustrated region of the memory array comprises the first pair of comparative digit lines (DL-1T, DL-1C), the second pair of comparative digit lines (DL-2T, DL-2C), and the pair of wordlines (WL-1, WL-2). The non-ferroelectric transistors along each row of the memory array (e.g., the row comprising the wordline WL-1) are coupled with the wordline of such row. Such enables the non-ferroelectric transistors along each row to be controlled with the wordline. Specifically, when the wordline is ON, the ferroelectric transistors 28 along the wordline are activated, and simultaneously the non-ferroelectric transistors 30 and 32 are also activated. The activated ferroelectric transistor 28 enables READING/WRITING operations to be performed relative to a memory cell 18b; and the activated non-ferroelectric transistors 30 and 32 enable carriers to pass between the comparative digit lines (e.g., DL-1T, DL-1C) and the activated ferroelectric transistor 28. When the wordline is OFF, the ferroelectric transistors 28 along the wordline are not activated, and the memory cells 18b along the wordline are in a RESTING state.
Also, the non-ferroelectric transistors 30 and 32 along the wordline are not activated (i.e., are OFF), and preclude charge carriers from passing between the comparative digit lines and the ferroelectric transistors of the RESTING memory cells. In some embodiments, the non-ferroelectric transistors 30 and 32 may be considered to function as “chokes” which are CLOSED and restrict charge-carrier migration when a memory cell 18b is in a RESTING state, and which are OPEN and substantially non-restrictive of charge-carrier migration when the memory cell is in a READ/WRITE state.

The ferroelectric transistors 28 described herein may have any suitable configurations. FIGS. 6-8 illustrate a few example configurations.

FIG. 6 shows a configuration in which the ferroelectric material is within a stack 84 comprising the ferroelectric material 85 between a pair of metal-containing materials 81 and 83 (a so-called MFM stack). Dashed lines are utilized to diagrammatically illustrate approximate boundaries between the various materials within the stack 84. The metal-containing materials 81 and 83 may comprise any suitable metals or metal-containing compositions; including, for example, one or more of tungsten, titanium, titanium nitride, etc. The ferroelectric material 85 may comprise any suitable composition or combination of compositions; and may, for example, comprise, consist essentially of, or consist of one or more materials selected from the group consisting of transition metal oxide, zirconium, zirconium oxide, hafnium, hafnium oxide, lead zirconium titanate, tantalum oxide, and barium strontium titanate; and having dopant therein which comprises one or more of silicon, aluminum, lanthanum, yttrium, erbium, calcium, magnesium, strontium, and a rare earth element.
The ferroelectric material may be provided in any suitable configuration; such as, for example, a single homogeneous material, or a laminate of two or more discrete separate materials.

An insulative material 87 is between the MFM stacks 84 and the semiconductor material 15 of the semiconductor pillar 12. The insulative material 87 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. The configuration of FIG. 6 may be considered to be an example of an MFMIS configuration.

FIG. 7 shows a configuration similar to that of FIG. 6, except that the stack 84 only comprises the metal-containing material 83 and the ferroelectric material 85. The configuration of FIG. 7 may be considered to be an example of an MFIS configuration.

FIG. 8 shows a configuration in which the ferroelectric material 85 is the only material between the insulative material 87 and the conductive gate material 40 of the ferroelectric transistor 28. The conductive gate material 40 may comprise metal adjacent the ferroelectric material 85, and accordingly FIG. 8 may be considered to be another example of an MFIS configuration. It is noted that FIGS. 7 and 8 are basically the same configuration as one another, with the only difference being whether the metal of the MFIS configuration is defined as being part of the gate material 40, or is instead defined as being part of a separate stack 84. Analogously, the MFMIS configuration of FIG. 6 may include material of the gate 40 as the first metal of the MFMIS structure, rather than having such metal be considered to be part of the stack 84.

Referring to FIG. 9, such shows an assembly 10c illustrating another memory cell configuration (specifically, a configuration of a memory cell 18c). The assembly 10c of FIG. 9 is similar to the assembly 10b of FIG. 4, except that the transistors 28, 30 and 32 are not vertically spaced from one another.
Instead, a single conductive structure 86 comprises the transistor gates 34, 36 and 38 of the ferroelectric transistor 28 and the non-ferroelectric transistors 30 and 32. In other words, the single conductive structure 86 comprises the conductive materials 40, 42 and 44 of the transistor gates 34, 36 and 38.

In some embodiments, the gate material 40 of the ferroelectric transistor 28 may comprise a same composition as the gate materials 42 and 44 of the non-ferroelectric transistors 30 and 32. Accordingly, the conductive structure 86 may comprise a single uniform composition throughout. In other embodiments, at least a portion of the conductive material 40 of the ferroelectric transistor 28 may differ in composition relative to a region of the conductive material 42 or 44 directly against such portion. In some embodiments, the entirety of the conductive material 40 of the ferroelectric transistor 28 may differ in composition from the compositions of the materials 42 and 44 of the non-ferroelectric transistors 30 and 32. The compositions of the conductive gate materials 40, 42 and 44 may be tailored to optimize performance of the ferroelectric transistor 28 and the non-ferroelectric transistors 30 and 32. Alternatively, the conductive gate materials 40, 42 and 44 may all have the same composition as one another in order to simplify fabrication of the conductive structure 86.

The memory cell 18c of FIG. 9 may be utilized in a memory array analogous to the array 82 described above with reference to FIG. 5.

Referring to FIG. 10, such shows an assembly 10d illustrating another memory cell configuration (specifically, a configuration of a memory cell 18d). The conductive structures 14 and 16 (the comparative bitlines DL-T and DL-C in the shown embodiment) are laterally offset from one another.
In the illustrated embodiment, the conductive structures 14 and 16 are at about a same horizontal level as one another, but in other embodiments the conductive structures 14 and 16 may be vertically offset from one another as well as being laterally offset from one another.

The semiconductor structure 12 extends from the first conductive structure 14 to the second conductive structure 16. The semiconductor structure 12 is shaped as an upwardly-opening container; and specifically has a first stem 90 extending downwardly from the first conductive structure 14, a second stem 92 extending downwardly from the second conductive structure 16, and a segment 94 extending from the first stem 90 to the second stem 92. A trough 96 may be defined as being between the first and second stems 90 and 92, and over the segment 94 (i.e., may be defined as being within the upwardly-opening container shape of the semiconductor structure 12).

The first non-ferroelectric transistor 30 is under the first conductive structure 14 and gates an upper region of the first stem 90, and the second non-ferroelectric transistor 32 is under the second conductive structure 16 and gates an upper region of the second stem 92. The first and second non-ferroelectric transistors share a conductive gate 36 comprising the gate material 42. The gate material 42 is spaced from the semiconductor material 15 of the stems 90 and 92 by insulative regions comprising the insulative material 54.

The ferroelectric transistor 28 is under the first and second non-ferroelectric transistors 30 and 32, and gatedly couples lower regions of the first and second stems (90 and 92) to one another through a body region 93 that extends along the segment 94. In some embodiments, the ferroelectric transistor 28 may be considered to correspond to a ferroelectric configuration which is under the non-ferroelectric transistors 30 and 32.

The ferroelectric transistor 28 comprises the transistor gate 38.
In some embodiments, the gates 36 and 38 may be referred to as first and second transistor gates. The first transistor gate 36 is within an upper region of the trough 96, and the second transistor gate 38 is within a lower region of the trough 96.

The first and second transistor gates 36 and 38 are vertically spaced from one another; and in the shown embodiment an insulative material 98 is between the first and second gates 36 and 38. The insulative material 98 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

The body region 93 comprises a portion of the ferroelectric-transistor channel region 60. The stems 90 and 92 comprise the ferroelectric-transistor source/drain regions 62 and 64; the non-ferroelectric-transistor channel regions 66 and 72; and the non-ferroelectric-transistor source/drain regions 68, 70, 74 and 76.

In some embodiments, the semiconductor structure 12 of FIG. 10 may be considered to comprise regions 22, 24 and 26 analogous to those discussed above with reference to FIG. 1. For instance, the region 24 may be considered to be a first region corresponding to a ferroelectric-transistor region; and the regions 22 and 26 may be considered to be second and third regions corresponding to non-ferroelectric-transistor regions. In the shown embodiment, the first region 24 is shaped as an upwardly-opening container, with one side of the container being directly under the second region 22, and another side of the container being directly under the third region 26. The first region 24 is directly against both the second region 22 and the third region 26 in the illustrated embodiment of FIG.
10.

In some embodiments, the non-ferroelectric-transistor region 22 may be considered to be between the ferroelectric-transistor region 24 and the first conductive structure 14, and the non-ferroelectric-transistor region 26 may be considered to be between the ferroelectric-transistor region 24 and the second conductive structure 16.

The segment 94 may be considered to comprise a body region of the ferroelectric transistor 28. The segment 94 is part of an expanse 95 of the semiconductor material 15, with such expanse extending beyond the memory cell 18d.

The memory cell 18d may be operated analogously to the memory cell 18a of FIG. 2. Specifically, the transistor gate 36 of the non-ferroelectric transistors 30 and 32 may be coupled with the voltage source V, and the transistor gate 38 of the ferroelectric transistor 28 may be coupled with the wordline WL. Accordingly, the memory cell 18d may be incorporated into a memory array 80 of the type described above with reference to FIG. 3. Alternatively, the transistor gates 36 and 38 may both be coupled with the wordline WL, and the memory cell may be incorporated into a memory array 82 of the type described above with reference to FIG. 5.

FIG. 11 shows an assembly 10e comprising a memory cell 18e analogous to the memory cell 18d of FIG. 10, but in which the non-ferroelectric transistor gate 36 is directly against the ferroelectric transistor gate 38. The gates 36 and 38 are together comprised by a single conductive structure 86a analogous to the structure 86 described above with reference to FIG. 9. The structure 86a comprises the gate material 40 of the ferroelectric transistor 28, and the gate material 42 of the non-ferroelectric transistors 30 and 32.
The structure 86a may comprise a single uniform composition throughout (e.g., the gate materials 40 and 42 may be identical in composition relative to one another), or may comprise multiple compositions (e.g., the gate materials 40 and 42 may be different from one another). In some embodiments, a portion of the second transistor gate 38 may be different in composition relative to a region of the first transistor gate 36 which is directly against the second transistor gate. Accordingly, the compositions of the non-ferroelectric transistor gate 36 and the ferroelectric transistor gate 38 may be optimized relative to one another. In other embodiments, the materials 40 and 42 are identical to one another in order to simplify fabrication of the conductive structure 86a.

The memory cell 18e of FIG. 11 may be operated analogously to the memory cell 18b of FIG. 4. Accordingly, the memory cell 18e may be incorporated into a memory array 82 of the type described above with reference to FIG. 5.

Referring to FIGS. 12 and 12A, such show an assembly 10f illustrating another memory cell configuration (specifically, a configuration of a memory cell 18f). The conductive structures 14 and 16 (the comparative bitlines DL-T and DL-C in the shown embodiment) are laterally offset from one another. In the illustrated embodiment, the conductive structures 14 and 16 are at about a same horizontal level as one another, but in other embodiments the conductive structures 14 and 16 may be vertically offset from one another as well as being laterally offset from one another.

The semiconductor structure 12 is shaped as an upwardly-opening container analogous to that of FIG. 10; and has the first stem 90 extending downwardly from the first conductive structure 14, the second stem 92 extending downwardly from the second conductive structure 16, and the segment 94 extending from the first stem 90 to the second stem 92.
Alternatively, the semiconductor structure 12 may be considered to be configured as an upwardly-opening container 107 comprising the segment 94 along a bottom of the container, the stem 90 corresponding to a first leg extending upwardly from a first side of the bottom segment 94, and the stem 92 corresponding to a second leg extending upwardly from a second side of the bottom segment 94.

The segment 94 extends along a first direction represented by an axis 5. A dashed line 97 in FIG. 12A represents an approximate location of an upper surface of the segment 94.

The non-ferroelectric transistor 30 is within an upper region of the stem (i.e., leg) 90, and the non-ferroelectric transistor 32 is within an upper region of the stem (i.e., leg) 92. A conductive line 100 passes across the stems (i.e., legs) 90 and 92. Such conductive line is out of the plane relative to the view of FIG. 12, and accordingly is shown in dashed-line (phantom) view relative to FIG. 12. The conductive line 100 comprises the conductive gate material 42 (shown in the cross-section of FIG. 12A), and comprises the gates 36 and 38 of the non-ferroelectric transistors 30 and 32. In some embodiments, the gates 36 and 38 may be referred to as first and second transistor gates, respectively. The gates 36 and 38 may be considered to be along the regions 22 and 26 of the semiconductor structure 12; and in the embodiment of FIGS. 12 and 12A such regions are laterally spaced from one another.

The ferroelectric transistor 28 is along lower regions of the stems (i.e., legs) 90 and 92, and extends across a region of the bottom segment 94. In some embodiments, the ferroelectric transistor 28 may be considered to represent a ferroelectric configuration that couples lower regions of the stems (i.e., legs) 90 and 92 to one another through a body region that extends along the segment 94.

The ferroelectric transistor 28 may be considered to be along the region 24 of the semiconductor structure 12.
The region 24 may be referred to as a first region of the semiconductor structure, and the regions 22 and 26 may be referred to as second and third regions of the semiconductor structure. The region 24 is under the regions 22 and 26. The region 24 is spaced from the first conductive structure 14 by an intervening portion of the semiconductor structure 12 which includes the region 22, and is spaced from the second conductive structure 16 by an intervening portion of the semiconductor structure 12 which includes the region 26. In the illustrated embodiment of FIG. 12, the first region 24 is directly against the second region 22 and is also directly against the third region 26.

A conductive line 102 passes across the stems (i.e., legs) 90 and 92, with the conductive line 102 being under the conductive line 100. The conductive line 102 is also out of the plane relative to the view of FIG. 12, and accordingly is shown in dashed-line view relative to such figure. The conductive line 102 comprises the conductive gate material 40 (shown in FIG. 12A), and comprises the gate 34 of the ferroelectric transistor 28.

The conductive lines 100 and 102 extend along the first direction of axis 5; and may be referred to as first and second conductive lines, respectively. The conductive lines 100 and 102 are vertically spaced from one another in the embodiment of FIGS. 12 and 12A.

The memory cell 18f may be utilized analogously to the memory cell 18a of FIG. 2. Specifically, the transistor gates 36 and 38 of the non-ferroelectric transistors 30 and 32 may be coupled with the voltage source V, and the transistor gate 34 of the ferroelectric transistor 28 may be coupled with the wordline WL. Accordingly, the memory cell 18f may be incorporated into a memory array 80 of the type described above with reference to FIG. 3.
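The series gating just described — the cell conducts between DL-T and DL-C only when the non-ferroelectric access gates and the ferroelectric gate all turn their channel regions on — can be reduced to a trivial switch-level sketch. This is a behavioral illustration only, with transistors modeled as ideal switches and hypothetical function/argument names, not a disclosed circuit model:

```python
# Switch-level behavioral sketch of memory cell 18f: DL-T couples to DL-C
# only when the non-ferroelectric access gates (36, 38) AND the
# ferroelectric gate (34) all turn their channels on.  Transistors are
# reduced to ideal on/off switches; names are illustrative assumptions.

def cell_conducts(gate36_on, gate38_on, gate34_on):
    # Stems 90/92 and body segment 94 form a single series path.
    return gate36_on and gate38_on and gate34_on

# With gates 36/38 held on by voltage source V, the wordline WL driving
# gate 34 selects or deselects the cell:
assert cell_conducts(True, True, gate34_on=True)       # WL asserted
assert not cell_conducts(True, True, gate34_on=False)  # WL deasserted
```

In a real cell the ferroelectric gate's threshold also shifts with the stored polarization state; the sketch deliberately omits that and captures only the series-path selection logic.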
Alternatively, the transistor gates 36 and 38 may be coupled with the wordline WL, and the memory cell may be incorporated into a memory array 82 of the type described above with reference to FIG. 5.

FIGS. 13 and 13A show an assembly 10g comprising a memory cell 18g analogous to the memory cell 18f of FIGS. 12 and 12A, but in which the first conductive line 100 is directly against the second conductive line 102. The lines 100 and 102 are together comprised by a single conductive structure 86b analogous to the structure 86 described above with reference to FIG. 9. The structure 86b comprises the gate material 40 of the ferroelectric transistor, and the gate material 42 of the non-ferroelectric transistors. The structure 86b may comprise a single uniform composition throughout (e.g., the gate materials 40 and 42 may be identical in composition relative to one another), or may comprise multiple compositions (e.g., the gate materials 40 and 42 may be different from one another). In some embodiments, the compositions of the non-ferroelectric-transistor-gate material 42 and the ferroelectric-transistor-gate material 40 may be optimized relative to one another. In other embodiments, the materials 40 and 42 are identical to one another in order to simplify fabrication of the conductive structure 86b.

The memory cell 18g of FIGS. 13 and 13A may be operated analogously to the memory cell 18b of FIG. 4. Accordingly, the memory cell 18g may be incorporated into a memory array 82 of the type described above with reference to FIG. 5.

The assemblies and structures discussed above may be utilized within any suitable integrated circuits (with the term “integrated circuit” meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems.
Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms “dielectric” and “insulative” may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term “dielectric” in some instances, and the term “insulative” (or “electrically insulative”) in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications.
The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation.

The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings.

When a structure is referred to above as being “on”, “adjacent” or “against” another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being “directly on”, “directly adjacent” or “directly against” another structure, there are no intervening structures present.

Structures (e.g., layers, materials, etc.) may be referred to as “extending vertically” to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not.

Some embodiments include an integrated assembly having a semiconductor structure coupled with a conductive structure. A ferroelectric transistor includes a first transistor gate adjacent a first region of the semiconductor structure. A non-ferroelectric transistor includes a second transistor gate adjacent a second region of the semiconductor structure. The second region of the semiconductor structure is between the first region of the semiconductor structure and the conductive structure.

Some embodiments include an integrated assembly having a semiconductor structure extending from a first wiring to a second wiring. A ferroelectric transistor includes a first transistor gate adjacent a first region of the semiconductor structure.
A first non-ferroelectric transistor includes a second transistor gate adjacent a second region of the semiconductor structure. The second region of the semiconductor structure is between the first region of the semiconductor structure and the first wiring. A second non-ferroelectric transistor includes a third transistor gate adjacent a third region of the semiconductor structure. The third region of the semiconductor structure is between the first region of the semiconductor structure and the second wiring.

Some embodiments include an integrated assembly having a first comparative digit line over a second comparative digit line. A semiconductor pillar extends from the first comparative digit line to the second comparative digit line. A first non-ferroelectric transistor is under the first comparative digit line and gates an upper region of the semiconductor pillar. A ferroelectric transistor is under the first non-ferroelectric transistor and gates a middle region of the semiconductor pillar. A second non-ferroelectric transistor is under the ferroelectric transistor and gates a lower region of the semiconductor pillar.

Some embodiments include an integrated assembly having a first comparative digit line laterally offset from a second comparative digit line. A semiconductor structure extends from the first comparative digit line to the second comparative digit line. The semiconductor structure has a first stem extending downwardly from the first comparative digit line, a second stem extending downwardly from the second comparative digit line, and a segment extending from the first stem to the second stem. A trough is defined between the first and second stems, and over the segment. A first non-ferroelectric transistor is under the first comparative digit line and gates an upper region of the first stem. A second non-ferroelectric transistor is under the second comparative digit line and gates an upper region of the second stem.
A ferroelectric configuration is under the first and second non-ferroelectric transistors and gatedly couples lower regions of the first and second stems to one another through a body region that extends along the segment.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
A composite α-Ta/graded tantalum nitride/TaN barrier layer is formed in Cu interconnects with a controlled surface roughness for improved adhesion, electromigration resistance and reliability. Embodiments include lining a damascene opening, such as a dual damascene opening in a low-k interlayer dielectric, with an initial layer of TaN, forming a graded tantalum nitride layer on the initial TaN layer and then forming an α-Ta layer on the graded TaN layer, the composite barrier layer having an average surface roughness (Ra) of about 25 Å to about 50 Å. Embodiments further include controlling the surface roughness of the composite barrier layer by varying the N2 flow rate and/or the ratio of the thickness of the combined α-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer.
What is claimed is:

1. A method of manufacturing a semiconductor device, the method comprising:
forming an opening in a dielectric layer over a semiconductor wafer;
forming a composite barrier layer with an exposed surface having an average surface roughness (Ra) of about 25 Å to about 50 Å lining the opening, the composite barrier layer comprising a layer of α-tantalum (α-Ta) over an initial layer of tantalum nitride (TaN); and
filling the opening with copper (Cu) or a Cu alloy.

2. The method according to claim 1, further comprising:
depositing a graded tantalum nitride layer on the initial TaN layer lining the opening; and
depositing the α-Ta layer on the graded tantalum nitride layer.

3. The method according to claim 2, comprising:
depositing the TaN layer by ionized sputter deposition using a Ta target and a sufficiently high nitrogen (N2) flow rate to poison the Ta target with N2;
discontinuing the flow of N2; and
depositing the graded tantalum nitride and α-Ta layers using the Ta target.

4. The method according to claim 2, comprising:
depositing the TaN layer using a nitrogen (N2) flow rate; and
controlling the surface roughness (Ra) by varying:
a) the ratio of the thickness of the combined α-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer; and/or
b) the N2 flow rate during deposition of the TaN layer.

5. The method according to claim 4, comprising:
varying the ratio between about 50 to about 250.

6. The method according to claim 4, comprising varying the N2 flow rate between about 10 to about 100 sccm.

7. The method according to claim 2, wherein the opening is formed in dielectric material having a dielectric constant less than about 3.9.

8. The method according to claim 7, wherein the opening is a dual damascene opening, the method comprising filling the dual damascene opening to form a lower Cu or Cu alloy via in electrical contact with a lower metal feature and connected to an upper Cu or Cu alloy line.
TECHNICAL FIELD

The present invention relates to copper (Cu) and/or Cu alloy metallization in semiconductor devices, and to a method for manufacturing semiconductor devices with reliable, low resistance Cu or Cu alloy interconnects. The present invention is particularly applicable to manufacturing high speed integrated circuits having submicron design features and high conductivity interconnect structures.

BACKGROUND ART

The escalating requirements for high density and performance associated with ultra large scale integration semiconductor wiring require responsive changes in interconnection technology. Such escalating requirements have been found difficult to satisfy in terms of providing reliable low R*C (resistance*capacitance) interconnect patterns with higher electromigration resistance.

Conventional semiconductor devices comprise a semiconductor substrate, typically doped monocrystalline silicon, and a plurality of sequentially formed interlayer dielectrics and conductive patterns. An integrated circuit is formed containing a plurality of conductive patterns comprising conductive lines separated by interwiring spacings, and a plurality of interconnect lines, such as bus lines, bit lines, word lines and logic interconnect lines. Typically, the conductive patterns on different layers, i.e., upper and lower layers, are electrically connected by a conductive plug filling a via hole, while a conductive plug filling a contact hole establishes electrical contact with an active region on a semiconductor substrate, such as a source/drain region. Conductive lines are formed in trenches which typically extend substantially horizontal with respect to the semiconductor substrate.
Semiconductor "chips" comprising five or more levels of metallization are becoming more prevalent as device geometries shrink to submicron levels.

A conductive plug filling a via hole is typically formed by depositing an interlayer dielectric on a conductive layer comprising at least one conductive pattern, forming an opening through the interlayer dielectric by conventional photolithographic and etching techniques, and filling the opening with a conductive material, such as tungsten (W). Excess conductive material on the surface of the interlayer dielectric is typically removed by chemical mechanical polishing (CMP). One such method is known as damascene and basically involves forming an opening in the interlayer dielectric and filling the opening with a metal. Dual damascene techniques involve forming an opening comprising a lower contact or via hole section in communication with an upper trench section, which opening is filled with a conductive material, typically a metal, to simultaneously form a conductive plug in electrical contact with a conductive line.

High performance microprocessor applications require rapid speed of semiconductor circuitry. The speed of semiconductor circuitry varies inversely with the resistance and capacitance of the interconnection pattern. As integrated circuits become more complex and feature sizes and spacings become smaller, the integrated circuit speed becomes less dependent upon the transistor itself and more dependent upon the interconnection pattern. Miniaturization demands long interconnects having small contacts and small cross-sections. As the length of metal interconnects increases and cross-sectional areas and distances between interconnects decrease, the R*C delay caused by the interconnect wiring increases.
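The inverse relationship between circuit speed and interconnect R*C can be illustrated with a back-of-the-envelope estimate. The geometry and material constants below are hypothetical example values chosen only to show the scaling, not values taken from this disclosure:

```python
# Rough distributed-line estimate of interconnect R*C delay.
# All dimensions and constants are illustrative assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def rc_delay(resistivity_ohm_m, k, length_m, width_m, thickness_m, spacing_m):
    """Estimate R*C for a metal line: R from the line cross-section,
    C from a simple parallel-plate model toward a neighboring line
    across the interwiring spacing."""
    r = resistivity_ohm_m * length_m / (width_m * thickness_m)
    c = k * EPS0 * length_m * thickness_m / spacing_m
    return r * c

# Cu (rho ~ 1.7e-8 ohm-m) vs Al (rho ~ 2.7e-8 ohm-m), same geometry:
# a 500 um line, 0.15 um wide, 0.3 um thick, 0.15 um spacing, k = 3.9.
cu = rc_delay(1.7e-8, 3.9, 500e-6, 0.15e-6, 0.3e-6, 0.15e-6)
al = rc_delay(2.7e-8, 3.9, 500e-6, 0.15e-6, 0.3e-6, 0.15e-6)
assert cu < al  # the lower-resistivity metal yields a smaller RC delay
```

The same model also shows why a low-k dielectric helps: the delay scales linearly with k, so replacing k = 3.9 oxide with a lower-k material reduces C (and hence R*C) proportionally.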
If the interconnection node is routed over a considerable distance, e.g., hundreds of microns or more as in submicron technologies, the interconnection capacitance limits the circuit node capacitance loading and, hence, the circuit speed. As design rules are reduced to about 0.15 micron and below, e.g., about 0.12 micron and below, the rejection rate due to integrated circuit speed delays significantly reduces production throughput and increases manufacturing costs. Moreover, as line widths decrease, electrical conductivity and electromigration resistance become increasingly important.

Cu and Cu alloys have received considerable attention as candidates for replacing Al in interconnect metallizations. Cu is relatively inexpensive, easy to process, and has a lower resistivity than Al. In addition, Cu has improved electrical properties vis-à-vis W, making Cu a desirable metal for use as a conductive plug as well as conductive wiring.

An approach to forming Cu plugs and wiring comprises the use of damascene structures employing CMP. However, due to Cu diffusion through interlayer dielectric materials, such as silicon dioxide, Cu interconnect structures must be encapsulated by a diffusion barrier layer. Typical diffusion barrier metals include tantalum (Ta), tantalum nitride (TaN), titanium nitride (TiN), titanium (Ti), titanium-tungsten (TiW), tungsten (W), tungsten nitride (WN), Ti-TiN, titanium silicon nitride (TiSiN), tungsten silicon nitride (WSiN), tantalum silicon nitride (TaSiN) and silicon nitride for encapsulating Cu.
The use of such barrier materials to encapsulate Cu is not limited to the interface between Cu and the dielectric interlayer, but includes interfaces with other metals as well.

In implementing Cu metallization, particularly in damascene techniques wherein an opening is formed in a dielectric layer, particularly a dielectric layer having a low dielectric constant, e.g., a dielectric constant less than about 3.9, various reliability, electromigration and resistance issues are generated. Reliability issues stem, in part, from the use of Ta or TaN, the barrier layers of choice in Cu metallization. Ta has been found to lack adequate adhesion to various interlayer dielectric materials, particularly interlayer dielectric materials having a low dielectric constant, e.g., a dielectric constant (k) less than about 3.9. TaN has been found to lack adequate adhesion to Cu and Cu alloys filling a damascene opening. Moreover, Ta and TaN are typically deposited by physical vapor deposition (PVD) techniques, such as ionized (I) PVD. The resulting layer of Ta is typically β-phase Ta, or β-Ta, which exhibits a relatively high resistivity, e.g., about 200 to about 250 μohm-cm. TaN is typically deposited with a nitrogen (N2) content of about 30 to about 55 at. %, and exhibits a resistivity in excess of 200 μohm-cm.

The adhesion problems adversely impact electromigration resistance and device reliability, while the high resistivities of TaN and β-Ta manifestly adversely impact circuit speed.
Accordingly, there exists a need for reliable, low resistance Cu and Cu alloy interconnects, particularly interconnects formed in low dielectric constant materials, and for enabling methodology.

DISCLOSURE OF THE INVENTION

An advantage of the present invention is a semiconductor device having low resistance Cu or Cu alloy interconnects exhibiting improved electromigration resistance and device reliability.

Another advantage of the present invention is a method of manufacturing a semiconductor device having low resistance Cu or Cu alloy interconnects with improved electromigration resistance and device reliability.

Additional advantages and other features of the present invention will be set forth in the description which follows and, in part, will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the present invention. The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims.

According to the present invention, the foregoing and other advantages are achieved in part by a semiconductor device having a copper (Cu) or Cu alloy interconnect comprising: an opening formed in a dielectric layer; a composite barrier layer, comprising a layer of α-tantalum (α-Ta) over a tantalum nitride (TaN) layer, lining the opening; and Cu or a Cu alloy filling the opening and forming an interface with the composite barrier layer, wherein the composite barrier layer has an average surface roughness (Ra) at the interface with the Cu or Cu alloy of about 25 Å to about 50 Å.

Another advantage of the present invention is a method of manufacturing a semiconductor device, the method comprising: forming an opening in a dielectric layer over a semiconductor wafer; forming a composite barrier layer with an exposed surface having an average surface roughness (Ra) of about 25 Å to about 50 Å
lining the opening, the composite barrier layer comprising a layer of α-tantalum (α-Ta) over an initial layer of tantalum nitride (TaN); and filling the opening with copper (Cu) or a Cu alloy.

Embodiments of the present invention comprise dual damascene interconnect structures comprising a lower Cu or Cu alloy via in electrical contact with a lower metal feature and connected to an upper Cu or Cu alloy line, wherein the dual damascene structure is formed in a dielectric layer or layers comprising a dielectric material having a dielectric constant less than about 3.9.

Embodiments of the present invention comprise controlling the average surface roughness (Ra) of the exposed surface of the composite barrier layer by varying: (a) the ratio of the thickness of the combined α-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer; and/or (b) the N2 flow rate during deposition of the initial TaN layer.

Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein embodiments of the present invention are described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 illustrates a typical hysteresis curve during IPVD employing a Ta target and an N2 flow.

FIGS. 2 through 4 illustrate a single damascene embodiment in accordance with the present invention.

FIG.
5 illustrates a dual damascene embodiment in accordance with the present invention.

DESCRIPTION OF THE INVENTION

The present invention addresses and solves various problems attendant upon forming Cu or Cu alloy interconnects, particularly damascene structures in dielectric layer(s) comprising dielectric material having a dielectric constant (k) less than about 3.9. As employed throughout this application, the symbol Cu is intended to encompass high purity elemental copper as well as Cu-based alloys, such as Cu alloys containing minor amounts of tantalum, indium, tin, zinc, manganese, titanium, magnesium, chromium, germanium, strontium, platinum, aluminum or zirconium.

As design rules are scaled down into the deep submicron range, such as about 0.12 micron and under, electromigration and contact resistance issues associated with Cu interconnects become increasingly significant. Reliability and electromigration issues stem, in part, from the poor adhesion of β-Ta to various low-k dielectric materials and the poor adhesion of TaN to Cu and Cu alloys. TaN and β-Ta exhibit high resistivities, thereby adversely impacting circuit speed.

In U.S. patent application Ser. No. 09/874,255, filed on Jun. 6, 2001, now abandoned, such problems are addressed by providing a composite barrier layer comprising an initial TaN layer lining a damascene opening and a layer of α-Ta on the TaN layer, or by providing a composite barrier layer comprising a graded tantalum nitride layer between the initial TaN layer lining the damascene opening and the α-Ta layer. The formation of a composite barrier layer comprising an initial TaN layer in contact with dielectric material and a layer of α-Ta in contact with the Cu metallization solves adhesion issues generated by the poor adhesion of β-Ta to dielectric material and the poor adhesion of TaN to Cu metallization.
It was found that upon depositing Ta on a layer of TaN, the TaN serves as a template for the growth of [alpha]-Ta, a low resistivity form of Ta, typically exhibiting a resistivity of about 40 to about 50 [mu]ohm-cm vis-à-vis about 200 to about 250 [mu]ohm-cm for [beta]-Ta. It was found particularly advantageous to deposit both of the composite barrier layers by IPVD, e.g., ionized sputter deposition (ISD). The initial layer of TaN is typically deposited at a thickness of about 25 Å to about 150 Å, e.g., about 50 Å to about 100 Å, while the layer of [alpha]-Ta is typically deposited at a thickness of about 100 Å to about 300 Å, e.g., about 200 Å to about 300 Å. The layer of TaN contains nitrogen at a concentration of about 30 to about 65 at. %, e.g., about 40 to about 55 at. %.

It should be understood that suitable deposition conditions are dependent upon the particular situation and can be optimized accordingly. It was found suitable, for example, to employ an argon (Ar) flow rate of about 40 to about 60 sccm, e.g., about 45 to about 60 sccm, a N2 flow rate of about 10 to about 100 sccm, e.g., about 20 to about 70 sccm, a D.C. power of about 1,000 to about 40,000 watts, an RF power of about 1,000 to about 3,000 watts, and a pressure of about 1 to about 45 mTorr, depending upon the particular deposition system and technique. The TaN layer can be deposited for about 3 to about 20 sec., at which point the N2 flow is turned off and a layer of [alpha]-Ta is deposited for about 10 to about 30 sec.

The present invention constitutes a refinement of the methodology and resulting structure disclosed in abandoned application Ser. No. 09/874,255 by focusing upon improved electromigration performance.
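The ISD process window described above can be summarized as a plain data structure with a range check. The numeric ranges come from the text; the structure and function names are illustrative only, not part of any deposition-tool API.

```python
# Summary of the ISD process window described in the text, with a range check.
ISD_WINDOW = {
    "ar_flow_sccm":   (40, 60),        # argon flow rate
    "n2_flow_sccm":   (10, 100),       # nitrogen flow during initial TaN step
    "dc_power_w":     (1_000, 40_000),
    "rf_power_w":     (1_000, 3_000),
    "pressure_mtorr": (1, 45),
    "tan_time_s":     (3, 20),         # initial TaN deposition time
    "ta_time_s":      (10, 30),        # alpha-Ta deposition time (N2 off)
}

def in_window(recipe):
    """Return the names of any parameters outside the stated ranges."""
    return [k for k, (lo, hi) in ISD_WINDOW.items()
            if not (lo <= recipe.get(k, lo) <= hi)]

example = {"ar_flow_sccm": 50, "n2_flow_sccm": 45, "dc_power_w": 5_000,
           "rf_power_w": 2_000, "pressure_mtorr": 30,
           "tan_time_s": 10, "ta_time_s": 20}
violations = in_window(example)   # [] -> within the described window
```

A recipe with, say, a 200 sccm N2 flow would be flagged as outside the described window.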
The present invention stems, in part, from the recognition that the dominant Cu diffusion path for electromigration is along the Cu/barrier layer interface at the bottom and side walls of a via electrically connecting upper and lower features, and, in part, from the recognition that Cu diffusion along this interface is heavily dependent upon, inter alia, the surface roughness of the Cu/barrier layer interface. An extremely smooth interface between the Cu inlay and barrier layer constitutes a rapid diffusion path and degrades electromigration performance. On the other hand, as the interface roughness increases, a shadowing effect occurs during subsequent deposition, resulting in Cu discontinuities.

The present invention achieves improved electromigration resistance while avoiding the shadowing effect during subsequent Cu deposition by controlling the Cu/composite barrier layer interface roughness to an average surface roughness (Ra) of about 25 Å to about 50 Å. Embodiments of the present invention comprise depositing the composite barrier layer while controlling the N2 flow between about 10 to about 100 sccm and/or controlling the ratio of the combined thickness of the [alpha]-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer to about 1:1 to about 6:1, to achieve a suitable average surface roughness (Ra) on the exposed surface of the [alpha]-Ta layer of about 25 Å to about 50 Å prior to Cu deposition.
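The thickness-ratio control knob described above reduces to simple arithmetic. The sketch below checks an example stack against the stated ratio window of about 1:1 to 6:1; the specific layer thicknesses chosen are hypothetical, not values from the text.

```python
# Arithmetic sketch of the thickness-ratio control described in the text.
def thickness_ratio(ta_A, graded_A, tan_A):
    """Ratio of combined alpha-Ta + graded tantalum nitride thickness to the
    initial TaN thickness (all in angstroms)."""
    return (ta_A + graded_A) / tan_A

def ratio_in_claimed_range(ratio, lo=1.0, hi=6.0):
    """True if the ratio lies in the ~1:1 to ~6:1 window stated in the text."""
    return lo <= ratio <= hi

# Hypothetical stack: 200 A alpha-Ta, 50 A graded TaNx, 50 A initial TaN
r = thickness_ratio(200, 50, 50)   # -> 5.0, inside the 1:1 to 6:1 window
```

A stack such as 300 Å alpha-Ta + 100 Å graded TaNx over 50 Å TaN (ratio 8:1) would fall outside the window.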
By controlling the interface roughness between the composite barrier layer and the Cu metallization to an average surface roughness (Ra) of about 25 Å to about 50 Å, the dominant Cu diffusion path for electromigration is significantly reduced while preventing shadowing during subsequent Cu deposition, thereby enhancing device integrity.

In an embodiment of the present invention, a three barrier layer composite is formed comprising an initial layer of TaN, a graded layer of tantalum nitride on the initial TaN layer, and a layer of [alpha]-Ta on the graded tantalum nitride layer. The graded tantalum nitride layer typically has a N2 content which decreases from proximate the initial TaN layer formed lining the opening to about zero proximate the [alpha]-Ta layer, and typically contains [alpha]-Ta in an amount increasing from about zero proximate the initial TaN layer to about 100% proximate the [alpha]-Ta layer. The graded tantalum nitride layer typically has a N2 content substantially corresponding to that of the initial TaN layer proximate the initial TaN layer, i.e., about 30 to about 65 at. %, decreasing to about zero proximate the [alpha]-Ta layer. The resistivity of the graded tantalum nitride layer depends upon the N2 content and is typically about 200 to about 900 [mu]ohm-cm proximate the initial TaN layer, decreasing toward the [alpha]-Ta layer. The graded tantalum nitride layer typically has a thickness of about 20 Å to about 300 Å, and the three barrier layer composite embodiment of the present invention typically has an overall thickness of about 50 Å to about 500 Å.

The three barrier layer composite embodiment of the present invention can be implemented by a strategic ISD deposition technique in which the N2 flow rate is increased to a level above that employed in conventional practices and yet achieves a desirable stable uniform deposition rate and enables the subsequent formation of the low resistivity [alpha]-Ta layer. In FIG.
1, a typical hysteresis curve for ISD is illustrated, with the abscissa denoting the N2 flow rate and the ordinate denoting the target voltage. Conventional practices tend to employ a low N2 flow rate, operating in region I, to maintain reactive deposition without poisoning the Ta target with N2. It should be understood that Ar is also employed during ISD, with N2 being the reactive species. Operating in region II is contrary to conventional wisdom in that a small variation in the N2 flow rate results in a large variation in N2 target poisoning, producing an unstable process causing drifts or variations in the deposition rate, such as variations in thickness and composition. The adverse impact of N2 poisoning on the target composition and target surface causes non-uniform deposition, resulting in an uncontrolled process, and adversely impacts product-to-product uniformity. Operating in region III is, similarly, contrary to conventional wisdom due to the high degree of N2 target poisoning.

As disclosed in abandoned application Ser. No. 09/874,255, it was found that the use of a high N2 flow rate, in excess of that conventionally employed, i.e., operating in region III, caused a sufficiently high degree of Ta target poisoning such that, upon discontinuing the flow of N2 subsequent to deposition of the initial TaN layer, deposition conditions can be otherwise maintained to deposit a graded tantalum nitride layer and an [alpha]-Ta layer thereon, using the N2-poisoned Ta target containing a surface layer of TaN. By continuing deposition in the absence of flowing N2, the N2-poisoned Ta target is actually cleaned of N2, forming the graded tantalum nitride layer having a decreasing N2 content and an increasing [alpha]-Ta content across its thickness proceeding away from the initial TaN barrier layer.
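The graded layer's composition profile described above (N2 content falling from that of the initial TaN layer toward zero, alpha-Ta content rising toward 100%) can be sketched numerically. The linear shape below is an assumption for illustration; the text only specifies the two endpoints, and the 50 at. % starting value is one example from the stated 30-65 at. % range.

```python
# Illustrative linear model of the graded tantalum nitride composition.
def graded_profile(frac_through_layer, n_at_tan_interface=50.0):
    """frac_through_layer: 0.0 at the initial TaN interface,
    1.0 at the alpha-Ta interface. Returns (N2 at.%, alpha-Ta %).
    Linear interpolation is an assumption, not stated in the text."""
    f = min(max(frac_through_layer, 0.0), 1.0)
    n_content = n_at_tan_interface * (1.0 - f)   # N2 falls to ~0
    alpha_ta = 100.0 * f                         # alpha-Ta rises to ~100%
    return n_content, alpha_ta

profile = [graded_profile(f / 4) for f in range(5)]
# endpoints: (50.0, 0.0) at the TaN side, (0.0, 100.0) at the alpha-Ta side
```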
As deposition continues, a layer of essentially pure [alpha]-Ta is formed on the graded tantalum nitride layer from the cleaned Ta target, completing the three layer composite barrier. Experimental results confirmed that electromigration resistance is optimized by forming a three barrier layer composite comprising an initial TaN layer, a graded tantalum nitride layer thereon and an [alpha]-Ta layer.

The mechanism underpinning the dramatic improvement in electromigration results achieved with the present invention is not known with certainty. However, it is believed that the formation of a composite barrier layer having a controlled average surface roughness (Ra) of about 25 Å to about 50 Å significantly reduces Cu diffusion along the Cu/composite barrier interface at the bottom and sidewalls of the via, which is the dominant Cu diffusion path for electromigration, while avoiding shadowing during subsequent Cu deposition, thereby improving electromigration performance and device reliability. In addition, the formation of a graded tantalum nitride layer results in optimum adhesion between the [alpha]-Ta layer and the initial TaN layer and, by operating in region III, a desirably stable deposition is obtained, yielding improved uniformity in composition and thickness. Thus, not only is electromigration resistance enhanced, but product-to-product uniformity is significantly improved. In addition, the advantageous formation of an [alpha]-Ta layer results in a significant reduction in contact resistance.

The deposition conditions used in forming the three barrier layer composite embodiment of the present invention are also dependent upon the particular situation and, hence, can be optimized accordingly. For example, it was found suitable to conduct ISD of the three barrier layer composite at an Ar flow rate of about 40 to about 60 sccm, e.g., about 45 to about 60 sccm, an RF power of about 1,000 to about 2,000 watts, and a pressure of about 20 to about 45 mTorr.
During initial deposition of the TaN layer, a N2 flow rate of about 10 to about 100 sccm, e.g., about 30 to about 70 sccm, may be employed. After deposition of the initial TaN layer, as after about 2 to about 10 seconds to a thickness of about 20 Å to about 100 Å, the N2 flow is discontinued. The high N2 flow rate employed during deposition of the initial TaN layer, e.g., about 30 to about 70 sccm, operates in region III (FIG. 1) and poisons the Ta target with N2, forming a layer of TaN on the Ta target.

After stopping the N2 flow, ISD continues using the N2-poisoned Ta target to sequentially form the graded tantalum nitride layer, which is typically deposited over a period of about 2 to about 10 seconds and to a thickness of about 10 Å to about 100 Å, and the [alpha]-Ta layer, which is typically deposited over a period of about 5 to about 30 seconds and to a thickness of about 20 Å to about 300 Å, on the graded tantalum nitride layer. The surface roughness of the [alpha]-Ta layer may be controlled to an average surface roughness (Ra) of about 25 Å to about 50 Å by controlling the N2 flow rate between about 30 to about 70 sccm during deposition of the initial TaN layer and/or by controlling the ratio of the thickness of the combined [alpha]-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer between about 1:1 to about 6:1.

Advantageously, the bias power applied during deposition of the initial TaN layer and/or during deposition of the subsequent graded tantalum nitride and [alpha]-Ta layers can be separately optimized. For example, an A.C. bias power of about zero to about 500 watts can be applied during deposition of the TaN layer, and an A.C.
bias power of about 200 to about 400 watts can be applied during deposition of the graded tantalum nitride and [alpha]-Ta layers.

An embodiment of the present invention comprising a three barrier layer composite in Cu metallization to form a Cu line is schematically illustrated in FIGS. 2 through 4, wherein similar features or elements are denoted by similar reference numerals. Adverting to FIG. 2, a trench 41 is formed in low-k interlayer dielectric 42 overlying layer 40, e.g., an interlayer dielectric. An initial TaN layer 43 is deposited on the side surfaces of low-k interlayer dielectric 42 defining trench 41. TaN layer 43 is deposited by ISD at a sufficiently high N2 flow rate, e.g., about 30 to about 100 sccm, to poison the Ta target with N2, forming a surface layer of TaN. The initial TaN layer 43 is typically deposited at a thickness of about 20 Å to about 100 Å. During deposition of the initial TaN layer 43, a bias power up to about 500 watts can be applied to the substrate.

After deposition of the initial TaN layer 43, the N2 flow is shut off and ISD continues utilizing the N2-poisoned Ta target. During such continued deposition using the N2-poisoned Ta target without the flow of N2, a graded tantalum nitride layer 44 is deposited and, during such continued deposition, the N2-poisoned Ta target is cleaned, i.e., the surface layer of TaN is removed.
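The poison-then-clean sequence described above can be sketched as a toy state model: flowing N2 poisons the Ta target surface with TaN; once the flow stops, continued sputtering first consumes that surface TaN (depositing a graded TaNx film) and then yields essentially pure alpha-Ta. All quantities and step counts are arbitrary illustration units, not process values.

```python
# Toy state sketch of the target-poisoning sequence described in the text.
def sputter_sequence(n2_on_steps, total_steps, surface_tan_per_step=1,
                     clean_per_step=1):
    """Return the film grown step by step as a list of layer labels."""
    film, surface_tan = [], 0
    for step in range(total_steps):
        if step < n2_on_steps:
            surface_tan += surface_tan_per_step   # N2 flowing: target poisons
            film.append("TaN")
        elif surface_tan > 0:
            surface_tan -= clean_per_step         # N2 off: target cleaning
            film.append("graded-TaNx")
        else:
            film.append("alpha-Ta")               # clean target: pure Ta
    return film

film = sputter_sequence(n2_on_steps=3, total_steps=10)
# -> 3 steps of TaN, 3 of graded TaNx while the target cleans, then alpha-Ta
```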
During continued deposition, an essentially pure [alpha]-Ta layer 45 is formed on graded tantalum nitride layer 44. The N2 flow rate during deposition of the initial TaN layer is controlled and/or the ratio of the thickness of the combined [alpha]-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer is controlled such that the exposed surface of the [alpha]-Ta layer has an average surface roughness (Ra) of about 25 Å to about 50 Å, thereby significantly reducing Cu diffusion at the Cu/composite barrier layer interface with an attendant improvement in electromigration resistance and device integrity.

Typically, the initial TaN layer 43 has a N2 content of about 30 to 65 at. %, and the graded tantalum nitride layer 44 has a N2 content of about 30 to 65 at. % proximate the initial TaN layer 43 decreasing to about zero proximate the [alpha]-Ta layer 45, while the concentration of [alpha]-Ta within the graded tantalum nitride layer 44 is about zero proximate the initial TaN layer 43 increasing to about 100% proximate the [alpha]-Ta layer 45. The graded tantalum nitride layer 44 has a resistivity of about 200 to about 900 [mu]ohm-cm proximate the initial TaN layer 43 decreasing toward the [alpha]-Ta layer 45. The [alpha]-Ta layer 45 exhibits a resistivity considerably lower than that of a conventionally deposited [beta]-Ta layer. The resistivity of [alpha]-Ta layer 45 typically ranges from about 40 to about 50 [mu]ohm-cm, while the resistivity of a conventionally deposited [beta]-Ta layer is typically about 200 to about 250 [mu]ohm-cm.

FIG. 3 represents an expanded view of region A of FIG. 2, showing the relatively constant N2 content of the initial TaN layer 43 and the decreasing N2 content of the graded tantalum nitride (TaNx) layer 44. Also depicted are the Cu metallization and the [alpha]-Ta layer 45.

Subsequently, a seed layer 60 can be deposited on [alpha]-Ta layer 45, and the trench 41 is filled with Cu, as by electroless deposition or electroplating.
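Using the per-layer thickness ranges given in the text (initial TaN about 20-100 Å, graded TaNx about 10-100 Å, alpha-Ta about 20-300 Å), a quick arithmetic check confirms that an example three-layer stack lands inside the ~50-500 Å overall window stated earlier. The particular layer choices below are hypothetical examples within each stated per-layer range.

```python
# Arithmetic check of an example three-layer barrier stack against the
# per-layer and overall thickness windows given in the text (angstroms).
LAYERS_A = {
    "initial_TaN": (20, 100),
    "graded_TaNx": (10, 100),
    "alpha_Ta":    (20, 300),
}
OVERALL_A = (50, 500)

def stack_total(choice):
    """Validate each layer against its stated range, then sum the stack."""
    for name, t in choice.items():
        lo, hi = LAYERS_A[name]
        assert lo <= t <= hi, f"{name}={t} A outside its stated range"
    return sum(choice.values())

total = stack_total({"initial_TaN": 50, "graded_TaNx": 50, "alpha_Ta": 200})
ok = OVERALL_A[0] <= total <= OVERALL_A[1]   # 300 A -> within 50-500 A
```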
CMP is then conducted to planarize the upper surface, resulting in the structure schematically illustrated in FIG. 4 containing Cu line 61. The three barrier layer composite advantageously provides enhanced electromigration resistance, believed to be due in part to the superior adhesion between the initial TaN layer 43 and the low-k interlayer dielectric 42, the superior adhesion between the [alpha]-Ta layer 45 and both the Cu metallization and the graded tantalum nitride layer 44, and the graded tantalum nitride layer 44 enhancing adhesion between the [alpha]-Ta layer 45 and the initial TaN layer 43.

The embodiment illustrated in FIGS. 2 through 4 relates to a single damascene structure. However, it should be understood that the present invention is also applicable to dual damascene structures. For example, a dual damascene structure formed with the three barrier layer composite embodiment of the present invention is schematically illustrated in FIG. 5, wherein a lower metal feature 81, e.g., a Cu line, is formed in an underlying dielectric layer containing a low-k dielectric material. Also illustrated in FIG. 5 are a capping layer 82, such as silicon nitride or silicon carbide, and dielectric layers 83 and 84 separated by middle etch stop layer 85, such as silicon nitride or silicon carbide. A dual damascene opening is formed by any conventional technique, such as a via first-trench last or a trench first-via last technique. An initial layer of TaN 85 is deposited by ISD using a N2 flow sufficient to poison the Ta target. After depositing initial TaN layer 85, the N2 flow is discontinued and ISD is continued using the N2-poisoned Ta target to sequentially deposit graded tantalum nitride layer 86 and [alpha]-Ta layer 87. A seed layer 88 can then be deposited, followed by Cu deposition, e.g., electroplating or electroless deposition, and CMP to form Cu line 89A connected to Cu via 89B, which is in electrical contact with underlying metal feature 81.
A capping layer 801, such as silicon nitride or silicon carbide, is then deposited to complete the interconnect structure illustrated in FIG. 5. The N2 flow rate during deposition of the initial TaN layer and/or the ratio of the thickness of the combined [alpha]-Ta and graded tantalum nitride layers to the thickness of the initial TaN layer are controlled such that the exposed surface of the [alpha]-Ta layer has an average surface roughness (Ra) of about 25 Å to about 50 Å, thereby significantly reducing Cu diffusion at the Cu/composite barrier layer interface while preventing shadowing. The resulting structure exhibits improved electromigration resistance and device reliability.

In implementing various damascene techniques in accordance with embodiments of the present invention, Cu can be deposited by electroless deposition or electroplating using a seed layer. Typical seed layers include Cu alloys containing magnesium, aluminum, zinc, zirconium, tin, nickel, palladium, silver or gold in a suitable amount, e.g., about 0.3 to about 12 at. %. CMP is then performed such that the upper surface of the inlaid Cu is substantially coplanar with the upper surface of the interlayer dielectric.

In accordance with embodiments of the present invention, the damascene opening can also be filled with Cu by PVD at a temperature of about 50[deg.] C. to about 150[deg.] C. or by CVD at a temperature under about 200[deg.] C. In various embodiments of the present invention, conventional substrates and interlayer dielectrics can be employed. For example, the substrate can be doped monocrystalline silicon or gallium arsenide. The interlayer dielectric employed in the present invention can comprise any dielectric material conventionally employed in the manufacture of semiconductor devices.
For example, dielectric materials such as silicon dioxide, phosphorus-doped silicate glass (PSG), boron- and phosphorus-doped silicate glass (BPSG), and silicon dioxide derived from tetraethyl orthosilicate (TEOS) or silane by PECVD can be employed. The openings formed in dielectric layers are effected by conventional photolithographic and etching techniques.

Advantageously, dielectric materials for use as interlayer dielectrics in accordance with embodiments of the present invention can comprise dielectric materials with lower values of permittivity than those mentioned above, in order to reduce interconnect capacitance. The expression "low-k" material has evolved to characterize materials with a dielectric constant less than about 3.9, e.g., about 3.5 or less. The value of a dielectric constant expressed herein is based upon a value of one (1) for a vacuum.

A wide variety of low-k materials can be employed in accordance with embodiments of the present invention, both organic and inorganic. Suitable organic materials include various polyimides and BCB. Other suitable low-k dielectrics include poly(arylene) ethers, poly(arylene) ether azoles, parylene-N, polyimides, polynaphthalene-N, polyphenylquinoxalines (PPQ), polyphenylene oxide, polyethylene and polypropylene.
Other low-k materials suitable for use in embodiments of the present invention include FOx(TM) (HSQ-based), XLK(TM) (HSQ-based), and porous SILK(TM), an aromatic hydrocarbon polymer (each available from Dow Chemical Co., Midland, Mich.); Coral(TM), a carbon-doped silicon oxide (available from Novellus Systems, San Jose, Calif.); silicon-carbon-oxygen-hydrogen (SiCOH) organic dielectrics; Black-Diamond(TM) dielectrics; Flare(TM), an organic polymer, HOSP(TM), a hybrid siloxane-organic polymer, and Nanoglass(TM), a nanoporous silica (each available from Honeywell Electronic Materials); and halogen-doped (e.g., fluorine-doped) silicon dioxide derived from tetraethyl orthosilicate (TEOS) and fluorine-doped silicate glass (FSG).

The present invention enables the manufacture of semiconductor devices having Cu interconnects with improved electromigration resistance, enhanced reliability, reduced contact resistance and improved wafer-to-wafer uniformity, by controlling the roughness at the Cu/composite barrier layer interface. The formation of a composite barrier layer comprising [alpha]-Ta, graded tantalum nitride and TaN also avoids the adhesion problems attendant upon conventional practices, thereby further increasing device reliability, improving electromigration resistance and reducing contact resistance.

The present invention enjoys industrial applicability in the formation of various types of inlaid Cu metallization interconnection patterns. The present invention is particularly applicable to manufacturing semiconductor devices having submicron features and high aspect ratio openings.

In the previous description, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., to provide a better understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth.
In other instances, well known processing and materials have not been described in detail in order not to unnecessarily obscure the present invention.

Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described herein. It is to be understood that the present invention is capable of use in various other combinations and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein.
In one example, a method includes responsive to receiving, by a processing unit, one or more instructions requesting that a first value be moved from a first general purpose register (GPR) to a third GPR and that a second value be moved from a second GPR to a fourth GPR, copying, by an initial logic unit and during a first clock cycle, the first value to an initial pipeline register, copying, by the initial logic and during a second clock cycle, the second value to the initial pipeline register, copying, by a final logic unit and during a third clock cycle, the first value from a final pipeline register to the third GPR, and copying, by the final logic unit and during a fourth clock cycle, the second value from the final pipeline register to the fourth GPR.
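The clock-by-clock behavior described in the example above can be sketched as a small simulation: values issued back-to-back flow from source GPRs through pipeline registers to destination GPRs, one stage per cycle. This is an illustrative software model of the described behavior, not the patented hardware; all names (`gprs`, `pipe`, `move_via_pipeline`) are hypothetical.

```python
# Illustrative model: move requests flow through pipeline register stages,
# with the initial logic unit reading one source GPR per cycle and the final
# logic unit writing one destination GPR per cycle.
def move_via_pipeline(gprs, moves, pipeline_depth=2):
    """Simulate back-to-back 'move src -> dst' requests through a pipeline
    with `pipeline_depth` register stages. `moves` is a list of (src, dst)
    GPR indices. Returns a per-cycle (cycle, action, gpr) trace."""
    pipe = [None] * pipeline_depth        # pipeline registers (initial..final)
    in_flight = [None] * pipeline_depth   # destination tags riding alongside
    trace, cycle, pending = [], 0, list(moves)
    while pending or any(v is not None for v in in_flight):
        cycle += 1
        # final logic unit: write the value leaving the final pipeline register
        if in_flight[-1] is not None:
            gprs[in_flight[-1]] = pipe[-1]
            trace.append((cycle, "write", in_flight[-1]))
        # shift values one stage toward the final pipeline register
        for i in range(pipeline_depth - 1, 0, -1):
            pipe[i], in_flight[i] = pipe[i - 1], in_flight[i - 1]
        # initial logic unit: copy the next source value into the initial stage
        if pending:
            src, dst = pending.pop(0)
            pipe[0], in_flight[0] = gprs[src], dst
            trace.append((cycle, "read", src))
        else:
            pipe[0] = in_flight[0] = None
    return trace

gprs = {0: 11, 1: 22, 2: None, 3: None}
trace = move_via_pipeline(gprs, [(0, 2), (1, 3)])
# first value enters on cycle 1, second on cycle 2; with a 2-stage pipeline
# the first write lands on cycle 3 and the second on cycle 4.
```

The trace reproduces the four-cycle schedule of the example: reads on cycles 1 and 2, writes on cycles 3 and 4.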
1.A method comprising:Receiving, by the processing unit, requesting to move the first value from the first GPR of the plurality of source GPRs of the plurality of general-purpose registers GPR to the third GPR of the plurality of destination GPRs of the plurality of GPRs, Moving a second GPR of the plurality of source GPRs to a fourth GPR of the plurality of destination GPRs and moving a third value from a fifth GPR of the plurality of source GPRs to the plurality of destination GPRs a single instruction of a sixth GPR, wherein the single instruction is one of: a centralized instruction that does not individually identify all of the destination GPRs, wherein when the single instruction is the centralized instruction a plurality of source GPRs are discontinuously located, and wherein the destination GPR is continuously positioned when the single instruction is the centralized instruction; or a scatter instruction that does not individually identify all of the source GPRs, wherein when the single instruction The source GPR is continuously positioned when the scatter instruction is, and wherein the destination GPR is discontinuously positioned when the single instruction is the scatter instruction, wherein the single instruction includes requesting the first value to be Moving the first GPR to the third GPR, the first Value from the second GPR GPR and moved to the fourth to the third value from the fifth to the sixth moved GPR GPR single uninterrupted command; andIn response to receiving the single instruction:Copying the first value to an initial pipeline register in a plurality of pipeline registers of a pipeline through an initial logic unit in the processing unit and during a first clock cycle, wherein the plurality of pipeline registers are different from the plurality GPR;Copying the second value to the initial pipeline register by the initial logic unit in the processing unit and during a second clock cycle subsequent to the first clock cycle;Copying the first 
value from a final pipeline register of the plurality of pipeline registers to the first logic cell through the final logic unit in the processing unit and during a third clock cycle subsequent to the second clock cycle a third GPR, wherein the first value copied to the third GPR is the same as the first value copied from the first GPR; andCopying the second value from the final pipeline register to the fourth GPR by the final logic unit in the processing unit and during a fourth clock cycle subsequent to the second clock cycle, wherein The second value copied to the fourth GPR is the same as the second value copied from the second GPR.2.The method of claim 1 wherein the single instruction does not individually identify any of the pipeline registers.3.The method of claim 1 wherein said plurality of pipeline registers are not individually accessible by instructions, and wherein said plurality of GPRs are individually accessible by instructions.4.The method of claim 1 wherein the processing unit consumes less power when accessing a pipeline register of the plurality of pipeline registers than when accessing a GPR of the plurality of GPRs.5.The method of claim 1 wherein said pipeline is a multi-cycle computation pipeline comprising one or more arithmetic logic units ALU.6.The method of claim 1 further comprising:Copying, by the intermediate logic unit, the first value from the initial pipeline register to an intermediate pipeline register in the plurality of pipeline registers after the first clock cycle and before the third clock cycle; andThe second value is copied from the initial pipeline register to the intermediate pipeline register by the intermediate logic unit after the second clock cycle and before the fourth clock cycle.7.The method of claim 1 wherein said processing unit is comprised of a central processing unit CPU or a graphics processing unit GPU.8.The method of claim 1Copying the first value to the initial pipeline register includes any of the 
following operations:Copying the first value from the first GPR to the initial pipeline register; orCopying the first value from a pipeline register in the plurality of pipeline registers to the initial pipeline register;Copying the second value to the initial pipeline register includes any of the following operations:Copying the second value from the second GPR to the initial pipeline register; orThe second value is copied from the pipeline register in the plurality of pipeline registers to the initial pipeline register.9.A processing unit comprising:Multiple general purpose registers GPR;a pipeline comprising a plurality of pipeline registers, wherein the plurality of pipeline registers are different from the plurality of GPRs;Multiple logical units; anda controller configured to receive a request to move a first value from a first GPR of the plurality of source GPRs of the plurality of GPRs to a third GPR of the plurality of destination GPRs of the plurality of GPRs, Binary moving from a second GPR of the plurality of source GPRs to a fourth GPR of the plurality of destination GPRs and moving a third value from a fifth GPR of the plurality of source GPRs to the plurality a single instruction of a sixth GPR in a destination GPR, wherein the single instruction is one of: a centralized instruction that does not individually identify all of the destination GPRs, wherein when the single instruction is the concentration The plurality of source GPRs are not continuously positioned when instructed, and wherein the destination GPR is continuously positioned when the single instruction is the centralized instruction; or a scatter instruction that does not individually identify all of the source GPRs, wherein The single instruction is the source GPR continuously positioned when the scatter instruction, and wherein the destination GPR is discontinuously positioned when the single instruction is the scatter instruction, and wherein, in response to receiving the single 
instruction, Wherein the single instruction includes requesting to recite the first value from the Moving a GPR to the third GPR, moving the second value from the second GPR to the fourth GPR, and moving the third value from the fifth GPR to the sixth GPR A single uninterrupted instruction, the controller is configured to do the following:Causing an initial logic unit of the plurality of logic cells to copy the first value to an initial pipeline register of the plurality of pipeline registers during a first clock cycle;Causing the initial logic unit to copy the second value to the initial pipeline register during a second clock cycle subsequent to the first clock cycle;Causing a final one of the plurality of logic cells to copy the first value from a final pipeline register of the plurality of pipeline registers to the third clock cycle subsequent to the second clock cycle a third GPR, wherein the first value copied to the third GPR is the same as the first value copied from the first GPR; andCausing the final logic unit and copying the second value from the final pipeline register to the fourth GPR during a fourth clock cycle subsequent to the second clock cycle, wherein the copy is to the The second value of the four GPRs is the same as the second value copied from the second GPR.10.The processing unit of claim 9 wherein said single instruction does not individually identify any of said pipeline registers.11.The processing unit of claim 9 wherein said plurality of pipeline registers are not individually accessible by instructions, and wherein said plurality of GPRs are individually accessible by instructions.12.The processing unit of claim 9 wherein said processing unit consumes less power when accessing a pipeline register of said plurality of pipeline registers as compared to accessing a GPR of said plurality of GPRs.13.The processing unit of claim 9 wherein said pipeline is a multi-cycle computation pipeline comprising one or more arithmetic logic units 
ALU.14.The processing unit of claim 9 wherein in response to receiving one or more instructions, the controller is further configured to:Causing an intermediate logic unit of the plurality of logic cells to copy the first value from the initial pipeline register to the plurality of pipeline registers after the first clock cycle and before the third clock cycle Intermediate pipeline register; andThe intermediate logic unit is caused to copy the second value from the initial pipeline register to the intermediate pipeline register after the second clock cycle and before the fourth clock cycle.15.A processing unit according to claim 9, wherein said processing unit is constituted by a central processing unit CPU or a graphics processing unit GPU.16.The processing unit according to claim 9,Wherein the initial logic unit is configured to copy the first value to the initial pipeline register by any of the following operations:Copying the first value from the first GPR to the initial pipeline register; orCopying the first value from a pipeline register in the plurality of pipeline registers to the initial pipeline register;Wherein the initial logic unit is configured to copy the second value to the initial pipeline register by any of the following operations:Copying the second value from the second GPR to the initial pipeline register; orThe second value is copied from the pipeline register in the plurality of pipeline registers to the initial pipeline register.17.A non-transitory computer readable storage medium storing for a processing unit to request a first value to be moved from a first GPR of a plurality of source GPRs of a plurality of general purpose registers GPR to a plurality of destination GPRs of the plurality of GPRs a third GPR, moving a second value from a second GPR of the plurality of source GPRs to a fourth GPR of the plurality of destination GPRs and a third value from the plurality of source GPRs a fifth GPR moves to a single instruction of a sixth GPR 
of the plurality of destination GPRs, wherein the single instruction is one of: a centralized instruction that does not individually identify all of the destination GPRs, wherein the plurality of source GPRs are not continuously positioned when the single instruction is the centralized instruction, and wherein the destination GPRs are continuously positioned when the single instruction is the centralized instruction; or a scatter instruction that does not individually identify all of the source GPRs, wherein the source GPRs are continuously positioned when the single instruction is the scatter instruction, and wherein the destination GPRs are not continuously positioned when the single instruction is the scatter instruction; wherein the single instruction is a single uninterrupted instruction requesting to move the first value from the first GPR to the third GPR, move the second value from the second GPR to the fourth GPR, and move the third value from the fifth GPR to the sixth GPR; and wherein the single instruction, when executed, causes the processing unit to:

cause an initial logic unit in the processing unit to copy the first value to an initial pipeline register of a plurality of pipeline registers of a pipeline during a first clock cycle;

cause the initial logic unit to copy the second value to the initial pipeline register during a second clock cycle subsequent to the first clock cycle;

cause a final logic unit in the processing unit to copy the first value from a final pipeline register of the plurality of pipeline registers to the third GPR during a third clock cycle subsequent to the second clock cycle, wherein the first value copied to the third GPR is the same as the first value copied from the first GPR; and

cause the final logic unit to copy the second value from the final pipeline register to the fourth GPR during a fourth clock
cycle subsequent to the second clock cycle, wherein the second value copied to the fourth GPR is the same as the second value copied from the second GPR.

18. The non-transitory computer readable storage medium of claim 17, wherein the single instruction does not individually identify any of the pipeline registers.

19. The non-transitory computer readable storage medium of claim 17, wherein the plurality of pipeline registers are not individually accessible by instructions, and wherein the plurality of GPRs are individually accessible by instructions.

20. The non-transitory computer readable storage medium of claim 17, wherein the processing unit consumes less power when accessing a pipeline register of the plurality of pipeline registers than when accessing a GPR of the plurality of GPRs.

21. The non-transitory computer readable storage medium of claim 17, wherein, when executed, the single instruction causes the processing unit to:

cause an intermediate logic unit in the processing unit to copy the first value from the initial pipeline register to an intermediate pipeline register of the plurality of pipeline registers after the first clock cycle and before the third clock cycle; and

cause the intermediate logic unit to copy the second value from the initial pipeline register to the intermediate pipeline register after the second clock cycle and before the fourth clock cycle.

22. A method comprising:

receiving code through a compiler module; and

generating a single combined move instruction in response to determining, by the compiler module, that a plurality of operations indicated by the code can be combined into the single combined move instruction,

wherein, when executed by a processing unit, the combined move instruction causes the processing unit to move a plurality of values from a plurality of source GPRs of a plurality of general purpose registers (GPRs) to a plurality of destination GPRs of the plurality of GPRs, utilizing a plurality of pipeline registers as
temporary storage, wherein the single combined move instruction is one of: a centralized instruction that does not individually identify all of the destination GPRs, wherein the plurality of source GPRs are not continuously positioned when the single combined move instruction is the centralized instruction, and wherein the destination GPRs are continuously positioned when the single combined move instruction is the centralized instruction; or a scatter instruction that does not individually identify all of the source GPRs, wherein the source GPRs are continuously positioned when the single combined move instruction is the scatter instruction, and wherein the destination GPRs are not continuously positioned when the single combined move instruction is the scatter instruction; wherein at least one GPR of the plurality of source GPRs is included in the plurality of destination GPRs; wherein the plurality of pipeline registers are different from the plurality of GPRs; and wherein the single combined move instruction is a single uninterrupted instruction requesting to move a first value from a first GPR to a third GPR, move a second value from a second GPR to a fourth GPR, and move a third value from a fifth GPR to a sixth GPR.

23. The method of claim 22, wherein determining that the plurality of operations indicated by the code can be combined into the single combined move instruction comprises:

in response to determining that the plurality of operations indicated by the code comprise moving a plurality of values between the plurality of GPRs, determining that the plurality of operations indicated by the code can be combined into the single combined move instruction.

24. The method of claim 22, wherein the code comprises a plurality of instructions, and wherein the method further comprises:

the instructions corresponding to the plurality of operations that can be combined into the single combined move instruction are
replaced with the generated single combined move instruction.
Method, processing unit and computer readable storage medium using pipeline registers as intermediate storage

Technical Field

The present invention relates to processing units and, more particularly, to moving multiple values between the general purpose registers of a processing unit.

Background

Processing units, such as graphics processing units (GPUs) and central processing units (CPUs), can be used to perform a wide variety of operations within a computing device. For example, a GPU can be a graphics rendering device for manipulating and displaying computerized graphics on a display. GPUs are built with highly parallel structures that provide more efficient processing than typical general purpose CPUs for a range of complex algorithms. A processing unit typically contains a plurality of general purpose registers (GPRs) for storing data. When performing an operation, the processing unit typically executes instructions that cause the processing unit to move values between GPRs.

Summary of the Invention

In general, the present invention describes a processing unit that uses pipeline registers to move values between GPRs. The processing unit can be configured to operate in accordance with an instruction set architecture (ISA), which can include a plurality of instructions, each instruction specifying a particular operation that can be performed by the processing unit. As an example, a move instruction included in the ISA may specify moving a value from a source GPR to a destination GPR.

In one example, a method includes receiving, by a processing unit, one or more instructions requesting that a first value be moved from a first GPR of a plurality of GPRs to a third GPR of the plurality of GPRs and that a second value be moved from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs.
In this example, the method further comprises, in response to receiving the one or more instructions: copying, by an initial logic unit in the processing unit and during a first clock cycle, the first value to an initial pipeline register of a plurality of pipeline registers, wherein the plurality of pipeline registers are different from the plurality of GPRs; copying, by the initial logic unit and during a second clock cycle subsequent to the first clock cycle, the second value to the initial pipeline register; copying, by a final logic unit in the processing unit and during a third clock cycle subsequent to the second clock cycle, the first value from a final pipeline register of the plurality of pipeline registers to the third GPR, wherein the first value copied to the third GPR is the same as the first value copied from the first GPR; and copying, by the final logic unit and during a fourth clock cycle subsequent to the second clock cycle, the second value from the final pipeline register to the fourth GPR, wherein the second value copied to the fourth GPR is the same as the second value copied from the second GPR.

In another example, a processing unit includes a plurality of GPRs; a pipeline including a plurality of pipeline registers, wherein the plurality of pipeline registers are different from the plurality of GPRs; a plurality of logic units; and a controller. In this example, the controller is configured to receive one or more instructions requesting that a first value be moved from a first GPR of the plurality of GPRs to a third GPR of the plurality of GPRs and that a second value be moved from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs.
In this example, in response to receiving the one or more instructions, the controller is configured to: cause an initial one of the plurality of logic units to copy the first value to an initial pipeline register of the plurality of pipeline registers during a first clock cycle; cause the initial logic unit to copy the second value to the initial pipeline register during a second clock cycle subsequent to the first clock cycle; cause a final one of the plurality of logic units to copy the first value from a final pipeline register of the plurality of pipeline registers to the third GPR during a third clock cycle subsequent to the second clock cycle, wherein the first value copied to the third GPR is the same as the first value copied from the first GPR; and cause the final logic unit to copy the second value from the final pipeline register to the fourth GPR during a fourth clock cycle subsequent to the second clock cycle, wherein the second value copied to the fourth GPR is the same as the second value copied from the second GPR.

In another example, a non-transitory computer readable storage medium stores one or more instructions requesting that a processing unit move a first value from a first GPR of a plurality of GPRs to a third GPR of the plurality of GPRs and move a second value from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs.
In this example, when executed, the one or more instructions cause the processing unit to: cause an initial logic unit of a plurality of logic units to copy the first value to an initial pipeline register of a plurality of pipeline registers of a pipeline during a first clock cycle; cause the initial logic unit to copy the second value to the initial pipeline register during a second clock cycle subsequent to the first clock cycle; cause a final one of the plurality of logic units to copy the first value from a final pipeline register of the plurality of pipeline registers to the third GPR during a third clock cycle subsequent to the second clock cycle, wherein the first value copied to the third GPR is the same as the first value copied from the first GPR; and cause the final logic unit to copy the second value from the final pipeline register to the fourth GPR during a fourth clock cycle subsequent to the second clock cycle, wherein the second value copied to the fourth GPR is the same as the second value copied from the second GPR.

In another example, a method includes receiving code by a compiler module, and generating a combined move instruction in response to determining, by the compiler module, that a plurality of operations indicated by the code can be combined into the combined move instruction. In this example, when executed by a processing unit, the generated combined move instruction causes the processing unit to move a plurality of values from a plurality of source GPRs of a plurality of GPRs to a plurality of destination GPRs of the plurality of GPRs, using a plurality of pipeline registers as temporary storage, wherein the plurality of pipeline registers are different from the plurality of GPRs.

The details of one or more examples are set forth in the accompanying drawings and the description below.
Other features, objects, and advantages will be apparent from the description and accompanying drawings.

DRAWINGS

FIG. 1 is a block diagram of an integrated circuit including an example processing unit that uses pipeline registers to move values between general purpose registers (GPRs) when executing a combined move instruction, in accordance with one or more techniques of the present disclosure.

FIG. 2 is a block diagram of an integrated circuit including an example processing unit that uses pipeline registers to move values between GPRs when executing a combined move instruction, in accordance with one or more techniques of the present disclosure.

FIGS. 3A-3C are timing diagrams illustrating example data flows within a processing unit that uses pipeline registers to move values between GPRs when executing a combined move instruction, in accordance with one or more techniques of this disclosure.

FIG. 4 is a flow diagram illustrating an example operation of a processing unit that uses pipeline registers to move values between GPRs when executing a combined move instruction, in accordance with one or more techniques of the present disclosure.

FIG. 5 is a block diagram of an example compiler module that outputs a combined move instruction in accordance with one or more techniques of the present disclosure.

FIG. 6 is a flow diagram illustrating an example operation of a compiler module that outputs a combined move instruction in accordance with one or more techniques of the present disclosure.

FIG. 7 is a block diagram illustrating an example apparatus 100 including the integrated circuit of FIG. 1 in accordance with one or more techniques of the present disclosure.

Detailed Description

In general, the present invention describes a processing unit that uses pipeline registers to move values between GPRs.
The processing unit can be configured to operate in accordance with an instruction set architecture (ISA), which can include a plurality of instructions, each instruction specifying a particular operation that can be performed by the processing unit. As an example, a move instruction included in the ISA may specify moving a value from a source GPR to a destination GPR. In some instances, such as when moving multiple values between registers, executing a separate instruction for each movement may be less efficient.

In accordance with one or more techniques of this disclosure, a combined move instruction included in an ISA may specify moving multiple values from multiple source GPRs to multiple destination GPRs. In some examples, a particular GPR may be included in both the source GPRs and the plurality of destination GPRs. For example, a combined move instruction, e.g., a swap instruction, may specify moving a first value stored in the first GPR to a second GPR and moving a second value stored in the second GPR to the first GPR (for example, swapping the values stored in the first GPR and the second GPR). In some examples, to perform a combined move instruction, the processing unit may utilize a third GPR to temporarily store one of the values during execution. For example, the processing unit may copy the first value from the first GPR to the third GPR, copy the second value from the second GPR to the first GPR, and copy the first value from the third GPR to the second GPR.

In some examples, the processing unit can include one or more pipelines. Each of the pipelines can include multiple phases, including at least an initial phase that can include an initial logic unit and an initial pipeline register, and a final phase that includes a final logic unit.
As such, in some examples, an N-phase pipeline can include N logic units and N-1 pipeline registers, and the pipeline register included in the (N-1)th stage can be referred to as the final pipeline register.

In operation, the initial logic unit of the N-phase pipeline may receive a value from another component of the processing unit (e.g., a GPR) during a first clock cycle. During subsequent clock cycles, the value can pass through subsequent elements of the pipeline, and during the Nth clock cycle, the value can be copied from the final stage. For example, during the first clock cycle, the initial logic unit can receive a value, perform any requested logic operation on the value, and provide the value to the initial pipeline register such that, at the end of the first clock cycle, the initial pipeline register stores the value. Then, during the second clock cycle, the second logic unit can receive the value from the initial pipeline register, perform any requested logic operation on the value, and provide the value to the second pipeline register such that, at the end of the second clock cycle, the second pipeline register stores the value. If the pipeline is a three-phase pipeline (i.e., N=3), during the third clock cycle, the third logic unit can receive the value from the second pipeline register, perform any requested logic operations on the value, and provide the value to one or more other components of the processing unit (e.g., a GPR) such that the one or more other components store the value at the end of the third clock cycle.

In some examples, the initial logic unit may receive a second value before the first value has exited the pipeline (e.g., before the final logic unit has copied the first value to a GPR).
For example, during the second clock cycle, the initial logic unit can receive the second value, perform any requested logic operation on the second value, and provide the second value to the initial pipeline register such that, at the end of the second clock cycle, the initial pipeline register stores the second value. Additionally, as discussed in the above example where the value is the first value, the second pipeline register may store the first value at the end of the second clock cycle. Next, during the third clock cycle, the second logic unit can receive the second value from the initial pipeline register, perform any requested logic operation on the second value, and provide the second value to the second pipeline register such that, at the end of the third clock cycle, the second pipeline register stores the second value. In this way, during a single clock cycle, the first value can be copied from a pipeline register while the second value is copied into that same pipeline register.

While the use of a temporary GPR enables the processing unit to execute the instruction, in some instances it may be undesirable to use an additional GPR. In accordance with one or more techniques of this disclosure, a processing unit may utilize the pipeline registers of the processing unit as temporary storage, as opposed to using an additional GPR for temporary storage. For example, upon execution of a swap instruction, the processing unit may copy the first value from the first GPR to the initial pipeline register of the pipeline during a first cycle, copy the second value from the second GPR to the initial pipeline register of the pipeline during a second cycle, copy the first value from the final pipeline register of the pipeline to the second GPR during a third cycle, and copy the second value from the final pipeline register to the first GPR during a fourth cycle.
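The four-cycle swap described above can be sketched as a small software model. This is an illustrative simulation only, not the patented hardware; the register names and the FIFO depth are hypothetical.

```python
from collections import deque

# Hypothetical model: a two-deep pipeline behaves as a FIFO, so a swap
# needs no scratch GPR -- the pipeline registers hold the in-flight values.
gprs = {"r0": 11, "r1": 22}   # hypothetical GPR file
fifo = deque()                # stands in for the chain of pipeline registers

fifo.append(gprs["r0"])       # cycle 1: first value enters the initial register
fifo.append(gprs["r1"])       # cycle 2: second value enters behind it
gprs["r1"] = fifo.popleft()   # cycle 3: first value exits into the second GPR
gprs["r0"] = fifo.popleft()   # cycle 4: second value exits into the first GPR

print(gprs)  # {'r0': 22, 'r1': 11}
```

Because each value exits in the order it entered, the first-in first-out behavior of the pipeline registers is exactly what makes the swap work without a temporary GPR.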
In other words, the processing unit can use the pipeline and its constituent pipeline registers as a first-in first-out (FIFO) queue, where the value in each of the pipeline registers is copied to the corresponding next pipeline register each clock cycle. In this way, the processing unit can swap the values of two GPRs without needing an additional GPR for temporary storage.

As described in more detail below, the GPRs and the pipeline registers are distinct, and accessing a pipeline register may require less power than accessing a GPR. Thus, by using pipeline registers for temporary storage instead of a GPR, the technique reduces power consumption and avoids unnecessarily making a GPR unusable (e.g., the GPR that would otherwise be used for temporary storage).

FIG. 1 is a block diagram of an integrated circuit 1 ("IC 1") including an example processing unit 2 that uses pipeline registers to move values between GPRs when executing a combined move instruction in accordance with one or more techniques of the present disclosure. In some examples, IC 1 can be included in a device, such as a mobile computing device (e.g., a "smart phone"), a computing device (e.g., a desktop computer, a laptop computer, a server, and the like), a computing device module (e.g., a graphics card), a personal digital assistant (PDA), a handheld video game device, a game console, and/or a television device. In some examples, IC 1 can include a graphics processing unit (GPU) configured to manipulate and display computer graphics on a display and/or a central processing unit (CPU) configured to perform general computing operations. For example, processing unit 2 can be a shader processor of a GPU. As illustrated in FIG. 1, IC 1 may include a processing unit 2 that includes general purpose registers (GPRs) 4A through 4N (collectively "GPR 4"), pipeline 6, controller 12, and clock 14.

In some examples, processing unit 2 may include GPR 4, which may be configured to store data for use by processing unit 2.
Since GPR 4 are general purpose registers, GPR 4 can store a wide variety of information. Examples of information that a GPR can store include, but are not limited to, integer values, floating point values, characters, and bit arrays. For example, one or more of GPR 4 may store vector components, such as graphics vectors. As such, in some examples, one or more of GPR 4 can be considered a vector register. As another example, one or more of GPR 4 may store an address.

In some examples, all of GPR 4 may have the same data capacity (i.e., GPR 4 may all be the same size). For example, each of GPR 4 can have an 8-bit, 16-bit, 32-bit, or 64-bit data capacity. In some examples, GPR 4 can be of varying sizes. For example, a first GPR of GPR 4 may have a 32-bit data capacity and a second GPR of GPR 4 may have a 64-bit data capacity.

In some examples, processing unit 2 may include a pipeline 6 that may be configured to process data. Pipeline 6 can be a multi-stage computation pipeline that includes logic units 10A through 10N (collectively "logic units 10"), where each of logic units 10 represents a discrete phase. Pipeline 6 may include an initial phase (i.e., logic unit 10A) and a final phase (i.e., logic unit 10N). In some examples, pipeline 6 may also include one or more intermediate stages (i.e., logic unit 10B through logic unit 10N-1). In order to retain a value determined by one or more of the phases (e.g., one or more of logic units 10), pipeline 6 may include one of pipeline registers 8 after one or more of logic units 10. For example, pipeline 6 may include pipeline register 8A after logic unit 10A. In some examples, pipeline 6 may not include a pipeline register after the final stage (i.e., logic unit 10N).
As such, in some examples, where pipeline 6 includes N stages, pipeline 6 may include N logic units 10 and N-1 pipeline registers 8, and pipeline register 8N-1 may be referred to as the final pipeline register.

As discussed above, pipeline 6 may include one or more logic units 10 that may be configured to process values. For example, each of logic units 10 can be configured to receive data, perform one or more operations (e.g., one or more arithmetic operations, such as adding two values; one or more logical operations, such as "AND"ing two values; and/or one or more other mathematical operations), and output the result. In some examples, one or more of logic units 10 can be programmable. As an example, one or more of logic units 10 can be programmed to add two values. As another example, one or more of logic units 10 can be programmed to pass a value through unmodified. As another example, one or more of logic units 10 can be programmed to modify the data type of a value while passing the value through. For example, the techniques described in this disclosure utilize pipeline registers 8 for temporary storage to avoid the use of general purpose registers for temporary storage. Thus, in some examples, for example, when using pipeline registers 8 for temporary storage, logic units 10 may be configured to pass values through with a modified or unmodified data type. In some examples, controller 12 can program one or more of logic units 10. In some examples, one or more of logic units 10 can include an arithmetic logic unit (ALU).

Pipeline 6 can be configured to operate in response to a clock signal received from clock 14. For example, in response to receiving an edge (e.g., a rising edge or a falling edge) of the clock signal received from clock 14, pipeline 6 may advance to the next cycle. As such, in some examples, a period of pipeline 6 can be referred to as a clock cycle.
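The stage structure just described (N logic units, N-1 pipeline registers, one advance per clock edge) can be modeled as a simple shift through a register list. This is a hypothetical pass-through model for illustration only; the stage count and names are assumptions, and the logic units here perform no operation other than forwarding.

```python
# Hypothetical model of an N-stage pipeline: N pass-through logic units
# separated by N-1 pipeline registers. Each clock cycle, every register
# takes the value of the stage feeding it, and the final logic unit
# delivers the oldest value (e.g., into a GPR).
N = 3                    # three stages -> two pipeline registers
regs = [None] * (N - 1)  # pipeline registers 8A, 8B

def cycle(regs, new_input, final_out):
    """Advance one clock cycle; logic units simply pass values through."""
    exiting = regs[-1]               # final logic unit reads the last register
    if exiting is not None:
        final_out.append(exiting)    # value leaves the pipeline
    return [new_input] + regs[:-1], final_out

out = []
for v in ["VA", "VB", None, None]:   # feed two values, then let them drain
    regs, out = cycle(regs, v, out)

print(out)  # ['VA', 'VB'] -- values exit in order, after a 3-cycle latency
```

Note that each value exits N cycles after it entered, which matches the statement above that the latency of the pipeline may equal the number of phases.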
In each cycle of pipeline 6, a logic unit of logic units 10 can receive an input value from an input register, process the input value to determine an output value, and provide the output value to an output register. In other words, a value can advance through pipeline 6 in each cycle. As an example, during a cycle, logic unit 10B can receive a first input value from pipeline register 8A, process the first input value to determine a first output value, and provide the first output value to pipeline register 8B. Also during the same cycle, logic unit 10A may receive a second input value from a GPR of GPR 4, process the second input value to determine a second output value, and provide the second output value to pipeline register 8A. As such, during a single cycle, pipeline register 8A can both provide the first input value to logic unit 10B and receive the second output value from logic unit 10A. As another example, during a cycle, logic unit 10N can receive an input value from pipeline register 8N-1, process the input value to determine an output value, and provide the output value to a GPR of GPR 4.

In some examples, for example, where pipeline 6 is a multi-stage computing pipeline, there may be multiple cycles between the cycle in which a value is provided to the beginning of pipeline 6 (i.e., to logic unit 10A) and the cycle in which the resulting value exits the end of pipeline 6 (i.e., from logic unit 10N). In some instances, this number of cycles may be referred to as the latency of pipeline 6. In some examples, the latency of pipeline 6 may be equal to the number of phases included in pipeline 6. For example, where pipeline 6 contains four phases, pipeline 6 may have a latency of four.

As also discussed above, pipeline 6 may include one or more pipeline registers 8, each of which may be configured to store a value.
For example, a pipeline register of pipeline registers 8 can be configured to store an output value determined by a logic unit of logic units 10 during one cycle and provide the output value to the next logic unit of logic units 10 during the next cycle. Additionally, each of pipeline registers 8 may not be individually accessed and/or addressed by software (e.g., instructions executed by controller 12). In some instances, processing unit 2 may consume a smaller amount of power when accessing a pipeline register of pipeline registers 8 than when accessing a GPR of GPR 4. For example, processing unit 2 can consume half the power when accessing a pipeline register as compared to when accessing a GPR. In some examples, each of pipeline registers 8 may have a data capacity greater than or equal to the data capacity of the GPRs of GPR 4.

As described above, pipeline registers 8 may not be individually accessible and/or addressable by software. For example, an instruction cannot move a value stored by an arbitrary pipeline register to another arbitrary pipeline register or to an arbitrary GPR. This characteristic of pipeline registers 8 is the direct opposite of GPR 4, each of which can be individually accessed and/or addressed by software. Thus, data can be inserted into pipeline 6 only at the beginning of the pipeline (i.e., at logic unit 10A), can be accessed (e.g., copied to a GPR) only at the end of pipeline 6 (i.e., at logic unit 10N), and cannot be accessed in between (for example, data in pipeline registers 8A through 8N-1 cannot be accessed by components other than the corresponding subsequent logic unit and/or the initial logic unit).

Processing unit 2 may include a controller 12 that may control the operation of one or more components of processing unit 2. For example, controller 12 may control the operation of GPR 4 and pipeline 6 in response to receiving an instruction.
Controller 12 can be a controller specific to the operation of processing unit 2, or a more general controller that controls the overall operation of IC 1. In some examples, controller 12 can include an instruction decoder.

FIG. 2 is a block diagram of an integrated circuit 1 ("IC 1") including an example processing unit 2A that uses pipeline registers to move values between GPRs when executing a combined move instruction in accordance with one or more techniques of the present disclosure. Processing unit 2A can be similar to processing unit 2 of FIG. 1. For example, processing unit 2A can use pipeline registers to move values between GPRs when executing a combined move instruction. As illustrated in FIG. 2, processing unit 2A may include general purpose registers (GPRs) 4A through 4N (collectively "GPR 4"), pipeline 6A, controller 12, and clock 14. Since GPR 4, controller 12, and clock 14 are described above with respect to FIG. 1, additional description of GPR 4, controller 12, and clock 14 is not provided for FIG. 2.

In some examples, processing unit 2A can include a pipeline 6A that can be configured to process data. Pipeline 6A can be similar to pipeline 6 of FIG. 1. For example, pipeline 6A can be a multi-stage pipeline that includes logic units 10 and pipeline registers 8. As illustrated in the example of FIG. 2, pipeline 6A may include bypass channels 9A through 9N-1 (collectively "bypass channels 9"), each of which may be configured to enable a value to be copied from the corresponding pipeline register of pipeline registers 8 to initial logic unit 10A. For example, logic unit 10A can utilize bypass channel 9B to copy a value from pipeline register 8B to pipeline register 8A.

FIGS. 3A-3C are timing diagrams illustrating example data flows within a processing unit that uses pipeline registers to move values between GPRs when executing a combined move instruction, in accordance with one or more techniques of this disclosure.
The data flows of FIGS. 3A through 3C may represent data flows within a processing unit (e.g., processing unit 2 of FIG. 1 or processing unit 2A of FIG. 2). For purposes of illustration, the data flows of FIGS. 3A through 3C are described within the context of processing unit 2 of FIG. 1, but processing units having configurations other than that of processing unit 2 may have data flows similar to those of FIGS. 3A through 3C.

Each of FIGS. 3A through 3C illustrates an example data flow within a processing unit during execution of a particular combined move instruction and includes a horizontal axis indicating a plurality of time periods (e.g., t0, t1, etc.) and a vertical axis indicating a plurality of GPRs 4 and a plurality of pipeline registers 8, such that each data block identifies what value is stored by each register during each time period. FIG. 3A illustrates an example data flow within a processing unit during execution of a rearrangement instruction, FIG. 3B illustrates an example data flow within a processing unit during execution of a centralized instruction, and FIG. 3C illustrates an example data flow within a processing unit during execution of a scatter instruction.

In accordance with one or more techniques of this disclosure, controller 12 can be configured to control the operation of processing unit 2 in accordance with an instruction set architecture (ISA) that includes combined move instructions, by using pipeline registers as temporary storage when moving values between general purpose registers (GPRs). Some example combined move instructions that may be included in an ISA and executed by controller 12 include, but are not limited to, rearrangement instructions, swap instructions, centralized instructions, and scatter instructions.
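The centralized (gather-style) and scatter instructions named above can be sketched in software to make their source/destination patterns concrete. This is an illustrative model under assumptions drawn from the claims — gather moves non-contiguous sources into a contiguous destination range, scatter moves a contiguous source range into non-contiguous destinations — and the register indices are hypothetical.

```python
# Hypothetical model of the centralized (gather) and scatter combined
# move instructions; a GPR file is modeled as a plain list.
def gather(gprs, srcs, dst_start):
    """Non-contiguous source GPRs -> contiguous destination range."""
    vals = [gprs[s] for s in srcs]        # read all old source values first
    for i, v in enumerate(vals):
        gprs[dst_start + i] = v

def scatter(gprs, src_start, dsts):
    """Contiguous source range -> non-contiguous destination GPRs."""
    vals = [gprs[src_start + i] for i in range(len(dsts))]
    for d, v in zip(dsts, vals):
        gprs[d] = v

regs = list(range(8))        # hypothetical GPR file: regs[i] == i
gather(regs, [5, 1, 7], 0)   # non-contiguous sources 5, 1, 7 -> regs[0:3]
print(regs[:3])              # [5, 1, 7]
```

Reading every source before writing any destination mirrors the old-source semantics described below for rearrangement instructions, so the model stays correct even when a source GPR is also a destination.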
By using a combined move instruction, as opposed to executing a separate instruction for each move operation, controller 12 may be able to achieve the same end result, at least with respect to data location, in fewer instruction cycles, and thus achieve higher performance. In some examples, one or more of the combined move instructions may be uninterruptible, meaning that controller 12 must complete the execution of the uninterruptible instruction before executing another instruction.

Controller 12 may receive a rearrangement instruction instructing the controller to move a plurality of values from a plurality of arbitrary source GPRs to a plurality of arbitrary destination GPRs. For example, controller 12 may receive a two-value rearrangement instruction in accordance with instruction (1) below, where dst0 indicates a first destination GPR in GPRs 4, dst1 indicates a second destination GPR in GPRs 4, src0 indicates a first source GPR in GPRs 4, and src1 indicates a second source GPR in GPRs 4.

swz dst0, dst1, src0, src1    (1)

The behavior obtained when controller 12 executes an n-value rearrangement instruction may be expressed as follows:

dst0 = convert(src0);
dst1 = convert(src1);
...; and
dst(n-1) = convert(src(n-1)).

In some examples, controller 12 may execute a two-value rearrangement instruction in two instruction cycles. In some examples, the instruction may use the old source value if any source is overwritten by the instruction itself.

FIG. 3A illustrates an example data flow through a processing unit during execution of a swap combined move instruction, the swap being a type of rearrangement instruction in which the contents of a first GPR and a second GPR are exchanged. For example, the data flow illustrated by FIG. 3A may be the data flow within processing unit 2 during execution of the rearrangement instruction swz 4B, 4A, 4A, 4B.
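The n-value rearrangement semantics above — dst(i) = convert(src(i)), with old source values used even when a source is overwritten — can be sketched as follows. This is a minimal illustrative model, not the hardware implementation: GPRs are a plain dictionary, convert() defaults to the identity, and the register names are assumptions.

```python
# Sketch of the swz (rearrangement) semantics: dst(i) = convert(src(i)).
# All sources are latched before any destination is written, so an
# instruction whose destination is also a source still sees the old value.
def swz(gprs, dsts, srcs, convert=lambda v: v):
    latched = [convert(gprs[s]) for s in srcs]  # read old source values first
    for d, v in zip(dsts, latched):
        gprs[d] = v

gprs = {"4A": "VA", "4B": "VB"}
swz(gprs, dsts=["4B", "4A"], srcs=["4A", "4B"])  # the swap of FIG. 3A
# gprs is now {"4A": "VB", "4B": "VA"}
```

The latch-then-write order models the "use the old source" behavior: swz 4B, 4A, 4A, 4B overwrites both of its own sources, yet each destination receives the pre-instruction value.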
In the following example, pipeline 6 may be a five-stage pipeline that includes four pipeline registers (i.e., pipeline registers 8A through 8D) interposed between five logic units (i.e., logic units 10A through 10E).

As illustrated in FIG. 3A, during time period t0, GPR 4A may store value VA and GPR 4B may store value VB. Also during time period t0, controller 12 may cause VA to be copied to initial pipeline register 8A. As one example, controller 12 may cause logic unit 10A to receive VA from GPR 4A and pass VA to pipeline register 8A. As another example, where VA is stored by pipeline register 8B at the end of time period t-1 (e.g., the time period immediately before time period t0), controller 12 may cause logic unit 10A to receive VA from pipeline register 8B through a bypass channel (i.e., bypass channel 9B of FIG. 2) and pass VA to pipeline register 8A.

Upon receiving a signal from clock 14, processing unit 2 may advance to time period t1. During time period t1, logic unit 10B may receive VA from pipeline register 8A and pass VA to pipeline register 8B. In other words, during time period t1, logic unit 10B may copy VA to pipeline register 8B. Also during time period t1, controller 12 may cause VB to be copied to initial pipeline register 8A. For example, controller 12 may cause logic unit 10A to receive VB from GPR 4B and pass VB to pipeline register 8A.

Upon receiving a signal from clock 14, processing unit 2 may advance to time period t2. During time period t2, logic unit 10C may receive VA from pipeline register 8B and pass VA to pipeline register 8C. Also during time period t2, logic unit 10B may receive VB from pipeline register 8A and pass VB to pipeline register 8B. In other words, during time period t2, both VA and VB may advance, such that VA is copied to pipeline register 8C and VB is copied to pipeline register 8B.

Upon receiving a signal from clock 14, processing unit 2 may advance to time period t3.
During time period t3, logic unit 10D may receive VA from pipeline register 8C and pass VA to pipeline register 8D. Also during time period t3, logic unit 10C may receive VB from pipeline register 8B and pass VB to pipeline register 8C. In other words, during time period t3, both VA and VB may advance, such that VA is copied to pipeline register 8D and VB is copied to pipeline register 8C.

Upon receiving a signal from clock 14, processing unit 2 may advance to time period t4. During time period t4, logic unit 10E may receive VA from pipeline register 8D and pass VA to GPR 4B. Also during time period t4, logic unit 10D may receive VB from pipeline register 8C and pass VB to pipeline register 8D. In other words, during time period t4, VA may be copied from final pipeline register 8D to GPR 4B, such that the value stored in GPR 4B represents the same value stored in GPR 4A during time period t0, and VB may advance, such that VB is copied to pipeline register 8D. In some examples, such as where the instruction requests a modification of the data type of VA, the value stored by GPR 4B may be a representation of the value stored in GPR 4A during time period t0, differing only in data type.

Upon receiving a signal from clock 14, processing unit 2 may advance to time period t5. During time period t5, logic unit 10E may receive VB from pipeline register 8D and pass VB to GPR 4A. In other words, during time period t5, VB may be copied from final pipeline register 8D to GPR 4A, such that GPR 4A stores the same value stored in GPR 4B during time period t0.
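The t0-through-t5 data flow above can be modeled cycle by cycle. The sketch below assumes the five-stage pipeline described above (four pipeline registers 8A through 8D), with one loop iteration standing in for one clock period; the function name and list-based register model are illustrative assumptions, not part of the disclosed hardware.

```python
# Cycle-by-cycle model of the swap data flow of FIG. 3A: VA enters the
# pipeline at t0, VB at t1, and both values shift one pipeline register
# per clock period until the final logic unit writes them back to GPRs.
def swap_via_pipeline(gprs, a, b, stages=4):
    pipe = [None] * stages          # pipeline registers 8A..8D (index 0 = 8A)
    issue = [gprs[a], gprs[b]]      # VA issued first, then VB
    writes = [b, a]                 # final stage writes VA -> 4B, then VB -> 4A
    cycles = 0
    while writes:
        leaving = pipe[-1]          # value leaving the final pipeline register
        pipe = [issue.pop(0) if issue else None] + pipe[:-1]  # shift one stage
        if leaving is not None:
            gprs[writes.pop(0)] = leaving
        cycles += 1
    return cycles

gprs = {"4A": "VA", "4B": "VB"}
cycles = swap_via_pipeline(gprs, "4A", "4B")
# cycles == 6 (time periods t0..t5); gprs == {"4A": "VB", "4B": "VA"}
```

Note that the swap completes in six periods (t0 through t5) without ever using a third GPR: the two in-flight values live only in the pipeline registers.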
In this manner, controller 12 may perform the swap/rearrangement instruction without using an additional GPR for temporary storage (e.g., as opposed to copying the first value from the first GPR to a temporary GPR during a first period, copying the second value from the second GPR to the first GPR during a second period, and copying the first value from the temporary GPR to the second GPR during a third period).

Controller 12 may receive a centralized instruction instructing the controller to move a plurality of values from a plurality of arbitrary source GPRs to a plurality of consecutively located destination GPRs. For example, controller 12 may receive a four-value centralized instruction in accordance with instruction (2) below, where dst indicates a first destination GPR in GPRs 4, src0 indicates a first source GPR in GPRs 4, src1 indicates a second source GPR in GPRs 4, src2 indicates a third source GPR in GPRs 4, and src3 indicates a fourth source GPR in GPRs 4.

gat dst, src0, src1, src2, src3    (2)

The behavior obtained when controller 12 executes an n-value centralized instruction may be expressed as follows:

dst = convert(src0);
dst+1 = convert(src1);
dst+2 = convert(src2);
...; and
dst+n-1 = convert(src(n-1)).

In some examples, controller 12 may execute a four-value centralized instruction in four instruction cycles. In some examples, the instruction may use the old source value if any source is overwritten by the instruction itself.

FIG. 3B illustrates an example data flow through a processing unit during execution of a centralized combined move instruction. For example, the data flow illustrated by FIG. 3B may be the data flow within processing unit 2 during execution of the three-value centralized instruction gat 4B, 4E, 4B, 4D.

Controller 12 may receive a scatter instruction instructing the controller to move a plurality of values from a plurality of consecutively located source GPRs to a plurality of arbitrary destination GPRs.
For example, controller 12 may receive a four-value scatter instruction in accordance with instruction (3) below, where dst0 indicates a first destination GPR in GPRs 4, dst1 indicates a second destination GPR in GPRs 4, dst2 indicates a third destination GPR in GPRs 4, dst3 indicates a fourth destination GPR in GPRs 4, and src indicates a first source GPR in GPRs 4.

sct dst0, dst1, dst2, dst3, src    (3)

The behavior obtained when controller 12 executes an n-value scatter instruction may be expressed as follows:

dst0 = convert(src);
dst1 = convert(src+1);
dst2 = convert(src+2);
...; and
dst(n-1) = convert(src+n-1).

In some examples, controller 12 may execute a four-value scatter instruction in four instruction cycles. In some examples, the instruction may use the old source value if any source is overwritten by the instruction itself.

FIG. 3C illustrates an example data flow through a processing unit during execution of a scatter combined move instruction. For example, the data flow illustrated by FIG. 3C may be the data flow within processing unit 2 during execution of the three-value scatter instruction sct 4B, 4E, 4B, 4D.

Although described above as passing each copied value through pipeline 6, in some examples controller 12 may execute a combined move instruction by passing only a subset of the copied values through pipeline 6. For example, when executing a three-value rearrangement instruction such as swz 4C, 4A, 4B, 4A, 4B, 4C, controller 12 may copy the value stored by GPR 4A to the initial pipeline register of pipeline registers 8 during a first period; copy the value stored by GPR 4C to GPR 4A during a second period; copy the value stored by GPR 4B to GPR 4C during a third period; and copy the value from the final pipeline register of pipeline registers 8 to GPR 4B during a fourth period.
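The centralized (gat) and scatter (sct) semantics of instructions (2) and (3) can be sketched side by side. In this illustrative model — all names are assumptions — the GPRs are a list indexed by register number, convert() defaults to the identity, and, as with the rearrangement instruction, all sources are latched before any destination is written.

```python
# Sketch of the centralized and scatter semantics described above.
def gat(gprs, dst, srcs, convert=lambda v: v):
    """gat: arbitrary source GPRs -> consecutively located destinations."""
    latched = [convert(gprs[s]) for s in srcs]   # read old sources first
    for i, v in enumerate(latched):
        gprs[dst + i] = v                        # dst, dst+1, ..., dst+n-1

def sct(gprs, dsts, src, convert=lambda v: v):
    """sct: consecutively located source GPRs -> arbitrary destinations."""
    latched = [convert(gprs[src + i]) for i in range(len(dsts))]
    for d, v in zip(dsts, latched):
        gprs[d] = v

regs = list("abcdef")             # regs[0]..regs[5] stand in for six GPRs
gat(regs, dst=0, srcs=[4, 1, 3])  # regs[0:3] = old regs[4], regs[1], regs[3]
sct(regs, dsts=[5, 2], src=0)     # regs[5], regs[2] = old regs[0], regs[1]
```

Latching before writing is what allows an instruction such as gat 4B, 4E, 4B, 4D — where 4B is both a source and a destination — to use the old value of every source, per the "old source" rule stated above.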
In some examples, such as where no particular GPR is included in both the plurality of destination GPRs and the plurality of source GPRs, controller 12 may execute a combined move instruction without passing any values through pipeline 6.

FIG. 4 is a flow diagram illustrating an example operation of a processing unit that uses pipeline registers to move values between GPRs when executing a combined move instruction, in accordance with one or more techniques of the present disclosure. The technique of FIG. 4 may be performed by a processing unit (e.g., processing unit 2 illustrated in FIG. 1 or processing unit 2A illustrated in FIG. 2). For purposes of illustration, the technique of FIG. 4 is described within the context of processing unit 2 of FIG. 1, but a processing unit having a configuration different from that of processing unit 2 may perform the technique of FIG. 4.

Controller 12 of processing unit 2 may receive one or more instructions in accordance with one or more techniques of this disclosure. For example, controller 12 may receive one or more instructions requesting to move a first value from a first GPR of the plurality of GPRs to a third GPR of the plurality of GPRs and to move a second value from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs (400). In some examples, the instructions may further request conversion of one or both of the first value and/or the second value. For example, where the first value is stored as an integer in the first GPR, the instruction may request to move the first value to the third GPR and convert it to a floating point value.

In response to receiving the one or more instructions, controller 12 may cause the initial logic unit of logic units 10 of pipeline 6 to copy the first value to the initial pipeline register of pipeline registers 8 of pipeline 6 (e.g., pipeline register 8A) during a first cycle (402).
As one example, controller 12 may send a signal to the initial logic unit that causes the initial logic unit to retrieve the first value from the first GPR and store the first value to the initial pipeline register. As another example, where the first value is already stored by a particular pipeline register of pipeline registers 8, controller 12 may send a signal to the initial logic unit that causes the initial logic unit to retrieve the first value from that particular pipeline register and store it to the initial pipeline register. As discussed above, pipeline registers 8 are different from the plurality of GPRs 4. Additionally, as discussed above, processing unit 2 may consume less power when accessing a pipeline register of pipeline registers 8 than when accessing a GPR of GPRs 4. For example, processing unit 2 may consume half the power when accessing a pipeline register as compared to when accessing a GPR. In some examples, one or more of the logic units may perform a data type conversion, such as when the instruction requests conversion of the data type of the first value. For example, the initial logic unit may convert the first value from an integer data type to a floating point data type.

The initial logic unit may copy the second value from the second GPR of GPRs 4 to the initial pipeline register during a second period (404). As one example, controller 12 may send a signal to the initial logic unit that causes the initial logic unit to retrieve the second value from the second GPR and store the second value to the initial pipeline register. As another example, where the second value is already stored by a particular pipeline register of pipeline registers 8, controller 12 may send a signal to the initial logic unit that causes the initial logic unit to retrieve the second value from that particular pipeline register and store it to the initial pipeline register.
As discussed above, during the second cycle, a subsequent logic unit of logic units 10 (e.g., logic unit 10B) may copy the first value to a subsequent pipeline register of pipeline registers 8 (e.g., pipeline register 8B). For example, controller 12 may send a signal to the subsequent logic unit that causes the subsequent logic unit to retrieve the first value from the initial pipeline register and store the first value to the subsequent pipeline register.

If pipeline 6 is a two-stage pipeline, then this pipeline register may be the final pipeline register of pipeline registers 8 (e.g., pipeline register N-1). However, if pipeline 6 has more than two stages, the first and second values may advance through the stages of pipeline 6 until the first value is stored in the final pipeline register. As such, there may be additional periods between the second period and the third period.

The final logic unit of logic units 10 (e.g., logic unit 10N) may copy the first value from the final pipeline register to the third GPR during a third period (406). For example, controller 12 may send a signal to the final logic unit that causes the final logic unit to retrieve the first value from the final pipeline register and store the first value to the third GPR. As discussed above, during the third period, the N-1th logic unit (i.e., the second-to-last logic unit) may copy the second value from the N-1th pipeline register (i.e., the second-to-last pipeline register) to the final pipeline register.

The final logic unit may copy the second value from the final pipeline register to the fourth GPR during a fourth cycle (408). For example, controller 12 may send a signal to the final logic unit that causes the final logic unit to retrieve the second value from the final pipeline register and store the second value to the fourth GPR.
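The four flowchart operations (402) through (408) can be condensed into a sketch. The model below assumes a three-stage pipeline with exactly two pipeline registers (an initial and a final one), so the first value reaches the final pipeline register exactly when operation (406) executes; the function name and the register names r0/r1 are hypothetical.

```python
# Sketch of flowchart operations (402)-(408): two values flow through an
# initial and a final pipeline register, with one line per clock cycle.
def execute_move_instruction(gprs, src0, dst0, src1, dst1):
    initial_reg = gprs[src0]                          # cycle 1 (402): first value -> initial register
    final_reg, initial_reg = initial_reg, gprs[src1]  # cycle 2 (404): second value in, first advances
    gprs[dst0], final_reg = final_reg, initial_reg    # cycle 3 (406): first value -> third GPR
    gprs[dst1] = final_reg                            # cycle 4 (408): second value -> fourth GPR

gprs = {"r0": 10, "r1": 20}
execute_move_instruction(gprs, "r0", "r1", "r1", "r0")  # a swap, in four cycles
# gprs == {"r0": 20, "r1": 10}
```

Because the second value is read from its GPR during cycle 2, after the first value has already been latched, the swap case (where the third GPR is the second GPR and the fourth GPR is the first GPR) works without any temporary GPR.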
As discussed above, controller 12 may cause the logic units to copy multiple values (i.e., perform operations 402 through 408) in response to receiving a single instruction (e.g., a combined move instruction).

FIG. 5 is a block diagram of an example compiler module 16 that outputs combined move instructions in accordance with one or more techniques of the present disclosure. As illustrated in the example of FIG. 5, compiler module 16 may receive code and output instructions. Compiler module 16 may output instructions for execution by a processing unit (e.g., processing unit 2 of FIG. 1 or processing unit 2A of FIG. 2), the processing unit including a plurality of GPRs and a pipeline, the pipeline including a plurality of pipeline registers. In some examples, compiler module 16 may be included in the same device as the processing unit. Additionally, as illustrated in the example of FIG. 5, compiler module 16 may include a combined move module 18.

Combined move module 18 may be configured to determine whether multiple operations indicated by the received code can be combined into a combined move instruction. For example, combined move module 18 may analyze the code to determine whether the multiple operations indicated by the code include moving multiple values between the plurality of GPRs of the processing unit. In any event, in response to determining that the multiple operations indicated by the code can be combined into a combined move instruction, combined move module 18 may generate a single combined move instruction that, when executed by the processing unit, achieves the same result as the multiple operations.
As an example, if the multiple operations indicated by the code include moving a first value from a first GPR to a temporary GPR, moving a second value from a second GPR to the first GPR, and moving the first value from the temporary GPR to the second GPR, then combined move module 18 may generate a single rearrangement instruction that achieves the same result in fewer instruction cycles.

In some examples, compiler module 16 may receive uncompiled code. In such examples, compiler module 16 may compile the code into instructions. In some examples, combined move module 18 may determine during the compilation process that multiple operations indicated by the code can be combined into a combined move instruction. In some examples, combined move module 18 may determine that multiple operations indicated by the code can be combined into a combined move instruction after the compilation process has completed. In some examples, compiler module 16 may receive compiled code (e.g., instructions). In such examples, combined move module 18 may analyze the instructions contained in the code to determine whether any of the instructions can be combined into a combined move instruction.

FIG. 6 is a flow diagram illustrating an example operation of a compiler module that outputs a combined move instruction in accordance with one or more techniques of the present disclosure. The technique of FIG. 6 may be performed by a compiler module (e.g., compiler module 16 illustrated in FIG. 5). For purposes of illustration, the technique of FIG. 6 is described within the context of compiler module 16 of FIG. 5, but a compiler module having a configuration different from that of compiler module 16 may perform the technique of FIG. 6.

Compiler module 16 may receive code (602) in accordance with one or more techniques of this disclosure. As discussed above, the code may be compiled code or uncompiled code.
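A hypothetical peephole pass in the spirit of combined move module 18 might detect the temporary-GPR swap pattern described above and replace it with a single swz instruction. Everything here beyond the swz operand order of instruction (1) is an assumption for illustration: the "mov" mnemonic, the (mnemonic, dst, src) tuple encoding, and the pattern-matching rule.

```python
# Replace "mov tmp,a ; mov a,b ; mov b,tmp" (a swap through a temporary
# GPR) with the single combined move instruction "swz a,b,b,a".
def fuse_swap(instrs):
    out, i = [], 0
    while i < len(instrs):
        w = instrs[i:i + 3]
        if (len(w) == 3 and all(op[0] == "mov" for op in w)
                and w[0][2] == w[1][1]     # mov tmp,a then mov a,b
                and w[1][2] == w[2][1]     # mov a,b  then mov b,tmp
                and w[2][2] == w[0][1]):   # tmp closes the cycle
            a, b = w[0][2], w[1][2]
            out.append(("swz", a, b, b, a))  # one combined move instruction
            i += 3
        else:
            out.append(w[0])
            i += 1
    return out

code = [("mov", "tmp", "4A"), ("mov", "4A", "4B"), ("mov", "4B", "tmp")]
fused = fuse_swap(code)  # [("swz", "4A", "4B", "4B", "4A")]
```

As in the prose above, the payoff is fewer instruction cycles: three moves (one of them burning a temporary GPR) collapse into one rearrangement instruction, and the temporary register pressure disappears.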
Combined move module 18 of compiler module 16 may determine whether multiple operations indicated by the code can be combined into a combined move instruction (604). In response to determining that the multiple operations indicated by the code can be combined into a combined move instruction, combined move module 18 may generate a combined move instruction (606). As discussed above, some example combined move instructions include rearrangement instructions, centralized instructions, and scatter instructions. As also discussed above, where the code includes a plurality of instructions, combined move module 18 may replace the instructions corresponding to the multiple operations that can be combined with the generated combined move instruction. In this manner, combined move module 18 may reduce the number of instructions needed to perform the operations. Also in this manner, combined move module 18 may reduce the number of instruction cycles required for the processing unit to perform the operations.

FIG. 7 is a block diagram illustrating an example device 100 including the integrated circuit of FIG. 1, in accordance with one or more techniques of the present disclosure. Examples of device 100 include, but are not limited to, wireless devices, mobile phones, personal digital assistants (PDAs), video game consoles including video displays, mobile video conferencing units, laptop computers, desktop computers, television set-top boxes, tablet computing devices, e-book readers, and the like. Device 100 includes a GPU 17, a system memory 19, and a processor 20. In the example illustrated in FIG. 7, GPU 17 and processor 20 are illustrated with dashed lines to indicate that GPU 17 and processor 20 may be formed in the same integrated circuit. In some examples, GPU 17 and processor 20 may be formed in different integrated circuits (i.e., in different chips).
In some examples, one or both of GPU 17 and processor 20 may be an example of integrated circuit 1 of FIG. 1. For example, one or both of GPU 17 and processor 20 may be configured to move multiple values from multiple source general purpose registers (GPRs) of a plurality of GPRs to multiple destination GPRs of the plurality of GPRs, using multiple pipeline registers as temporary storage.

System memory 19 may be considered the memory of device 100. System memory 19 may include one or more computer-readable storage media. Examples of system memory 19 include, but are not limited to, random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), flash memory, or any other medium that can be used to carry or store desired program code in the form of instructions and/or data structures and that can be accessed by a computer or processor.

In some aspects, system memory 19 may include instructions that cause processor 20 and/or GPU 17 to perform the functions ascribed to processor 20 and GPU 17 in this disclosure. Accordingly, system memory 19 may be a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors (e.g., processor 20 and GPU 17) to perform various functions. System memory 19 may store instructions that cause GPU 17 and/or processor 20 to implement the example techniques described in this disclosure.

System memory 19 may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that system memory 19 is non-movable or that its contents are static. As one example, system memory 19 may be removed from device 100 and moved to another device. As another example, a memory substantially similar to system memory 19 may be inserted into device 100.
In some examples, the non-transitory storage medium may store data that can change over time (e.g., when stored in RAM).

Examples of processor 20 and GPU 17 include, but are not limited to, a digital signal processor (DSP), a general purpose microprocessor, an application specific integrated circuit (ASIC), a field programmable logic array (FPGA), or other equivalent integrated or discrete logic circuitry. In some examples, GPU 17 may be specialized hardware that includes integrated and/or discrete logic circuitry that provides GPU 17 with massively parallel processing capabilities suitable for graphics processing. In some examples, GPU 17 may also include general purpose processing capabilities, and may be referred to as a general purpose GPU (GPGPU) when implementing general purpose processing tasks (i.e., non-graphics related tasks).

Processor 20 may execute various types of applications. Examples of applications include web browsers, email applications, spreadsheets, video games, or other applications that generate viewable objects for display. Instructions for executing the one or more applications may be stored in system memory 19. Processor 20 may transmit the graphics data of the viewable objects to GPU 17 for further processing.

For example, processor 20 may offload processing tasks to GPU 17, such as tasks that require massively parallel operations. As one example, graphics processing requires massively parallel operations, and processor 20 may offload such graphics processing tasks to GPU 17. Processor 20 may communicate with GPU 17 in accordance with a particular application processing interface (API). Examples of such APIs include the DirectX® API by Microsoft®, the OpenGL® API by the Khronos Group, and the OpenCL™ API; however, aspects of the present invention are not limited to the DirectX, OpenGL, or OpenCL APIs, and may be extended to other types of APIs.
Moreover, the techniques described in this disclosure are not required to function according to an API, and processor 20 and GPU 17 may utilize any technique for communication.

To perform graphics operations, GPU 17 may implement a graphics processing pipeline. The graphics processing pipeline includes performing functions as defined by software or firmware executing on GPU 17 and performing functions through fixed function units that are hardwired to perform very specific functions. The software or firmware executing on GPU 17 may be referred to as shader programs (or simply shaders), and the shader programs may execute on one or more shader cores (also referred to as shader processors) of GPU 17. Shader programs provide users with functional flexibility, because a user can design a shader program to perform desired tasks in any conceivable manner. However, the fixed function units are hardwired in the manner in which they perform tasks. Accordingly, fixed function units may not provide much functional flexibility.

For example, processor 20 may execute an application, such as a video game, and processor 20 may generate graphics data as part of the execution. Processor 20 may output the graphics data for processing by GPU 17. GPU 17 may then process the graphics data in the graphics pipeline. In some examples, to process the graphics data, GPU 17 may need to execute one or more shader programs. For example, an application executing on processor 20 may cause processor 20 to instruct GPU 17 to retrieve a shader program from system memory 19 and to execute the shader program.

There are various types of shader programs, such as vertex shaders, hull shaders, domain shaders, geometry shaders, and fragment shaders. Each of these example shader programs may form a portion of the graphics pipeline.
For example, a fixed function unit of GPU 17 may output data to a shader core executing one or more of the example shader programs, and the one or more example shader programs may process the data and output the resulting data to another fixed function unit of GPU 17. A shader program may also receive data from another shader program or output data to another shader program. In this way, the shader programs are implemented as part of the graphics pipeline.

Device 100 may also include a display 60, a user interface 62, and a transceiver module 64. Device 100 may include additional modules or units not shown in FIG. 7 for purposes of clarity. For example, device 100 may include a speaker and a microphone (neither of which are shown in FIG. 7) to enable telephonic communication in examples where device 100 is a mobile wireless telephone. Moreover, the various modules and units shown in device 100 may not be necessary in every example of device 100. For example, user interface 62 and display 60 may be external to device 100 in examples where device 100 is a desktop computer.
As another example, user interface 62 may be part of display 60 in examples where display 60 is a touch-sensitive or pressure-sensitive display of a mobile device.

Examples of user interface 62 include, but are not limited to, trackballs, mice, keyboards, and other types of input devices. User interface 62 may also be a touch screen and may be incorporated as part of display 60. Transceiver module 64 may include circuitry to allow wireless or wired communication between device 100 and another device or a network. Transceiver module 64 may include modulators, demodulators, amplifiers, and other such circuitry for wired or wireless communication. Display 60 may include a liquid crystal display (LCD), a cathode ray tube (CRT) display, a plasma display, a touch-sensitive display, a pressure-sensitive display, or another type of display device.

Example 1. A method comprising: receiving, by a processing unit, one or more instructions that request to move a first value from a first GPR of a plurality of general purpose registers (GPRs) to a third GPR of the plurality of GPRs and to move a second value from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs; and, responsive to receiving the one or more instructions: copying, by an initial logic unit in the processing unit and during a first clock cycle, the first value to an initial pipeline register of a plurality of pipeline registers of a pipeline, wherein the plurality of pipeline registers are different from the plurality of GPRs; copying, by the initial logic unit and during a second clock cycle subsequent to the first clock cycle, the second value to the initial pipeline register; copying, by a final logic unit in the processing unit and during a third clock cycle subsequent to the second clock cycle, the first value from a final pipeline register of the plurality of pipeline registers to the third GPR, wherein the first value copied to the third GPR represents the same first value copied from the first GPR; and copying, by the final logic unit in the processing unit and during a fourth clock cycle subsequent to the second clock cycle, the second value from the final pipeline register to the fourth GPR, wherein the second value copied to the fourth GPR represents the same second value copied from the second GPR.

Example 2. The method of example 1, wherein the one or more instructions identify the first GPR, the second GPR, the third GPR, and the fourth GPR, and wherein the one or more instructions do not individually identify any of the pipeline registers.

Example 3. The method of any combination of examples 1 to 2, wherein the plurality of pipeline registers are not individually accessible by the instructions, and wherein the plurality of GPRs are individually accessible by the instructions.

Example 4. The method of any combination of examples 1 to 3, wherein the processing unit consumes less power when accessing a pipeline register of the plurality of pipeline registers than when accessing a GPR of the plurality of GPRs.

Example 5. The method of any one of examples 1 to 4, wherein the one or more instructions comprise a single uninterruptible instruction requesting to move the first value from the first GPR to the third GPR and the second value from the second GPR to the fourth GPR.

Example 6.
The method of any one of examples 1 to 5, wherein the instruction is selected from the group consisting of: a swap instruction, wherein the third GPR is the second GPR, and wherein the fourth GPR is the first GPR; a rearrangement instruction, wherein the plurality of GPRs are arbitrarily located; a centralized instruction, wherein the first GPR and the second GPR are not consecutively located, and wherein the third GPR and the fourth GPR are consecutively located; and a scatter instruction, wherein the first GPR and the second GPR are consecutively located, and wherein the third GPR and the fourth GPR are not consecutively located.

Example 7. The method of any combination of examples 1 to 6, wherein the pipeline is a multi-cycle computation pipeline comprising one or more arithmetic logic units (ALUs).

Example 8. The method of any combination of examples 1 to 7, further comprising: after the first clock cycle and before the third clock cycle, copying, by an intermediate logic unit, the first value from the initial pipeline register to an intermediate pipeline register of the plurality of pipeline registers; and, after the second clock cycle and before the fourth clock cycle, copying, by the intermediate logic unit, the second value from the initial pipeline register to the intermediate pipeline register.

Example 9. The method of any combination of examples 1 to 8, wherein the processing unit comprises a central processing unit (CPU) or a graphics processing unit (GPU).

Example 10.
The method of any combination of examples 1 to 9, wherein copying the first value to the initial pipeline register comprises any one of: copying the first value from the first GPR to the initial pipeline register; or copying the first value from a pipeline register of the plurality of pipeline registers to the initial pipeline register; and wherein copying the second value to the initial pipeline register comprises any one of: copying the second value from the second GPR to the initial pipeline register; or copying the second value from a pipeline register of the plurality of pipeline registers to the initial pipeline register.
Example 11. The method of any combination of examples 1 to 10, wherein the third GPR is the second GPR and/or the fourth GPR is the first GPR.
Example 12. A processing unit comprising: a plurality of general purpose registers (GPRs); a pipeline comprising a plurality of pipeline registers, wherein the plurality of pipeline registers are different from the plurality of GPRs; a plurality of logic units; and a controller configured to receive one or more instructions requesting that a first value be moved from a first GPR of the plurality of GPRs to a third GPR of the plurality of GPRs and that a second value be moved from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs, wherein, in response to receiving the one or more instructions, the controller is configured to: cause an initial logic unit of the plurality of logic units to copy the first value to an initial pipeline register of the plurality of pipeline registers during a first clock cycle; cause the initial logic unit to copy the second value to the initial pipeline register during a second clock cycle subsequent to the first clock cycle; cause a final logic unit of the plurality of logic units to copy the first value from a final pipeline register of the plurality of pipeline registers to the third GPR during a third clock cycle subsequent to the second clock cycle, wherein the first value copied to the third GPR represents the same first value copied from the first GPR; and cause the final logic unit to copy the second value from the final pipeline register to the fourth GPR during a fourth clock cycle subsequent to the second clock cycle, wherein the second value copied to the fourth GPR represents the same second value copied from the second GPR.
Example 13. The processing unit of example 12, wherein the one or more instructions identify the first GPR, the second GPR, the third GPR, and the fourth GPR, and wherein the one or more instructions do not individually identify any of the pipeline registers.
Example 14. The processing unit of any combination of examples 12 to 13, wherein the plurality of pipeline registers are not individually accessible by instructions, and wherein the plurality of GPRs are individually accessible by instructions.
Example 15. The processing unit of any combination of examples 12 to 14, wherein the processing unit consumes less power when accessing a pipeline register of the plurality of pipeline registers than when accessing a GPR of the plurality of GPRs.
Example 16. The processing unit of any combination of examples 12 to 15, wherein the one or more instructions comprise a single uninterrupted instruction requesting movement of the first value from the first GPR to the third GPR and of the second value from the second GPR to the fourth GPR.
Example 17.
The processing unit of any combination of examples 12 to 16, wherein the one or more instructions are selected from the group consisting of: a swap instruction, wherein the third GPR is the second GPR, and wherein the fourth GPR is the first GPR; a reordering instruction, wherein the plurality of GPRs are arbitrarily located; a gather instruction, wherein the first GPR and the second GPR are not contiguously located, and wherein the third GPR and the fourth GPR are contiguously located; and a scatter instruction, wherein the first GPR and the second GPR are contiguously located, and wherein the third GPR and the fourth GPR are not contiguously located.
Example 18. The processing unit of any combination of examples 12 to 17, wherein the pipeline is a multi-cycle computation pipeline comprising one or more arithmetic logic units (ALUs).
Example 19. The processing unit of any combination of examples 12 to 18, wherein, in response to receiving the one or more instructions, the controller is further configured to: cause an intermediate logic unit of the plurality of logic units to copy the first value from the initial pipeline register to an intermediate pipeline register of the plurality of pipeline registers after the first clock cycle and before the third clock cycle; and cause the intermediate logic unit to copy the second value from the initial pipeline register to the intermediate pipeline register after the second clock cycle and before the fourth clock cycle.
Example 20. The processing unit of any combination of examples 12 to 19, wherein the processing unit comprises a central processing unit (CPU) or a graphics processing unit (GPU).
Example 21.
The processing unit of any combination of examples 12 to 20, wherein the initial logic unit is configured to copy the first value to the initial pipeline register by any one of: copying the first value from the first GPR to the initial pipeline register; or copying the first value from a pipeline register of the plurality of pipeline registers to the initial pipeline register; and wherein the initial logic unit is configured to copy the second value to the initial pipeline register by any one of: copying the second value from the second GPR to the initial pipeline register; or copying the second value from a pipeline register of the plurality of pipeline registers to the initial pipeline register.
Example 22. The processing unit of any combination of examples 12 to 21, wherein the third GPR is the second GPR and/or the fourth GPR is the first GPR.
Example 23. A non-transitory computer readable storage medium storing one or more instructions for a processing unit that request that a first value be moved from a first GPR of a plurality of GPRs to a third GPR of the plurality of GPRs and that a second value be moved from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs, the one or more instructions, when executed, causing the processing unit to: cause an initial logic unit of a plurality of logic units to copy the first value to an initial pipeline register of a plurality of pipeline registers of a pipeline during a first clock cycle; cause the initial logic unit to copy the second value to the initial pipeline register during a second clock cycle subsequent to the first clock cycle; cause a final logic unit of the plurality of logic units to copy the first value from a final pipeline register of the plurality of pipeline registers to the third GPR during a third clock cycle subsequent to the second clock cycle, wherein the first value copied to the third GPR represents the same first value copied from the first GPR; and cause the final logic unit to copy the second value from the final pipeline register to the fourth GPR during a fourth clock cycle subsequent to the second clock cycle, wherein the second value copied to the fourth GPR represents the same second value copied from the second GPR.
Example 24. The non-transitory computer readable storage medium of example 23, wherein the one or more instructions identify the first GPR, the second GPR, the third GPR, and the fourth GPR, and wherein the one or more instructions do not individually identify any of the pipeline registers.
Example 25. The non-transitory computer readable storage medium of any combination of examples 23 to 24, wherein the plurality of pipeline registers are not individually accessible by instructions, and wherein the plurality of GPRs are individually accessible by instructions.
Example 26. The non-transitory computer readable storage medium of any combination of examples 23 to 25, wherein the processing unit consumes less power when accessing a pipeline register of the plurality of pipeline registers than when accessing a GPR of the plurality of GPRs.
Example 27. The non-transitory computer readable storage medium of any combination of examples 23 to 26, wherein the processing unit copies the values in response to receiving a single uninterrupted instruction.
Example 28.
The non-transitory computer readable storage medium of any combination of examples 23 to 27, wherein the one or more instructions are selected from the group consisting of: a swap instruction, wherein the third GPR is the second GPR, and wherein the fourth GPR is the first GPR; a reordering instruction, wherein the plurality of GPRs are arbitrarily located; a gather instruction, wherein the first GPR and the second GPR are not contiguously located, and wherein the third GPR and the fourth GPR are contiguously located; and a scatter instruction, wherein the first GPR and the second GPR are contiguously located, and wherein the third GPR and the fourth GPR are not contiguously located.
Example 29. The non-transitory computer readable storage medium of any combination of examples 23 to 28, wherein the pipeline is a multi-cycle computation pipeline comprising one or more arithmetic logic units (ALUs).
Example 30. The non-transitory computer readable storage medium of any combination of examples 23 to 29, wherein, when executed, the one or more instructions further cause the processing unit to: cause an intermediate logic unit of the plurality of logic units to copy, after the first clock cycle and before the third clock cycle, the first value from the initial pipeline register to an intermediate pipeline register of the plurality of pipeline registers; and cause the intermediate logic unit to copy, after the second clock cycle and before the fourth clock cycle, the second value from the initial pipeline register to the intermediate pipeline register.
Example 31. The non-transitory computer readable storage medium of any combination of examples 23 to 30, wherein the processing unit comprises a central processing unit (CPU) or a graphics processing unit (GPU).
Example 32.
The non-transitory computer readable storage medium of any combination of examples 23 to 31, wherein the initial logic unit is configured to copy the first value to the initial pipeline register by any one of: copying the first value from the first GPR to the initial pipeline register; or copying the first value from a pipeline register of the plurality of pipeline registers to the initial pipeline register; and wherein the initial logic unit is configured to copy the second value to the initial pipeline register by any one of: copying the second value from the second GPR to the initial pipeline register; or copying the second value from a pipeline register of the plurality of pipeline registers to the initial pipeline register.
Example 33. The non-transitory computer readable storage medium of any combination of examples 23 to 32, wherein the third GPR is the second GPR and/or the fourth GPR is the first GPR.
Example 34. A method comprising: receiving code by a compiler module; and in response to determining, by the compiler module, that a plurality of operations indicated by the code can be combined into a combined move instruction, generating the combined move instruction, wherein the combined move instruction, when executed by a processing unit, causes the processing unit to move a plurality of values from a plurality of source GPRs of a plurality of general purpose registers (GPRs) to a plurality of destination GPRs of the plurality of GPRs utilizing a plurality of pipeline registers as temporary storage, and wherein the plurality of pipeline registers are different from the plurality of GPRs.
Example 35.
The method of example 34, wherein determining that the plurality of operations indicated by the code can be combined into the combined move instruction comprises: determining that the plurality of operations can be combined into the combined move instruction in response to determining that the plurality of operations indicated by the code comprise moving a plurality of values between the plurality of GPRs.
Example 36. The method of any combination of examples 34 to 35, wherein the code comprises a plurality of instructions, and wherein the method further comprises: replacing, with the generated combined move instruction, the instructions corresponding to the plurality of operations that can be combined into the combined move instruction.
Example 37. A device comprising means for performing any combination of the methods described in examples 1 to 11 and/or examples 34 to 36.
Example 38. A system comprising means for performing any combination of the methods described in examples 1 to 11 and/or examples 34 to 36.
Example 39.
An apparatus comprising: means for receiving, by a processing unit, one or more instructions requesting that a first value be moved from a first GPR of a plurality of general purpose registers (GPRs) to a third GPR of the plurality of GPRs and that a second value be moved from a second GPR of the plurality of GPRs to a fourth GPR of the plurality of GPRs; means for copying, in response to receiving the one or more instructions, by an initial logic unit in the processing unit and during a first clock cycle, the first value to an initial pipeline register of a plurality of pipeline registers of a pipeline, wherein the plurality of pipeline registers are different from the plurality of GPRs; means for copying, by the initial logic unit in the processing unit and during a second clock cycle subsequent to the first clock cycle, the second value to the initial pipeline register; means for copying, by a final logic unit in the processing unit and during a third clock cycle subsequent to the second clock cycle, the first value from a final pipeline register of the plurality of pipeline registers to the third GPR, wherein the first value copied to the third GPR represents the same first value copied from the first GPR; and means for copying, by the final logic unit in the processing unit and during a fourth clock cycle subsequent to the second clock cycle, the second value from the final pipeline register to the fourth GPR, wherein the second value copied to the fourth GPR represents the same second value copied from the second GPR.
Example 40. The device of example 39, further comprising means for performing any combination of the methods of examples 1 to 11.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit.
Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) a tangible computer-readable storage medium, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.
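The cycle-by-cycle behavior recited in the examples above — a value entering an initial pipeline register on one clock cycle, advancing through intermediate stages, and retiring from a final pipeline register into its destination GPR — can be sketched as a small simulation. This is an illustrative model only, not the claimed hardware; the function name, the three-stage pipeline depth, and the register names are all hypothetical.

```python
# Illustrative cycle-by-cycle model of a combined move (e.g., a swap)
# that stages GPR values through pipeline registers instead of using
# a scratch GPR. Names and pipeline depth are hypothetical.

def combined_move(gprs, moves, pipeline_depth=3):
    """Move each (src, dst) pair in `moves` through the pipeline.

    One value enters the initial pipeline register per clock cycle;
    on each subsequent cycle it advances one stage, and on leaving
    the final stage it is written to its destination GPR.
    Returns the number of clock cycles consumed.
    """
    in_flight = []            # entries are (value, dst, stage_index)
    pending = list(moves)
    cycle = 0
    while pending or in_flight:
        cycle += 1
        # Advance values already in the pipeline by one stage; values
        # leaving the final stage retire to their destination GPRs.
        advanced = []
        for value, dst, stage in in_flight:
            if stage + 1 == pipeline_depth:
                gprs[dst] = value          # final register -> destination GPR
            else:
                advanced.append((value, dst, stage + 1))
        in_flight = advanced
        # Read the next source GPR into the initial pipeline register.
        if pending:
            src, dst = pending.pop(0)
            in_flight.append((gprs[src], dst, 0))
    return cycle

gprs = {"r0": 11, "r1": 22}
# A swap: r0's old value ends in r1 and r1's old value ends in r0,
# with no scratch GPR -- the pipeline itself is the temporary storage.
cycles = combined_move(gprs, [("r0", "r1"), ("r1", "r0")])
print(gprs, cycles)
```

Note that the source read for the second move happens on the second cycle, well before the first move's write lands on a later cycle, which is why the swap needs no scratch GPR.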
The invention, in a first aspect, is a multi-piece rotor for use in a chemical spray cleaner. The rotor comprises a top plate, a bottom plate, a plurality of struts, and a pair of retaining bars. The top plate defines an aperture keyed to a wafer cassette. The bottom plate includes an interior surface on which the wafer cassette may bottom and an exterior surface defining an aperture for receiving a rotating shaft. The struts separate the top and bottom plates, each strut fastened at a top end thereof to the top plate and at a bottom end to the bottom plate. The retaining bars fasten to adjacent ones of the struts on the radially proximal surface thereof. In a second aspect, the invention is a rotor constructed from polyphenylene sulfide for use in a chemical spray cleaner.
What is claimed: 1. A multi-piece rotor for use in a chemical spray cleaner, the rotor comprising:a top plate defining an aperture keyed to a wafer cassette; a bottom plate including an interior surface on which the wafer cassette may bottom and an exterior surface defining an aperture for receiving a rotating shaft; a plurality of struts separating the top and bottom plates, each strut fastened at a top end thereof to the top plate and at a bottom end to the bottom plate; and a pair of retaining bars fastened to adjacent ones of the struts on the radially proximal surface thereof. 2. The multi-piece rotor of claim 1, wherein the bottom end of each strut tapers on the radially distal surface thereof.3. The multi-piece rotor of claim 1, wherein each of the top plate, bottom plate, struts, and retaining bars is constructed from polyphenylene sulfide.4. The multi-piece rotor of claim 1, wherein the plurality of struts are spaced equidistantly circumferentially about the top plate and the bottom plate.5. The multi-piece rotor of claim 4, wherein the plurality of struts are arranged in diametrically opposed pairs.6. The multi-piece rotor of claim 1, wherein the plurality of struts are arranged in diametrically opposed pairs.7. The multi-piece rotor of claim 1, wherein the plurality of struts includes eight struts.8. The multi-piece rotor of claim 1, wherein each of the struts is fastened to the top plate by a pin pressed into an opening in the top plate and a blind bore in the radially distal surface of the top end of the strut, the opening and the blind bore being co-aligned upon press fitting the terminus of the top end into a recess defined by the interior surface of the top plate.9.
The multi-piece rotor of claim 8, wherein each of the struts is fastened to the bottom plate by a pin pressed into an opening in the bottom plate and a blind bore in the radially distal surface of the bottom end of the strut, the opening and the blind bore being co-aligned upon press fitting the terminus of the bottom end into a recess defined by the interior surface of the bottom plate.10. The multi-piece rotor of claim 1, wherein each of the struts is fastened to the bottom plate by a pin pressed into an opening in the bottom plate and a blind bore in the radially distal surface of the bottom end of the strut, the opening and the blind bore being co-aligned upon press fitting the terminus of the bottom end into a recess defined by the interior surface of the bottom plate.11. A multi-piece rotor constructed from polyphenylene sulfide for use in a chemical spray cleaner, the rotor including a plurality of struts, each of the struts being press fit into a top and a bottom plate at either end, the press fit creating a mechanical stop against centrifugal force.12. The multi-piece rotor of claim 11, wherein the multi-piece rotor is assembled with polyphenylene sulfide fasteners.13. The multi-piece rotor of claim 11, wherein:the top plate defines an aperture keyed to a wafer cassette; the bottom plate includes a first surface on which the wafer cassette may bottom and a second surface defining an aperture for receiving a rotating shaft; and the plurality of struts separates the top and bottom plates, each strut fastened at a top end thereof to the top plate and at a bottom end to the bottom plate, and spaced equidistantly circumferentially about the top plate and the bottom plate, the bottom end of each strut tapering on the radially distal surface thereof; and wherein the rotor further comprises: a pair of retaining bars fastened to adjacent ones of the struts on the radially proximal surface thereof. 14.
A rotor constructed from polyphenylene sulfide for use in a chemical spray cleaner, the rotor comprising a plurality of struts, each of the struts being press fit into a top and a bottom plate at either end, the press fit creating a mechanical stop against centrifugal force.15. The rotor of claim 14, wherein the rotor is assembled with polyphenylene sulfide fasteners.16. The rotor of claim 15, wherein:the top plate defines an aperture keyed to a wafer cassette; the bottom plate includes a first surface on which the wafer cassette may bottom and a second surface defining an aperture for receiving a rotating shaft; and the plurality of struts are spaced equidistantly circumferentially about the top plate and the bottom plate, the bottom end of each strut tapering on the radially distal surface thereof; and further comprising: a pair of retaining bars fastened to adjacent ones of the struts on the radially proximal surface thereof.
This application claims, under 35 U.S.C. § 119(e), the earlier effective filing date of Dec. 13, 1999, from the co-pending Provisional Application Ser. No. 60/170,300, entitled "Durable, Multi-Piece Rotor For Spray Acid Tool."
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention pertains to spray-acid cleaning during semiconductor fabrication and, more particularly, to a durable, multi-piece rotor for spray acid tools.
2. Description of the Related Art
Semiconductor devices, or microchips, are manufactured from wafers of a substrate material. Layers of materials are added, removed, and/or treated during fabrication to create the integrated, electrical circuits that make up the device. The fabrication essentially comprises four basic operations. The four operations are:
layering, or adding thin layers of various materials to a wafer from which a semiconductor is produced;
patterning, or removing selected portions of added layers;
doping, or placing specific amounts of dopants in the wafer surface through openings in the added layers; and
heat treatment, or heating and cooling the materials to produce desired effects in the processed wafer.
Although there are only four basic operations, they can be combined in hundreds of different ways, depending upon the particular fabrication process. See, e.g., Peter Van Zant, Microchip Fabrication: A Practical Guide to Semiconductor Processing (3d Ed. 1997 McGraw-Hill Companies, Inc.) (ISBN 0-07-067250-4).
Wafers under fabrication frequently require a Chemical Mechanical Polishing ("CMP") process to reduce topographical variance and achieve a planarity sufficient to meet the challenging lithography requirements of microprocessor manufacturing. CMP, by microelectronic fabrication standards, is an inherently dirty process. This production step lowers wafers, device side down, onto a rotating table saturated with slurry.
Water and slurry between the pad on the table and the wafer surface provide the chemical catalyst in CMP. Downward pressure applied to the back of the wafer and the aggregate suspended in the slurry solution between the wafer and the rotating table provide the mechanical catalyst in CMP.
Successfully removing slurry residuals following CMP processing aids in the successful manufacturing of operational integrated circuits. This post-polish, wafer surface conditioning consists of a chemical clean and/or a mechanical clean to reduce contamination to at least a minimally acceptable level, a direct correlation to a good die yield. Chemical cleans may employ either immersive or spray techniques. In an immersion chemical clean, the wafers are lowered into a pool of acid that cleans the slurry residuals from the wafers. In a spray chemical clean, an acid is sprayed across the wafers to clean the slurry residuals. Both of these types of cleaning operations are well known in the art. For instance, a wide variety of chemical spray cleaners are available from:
Semitool, Inc.
655 West Reserve Drive
Kalispell, Mont. 59901
Tel: (406) 752-2107
Fax: (406) 752-5522
Website: www.semitool.com
Exemplary chemical spray cleaners available from Semitool, Inc. include those sold under the names Magnum® and Spray Acid Tool.
More particularly, in spray cleaning, a cassette (or "boat") holds the wafers and is securely placed on a rotor housed in a roughly circular chamber. Once the door to the chamber is closed, the rotor is spun at rates as high as 1,000 rpm. The spinning wafers are rinsed with hot de-ionized ("DI") water and dried with high purity nitrogen (N2). A heated acid is then sprayed across the wafers as the rotor spins. The particular acid used depends on the chemical composition of the slurry. Depending on the particular process, additional rinsing/cleaning operations may be performed on the wafers as they are spun in the cassette on the rotor.
Finally, the wafers are again rinsed with DI, the rotor is stopped, and the wafers are removed from the chamber.
Conventional rotor design for chemical spray cleaners suffers from several problems. One of these problems is structural and another is operational. More particularly, one problem arises from the materials from which the rotors are constructed and the radial clearance between the rotor and the spray nozzles. A second problem arises from the need for routine maintenance and upkeep.
First, the rotors are typically machined from a solid block of polytetrafluoroethylene ("PTFE") such as is sold under the mark Teflon®. At the same time, a minimal radial clearance between the wafers and the spray nozzles is also desirable to obtain a more uniform spray pattern for fluids across the wafers. However, PTFE is not very dimensionally or thermally stable. During the cleaning operation, the temperatures involved frequently result in an inadequate radial clearance between the rotor and the spray nozzles, causing the rotor to scrape the nozzles. Even a single scrape can cause the rotor to slough PTFE particles that can contaminate the wafers, necessitating additional clean-up or even that the wafers be discarded. The scraping also aggravates variations in rotor rotation caused by load imbalances and other factors, which causes still additional scraping. Ultimately, the rotor self-destructs from the scraping and rotational variations, and this occurs more quickly as the rotor speeds increase.
Several approaches may be employed to resolve this problem. The rinse nozzles can be exchanged for shorter versions to increase the radial clearance between the rotor and the nozzles. Unfortunately, this increases the radial distance between the wafers and the nozzles and lessens the uniformity in the spray pattern on the wafers. The PTFE from which the rotor is machined can be cured prior to final machining to impart greater rigidity and more resistance to thermal expansion.
However, this approach drives up the costs of the rotors. Additional steps may be taken to improve the balance of the rotors, but these steps increase the likelihood of additional problems and drive up the cost of the machine. Thus, while these approaches can reduce rotor scraping and consequent PTFE particle contamination, they adversely impact the cost and performance of the operation in other ways.
Second, the single-piece body for the rotors inhibits cleaning and repair. Because the rotors are typically constructed from a single block of PTFE, they cannot be repaired or cleaned simply by replacing clogged, scraped, or worn portions thereof. They must instead be completely replaced. Even standard PTFE rotors used in conventional operations can cost several thousand dollars to replace. If cured to help preserve radial clearance as discussed above, these costs can go even higher. Attempts to construct rotors from multiple pieces resulted in rotors that lost their structural integrity and came apart at the high rotor spin speeds used in conventional chemical spray cleaning operations. Some of these multi-piece rotors were constructed from steel, but design constraints imposed by the size of the chamber resulted in a rotor that still could not remain operational.
The present invention is directed to resolving one or all of the problems mentioned above.
SUMMARY OF THE INVENTION
The invention, in a first aspect, is a multi-piece rotor for use in a chemical spray cleaner. The rotor comprises a top plate, a bottom plate, a plurality of struts, and a pair of retaining bars. The top plate defines an aperture keyed to a wafer cassette. The bottom plate includes an interior surface on which the wafer cassette may bottom and an exterior surface defining an aperture for receiving a rotating shaft. The struts separate the top and bottom plates, each strut fastened at a top end thereof to the top plate and at a bottom end to the bottom plate.
The retaining bars fasten to adjacent ones of the struts on the radially proximal surface thereof. In a second aspect, the invention is a rotor constructed from polyphenylene sulfide for use in a chemical spray cleaner.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

FIG. 1 is an assembled, isometric view of one embodiment of a durable, multi-piece rotor;

FIG. 2 is an exploded isometric view of the rotor of FIG. 1;

FIG. 3 is a fragmented, assembled view of a joint between a strut and a plate in the rotor of FIGS. 1-2;

FIG. 4 is an exploded view of the joint in FIG. 3;

FIG. 5 is an assembled, side view of the rotor in FIGS. 1-2 including a rotating shaft; and

FIGS. 6-7 are top and bottom views, respectively, of the rotor in FIGS. 1-2.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another.
Moreover, it will be appreciated that such a development effort, even if complex and time-consuming, would be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

FIGS. 1-2 illustrate, in isometric assembled and exploded views, respectively, a durable multi-piece rotor 10. The rotor 10 generally comprises a top plate 12, a bottom plate 14, a plurality of struts 16a-h, and a pair of retaining bars 18. The top plate 12 defines an aperture 20 keyed to accommodate a standard wafer cassette. The bottom plate 14 includes an interior surface 22 on which the wafer cassette may bottom and an exterior surface 24 defining an aperture 26 for receiving a rotating shaft 30 (shown in FIG. 5). The struts 16a-h separate the top and bottom plates 12, 14. Each strut 16a-h is fastened at a top end 32a-h, respectively, thereof to the top plate 12 and at a bottom end 34a-h, respectively, to the bottom plate 14. The retaining bars 18 are fastened to adjacent ones 16a-b of the struts 16a-h on the radially proximal surface 36 thereof, shown best in FIG. 6.

Returning to FIGS. 1-2, the top plate 12 more particularly includes an exterior surface 38, an interior surface 40, and a circumferential surface 42. The top plate 12 is, in this embodiment, circularly shaped, although the invention is not so limited. Other shapes, such as elliptical or square, might be used in alternative embodiments. However, a circular shape maximizes certain functional characteristics, such as facilitating minimal radial clearance between the rotor 10 and the spray nozzles (not shown) of the tool. The interior surface 40 defines several blind bores 44 (shown best in FIG. 6) and the circumferential surface 42 defines several openings 46 intersecting the blind bores 44.
The blind bores 44 and the openings 46 are used in assembling the rotor 10 as discussed more fully below.

The bottom plate 14, like the top plate 12, also includes a circumferential surface 48 and is generally circularly shaped. The top plate 12 and the bottom plate 14 are similarly shaped in this embodiment, but alternative embodiments may include differently shaped top and bottom plates 12, 14. However, again, functional considerations dictate that the top and bottom plates 12, 14 will be similarly shaped in most embodiments and that this shape will be circular. The interior surface 22 defines several blind bores 50 and the circumferential surface 48 defines several openings 52 intersecting the blind bores 50. The blind bores 50 and the openings 52 are used in assembling the rotor 10 as discussed more fully below. As noted above (and shown best in FIGS. 5 and 7), the exterior surface 24 defines the aperture 26 for receiving the rotating shaft 30. This aspect of the invention may be implemented in a broad range of ways, and the phrase "defines an aperture for receiving the rotating shaft" is to be construed broadly as well. For instance, the aperture 26 in the illustrated embodiment is threaded and includes a lip 54 protruding from the exterior surface 24. In alternative embodiments not shown, the aperture 26 might be formed by fastening to the exterior surface 24 a separate piece containing a threaded blind bore.

Each of the struts 16a-h is identically constructed, excepting only struts 16a-b as discussed more fully below. The radially distal surface 56 of the bottom end 58 of each strut 16a-h tapers in this particular embodiment to help maximize radial clearance between the rotor 10 and the spray nozzles (not shown) of the tool. However, alternative embodiments may omit this tapering. The top end 60 and the bottom end 58 of each strut 16a-h include a terminus 62 (shown best in FIGS. 2 and 4) shaped to mate with the blind bores 44, 50, respectively.
Each terminus 62 includes a blind bore 64 on the radially distal surface thereof for use in assembling the rotor 10 as discussed below. The struts 16a-b also include a pair of radial, threaded bores 66 for fastening the retaining bars 18 thereto.

The rotor 10 is assembled by press fitting the top and bottom plates 12, 14 to the struts 16a-h, fastening the press-fitted parts, and then fastening the retaining bars 18 to the struts 16a-b. More particularly, and referring now to FIGS. 3-4, the terminus 62 at the bottom end 58 of each strut 16a-h is press fit into a blind bore 50 defined by the interior surface 22 of the bottom plate 14. This aligns the blind bores 64 with the openings 52. Press pins 70, each having a chamfered end 72, are then pressed into the co-aligned openings 52 and blind bores 64 to fasten the struts 16a-h to the bottom plate 14. The process is repeated to fasten the top plate 12 to the struts 16a-h: the terminus 62 at the top end 60 of each strut 16a-h is press fit into a blind bore 44 so that press pins 70 may be pressed into the co-aligned openings 46 and blind bores 64. The retaining bars 18 are fastened to the struts 16a-b by threading screws 73 through the threaded bores 66 and into the bores 74 in the ends of the retaining bars 18.

Several features of the illustrated embodiment are implementation specific, based upon particular functional considerations. One such feature, noted above, is the generally circular shape of the top and bottom plates 12, 14. Other features include the number and placement of the struts 16a-h. The struts 16a-h are spaced equidistantly and circumferentially about the rotor 10 relative to the top and bottom plates 12, 14 and in diametrically opposed pairs. The equidistant spacing and positioning in diametrically opposed pairs facilitate radial balancing, which is an important consideration. However, radial balancing can also be achieved with varied spacing and odd numbers of struts.
The placement and number of the struts 16a-h also affect performance of the spray tool itself by virtue of their location between the wafers and the sprayed acid. Other embodiments might therefore choose to emphasize other functional considerations and use varied spacing, struts not positioned in diametrically opposed pairs, and/or different numbers of struts.

The entire rotor 10 in the illustrated embodiment, excepting only the retaining bars 18, is constructed from polyphenylene sulfide ("PPS") rather than the traditional PTFE employed by conventional approaches. PPS exhibits numerous unexpected advantages over PTFE, including higher rigidity, superior dimensional stability, lighter weight, and a lower chemical absorption rate. The higher rigidity and lighter weight permit smaller dimensions for the rotor 10.

These unexpected advantages translate into superior performance in a number of ways, both in existing chemical spray tools and in such tools that may be designed in the future specifically to exploit these advantages. The superior dimensional stability will allow the radial clearance between the rotor 10 and the spray nozzles to be reduced in new chemical spray tools. The smaller dimensions, coupled with lessened radial clearance, will also permit the wafers to be positioned much closer to the spray nozzles in new chemical spray tools. This is highly desirable, as it produces a much more uniform spray across the wafers during processing. The difference in dimensions between the conventional PTFE rotor and the PPS rotor of the present invention, arising from the material of construction, is relatively large. So large, in fact, that new tools employing a PPS rotor can achieve both a greater radial clearance and position the wafers closer to the spray nozzles than can tools employing a PTFE rotor.
Existing chemical spray tools retrofitted with PPS rotors will experience reduced downtime caused by contact between the rotor and the spray nozzles, since the rotor will be both smaller and more dimensionally stable.

Because of these material characteristics, however, a rotor 10 constructed from PPS will radially balance slightly differently than will a rotor 10 constructed from PTFE. As will be appreciated by those in the art having the benefit of this disclosure, the process of radially balancing a PTFE rotor involves boring holes and, occasionally, adding PTFE blocks to center the mass of the rotor over the shaft. Typically, however, there is sufficient consistency in manufacturing that the balancing process begins with drilling a predetermined pattern of holes in the rotor. The rotor is then spun to determine what additional adjustments need to be made, and those adjustments are implemented. The process continues until the rotor is balanced. This conventional process works equally well with PPS rotors, except that the predetermined pattern of holes used for PTFE rotors may need to be altered for some embodiments. None of the balancing holes or blocks are shown in the illustrated embodiment because they are unique to each rotor.

One advantage of the illustrated embodiment is that it may be readily used to retrofit existing chemical spray tools. The shaft 30, shown in FIG. 5, is commercially available from SemaTool, Inc. as part number 550U0002-501, and is currently used on many of the chemical spray tools sold by SemaTool. Thus, the rotor 10 may be used to retrofit most existing SemaTool chemical spray tools. As those in the art having the benefit of this disclosure will appreciate, the shaft 30 may be changed out so that the rotor 10 may be used to retrofit tools by other manufacturers. The aperture 26 may be modified accordingly.
This is one reason for the broad construction of the phrase "defines an aperture for receiving the rotating shaft."

However, the rotor 10 may also be used in new tools. To this end, various aspects of the illustrated embodiment may vary in alternative embodiments. For instance, the aperture 20 is keyed to a wafer cassette design recognized as "standard" by the industry, but may be altered to accommodate alternative wafer cassette designs. Alternative wafer cassette designs may also require longer or shorter rotors, and the length of the struts 16a-h may be adjusted accordingly. Another advantage of the small dimensions made possible by the PPS rotor of the present invention is that the rotor may be constructed to accommodate a wafer cassette larger than a standard one and still be able to retrofit existing chemical spray tools.

The PPS rotor of the present invention will also prove advantageous in cleaning and repair because it may be disassembled. The ability to replace only pieces of a rotor instead of a whole rotor will result in substantial savings. As mentioned above, previous multi-piece rotors failed under the centrifugal forces inherent in the chemical spray process. However, the PPS rotor of the present invention will not do so, primarily because: (1) the press fit of the strut termini into their respective blind bores operates as a mechanical stop against centrifugal forces on the struts, and (2) the press pins have little mass on which the centrifugal force may operate. Consequently, the struts and press pins remain in place even at the high rates of revolution in the chemical spray process.

The particular embodiment disclosed above is illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended as to the details of construction or design herein shown, other than as described in the claims below.
It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Apparatus, methods, computer readable media and processors operable on a wireless device may provide an anti-spam engine operable to intercept content intended for and/or generated by client applications, and filter out unwanted content. The anti-spam engine may include a configurable module having a spam filter that may determine whether content is unwanted. Based on the result of subjecting the content to the spam filter, the anti-spam engine may forward the content to the intended client application and/or a network destination, and/or may generate a spam log. The anti-spam module may be further operable to forward the spam log to another device, such as a user manager device, operable to analyze the log and generate a report which may be viewable by an operator.
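The intercept-filter-forward/quarantine flow summarized above can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation; the class and attribute names (`AntiSpamEngine`, `spam_log`, `quarantine`) are assumptions chosen only to mirror the terminology of the abstract, and the filter is a stand-in predicate.

```python
from dataclasses import dataclass, field

@dataclass
class AntiSpamEngine:
    """Sketch of the engine: a pluggable filter predicate, a spam log,
    a quarantine, and an intercept step that forwards or quarantines."""
    spam_filter: callable               # returns True when content is unwanted
    spam_log: list = field(default_factory=list)
    quarantine: list = field(default_factory=list)

    def intercept(self, content, destination):
        # Content is examined before it ever reaches the client application
        # (or, for outgoing content, the network destination).
        if self.spam_filter(content):
            self.quarantine.append(content)
            # The log entry could later be forwarded to a user manager
            # device for analysis and reporting.
            self.spam_log.append({"content": content, "destination": destination})
            return None                 # unwanted: never delivered
        return (destination, content)   # wanted: delivered to the destination

# Example with a trivially simple filter predicate (an assumption):
engine = AntiSpamEngine(spam_filter=lambda text: "prize" in text.lower())
```

A quarantined message returns `None` and lands in both the quarantine and the spam log, while ordinary content passes through to its destination unchanged.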
CLAIMS

What is claimed is:

1. A method for filtering content on a wireless device, comprising: intercepting content on the wireless device prior to delivery of the content to a content destination; analyzing the content based on a content filter to determine if the content comprises unwanted content, wherein the content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content; and based upon the analyzing of the content, forwarding the content to the content destination or quarantining the content.

2. The method of claim 1, wherein intercepting further comprises intercepting prior to delivery to a client application resident on the wireless device.

3. The method of claim 2, wherein forwarding further comprises forwarding the content to at least one of a browser client, an Internet Messenger client, a short message service (SMS) client, a multimedia message service (MMS) client, and an E-mail client.

4. The method of claim 1, wherein intercepting further comprises intercepting prior to transmitting the content from the wireless device to a wireless network.

5. The method of claim 1, wherein the hardware characteristic comprises at least one of a processor capability, a speaker capability, a ringer capability, a display capability, and a memory capability.

6. The method of claim 1, wherein the predetermined characteristic associated with the content destination comprises at least one of an identification of a destination client application resident on the wireless device, and a number of content destinations associated with the content.

7.
The method of claim 1, wherein the hardware requirement comprises at least one of a wireless device processor requirement, a wireless device audio component requirement, a wireless device video component requirement, and a wireless device memory component requirement.

8. The method of claim 1, wherein quarantining further comprises deleting the content based on a storage limit parameter.

9. The method of claim 1, further comprising receiving the content filter from across a wireless network.

10. The method of claim 1, further comprising: storing predetermined information associated with the content; and transmitting the predetermined information to a remote device for analysis of the predetermined information.

11. The method of claim 10, wherein storing predetermined information further comprises storing at least one of a portion of the content, an identification of a source of the content, a calculated filter test result associated with the content, and an identification of the content destination.

12. The method of claim 10, further comprising receiving a reporting parameter associated with the content filter, wherein transmitting the predetermined information further comprises transmitting based on the reporting parameter.

13. The method of claim 10, wherein transmitting the predetermined information further comprises establishing a limited-access communications channel across a wireless network based on a predefined limited service configuration.

14. The method of claim 1, wherein analyzing further comprises: applying the content filter to the content; calculating a filter test result based upon application of the content filter to the content; comparing the calculated filter test result to a predetermined filter test result; and classifying the content as unwanted content based upon the comparison of the calculated filter test result and the predetermined filter test result.

15.
The method of claim 1, further comprising receiving a revised content filter based on analyzing the content, and replacing the content filter with the revised content filter.

16. A machine-readable medium comprising instructions which, when executed by a machine, cause the machine to perform operations including: intercepting content on the wireless device prior to delivery of the content to a content destination; analyzing the content based on a content filter to determine a content classification, wherein the content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content; and based upon the content classification, forwarding the content to the content destination or quarantining the content.

17. At least one processor configured to perform the actions of: intercepting content on the wireless device prior to delivery of the content to a content destination; analyzing the content based on a content filter to determine a content classification, wherein the content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content; and based upon the content classification, forwarding the content to the content destination or quarantining the content.

18.
A wireless device, comprising: means for intercepting content on the wireless device prior to delivery of the content to a content destination; means for analyzing the content based on a content filter to determine a content classification, wherein the content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content; and means for forwarding the content to the content destination or quarantining the content, based upon the content classification.

19. A wireless device, comprising: an anti-spam engine operable to intercept content on the wireless device prior to delivery of the content to a content destination, the anti-spam engine comprising a content filter selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content; and control logic associated with the anti-spam engine and operable to apply the content filter to the content and determine if the content comprises unwanted content, wherein the control logic is further operable to forward the content to the content destination if the content does not comprise unwanted content or quarantine the content if the content comprises unwanted content.

20. The wireless device of claim 19, further comprising a memory having at least one client application, and wherein the content destination comprises the client application.

21.
The wireless device of claim 20, wherein the control logic is further operable to forward the content to at least one of a browser client, an Internet Messenger client, a short message service (SMS) client, a multimedia message service (MMS) client, and an E-mail client.

22. The wireless device of claim 19, further comprising a memory having at least one client application operable to generate the content, and wherein the content destination comprises a destination wirelessly connectable with the wireless device.

23. The wireless device of claim 19, wherein the hardware characteristic comprises at least one of a processor capability, a speaker capability, a ringer capability, a display capability, and a memory capability.

24. The wireless device of claim 19, wherein the predetermined characteristic associated with the content destination comprises at least one of an identification of a destination client application resident on the wireless device, and a number of content destinations associated with the content.

25. The wireless device of claim 19, wherein the hardware requirement comprises at least one of a wireless device processor requirement, a wireless device audio component requirement, a wireless device video component requirement, and a wireless device memory component requirement.

26. The wireless device of claim 19, further comprising a memory having a quarantine log and a storage limit parameter, wherein the control logic is further operable to store the content in the quarantine log based on the storage limit parameter.

27. The wireless device of claim 19, further comprising a memory having a spam log, wherein the control logic is further operable, if the content comprises unwanted content, to store predetermined information associated with the content in the spam log and transmit the spam log to a remote device for analysis of the predetermined information.

28.
The wireless device of claim 27, wherein the control logic is operable to generate a calculated filter test result based on the application of the content filter to the content, and wherein the predetermined information further comprises at least one of a portion of the content, an identification of a source of the content, the calculated filter test result associated with the content, and an identification of the content destination.

29. The wireless device of claim 27, wherein the memory further comprises a reporting parameter associated with the content filter, and wherein the control logic is further operable to transmit the predetermined information based on the reporting parameter.

30. The wireless device of claim 27, wherein the memory further comprises a limited service configuration, and wherein the anti-spam engine is further operable to transmit the predetermined information by establishing a limited-access communications channel across a wireless network based on the limited service configuration.

31. The wireless device of claim 19, wherein the anti-spam engine is further operable to receive the content filter from across a wireless network.

32. The wireless device of claim 19, further comprising a memory having a predetermined filter test result corresponding to the content filter, wherein the control logic is further operable to apply the content filter to the content to generate a calculated filter test result, compare the calculated filter test result to the predetermined filter test result, and classify the content as unwanted content based upon the comparison of the calculated filter test result and the predetermined filter test result.

33. The wireless device of claim 19, further comprising receiving a revised content filter based on analyzing the content, and replacing the content filter with the revised content filter.

34.
A method for managing the filtering of content on a wireless device, comprising: providing a predetermined content filter and a reporting parameter to the wireless device; receiving, based on the reporting parameter, a spam log relating to content on the wireless device and subjected to the predetermined content filter; and generating a report based on the spam log.

35. The method of claim 34, wherein the predetermined content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content.

36. The method of claim 34, wherein the spam log comprises information relating to at least one of incoming content received by the wireless device and outgoing content destined for transmission from the wireless device.

37. The method of claim 34, further comprising providing a predetermined filter test result to the wireless device to enable the wireless device to determine whether to include a reference to the content in the spam log after subjecting the content to the content filter.

38. The method of claim 37, wherein the content filter, when applied by the wireless device to the content, is operable to generate a calculated filter test result for comparison with the predetermined filter test result.

39. The method of claim 34, wherein the reporting parameter is operable to define predetermined information to store in the spam log.

40. The method of claim 39, wherein the predetermined information further comprises at least one of a portion of the content, an identification of a source of the content, a calculated filter test result associated with the content, and an identification of the content destination.

41.
The method of claim 34, wherein providing the predetermined content filter and the reporting parameter further comprises forwarding across a wireless network to the wireless device.

42. The method of claim 34, further comprising forwarding a revised content filter to the wireless device based on the spam log.

43. A machine-readable medium comprising instructions which, when executed by a machine, cause the machine to perform operations including: providing a predetermined content filter and a reporting parameter to the wireless device; receiving, based on the reporting parameter, a spam log relating to content received by the wireless device and subjected to the content filter; and generating a report based on the spam log.

44. At least one processor configured to perform the actions of: providing a predetermined content filter and a reporting parameter to the wireless device; receiving, based on the reporting parameter, a spam log relating to content received by the wireless device and subjected to the content filter; and generating a report based on the spam log.

45. An apparatus for managing the filtering of content on a wireless device, comprising: means for providing a predetermined content filter and a reporting parameter to the wireless device; means for receiving, based on the reporting parameter, a spam log relating to content received by the wireless device and subjected to the content filter; and means for generating a report based on the spam log.

46.
An apparatus for managing the filtering of content on a wireless device, comprising: a generator module operable to generate a content filter configuration comprising at least one predetermined content filter and a reporting parameter; an anti-spam module operable to forward the content filter configuration to the wireless device and operable to receive, based on the reporting parameter, a spam log relating to content received by the wireless device and subjected to the spam filter; and a report generator operable to generate a report based on the spam log.

47. The apparatus of claim 46, wherein the generator module is further operable to select the predetermined content filter from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content.

48. The apparatus of claim 46, wherein the spam log comprises information relating to at least one of incoming content received by the wireless device and outgoing content destined for transmission from the wireless device.

49. The apparatus of claim 46, wherein the content filter configuration further comprises a predetermined filter test result to enable the wireless device to determine whether to include a reference to the content in the spam log after subjecting the content to the content filter.

50. The apparatus of claim 49, wherein the content filter, when applied by the wireless device to the content, is operable to generate a calculated filter test result for comparison with the predetermined filter test result.

51. The apparatus of claim 46, wherein the reporting parameter is operable to define predetermined information to store in the spam log.

52.
The apparatus of claim 51, wherein the predetermined information further comprises at least one of a portion of the content, an identification of a source of the content, a calculated filter test result associated with the content, and an identification of the content destination.

53. The apparatus of claim 46, wherein the anti-spam module is further operable to forward the predetermined content filter and the reporting parameter across a wireless network to the wireless device.

54. The apparatus of claim 46, wherein the anti-spam module is further operable to forward a revised content filter configuration to the wireless device based on the spam log.
APPARATUS AND METHODS FOR MANAGING CONTENT EXCHANGE ON A WIRELESS DEVICE

CLAIM OF PRIORITY UNDER 35 U.S.C. § 119

[0001] The present Application for Patent claims priority to Provisional Application No. 60/665,305, entitled "Methods and Apparatus for Preventing Unauthorized Downloads to a Wireless Device," filed March 25, 2005, assigned to the assignee hereof and hereby expressly incorporated by reference herein.

FIELD OF INVENTION

[0002] The described embodiments generally relate to wireless communication devices and computer networks, and more particularly relate to apparatus and methods for detecting unauthorized content on a wireless device.

BACKGROUND

[0003] Wireless networking connects one or more wireless devices to other computer devices without a direct electrical connection, such as a copper wire or optical cable. Wireless devices communicate data, typically in the form of packets, across a wireless or partially wireless computer network and open a "data" or "communication" channel on the network such that the device can send and receive data packets. The wireless devices often have wireless device resources, such as programs and hardware components, which individually and cooperatively operate to use and generate data in accordance with their design and specific protocol or configuration, such as using open communication connections to transmit and receive data on the network.

[0004] Wireless devices are being manufactured with increased computing capabilities, are becoming tantamount to personal computers, and include such features as Internet browsing, instant messaging ("IM"), E-mail, and text messaging, including Short Message Service and Multimedia Messaging Service ("SMS/MMS").
Because such features facilitate direct contact with a wireless device user, these messaging clients have become targets for unauthorized, unsolicited, and in most cases unwanted, messages and/or viruses, herein referred to as "spam."

[0005] Spamming may be loosely defined as the use of any electronic communications medium to send unsolicited messages and/or viruses in bulk and, by definition, occurs without the permission of the recipient. While its use is usually limited to indiscriminate bulk mailing and not any targeted marketing, the term "spam" can refer to any commercially oriented, unsolicited bulk mailing perceived as being excessive and undesired. Although the most common form of spam is that delivered in E-mail, spammers have developed a variety of spamming techniques, which vary by media: E-mail spam, instant messaging spam, Usenet newsgroup spam, Web search engine spam, weblog spam, and mobile phone messaging spam.

[0006] Spam by E-mail is a type of spam that involves sending identical (or nearly identical) messages to thousands (or millions) of recipients. Spammers often harvest addresses of prospective recipients from Usenet postings and/or web pages, obtain them from databases, or simply guess them by using common names and domains.

[0007] Instant messaging ("IM") systems, such as Yahoo! Messenger, AIM, MSN Messenger and ICQ, are popular targets for spammers. Many IM systems offer a directory of users, including demographic information such as age and sex. Advertisers can gather this information, sign on to the system, and send unsolicited messages.

[0008] Mobile phone spam, in some forms, includes spamming directed at mobile phone text messaging services and can be especially irritating to users, not only for the inconvenience but also because they may have to pay to receive the unsolicited and often unwanted text message.
Mobile phone spam may also include any type of content that can be received by a mobile phone, such as audio content, video content, software programs, etc., and combinations thereof.

[0009] Several methods of message analysis to protect networks from spam include fingerprinting and rule-based scoring. Fingerprinting technology takes a "digital picture" of each message and matches it against known profiles of spam messages to detect unwanted E-mail and flag it as spam. Rule-based scoring involves scoring messages against a database of spam rules, assigning scores to messages based on unique characteristics of spam and legitimate E-mail. When the score of a message exceeds a defined threshold, it is flagged as spam.

[0010] The approach to anti-spam filtering at the wireless user device level has, for the most part, been accomplished by incorporating an anti-spam module within each messaging client application. However, if anti-spam code is integrated within each client application, e.g., E-mail, MMS, SMS, and IM, much valuable handset storage/memory is wasted doing essentially the same function, that being anti-spam filtering.

[0011] Furthermore, if the functionality of an anti-spam module is limited to filtering spam after it is received by the wireless device, the filtering does nothing to address the equally if not more important issue of network congestion due to a flood of spam traversing the network.
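The rule-based scoring described in paragraph [0009] can be sketched as follows. The specific rules, their scores, and the threshold value are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical sketch of rule-based scoring: each rule contributes a score
# when a unique spam characteristic matches, and a message whose total score
# exceeds a defined threshold is flagged as spam. Rules and threshold are
# invented for illustration.

SPAM_RULES = [
    # (predicate over the message text, score contributed when it matches)
    (lambda msg: "free" in msg.lower(), 2.0),
    (lambda msg: "click here" in msg.lower(), 3.0),
    (lambda msg: msg.isupper(), 1.5),  # all-caps message, a common spam marker
]

SPAM_THRESHOLD = 4.0  # defined threshold for flagging a message as spam

def score_message(message: str) -> float:
    """Sum the scores of every rule that matches the message."""
    return sum(score for rule, score in SPAM_RULES if rule(message))

def is_spam(message: str) -> bool:
    """Flag the message as spam when its total score exceeds the threshold."""
    return score_message(message) > SPAM_THRESHOLD
```

A fingerprinting filter would differ only in that the scoring step is replaced by matching a digest of the message against known spam profiles.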
A network accurately sized for a certain bandwidth of legitimate traffic (plus a little extra) may be hard pressed to maintain the designed-to quality of service in the presence of millions of instances of spam content directed to an equally large and growing number of wireless devices hosting several content-consuming client applications.

[0012] Accordingly, it would be advantageous to provide an apparatus and method that provides a single ubiquitous anti-spam module that may be configured to monitor all content incoming to a wireless device prior to being received by any client application. Furthermore, it would be advantageous to provide an apparatus and method operable to analyze the effect of the spam filtering on the wireless device with the goal of blocking further spam attacks.

SUMMARY

[0013] The described embodiments comprise apparatus, methods, computer-readable media and processors operable on a wireless device to provide a single ubiquitous anti-spam detection mechanism capable of filtering out unwanted content, such as unauthorized and/or unsolicited content and/or viruses, i.e., spam, within a data stream received from a wireless network and intended for a client application resident on the wireless device, and/or within a data stream generated on the wireless device and intended for transmission to a remote device on the wireless network.

[0014] Furthermore, such apparatus and methods may include the forwarding of information regarding the detected unwanted content to a user manager and/or operator for further analysis and report generation. Furthermore, the network carrier may be informed of the unwanted content for the purpose of blocking the future propagation of unwanted content throughout the network.

[0015] In some aspects, a method for filtering content on a wireless device comprises intercepting content on the wireless device prior to delivery of the content to a content destination.
The method further comprises analyzing the content based on a content filter to determine if the content comprises unwanted content, wherein the content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content. Additionally, the method comprises, based upon the analyzing of the content, forwarding the content to the content destination or quarantining the content. In other aspects, at least one processor may perform the above-defined actions. In yet other aspects, a machine-readable medium may comprise instructions which, when executed by a machine, cause the machine to perform the above-defined operations.

[0016] In some other aspects, a wireless device comprises means for intercepting content on the wireless device prior to delivery of the content to a content destination. The wireless device further comprises means for analyzing the content based on a content filter to determine a content classification, wherein the content filter is selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content.
Additionally, the wireless device comprises means for forwarding the content to the content destination or quarantining the content, based upon the content classification.

[0017] In yet other aspects, a wireless device comprises an anti-spam engine operable to intercept content on the wireless device prior to delivery of the content to a content destination, the anti-spam engine comprising a content filter selected from a plurality of content filters based on at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content. Additionally, the wireless device comprises control logic associated with the anti-spam engine and operable to apply the content filter to the content and determine if the content comprises unwanted content, wherein the control logic is further operable to forward the content to the content destination if the content does not comprise unwanted content, or quarantine the content if the content comprises unwanted content.

[0018] In still further aspects, a method for managing the filtering of content on a wireless device comprises providing a predetermined content filter and a reporting parameter to the wireless device and receiving, based on the reporting parameter, a spam log relating to content on the wireless device and subjected to the predetermined content filter. Further, the method comprises generating a report based on the spam log. In other aspects, at least one processor may perform the above-defined actions. In yet other aspects, a machine-readable medium may comprise instructions which, when executed by a machine, cause the machine to perform the above-defined operations.
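The intercept-then-forward-or-quarantine behavior described above can be sketched as a small piece of control logic. The class, attribute, and method names below are hypothetical, not the patent's actual implementation:

```python
# Minimal sketch, assuming the content filter is a callable that returns
# True for unwanted content. Intercepted content is either quarantined or
# forwarded to its destination, mirroring the control logic described above.

class AntiSpamEngine:
    def __init__(self, content_filter):
        self.content_filter = content_filter  # callable: content -> bool (True = spam)
        self.quarantine = []                  # stand-in for the quarantine folder
        self.delivered = []                   # stand-in for resident client applications

    def intercept(self, content, destination):
        """Apply the filter, then quarantine spam or forward legitimate content."""
        if self.content_filter(content):
            self.quarantine.append((destination, content))
        else:
            self.delivered.append((destination, content))

# Usage with a trivial filter invented for illustration:
engine = AntiSpamEngine(lambda c: "WIN A PRIZE" in c)
engine.intercept("WIN A PRIZE today!", "sms_client")
engine.intercept("Meeting moved to 3pm", "email_client")
```

In the described embodiments the filter callable would be one of the mechanisms of content filter 182, and quarantined items would additionally be recorded in the spam log.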
[0019] In other aspects, an apparatus for managing the filtering of content on a wireless device comprises means for providing a predetermined content filter and a reporting parameter to the wireless device, and means for receiving, based on the reporting parameter, a spam log relating to content received by the wireless device and subjected to the content filter. Additionally, the apparatus comprises means for generating a report based on the spam log.

[0020] In further aspects, an apparatus for managing the filtering of content on a wireless device comprises a generator module operable to generate a content filter configuration comprising at least one predetermined content filter and a reporting parameter. Further, the apparatus comprises an anti-spam module operable to forward the content filter configuration to the wireless device and operable to receive, based on the reporting parameter, a spam log relating to content received by the wireless device and subjected to the spam filter. Additionally, the apparatus comprises a report generator operable to generate a report based on the spam log.

BRIEF DESCRIPTION OF THE DRAWINGS

[0021] The disclosed embodiments will hereinafter be described in conjunction with the appended drawings, provided to illustrate and not to limit the disclosed embodiments, wherein like designations denote like elements, and in which:

[0022] Fig. 1 is a schematic diagram of one aspect of a system for preventing predetermined content from being received by and/or sent from client applications on a wireless device;

[0023] Fig. 2 is a schematic diagram of one aspect of a wireless device according to Fig. 1;

[0024] Fig. 3 is a schematic diagram of one aspect of a memory resident anti-spam engine according to the wireless device of Fig. 2;

[0025] Fig. 4 is a schematic diagram of one aspect of a user manager according to the system of Fig. 1;

[0026] Fig.
5 is a schematic diagram of one aspect of a configuration generator module according to the user manager of Fig. 4;

[0027] Fig. 6 is a schematic diagram of one aspect of a device control module according to the user manager of Fig. 4;

[0028] Fig. 7 is a schematic diagram of one aspect of an operator workstation according to the system of Fig. 1;

[0029] Fig. 8 is a schematic diagram of one aspect of a cellular telephone network according to Fig. 1;

[0030] Fig. 9 is a flowchart diagram of a method for preventing unauthorized downloads to a wireless device according to the system of Fig. 1;

[0031] Fig. 10 is another flowchart diagram of a method for preventing unauthorized downloads to a wireless device according to the system of Fig. 1;

[0032] Fig. 11 is an event sequence diagram operable in some embodiments of the system of Fig. 1; and

[0033] Fig. 12 is another event sequence diagram operable in some embodiments according to the system of Fig. 1.

DETAILED DESCRIPTION

[0034] Referring to Fig. 1, a system 100 for detecting unwanted content on a wireless device, including preventing the receipt and/or transmission of such content, may comprise a wireless device 102 operable to receive a content filter configuration 170 from a user manager 110. Unwanted content, or spam, may include unauthorized or unsolicited content and/or viruses. Content filter configuration 170 defines parameters for the filtering of content using filter module 180, for the recording of details associated with filtered content in a spam log 184, and for the forwarding of log 184 to the user manager 110 or another device for analysis.

[0035] For example, an operator workstation 114 may be configured to receive a spam report 205 generated by a report generator 204 associated with user manager 110 and may further be configured to communicate future spam blocking instructions 116 to a message center 118.
Communication between wireless device 102, user manager 110, operator workstation 114, and message center 118 may be accomplished via network 101.

[0036] Content filter configuration 170, and corresponding filter module 180, may include one or more content filters to apply to incoming and/or outgoing content. The content filter utilized by wireless device 102 may be selected from a plurality of content filters based on, for example, at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content.

[0037] For example, wireless device 102 may operate on a home wireless network provided by a network service provider, but the device may roam out of the home network into another wireless network under the control of another network service provider. Since spam may affect the performance of the given wireless network, each wireless service provider may define and provide a custom content filter to be used by any wireless device operating on their wireless network.

[0038] In another example, the content filter may vary depending on the hardware characteristics of a given wireless device. For instance, a hardware characteristic such as a processor type/capability, speaker type/capability, ringer type/capability, display type/capability, and a memory type/capability may affect whether or not a given wireless device can efficiently process a given content. For example, a given content comprising a given ring tone may require sounds not capable of being generated by a given ringer associated with a given wireless device, and thus the given content may be considered spam for the given device.
Thus, a given content that adversely affects a hardware characteristic of one wireless device may be classified as spam for that device, while the same content may be classified as not spam for another wireless device having different hardware characteristics.

[0039] Similarly, the content filter may vary depending on a predetermined characteristic associated with the content destination. For instance, in the case of incoming content received by the wireless device, the predetermined characteristic associated with the content destination may comprise, for example, an identification of a destination client application resident on the wireless device. In other words, content defined or classified as spam may vary depending upon whether the content is destined for a browser as opposed to a text messaging client. In the case of outgoing content intended for transmission from the wireless device, the predetermined characteristic associated with the content destination may comprise, for example, a number of content destinations associated with the content. In other words, sending more than a predetermined number of copies of the same content may be defined as spamming.

[0040] Similarly, the content filter may vary depending on a hardware requirement associated with the content. For instance, the hardware requirement may comprise, for example, at least one of a wireless device processor type/capability/capacity usage, an audio component type/capability/usage, a video component type/capability/usage, and a wireless device memory type/capability/usage. In other words, for example, spam may be defined as requiring more than a predetermined amount of the total capacity of a given wireless device hardware resource. By using too much capacity, a given content may adversely affect the overall performance of the wireless device, and/or may affect the performance of other applications executing on the wireless device.
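Selecting one content filter from a plurality of filters based on, e.g., the network service provider and the content destination might look like the following sketch. The table keys and filter names are invented for illustration:

```python
# Hedged sketch of filter selection: a lookup keyed on the current network
# service provider and the destination client application returns the filter
# to apply, with a generic fallback. All names are illustrative assumptions.

FILTER_TABLE = {
    # (service provider, destination client application) -> filter name
    ("home_carrier", "sms_client"):    "strict_sms_filter",
    ("home_carrier", "browser"):       "url_blocklist_filter",
    ("roaming_carrier", "sms_client"): "roaming_sms_filter",
}

DEFAULT_FILTER = "generic_filter"

def select_filter(provider: str, destination: str) -> str:
    """Pick the provider- and destination-specific filter, else a default."""
    return FILTER_TABLE.get((provider, destination), DEFAULT_FILTER)
```

A fuller implementation could extend the key with hardware characteristics of the device and hardware requirements of the content, the other selection criteria named above.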
[0041] Wireless device 102 may include any type of computerized device such as a cellular telephone, personal digital assistant, two-way text pager, portable computer, and even a separate computer platform that has a wireless communications portal, and which also may have a wired connection to a network or the Internet. The wireless device can be a remote-slave, or other device that does not have an end-user thereof, but simply communicates data across the wireless network 101, such as remote sensors, diagnostic tools, and data relays.

[0042] Wireless device 102 includes an anti-spam engine module 138 that monitors incoming and/or outgoing content and filters out unwanted, unsolicited and/or unauthorized content and/or viruses, collectively referred to as spam. Anti-spam module 138 may be loaded into a memory 136 of wireless device 102 in a number of ways, including but not limited to: statically installed at the factory 106; by wireless transmission over a wireless network, such as network 101; and over a hardwired connection, such as via a personal computer (PC). Wireless device 102 may be delivered to a network carrier and/or some other retailer for sale and delivery to a user and activation on a network.

[0043] Once a wireless device 102 is activated by a carrier, application clients and wireless device components/ports/interfaces for sending and receiving content may be operable on the wireless device 102. For example, application clients may include, but are not limited to, clients such as instant messaging ("IM"), E-mail, and messaging clients, such as a Short Message Service (SMS) client and a Multimedia Messaging Service (MMS) client, and a browser. Wireless device components/ports/interfaces may include any point of content entry into, and/or any point of content exit from, the wireless device, as will be discussed in more detail below.
Targeting these client applications and components/ports/interfaces, advertisers and other spam generators 122 of unsolicited communications may then gain access to network 101 and obtain the address information of wireless device 102. Armed with such information, such as a phone number and/or Internet Protocol (IP) address, a spam generator 122 may start sending spam 124 to the wireless device 102 via message center 118 and wireless link 120. Spam 124 may be any content that is unsolicited, unwanted, and/or unauthorized by the user of wireless device 102 and/or by the operator and/or network service provider associated with network 101. Furthermore, spam, intentionally or unintentionally generated by a client application, may be transmitted to the network, thereby degrading network availability.

[0044] Anti-spam engine module 138 is operable to intercept all incoming content and/or all outgoing content, and to filter out content determined to be unauthorized and/or unsolicited and/or unwanted and/or a virus based upon configurable parameters to be discussed in detail herein.

[0045] In one aspect, the anti-spam engine 138 may quarantine the detected spam in a quarantine folder and may generate a spam log with details of the detected spam. Further, in another aspect, based upon a configurable reporting parameter, the anti-spam engine 138 may transmit the spam log to the user manager 110 over wireless link 108.

[0046] The user manager 110 may receive the log, analyze the data and generate a report. The user manager 110 may, for example, E-mail the report over communication channel 126 to an operator workstation 114, or otherwise make the contents of the report viewable to an operator.

[0047] The operator may analyze the report and, based upon that analysis, may issue a command 112 to the user manager 110 with instructions to update the anti-spam engine 138, for example, to update the filtering characteristics of the engine to detect new forms of spam.
Furthermore, the operator workstation 114 may transmit instructions 116 to the message center 118 to block further access of spam generator 122 to the network 101.

[0048] Referring to Fig. 2, wireless device 102 may comprise a computer platform 130 interconnected with an input mechanism 132 and an output mechanism 134 respectively providing inputs and outputs for communicating with resident applications. For example, input mechanism 132 may include, but is not limited to, a mechanism such as a key or keyboard, a mouse, a touch-screen display, and a voice recognition module. Output mechanism 134 may include, but is not limited to, a display, an audio speaker, and a haptic feedback mechanism.

[0049] Computer platform 130 may further include a communications module 152 embodied in hardware, software, firmware, executable instructions, data and combinations thereof, operable to receive/transmit and otherwise enable communication between components within the wireless device 102, as well as to enable communications between the wireless device 102 and other devices, such as serially connected devices as well as devices connected via an air interface, such as network 101. Communications module 152 receives content 160, either from one or more client applications 140 and/or from input mechanism 132 on wireless device 102 and/or from another device in communication with wireless device 102, and cooperates with anti-spam engine 138 to analyze content 160 before allowing the content to be transmitted from and/or received by the wireless device.

[0050] As noted above, communications module 152 may comprise any component/port/interface that may include any point of content entry into, and/or any point of content exit from, the wireless device. As such, communications module 152 may include interface components for hardwired communications and for wireless communications.
For example, communications module 152 may include, but is not limited to, communication interface components such as a serial port, a universal serial bus (USB), a parallel port, and air interface components for wireless protocols/standards such as Wi-Fi, World Interoperability for Microwave Access (WiMAX), infrared protocols such as Infrared Data Association (IrDA), short-range wireless protocols/technologies, Bluetooth(R) technology, ZigBee(R) protocol, ultra wide band (UWB) protocol, home radio frequency (HomeRF), shared wireless access protocol (SWAP), wideband technology such as a wireless Ethernet compatibility alliance (WECA), wireless fidelity alliance (Wi-Fi Alliance), 802.11 network technology, public switched telephone network, public heterogeneous communications network such as the Internet, private wireless communications networks, land mobile radio networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), universal mobile telecommunications system (UMTS), advanced mobile phone service (AMPS), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), global system for mobile communications (GSM), single carrier (1X) radio transmission technology (RTT), evolution data only (EV-DO) technology, general packet radio service (GPRS), enhanced data GSM environment (EDGE), high speed downlink packet access (HSDPA), analog and digital satellite systems, and any other technologies/protocols that may be used in at least one of a wireless communications network and a data communications network.

[0051] Computer platform 130 may also include memory 136, which may comprise volatile and nonvolatile memory such as read-only and/or random-access memory (RAM and ROM), EPROM, EEPROM, flash cards, or any memory common to computer platforms.
Further, memory 136 may include one or more flash memory cells, or may comprise any secondary or tertiary storage device, such as magnetic media, optical media, tape, or soft or hard disk.

[0052] Memory 136 may be operable to store one or more client applications 140, including, but not limited to: a web browser client; an IM client; a messaging client, such as an SMS text messaging client and/or an MMS multimedia messaging client; and an E-mail client.

[0053] Furthermore, anti-spam engine 138 may be stored in memory 136 and is operable to intercept content 160 received by the communications module 152 that, in the absence of anti-spam engine 138, would be forwarded directly from the communications module 152 to a respective content destination, such as a resident client application and/or a remote device located across a wireless network. With anti-spam engine 138 in place, content determined to be spam may be blocked, while legitimate content may be forwarded to the respective content destination. It should be noted that anti-spam engine 138 may be configured to filter all content 160 received by communications module 152, or only selected content received from selective sources/interfaces, for example, only content for one or more predetermined client applications 140 and/or only content received at a predetermined port such as a USB.

[0054] Further, computer platform 130 may include a processing engine 148, which may be an application-specific integrated circuit ("ASIC"), or other chipset, processor, logic circuit, or other data processing device. Processing engine 148 is operable to execute an application programming interface ("API") layer 146 that may interface with any resident programs, such as the anti-spam engine 138, and client applications 140.

[0055] In one non-limiting aspect, API 146 is a runtime environment executing on the respective wireless device.
One such runtime environment is Binary Runtime Environment for Wireless(R) (BREW(R)) software developed by Qualcomm, Inc., of San Diego, California. Other runtime environments may be utilized that, for example, operate to control the execution of applications on wireless computing devices.

[0056] Still referring to Fig. 2, processing engine 148 may include one or a combination of processing subsystems 150 that provide functionality to wireless device 102. In a cellular phone example, processing subsystems 150 may include subsystems such as: sound, non-volatile memory, file system, transmit, receive, searcher, layer 1, layer 2, layer 3, main control, remote procedure, handset, power management, diagnostic, digital signal processor, vocoder, messaging, call manager, Bluetooth(R) system, Bluetooth(R) LPOS, position determination, position engine, user interface, sleep, data services, security, authentication, USIM/SIM, voice services, graphics, USB, multimedia such as MPEG, GPRS, etc.

[0057] Non-limiting, processing subsystems 150 may include any subsystem components that interact with applications executing on computer platform 130. For example, processing subsystems 150 may include any subsystem components that receive data reads and data writes from API 146 on behalf of the resident anti-spam engine 138 and any other memory resident client application 140.

[0058] Referring to Fig. 3, anti-spam engine 138 may monitor and analyze content generated by, and/or designated for receipt by, any client application 140. Anti-spam engine 138 may be any one or a combination of hardware, software, firmware, executable instructions and data.

[0059] The anti-spam engine 138 may comprise an anti-spam engine identification (ID) 139 that identifies the anti-spam engine, and control logic 162 operable to manage all functions and components of the anti-spam engine 138. For example, anti-spam engine ID 139 may include one or more of a name, a version, etc.
Further, anti-spam engine 138 may include a content filter configuration file 170 that defines a content filter 182 to apply to incoming content. For example, content filter 182 may be a filter mechanism included in content filter configuration file 170, may be a reference to a remotely-stored filter mechanism, or may be an identification of a filter mechanism stored within filter module 180 resident on wireless device 102. Further, control logic 162, in conjunction with statistic collector/reporter module 168, is operable to apply the designated content filter 182 to content 160 and identify or classify the content as being spam or non-spam, and further collect information associated with the filtering and classification operations. Additionally, anti-spam engine 138 may store filtered content in a quarantine folder 164, and may store at least portions of the filtered content and/or additional content-related information in a spam log 184 that is used to report the activity of anti-spam engine 138. Further, anti-spam engine 138 may include a User Interface ("UI") 166 that assists a user, such as a local user of wireless device 102 or a remotely located user in communication with wireless device 102, in operating anti-spam engine 138.

[0060] For example, UI 166, in conjunction with the input mechanism 132, may be operable by the end user to configure at least a portion of the capabilities of the anti-spam engine 138, including content filtering, reporting, quarantining, and disposing of detected spam.

[0061] Besides being configurable by the user, the content filter configuration file 170 may be downloaded to memory 136 via wireless transmission over a wireless network 101, statically installed by the OEM 106 (Fig. 1) at the time of manufacture, or downloaded via a hardwired connection to a personal computer (PC).
For example, content filter configuration file 170 may be set by an operator associated with a network service provider and transmitted to wireless device 102 via user manager server 110.

[0062] Content filter configuration file 170 may include any combination of one or more sets of parameters that dictate the spam filtering, recording and reporting activities to be performed by wireless device 102. For example, content filter configuration file 170 may include a set of parameters to apply to all content, regardless of the destination of the content. Alternatively, content filter configuration file 170 may include a destination-specific set of parameters corresponding to one or more of the resident client applications 140 (Fig. 2) capable of receiving content from network 101, and/or one or more content destinations on wireless network 101.

[0063] As such, in some aspects, content filter configuration file 170 may include one or more of the following parameters: a content destination 172, which identifies a client application 140 and/or a network device on wireless network 101 corresponding to the given set of parameters, such that the given set of parameters are applied to content designated for the corresponding content destination; a content filter 182 that identifies a content filter to be applied to the corresponding content; a predetermined filter test result 174 associated with the given content filter and/or the content destination, where the predetermined filter test result 174 is a limit that is compared to a filter test result generated by applying content filter 182 to incoming and/or outgoing content, and where the predetermined filter test result 174 defines spam and non-spam content; a storage limit parameter 176 associated with quarantined spam content, for example, storage limit parameter 176 may indicate a number of days to keep quarantined content before automatically deleting the content, and/or may indicate a maximum amount of memory to be
utilized to store quarantined content; a reporting parameter 178, which defines what information to log corresponding to any detected spam, when to forward the log for analysis, to whom to forward the log and/or whom to allow access to the log; and a configuration identification (ID) 171, such as one or more of a name, a version, etc., that identifies the given set of parameters associated with the given configuration.

[0064] Anti-spam engine 138 may be operable based upon at least one of several spam detection mechanisms, referred to herein as content filter 182. In some aspects, content filter 182 comprises a software mechanism for classifying content 160 as either spam or not spam. In some aspects, content 160 may be run through content filter 182 to produce a filter test result 188, which is calculated based upon a predetermined set of rules, i.e., the filter mechanism.

[0065] There are many techniques for classifying content as spam or as not spam. These techniques are represented by content filter 182 and include, but are not limited to: host-based filtering; rule-based filtering; Bayesian statistical analysis; noise filters; and Sender Policy Framework ("SPF") or Sender Identification (ID) filters. Host-based and rule-based filters, for example, examine content for "spam markers" such as common spam subjects, known spammer addresses, known mail forwarding machines, or simply common spam phrases. In one aspect, such as in cases when the content comprises a message, the header and/or the body of the message may be examined for these markers. Another method is to classify as spam all content from unknown addresses.

[0066] Bayesian filtering compares content that others have received to find common spam content, and accomplishes this by tokenizing a large corpus of spam and a large corpus of non-spam.
The theory behind Bayesian filtering is that certain tokens will be common in spam content and uncommon in non-spam content, and certain other tokens will be common in non-spam content and uncommon in spam content. When content is to be classified, it is tokenized to see whether the tokens are more like those of spam content or those of non-spam content. [0067] Noise filters are a form of Bayesian filters that target spam containing numerous random words rarely used in sales promotions. Spammers hope to thwart Bayesian filters by minimizing promotion language and by making the spam appear to be personal correspondence. There are three primary steps employed by Bayesian noise reduction filters. The first step is pattern learning, where patterns are created and their disposition learned by the filter. The second step may use the patterns learned and perform "dubbing," or elimination of tokens whose disposition is inconsistent with the pattern of text they belong to. The third step may perform concurrent elimination of data from the sample up to a stop marker. Once a stop marker has been reached, certain checks may be performed on the length of the concurrent elimination to determine if the elimination should be made permanent. [0068] Sender Policy Framework ("SPF") or Sender Identification (ID) filters protect against return-path address forgery and make it easier to identify spoofs. SPF operates by having domain owners identify sending mail servers in domain name servers ("DNS").
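The token-based comparison described in paragraphs [0066] and [0067] can be illustrated with a minimal Bayesian scorer. This is only a sketch of the general technique; the function names, the Laplace smoothing, and the log-odds combination below are illustrative assumptions, not the specific filter mechanism of the invention:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(spam_corpus, ham_corpus):
    # Token counts from a corpus of spam and a corpus of non-spam.
    spam_counts = Counter(t for msg in spam_corpus for t in tokenize(msg))
    ham_counts = Counter(t for msg in ham_corpus for t in tokenize(msg))
    return spam_counts, ham_counts

def spam_probability(text, spam_counts, ham_counts):
    # Sum per-token log-odds: tokens common in spam push the score up,
    # tokens common in non-spam push it down (add-one smoothing avoids zeros).
    log_odds = 0.0
    for tok in tokenize(text):
        s = spam_counts.get(tok, 0) + 1
        h = ham_counts.get(tok, 0) + 1
        log_odds += math.log(s) - math.log(h)
    return 1.0 / (1.0 + math.exp(-log_odds))  # squash to (0, 1)

spam_counts, ham_counts = train(
    ["win a free prize now", "free money win win"],
    ["meeting at noon", "see you at lunch"])
```

Content scoring above 0.5 is more spam-like than not under this toy model; a production filter would train on far larger corpora.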
SMTP receivers verify the envelope sender address against this information, and can distinguish authentic content from forgeries before any content data is transmitted. [0069] Furthermore, because large files may have an adverse effect on the wireless device 102, such as by using memory or processing capability, or on the network 101, such as by using bandwidth, content may be identified as spam based upon the size of the content transmitted to/from the wireless device 102. [0070] Any one or any combination of the filtering mechanisms disclosed herein may be incorporated within filter module 180 to detect unwanted content. Furthermore, any filter 182 within filter module 180 may be associated with a specific content destination 172, thereby enabling the anti-spam engine 138 to select specific filters within filter module 180 to apply against a particular content 160 based upon the intended destination. [0071] For example, control logic 162 is operable to parse the parameters from content filter configuration file 170, and in conjunction with the statistic collector/reporter 168, which may include any combination of hardware, software, firmware, data and executable instructions, is operable to monitor and analyze all content 160 received by and/or generated for transmission from wireless device 102. In yet other embodiments, only content 160 having a given content destination 172 may be intercepted for processing by the anti-spam engine 138. Further, in some embodiments, the same content filter 182 may be applied to all content 160. [0072] In other embodiments, different spam filters 182 may be applied to different content 160 based on, for example, at least one of a network service provider associated with the wireless device, a hardware characteristic associated with the wireless device, a predetermined characteristic associated with the content destination, and a hardware requirement associated with the content, as discussed in detail above.
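The size-based identification of paragraph [0069] reduces to a simple threshold rule. The sketch below assumes, purely for illustration, that a size filter emits 1.0 (spam) or 0.0 (not spam) as its filter test result and that the threshold is expressed as a fraction of device memory; neither convention is specified by the document:

```python
def size_test_result(content_size_bytes, device_memory_bytes, max_fraction=0.05):
    # Content that would use more than a predetermined portion of the
    # device's memory is treated as spam (result 1.0); otherwise 0.0.
    return 1.0 if content_size_bytes > max_fraction * device_memory_bytes else 0.0
```

Because the rule depends on the device's memory size, the same content can be spam for one device type and acceptable for another, as paragraph [0090] later notes.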
[0073] Regardless of the source and destination of monitored content, anti-spam engine 138 applies a specific content filter 182 to each content 160, generates calculated filter test result 188, compares result 188 with the corresponding predetermined filter test result 174, and classifies the given content 160 as spam content 163 or as authorized content. If classified as spam content 163, anti-spam engine 138 may then store the content in quarantine folder 164 and/or may automatically delete the content depending upon storage limit 176. If not classified as spam, then anti-spam engine 138 initiates the delivery of content 160 to the intended content destination 172. [0074] Further, for spam content 163, statistic collector/reporter 168 is operable to collect and save user-defined and/or content filter configuration-defined information based on reporting parameter 178. For instance, statistic collector/reporter 168 may log: device/configuration information 141, such as one or a combination of anti-spam engine ID 139 and/or content filter configuration 171, for example, to identify how content 160 was filtered, and wireless device information such as hardware and software information, for example, information identifying the model of the device, the resident hardware, the resident software, the state of selected hardware and/or software components, etc., and generally any information that may be useful in troubleshooting or determining a diagnostic status of wireless device 102; all or a selected portion 173 of the given content 160 and/or information associated with the content, including but not limited to: the calculated filter test result 188; the content destination 172; and source information 186 identifying the originator of the content and including, for example, a URL, a telephone number, a MAC address, an E-mail address of the spam generator 122, and an identification of the generating client application 140 on the wireless device.
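The classification flow of paragraph [0073] (apply filter 182, compare the calculated result against the predetermined limit, then quarantine-and-log or deliver) can be sketched as follows. The function and parameter names are hypothetical stand-ins, and the ">=" comparison direction is an assumption for illustration:

```python
def process_content(content, content_filter, predetermined_result,
                    quarantine, spam_log, deliver):
    # Apply the filter to obtain the calculated filter test result,
    # then classify against the predetermined limit.
    calculated_result = content_filter(content)
    if calculated_result >= predetermined_result:
        quarantine.append(content)                  # spam: hold for review
        spam_log.append({"content": content, "result": calculated_result})
        return "quarantined"
    deliver(content)                                # not spam: forward on
    return "delivered"

quarantine, spam_log, inbox = [], [], []
status = process_content("WIN A FREE PRIZE", lambda c: 0.95, 0.9,
                         quarantine, spam_log, inbox.append)
```

A filter result of 0.95 against a limit of 0.9 quarantines the content and records a log entry; the intended client never receives it.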
The collected/calculated information may be saved in memory 136 as part of spam log 184, where the size of the spam log 184 may be, in one aspect, configurable as well. [0075] Furthermore, for content 160 classified as spam content 163 and stored in a separate quarantine folder 164, anti-spam engine 138 may alert a user of wireless device 102 of their presence in order to initiate review of this content. Further, anti-spam engine 138 may track a storage space used and/or a time in storage and automatically delete spam content 163 based on storage limit parameter 176. The actions of reviewing and/or deleting spam content 163 may be recorded in spam log 184 as dictated by reporting parameter 178. [0076] Through use of UI 166, the user may have access to all configurable parameters with the additional capability of marking specific content as unauthorized, i.e., placing the content into the quarantine folder 164, retrieving content previously designated as unauthorized content 163 from quarantine folder 164, and controlling what spam elements to log and when to upload log 184. For example, a user may update content filter 182 upon reviewing unauthorized content 163 and providing an input that identifies the given content as authorized content. For instance, the user may identify the source 186 of the given content as a non-spammer and/or an authorized source of content, and content filter 182 may be updated accordingly. [0077] Reporting parameter 178 may configure statistic collector/reporter 168 to selectively transmit log file 184 to user manager 110 across wireless network 101. The timing of log transmission is non-limiting, and the log may be transmitted at a predetermined time, at a predetermined interval, or on an occurrence of a predetermined event, such as upon detection of at least one unauthorized content or upon request by an authorized remote device, such as user manager 110 or operator workstation 114.
Further, reporting parameter 178 may determine to whom to allow local access to log 184, thereby allowing a remote device such as the user manager 110 access to memory 136. [0078] In one non-limiting aspect, spam log 184 may be transmitted over an open communication connection between the wireless device 102 and the wireless network 101. For example, anti-spam engine 138 may "piggyback" spam log 184 onto an ongoing voice or data call across an open connection. Alternatively, in a cellular network configuration, anti-spam engine 138 may transmit spam log 184 to user manager 110 through short message service ("SMS"). Furthermore, as noted above, user manager 110 may "pull" log 184 from the wireless device 102 across the network 101 on a scheduled or ad hoc basis. [0079] Non-limiting, anti-spam engine module 138 may also include a local wireless device control module 183. Under control of control logic 162, local wireless device control module 183 may execute a locally or remotely generated control command 185 on the wireless device 102. The local device control module 183 may request authorization of a control command 185 before its execution. [0080] For example, control command 185 may be any operation executable on wireless device 102 including, but not limited to, receiving and activating a content filter configuration file 170 downloaded from the network 101 and uploading log file 184 to the network 101. [0081] Further, anti-spam engine module 138 may include a limited service configuration 187 operable to establish a limited-access communications channel across the wireless network 101 generally not available to the user of wireless device 102.
For example, the limited-access communications channel may be used for transmitting log file 184, receiving a content filter configuration file 170, as well as for receiving/generating control command 185.[0082] The identification and set-up of the limited-access communications channel may be based on a limited service setting 189. Limited service setting 189 may identify the type of communications that are allowed, and may identify the associated communication channels that can be utilized. Limited service configuration 187 may be received over the wireless network 101, may be locally transferred to wireless device 102, such as through a serial connection, or may be preloaded on the wireless device 102.[0083] Referring to Fig. 4, user manager 110 may be a server, personal computer, mini computer, mainframe computer, or any computing device operable to analyze and take proactive measures to block spam from the network 101. In some aspects, user manager 110 may operate in conjunction with operator workstation 114 to perform these functions. The user manager 110 may comprise user manager anti-spam module 190, which may include at least one of any type of hardware, software, firmware, data and executable instructions operable to generate content filter configuration file 170 and analyze spam log 184 from wireless device 102.[0084] Furthermore, there may be separate servers or computer devices associated with user manager 110 working in concert to provide data in usable formats to parties, and/or provide a separate layer of control in the data flow between the wireless device 102 and user manager anti-spam module 190. User manager 110 may send software agents or applications to wireless device 102 across wireless network 101, such that the wireless device 102 returns information from its resident applications and subsystems 150.[0085] Referring to Figs. 
4 and 5, user manager anti-spam module 190 may include a configuration generator module 198 that comprises hardware, content, software and/or any other associated logic allowing configuration generator module 198 to generate content filter configuration file 170. In one aspect, configuration generator module 198 may be operable to assemble the various components of a given content filter configuration file 170 based on selections from a number of configurable parameters. [0086] For example, configuration logic 220 may provide an authorized user with the ability to select from a menu of a plurality of content filters 208, i.e., host-based filtering, rule-based filtering, Bayesian statistical analysis, noise filters, and Sender Policy Framework ("SPF") or Sender ID filters. [0087] In addition, configuration logic 220 may provide an authorized user with the ability to select from a menu of a plurality of content destinations 210, including but not limited to resident client applications 140 on wireless device 102 and network devices on network 101, in order to generate content filter configuration file 170. [0088] Similarly, configuration logic 220 may provide an authorized user with the ability to select from a menu of at least one of a plurality of reporting parameters 212, a plurality of control command parameters 206, and a plurality of predetermined filter score result values 216. Alternatively, rather than selecting the various configuration parameters individually, configuration logic 220 may provide an authorized user with the ability to select from a menu of a plurality of predetermined content filter configurations 218, which may include predetermined groupings of the above-noted parameters that comprise content filter configuration 170. [0089] Furthermore, what may be considered as spam by one network carrier may not be considered spam by another network carrier.
Accordingly, configuration logic 220 may provide an authorized user with the ability to select from a menu of a plurality of predetermined network providers 219 to thereby associate a given configuration with a given network service provider. As such, different filtering configurations may be generated for different network providers, and a device roaming from one provider to the next may thus receive a new filtering configuration and filter out different content depending on the network provider. [0090] In addition, identification of spam may be dependent upon the specific wireless device in operation. For example, since spam may be based on the size of the content, the use of more than a predetermined portion of memory may cause content to be classified as spam. In this case, since different wireless devices have different memory sizes, such a spam definition may be device-specific. Other examples may be based on the processing ability, the graphics ability, etc., of the given wireless device. Accordingly, configuration logic 220 may provide an authorized user with the ability to select from a menu of a plurality of predetermined wireless device types 213. [0091] Once the specific parameters of a given content filter configuration 170 are determined, then configuration logic 220 may assign unique configuration ID 171 to the given configuration, and may store this configuration in a library for later recall, such as among the plurality of predetermined anti-spam content filter configurations 218. Further, configuration logic 220, and/or another component of user manager anti-spam module 190, may be operable to transmit configuration 170 to one or more wireless devices 102. In some embodiments, a command 185 may be transmitted to activate the transmitted configuration 170, or the anti-spam engine 138 on the wireless device itself may be configured to activate the newly transmitted configuration upon download.
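The parameter set assembled by configuration generator module 198 (destination 172, filter 182, predetermined result 174, storage limit 176, reporting parameter 178, configuration ID 171, plus the provider and device-type selections of paragraphs [0089] and [0090]) could be grouped roughly as below. All field names, types, and default values are illustrative assumptions, not a format defined by the document:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentFilterConfiguration:
    config_id: str                     # configuration ID 171 (e.g. name + version)
    content_destination: str           # destination 172 the parameters apply to
    content_filter: str                # filter 182 (host-based, rule-based, Bayesian, ...)
    predetermined_test_result: float   # limit 174 dividing spam from non-spam
    quarantine_days: int = 30          # storage limit 176: auto-delete age
    quarantine_max_bytes: Optional[int] = None      # storage limit 176: memory cap
    reporting_fields: List[str] = field(            # reporting parameter 178
        default_factory=lambda: ["source", "destination"])
    network_provider: Optional[str] = None          # provider selection 219
    device_type: Optional[str] = None               # wireless device type 213

cfg = ContentFilterConfiguration(
    config_id="carrier-default-v1",
    content_destination="sms_client",
    content_filter="bayesian",
    predetermined_test_result=0.9,
    network_provider="example-carrier",
)
```

Grouping the parameters this way also makes the "predetermined configurations 218" of paragraph [0088] natural: each library entry is simply one such record under a stored configuration ID.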
[0092] User manager anti-spam module 190 may include information repository 194 for storing one or more spam logs 184 from one or more wireless devices 102. Information repository 194 may include any type of memory or storage device compatible with user manager anti-spam module 190. [0093] In addition, user manager anti-spam module 190 may comprise analyzer 202 and report generator 204. Analyzer 202 may include hardware and analysis logic, such as decision-making routines, statistical programs, and combinations thereof, for analyzing and interpreting logs 184 and generating report 205. Furthermore, user manager anti-spam module 190 may be operable to make report 205 available for viewing by an authorized user, as well as to generate and transmit an E-mail message, including at least portions of report 205, to a networked device, such as to operator workstation 114. For example, report 205 may group unauthorized content 163 based on predetermined parameters, such as the originator/sender, the destination wireless device and/or client application, some portion of the content, such as a word, name or file, etc. [0094] Referring to Fig. 6, the user manager anti-spam module 190 may further comprise a remote device control module 200 operable, by execution of control logic 230, to receive/generate control command 185 to/from operator workstation 114 and/or wireless device 102. For example, control command 185 may comprise operator identification ("ID") 232 and a control activity 234. Operator ID 232 may be some manner of identifying the originator of control command 185. For example, operator ID 232 may be a name, a number, a digital signature, a hash, or any other type of data or value that may be associated with an authorized user.
Further, operator ID 232 may not be explicitly contained in the control command 185, but rather derived from the origin of control command 185. [0095] Control activity 234 may be the operation to be performed on wireless device 102 by anti-spam engine module 138 through executing control command 185. As mentioned above, the operation may include downloading configuration 170 and uploading log 184. Before executing or forwarding the control command 185, remote device control module 200 may execute permission logic 236 to verify the authenticity or authority of the party issuing control command 185. [0096] For instance, certain operators may be restricted to certain control activities, or restricted to controlling certain wireless devices. The authorization of a control command 185 may simply be a prompt to operator workstation 114 to confirm whether operator workstation 114 actually wishes to execute control activity 234 on wireless device 102. Alternatively, permission logic 236 may parse operator ID 232 and control activity 234 from control command 185 and correlate these parameters with a database of a plurality of operator IDs 226, a plurality of control permissions 224 and a plurality of wireless device identifications (IDs) 228, in order to generate a permission decision 222. [0097] It should be noted, however, that the plurality of operator IDs 226, the plurality of control permissions 224 and the plurality of wireless device identifications (IDs) 228 may be correlated in any manner. For example, control command 185 may contain an operator ID 232 and a control activity 234 of "update content filter configuration file" for a particular one of the plurality of wireless device identifications 228. Permission logic 236 may search the database of control permissions 224 and operator IDs 226 to determine if the operator was permitted to "push" a new configuration on the given wireless device 102. [0098] Referring now to Fig.
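The correlation performed by permission logic 236 in paragraphs [0096] and [0097] amounts to a lookup joining operator IDs, permitted control activities, and managed device IDs. A minimal sketch, with hypothetical names and a dictionary-backed "database" standing in for the stored pluralities 224, 226, and 228:

```python
def permission_decision(control_command, control_permissions, managed_devices):
    # Correlate the command's operator ID and control activity with the
    # stored permissions and wireless device IDs; True permits execution.
    operator = control_command["operator_id"]
    return (control_command["activity"] in control_permissions.get(operator, set())
            and control_command["device_id"] in managed_devices.get(operator, set()))

permissions = {"op-7": {"update content filter configuration file"}}
devices = {"op-7": {"device-42"}}
```

An operator permitted to push configurations to device-42 is approved for exactly that pairing, and any other activity or device yields a negative decision.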
7, operator workstation 114 may be operable to enable an authorized user to review report 205, communicate with a user of wireless device 102, download the anti-spam engine 138 and/or content filter configuration file 170 to wireless device 102, and upload the spam log 184 from the wireless device 102. Furthermore, the operator, through the operation of the operator workstation 114, may be operable to request that the message center 118 block specific spam from accessing the network 101. [0099] Operator workstation 114 may comprise an input mechanism 248 and an output mechanism 250 interconnected to a computer platform 240. The input mechanism 248 and the output mechanism 250 may be similar to their respective counterparts, 132 and 134, on wireless device 102. [00100] The operator workstation 114 may further comprise a memory 246 for storing applications and data files, a processing engine 242, and a communications module 244 operable to transmit and receive content between the operator workstation 114, the user manager 110, wireless device 102, as well as any network component on wireless network 101. Furthermore, the communications module 244 may be operable to transmit voice over the network 101, thereby allowing an operator to engage in voice communications with any wireless device user or other authorized personnel. [00101] Memory 246 may comprise an operator control module 252 made executable by processing engine 242. As the number of operator workstations 114 and the number of operators are non-limiting, an operator ID parameter 232, previously discussed in reference to Fig.
6, may be entered into memory 246 to log in to the network 101 and identify that operator to network components. [00102] The operator control module 252 may itself comprise operator anti-spam logic 254 operable in conjunction with Graphic User Interface (GUI) logic 256, input mechanism 248, and output mechanism 250, to guide the operator through any spam analysis and command activity selection and transmission. The GUI logic 256 may control, for example, browser communications, E-mail communication, text messaging, voice communication, report presentation, as well as providing a menu for selecting and transmitting any control command 185 to the user manager 110 and wireless device 102. [00103] The operator control module 252 may further comprise a remote device control module 260 similar to the remote device control module 200 of the user manager module 190. Similar to the remote device control module 200, the operator-based remote device control module 260 may generate a control command 185 operable on the wireless device 102 to perform a variety of activities, including, but not limited to: uploading log 184, and downloading anti-spam engine 138 and/or configuration 170. [00104] Although the user of operator workstation 114 may normally be a person, the workstation 114 may be a computing device comprising hardware, software, content, and combinations thereof for analyzing and responding to report 205 or to an external communication such as from the user of the wireless device 102. Such software may include algorithms, decision-making routines, statistical programs, etc., for analyzing and interpreting report 205. Further, as with the user manager anti-spam module 190, the operator workstation 114 may reside on any network device of wireless network 101, such as on user manager 110, another server connected to the network, or even on a wireless device 102. [00105] Referring to Fig.
1, wireless network 101 may include any communications network operable, at least in part, for enabling wireless communications between wireless device 102 and any other device connected to wireless network 101. Further, wireless network 101 may include all network components and all connected devices that form the network. For example, wireless network 101 may include at least one, or any combination, of: a cellular telephone network; a terrestrial telephone network; a satellite telephone network; an infrared network such as an Infrared Data Association ("IrDA")-based network; a short-range wireless network; a Bluetooth(R) technology network; a ZigBee(R) protocol network; an ultra wide band ("UWB") protocol network; a home radio frequency ("HomeRF") network; a shared wireless access protocol ("SWAP") network; a wideband network, such as a wireless Ethernet compatibility alliance ("WECA") network, a wireless fidelity alliance ("Wi-Fi Alliance") network, and an 802.11 network; a public switched telephone network; a public heterogeneous communications network, such as the Internet; a private communications network; and a land mobile radio network. [00106] Suitable examples of telephone networks include at least one, or any combination, of analog and digital networks/technologies, such as: code division multiple access ("CDMA"), wideband code division multiple access ("WCDMA"), universal mobile telecommunications system ("UMTS"), advanced mobile phone service ("AMPS"), time division multiple access ("TDMA"), frequency division multiple access ("FDMA"), orthogonal frequency division multiple access ("OFDMA"), global system for mobile communications ("GSM"), single carrier ("1X") radio transmission technology ("RTT"), evolution data only ("EV-DO") technology, general packet radio service ("GPRS"), enhanced data GSM environment ("EDGE"), high speed downlink packet access ("HSDPA"), analog and digital satellite systems, and any other technologies/protocols that may be
used in at least one of a wireless communications network and a data communications network. [00107] Referring back to Fig. 1, message center 118 may include a processor, a memory and a middleware program disposed in the memory, the middleware program operable to handle content sent for use by other programs using a messaging application program interface (API). A messaging center can usually queue and prioritize content as needed and save each of the client programs from having to perform these services. [00108] Fig. 8 illustrates a non-limiting cellular telephone system 270 that comprises at least one wireless device 102 and a cellular wireless network 288 connected to a wired network 280 via a wireless carrier network 284. Cellular telephone system 270 is merely exemplary and may include any system whereby remote modules, such as wireless devices 102, communicate packets including voice and data over-the-air between and among each other and/or between and among components of wireless network 288, including, without limitation, wireless network carriers and/or servers. [00109] According to system 270, user manager 110 may communicate over the wired network 280 (e.g. a local area network, LAN) with data repository 274 for storing spam information, such as spam log 184, gathered from the wireless device 102. Further, a data management server 278 may be in communication with user manager 110 to provide post-processing capabilities, data flow control, etc. User manager 110, data repository 274 and data management server 278 may be present along with any other network components needed to provide cellular telecommunication services. It is through the user manager 110, the data repository 274, and the data management server 278 that spam detected by the wireless device 102 may result in the carrier network 284 eventually blocking the detected spam from wireless devices 102 and/or network 288.
[00110] User manager 110 and/or data management server 278 may communicate with the carrier network 284 through data links 282 and 286, such as the Internet, a secure LAN, a WAN, or another network. Carrier network 284 may control the transmission of content (generally being data packets) sent to a mobile switching center ("MSC") 290. Further, carrier network 284 communicates with MSC 290 by a network 286, such as the Internet, and/or POTS ("plain old telephone service"). Typically, in network 286, a network or Internet portion transfers data, and the POTS portion transfers voice information. [00111] MSC 290 may be connected to multiple base stations ("BTS") 294 by another network 292, such as a data network and/or Internet portion for data transfer and a POTS portion for voice information. BTS 294 ultimately broadcasts content wirelessly to the wireless devices, such as wireless device 102, by short messaging service ("SMS") or other over-the-air methods. [00112] Referring to Fig. 9, a flowchart illustrating a method of spam detection on a wireless device may include obtaining anti-spam engine 138 at step 360. For example, the anti-spam engine module 138 may be embodied within the hardware and/or content of the wireless device 102 during the manufacture of the device 102. Alternatively, the anti-spam engine 138 may be "pushed" by user manager anti-spam module 190 to the wireless device 102 or "pulled" from a user manager anti-spam module 190 by the wireless device 102 across a wireless network 101. [00113] At step 362, content filter configuration 170 may be obtained by the wireless device 102, in a similar manner as anti-spam engine 138, and may comprise parameters defining at least one content filter 182 and reporting parameter 178. [00114] At step 364, the method includes intercepting content 160 on the wireless device 102 prior to delivery to a content destination.
For example, content 160 intended for at least one client application 140 resident on wireless device 102, i.e., a browser client, IM client, SMS client, MMS client, or E-mail client, may be intercepted prior to delivery to the intended client application. In other embodiments, content 160 may be generated on the wireless device and intercepted prior to being transmitted by communications module 152 to another device on network 101. [00115] At step 366, at least one filter 182 may be applied to the content 160. For example, the filter may be any spam filtering mechanism 182, such as: a host-based filter; a rule-based filter, i.e., filtering out content having a size greater than a user determined size, where the filter may be specific to a given network carrier; a Bayesian statistical filter; a noise filter; or a Sender Policy Framework ("SPF") or Sender Identification (ID) filter. At step 368, calculated filter test result 188 is determined based upon the application of the at least one filter 182 to the content 160. The calculated filter test result 188 may be a value that, when compared to predetermined filter test result 174 at step 370, is operable to determine whether the content 160 is spam. [00116] If the content classification indicates that the content 160 is not spam, the content may, at step 372, be forwarded to the respective content destination 172, which may be a wireless device resident client application or another network device. Alternatively, if the content classification indicates that the content 160 is likely spam, the content is not forwarded to the intended client application. Furthermore, the content 160 may, at step 374, be stored in quarantine folder 164 as spam content 163 until such time or other predefined condition when the spam content 163 may be deleted at step 376. The predefined condition, such as storage limit 176, may be obtained from the content filter configuration 170.
Furthermore, storing and deleting the quarantined content 163 may be accomplished under control of a control command 185 as part of the local device control module 183. [00117] In addition, upon determination of content 160 as spam at step 370, a record may be entered into spam log 184 at step 378, comprising at least a portion 173 of content 160, for example, content destination 172 and the source 186 of the content, and the calculated filter test result 188. The spam log 184 may then, at step 380, be provided to a remote device, such as the user manager 110 or the operator workstation 114, for further analysis. [00118] At step 381, a message may be received by the wireless device 102 in response to the transmitted spam log 184. For example, the message may comprise a control command 185 instructing the wireless device 102 to receive and upload an update to content filter configuration 170. [00119] Fig. 10 illustrates a flowchart of one aspect of a method, operable on a network device such as user manager 110, to manage content on a wireless device. In one aspect the method includes, at step 382, providing an anti-spam engine to a wireless device. In one example, user manager 110 may wirelessly transmit anti-spam engine 138, stored in the memory of the user manager, to the wireless device 102 over wireless network 101. [00120] The method further includes, at step 384, generating a content filter configuration for the wireless device. For example, the user manager 110 may generate content filter configuration 170. The user manager 110 may generate the filter configuration 170 upon request from at least one of the wireless device 102, the operator workstation 114 or the user manager anti-spam logic 192. The filter configuration 170 may be generated by the configuration generator module 198 based upon the parameters and logic shown in Fig. 5. At step 386, the content filter configuration 170 may be provided to the wireless device 102.
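The spam log record of paragraph [00117] carries a portion of the content, its destination, its source, and the calculated filter test result. A hypothetical sketch of one such entry, serialized for upload to a remote device; the field names, timestamp, and JSON encoding are illustrative assumptions, not a format the document specifies:

```python
import json
from datetime import datetime, timezone

def spam_log_entry(content, content_destination, source,
                   calculated_result, max_excerpt=64):
    # One spam log record: an excerpt of the content (portion 173),
    # its destination 172, its source 186, and the calculated result 188.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "excerpt": content[:max_excerpt],
        "destination": content_destination,
        "source": source,
        "filter_result": calculated_result,
    }

entry = spam_log_entry("WIN A FREE PRIZE...", "sms_client", "+15550100", 0.97)
serialized = json.dumps(entry)  # form suitable for transmission over the network
```

Truncating the content to an excerpt keeps each record small, which matters when the log itself is subject to a configurable size limit.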
In one example, user manager 110 may transmit configuration 170 to wireless device 102 over network 101. [00121] At step 388, the method includes receiving a spam log from the wireless device based on the content filter configuration. In one example, the user manager 110 may receive at least one spam log 184 generated by at least one wireless device 102 by applying content filter configuration 170 to content 160, and transmitted over wireless network 101. The spam log 184 may be stored in information repository 194, where it may be further analyzed by analyzer 202, which may include hardware and analysis logic, such as decision-making routines, statistical programs, and combinations thereof, for analyzing and interpreting logs 184. [00122] Based upon a result of the spam log analysis, the user manager 110 may, at step 390, generate a report 205 and make this report available to an operator workstation 114. The report 205 may be made viewable on the user manager by an authorized user such as an operator, or the user manager 110 may transmit at least portions of the report 205 over network 101 to the operator workstation 114 as an E-mail. [00123] Based upon an analysis of the spam log 184, either by an operator or by the analyzer 202, the user manager 110 may, at step 392, either generate or receive a revised content filter configuration 170. Prior to accepting the content filter configuration 170 transmitted by the operator workstation 114, the remote device control module 200 of the user manager 110 is operable to verify the authorization of the operator to update the configuration of the wireless device 102. [00124] The revised content filter configuration 170 may be made available to the wireless device 102 and/or the message center 118 at step 394. All or some portion of the filter configuration 170 may be transmitted to the wireless device 102 and/or the message center 118 over the wireless network 101.
In some cases, the wireless device 102 may request authorization confirmation prior to accepting the revisions, and the confirmation may be provided by control command 185 generated by the remote device control module 200.

[00125] Referring to Fig. 11, some embodiments of a method of spam detection on a wireless device 102 may include receiving, at step 302, at least a portion of an anti-spam engine 138 onto wireless device 102. For example, the anti-spam engine module 138 may be embodied within the hardware and/or content of the wireless device 102 during the manufacture of the device 102. Alternatively, the anti-spam engine 138 may be "pushed" by user manager anti-spam module 190 to the wireless device 102 or "pulled" from a user manager anti-spam module 190 by the wireless device 102 across a wireless network 101, depending, for example, on whether or not the wireless device 102 has the latest version of the anti-spam engine module 138 for the respective wireless device 102. The pushing or pulling of the anti-spam engine 138 to the wireless device 102 may be configurable in any manner, for example, being initiated by a predetermined event.

[00126] When activated, in some embodiments, anti-spam engine 138 may have a rudimentary content filter configuration 170. In some embodiments, a user may further configure the anti-spam engine 138 by means of input mechanism 132 and UI 166 at step 304. Alternatively, a new and/or updated content filter configuration 170 may be "pushed" by a user manager anti-spam module 190 to the wireless device 102, or may be "pulled" from a user manager anti-spam module 190 by the wireless device 102, across wireless network 101 at step 306. The loading and activation of configuration 170 may be initiated in any manner, for example, by ad hoc request by the user or by a predetermined event, such as activation, power up, or a predetermined schedule.
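A minimal sketch of how a content filter configuration like 170 might be applied to classify incoming content may help fix ideas. Everything here is illustrative: the class names, the blocked-word heuristic, and the threshold are assumptions, not the patent's actual filter 182, filter test value 174, or log format 184.

```python
from dataclasses import dataclass

@dataclass
class Content:
    body: str
    sender: str
    client_id: str  # e.g. "email", "sms", "mms" (hypothetical identifiers)

@dataclass
class FilterConfig:
    # Hypothetical stand-in for a spam filter and its filter test value.
    blocked_words: tuple = ("prize", "winner", "free")
    filter_test_value: int = 2  # scores at or above this are spam

quarantine = []  # stand-in for quarantine folder 164
spam_log = []    # stand-in for spam log 184

def apply_filter(content: Content, config: FilterConfig) -> bool:
    """Return True if content is classified as spam, False if authorized."""
    # "Filter result": here, simply a count of blocked words in the body.
    filter_result = sum(w in content.body.lower() for w in config.blocked_words)
    if filter_result >= config.filter_test_value:
        # Quarantine the content and enter a record into the spam log.
        quarantine.append(content)
        spam_log.append({"sender": content.sender,
                         "client": content.client_id,
                         "filter_result": filter_result})
        return True
    return False  # authorized: would be forwarded to its intended client
```

A real implementation would apply a per-client filter based on the client identification and would upload the accumulated log on a reporting schedule, as the surrounding paragraphs describe.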
[00127] After configuration, the anti-spam engine 138 may, at step 310, operate on wireless device 102 as a background process, processing at least a portion of any incoming content received by communications module 152 and stored in memory. The content may be received, at step 308, from a spam generator 122. Although the statistic collector/reporter 168 may apply a common filter 182 to all content types, in some embodiments, the statistic collector/reporter 168 may determine a client identification 172 associated with each content 160 and apply the corresponding filter 182 to each content 160 based on the given content filter configuration 170. Configurable client identifications may include, but are not limited to, browser, SMS, MMS, IM, and E-mail clients. Based upon a result of applying the rules comprising the applied filter, that is, a "filter result", some content may be forwarded to its intended client while other content may be classified as spam and stored in quarantine folder 164.

[00128] In some aspects, the filter result 188 may be a calculated value that, when compared to a predetermined filter test value 174, is operable to determine whether the content is authorized or is to be classified as spam.

[00129] Depending upon the at least one spam filter 182 and the parameters of the content filter configuration file 170, the anti-spam engine 138 may be operable to detect received spam, quarantine the spam in quarantine folder 164, and create a log entry in log 184. The log entry, configurable and non-limiting, may comprise the spam content 163 and/or additional information, such as sender information 186, the filter result 188 derived by applying content filter 182 to the received content, etc.

[00130] Furthermore, unauthorized content 163 stored in the quarantine folder may be removed based on the storage limit parameter 176.

[00131] Based upon reporting parameters 178, log 184 may, at step 312, be uploaded to user manager anti-spam module 190.
Such an upload mechanism may include a standard HTTP, an FTP, or other data transfer protocol. In other embodiments, the collected log file 170 may be uploaded using any communication means the wireless device 102 may access.

[00132] At step 314, user manager anti-spam module 190 may store spam log 184 in information repository 194, analyze the contents of the spam log, and generate a report 205 based upon that analysis.

[00133] At step 316, the user manager anti-spam module 190 may transmit the report 205 to an operator workstation 114 for further analysis and action. Report 205 may include any form of output that represents analysis of log 184 and other information contained in the information repository 194, as well as any other associated information, such as reports of spam, new filtering techniques, etc.

[00134] Although user manager anti-spam module 190 may generate report 205, the user manager 110 and its corresponding components may be operable to present a view of spam-related information collected from the wireless device 102 in any form, such as tables, maps, graphic views, plain text, interactive programs or web pages, or any other display or presentation of the data. For example, user manager anti-spam module 190 may present content authorization related information on a monitor or display device, and/or may transmit this information, such as via electronic mail, to another computer device for further analysis or review through such mechanisms as a standard HTTP, an FTP, or some other data transfer protocol.

[00135] At step 318, an authorized user of operator workstation 114 may analyze report 205 and decide, for example, to contact message center 118. In one aspect, the operator workstation 114 may transmit, at step 320, an appropriately composed message to the user manager 110, to be forwarded, at step 322, to the message center 118. In an alternate embodiment, the operator workstation may send a message directly to the message center 118.
Such a message may be in any format suitable to both the sender and receiver, including, but not limited to, E-mail, SMS text messaging, and telephonic communication.

[00136] Based upon the received message from the operator, the message center 118 may update its own filters and, at step 324, block future content from spam generator 122.

[00137] Fig. 12 represents an additional aspect of the herein disclosed system 100, in which a user of wireless device 102, upon receiving spam on at least one of their wireless device resident client applications, contacts, at step 330, operator 114 regarding the charges accrued due to the unsolicited content ("spam"). As disclosed above, the communication between the user and the operator may be by electronic message or by real-time voice communication.

[00138] The wireless device 102 may require a download of the anti-spam module 138 or may simply require an update to the content filter configuration file 170. At step 332, the operator workstation 114 is operable to transmit a message to the user manager 110 requesting the user manager module 190 to "push," at step 334, anti-spam module 138 and/or a content filter configuration file 170 to the wireless device 102.

[00139] Further at step 334, a control command 185 may be generated by the operator workstation 114 and forwarded to the wireless device 102. The control command 185 may operate to verify the authenticity and authorization of the operator/user manager to command the wireless device 102 to perform a specific action. In one non-limiting aspect, remote device control module 200 may execute permission logic 236 to make permission decision 222 as to whether or not to relay an operator-generated control command 185 to a specific wireless device 102.
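The permission decision described in the preceding paragraph can be pictured with a brief sketch. The identifiers, command names, and table structure below are hypothetical; the patent does not specify how permission logic 236 is implemented.

```python
# Hypothetical authorization table: (operator, command) -> set of devices
# the operator may target with that command.
AUTHORIZED = {
    ("operator-114", "push_filter_config"): {"device-102"},
    ("operator-114", "request_log_upload"): {"device-102"},
}

def permission_decision(operator_id: str, command: str, device_id: str) -> bool:
    """Return True if the control command may be relayed to the device.

    A stand-in for permission decision 222: the command is relayed only
    when the operator is authorized to issue it to that specific device.
    """
    return device_id in AUTHORIZED.get((operator_id, command), set())
```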
[00140] Whether or not the operator workstation 114 has initiated a download of the anti-spam engine 138 and/or the content filter configuration file 170, new unauthorized or junk content, received by the wireless device 102 at step 336, may, at step 338, be filtered and prevented from reaching its targeted client. In addition, the filtered content is logged in log file 170, which, based upon reporting parameters 178, may be uploaded to user manager 110 for analysis at step 340. Similar to the message sequence of Fig. 9, a report 205 may be generated by the user manager 110 at step 342 and forwarded to the operator workstation 114 at step 344.

[00141] Steps 346, 348, 350, and 352 of Fig. 12, operating similarly to steps 318, 320, 322, and 324 of Fig. 11, enable the user of operator workstation 114 to analyze spam report 205 and take the appropriate steps to have the message center 118 block similar spam attacks from clogging network 101.

[00142] In another aspect (not shown), upon a user complaint, the operator workstation 114 may simply send a request to the wireless device 102 to upload the current log 184 and/or the currently active configuration 170, without updating the content filter configuration file 170, in order to determine the current level of spam protection on the wireless device 102.

[00143] The various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[00144] Further, the steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[00145] While the foregoing disclosure shows illustrative aspects and/or embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the described aspects and/or embodiments as defined by the appended claims. Furthermore, although elements of the described embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, all or a portion of any aspect and/or embodiment may be utilized with all or a portion of any other aspect and/or embodiment, unless stated otherwise.
As part of a communication session, a wireless source device can transmit audio and video data to a wireless sink device, and the wireless sink device can transmit user input data received at the wireless sink device back to the wireless source device. In this manner, a user of the wireless sink device can control the wireless source device and control the content that is being transmitted from the wireless source device to the wireless sink device. The input data received at the wireless sink device can be a multi-touch gesture.
1. A method of sending user input data from a wireless sink device to a wireless source device, the method comprising: obtaining user input data for a multi-touch gesture; generating a data packet header; generating payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; generating a data packet comprising the data packet header and the payload data; and sending the data packet to the wireless source device.

2. The method of claim 1, wherein the payload data comprises a field identifying a number of pointers, and wherein each pointer is associated with a unique pointer identification.

3. The method of claim 1, wherein the number of pointers is greater than two.

4. The method of claim 1, wherein the first touch input event and the second touch input event occur substantially simultaneously.

5. The method of claim 1, wherein the first touch input event is selected from the group consisting of a touch down on an input device, a touch up from the input device, and a touch move on the input device.

6. The method of claim 1, wherein the payload data comprises: a first touch down input associated with the first pointer identification; a first touch move input associated with the first pointer identification; a first touch up input associated with the first pointer identification; a second touch down input associated with the second pointer identification; a second touch move input associated with the second pointer identification; and a second touch up input associated with the second pointer identification.

7. The method of claim 1, wherein the data packet header comprises a field identifying an input category of the user input data, and the field identifies a generic input.

8. The method of claim 1, wherein the payload data comprises a length field identifying a length of an input description.

9. The method of claim 8, wherein the length of the input description is identified in units of octets.

10. The method of claim 1, wherein the payload data comprises a description field describing details of the user input data.

11. The method of claim 1, wherein the payload data identifies: a first x coordinate and a first y coordinate corresponding to coordinates at which the first touch input event occurs; and a second x coordinate and a second y coordinate corresponding to coordinates at which the second touch input event occurs.

12. The method of claim 11, wherein the first x coordinate, the first y coordinate, the second x coordinate, and the second y coordinate are based on a negotiated resolution of a video stream between the wireless sink device and the wireless source device.

13. The method of claim 1, wherein obtaining the user input data comprises capturing the user input data through an input device of the wireless sink device.

14. The method of claim 1, wherein obtaining the user input data comprises receiving forwarded user input data from another wireless sink device.

15. The method of claim 1, wherein the data packet header is an application layer packet header.

16. The method of claim 1, wherein the data packet header is for controlling audio data or video data of the wireless source device.

17. The method of claim 1, wherein the data packet is sent via TCP/IP.

18. A wireless sink device configured to send user input data to a wireless source device, the wireless sink device comprising: a memory storing instructions; one or more processors configured to execute the instructions, wherein execution of the instructions causes the one or more processors to: obtain user input data for a multi-touch gesture; generate a data packet header; generate payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; and generate a data packet comprising the data packet header and the payload data; and a transmission unit configured to send the data packet to the wireless source device.

19. The device of claim 18, wherein the payload data comprises a field identifying a number of pointers, and wherein each pointer is associated with a unique pointer identification.

20. The device of claim 18, wherein the number of pointers is greater than two.

21. The device of claim 18, wherein the first touch input event and the second touch input event occur substantially simultaneously.

22. The device of claim 18, wherein the first touch input event is selected from the group consisting of a touch down on an input device, a touch up from the input device, and a touch move on the input device.

23. The device of claim 18, wherein the payload data comprises: a first touch down input associated with the first pointer identification; a first touch move input associated with the first pointer identification; a first touch up input associated with the first pointer identification; a second touch down input associated with the second pointer identification; a second touch move input associated with the second pointer identification; and a second touch up input associated with the second pointer identification.

24. The device of claim 18, wherein the data packet header comprises a field identifying an input category of the user input data, and the field identifies a generic input.

25. The device of claim 18, wherein the payload data comprises a length field identifying a length of an input description.

26. The device of claim 25, wherein the length of the input description is identified in units of octets.

27. The device of claim 18, wherein the payload data comprises a description field describing details of the user input data.

28. The device of claim 18, wherein the payload data identifies: a first x coordinate and a first y coordinate corresponding to coordinates at which the first touch input event occurs; and a second x coordinate and a second y coordinate corresponding to coordinates at which the second touch input event occurs.

29. The device of claim 28, wherein the first x coordinate, the first y coordinate, the second x coordinate, and the second y coordinate are based on a negotiated resolution of a video stream between the wireless sink device and the wireless source device.

30. The device of claim 18, wherein obtaining the user input data comprises capturing the user input data through an input device of the wireless sink device.

31. The device of claim 18, wherein obtaining the user input data comprises receiving forwarded user input data from another wireless sink device.

32. The device of claim 18, wherein the data packet header is an application layer packet header.

33. The device of claim 18, wherein the data packet header is for controlling audio data or video data of the wireless source device.

34. The device of claim 18, wherein the data packet is sent via TCP/IP.

35. A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform a method of sending user input data from a wireless sink device to a wireless source device, the method comprising: obtaining user input data for a multi-touch gesture; generating a data packet header; generating payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; generating a data packet comprising the data packet header and the payload data; and sending the data packet to the wireless source device.

36. A wireless sink device configured to send user input to a wireless source device, the wireless sink device comprising: means for obtaining user input data for a multi-touch gesture; means for generating a data packet header; means for generating payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; means for generating a data packet comprising the data packet header and the payload data; and means for sending the data packet to the wireless source device.

37. A method of receiving user input data from a wireless sink device at a wireless source device, the method comprising: receiving a data packet comprising a data packet header and payload data; parsing the payload data to identify: user input data for a first touch input event having a first pointer identification, and user input data for a second touch input event having a second pointer identification; and interpreting the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.

38. The method of claim 37, wherein the payload data comprises a field identifying a number of pointers, and wherein each pointer is associated with a unique pointer identification.

39. The method of claim 37, wherein the number of pointers is greater than two.

40. The method of claim 37, wherein the first touch input event and the second touch input event occur substantially simultaneously at the wireless sink device.

41. The method of claim 37, wherein the first touch input event is selected from the group consisting of a touch down on an input device, a touch up from the input device, and a touch move on the input device.

42. The method of claim 37, wherein the payload data comprises: a first touch down input event associated with the first pointer identification; a first touch move input event associated with the first pointer identification; a first touch up input event associated with the first pointer identification; a second touch down input event associated with the second pointer identification; a second touch move input event associated with the second pointer identification; and a second touch up input event associated with the second pointer identification; and wherein the method further comprises: interpreting the first touch down input event, the first touch move input event, and the first touch up input event as a first sequence of touch input events; interpreting the second touch down input event, the second touch move input event, and the second touch up input event as a second sequence of touch input events; and interpreting the first sequence of touch input events and the second sequence of touch input events as a multi-touch gesture.

43. The method of claim 37, wherein the data packet header comprises a field identifying an input category of the user input data, and the field identifies a generic input.

44. The method of claim 37, wherein the payload data comprises a length field identifying a length of an input description.

45. The method of claim 44, wherein the length of the input description is identified in units of octets.

46. The method of claim 37, wherein the payload data comprises a description field describing details of the user input data.

47. The method of claim 37, wherein the payload data identifies: a first x coordinate and a first y coordinate corresponding to coordinates at which the first touch input event occurs; and a second x coordinate and a second y coordinate corresponding to coordinates at which the second touch input event occurs.

48. The method of claim 47, wherein the first x coordinate, the first y coordinate, the second x coordinate, and the second y coordinate are based on a negotiated resolution of a video stream between the wireless sink device and the wireless source device.

49. The method of claim 37, wherein obtaining the user input data comprises capturing the user input data through an input device of the wireless sink device.

50. The method of claim 37, wherein obtaining the user input data comprises receiving forwarded user input data from another wireless sink device.

51. The method of claim 37, wherein the data packet header is an application layer packet header.

52. The method of claim 37, wherein the data packet header is for controlling audio data or video data of the wireless source device.

53. The method of claim 37, wherein the data packet is sent via TCP/IP.

54. A wireless source device configured to receive user input data from a wireless sink device, the wireless source device comprising: a transmission unit configured to receive a data packet comprising a data packet header and payload data; a memory storing instructions; and one or more processors configured to execute the instructions, wherein execution of the instructions causes the one or more processors to: parse the payload data to identify: user input data for a first touch input event having a first pointer identification, and user input data for a second touch input event having a second pointer identification; and interpret the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.

55. The wireless source device of claim 54, wherein the payload data comprises a field identifying a number of pointers, and wherein each pointer is associated with a unique pointer identification.

56. The wireless source device of claim 54, wherein the number of pointers is greater than two.

57. The wireless source device of claim 54, wherein the first touch input event and the second touch input event occur substantially simultaneously at the wireless sink device.

58. The wireless source device of claim 54, wherein the first touch input event is selected from the group consisting of a touch down on an input device, a touch up from the input device, and a touch move on the input device.

59. The wireless source device of claim 54, wherein the payload data comprises: a first touch down input event associated with the first pointer identification; a first touch move input event associated with the first pointer identification; a first touch up input event associated with the first pointer identification; a second touch down input event associated with the second pointer identification; a second touch move input event associated with the second pointer identification; and a second touch up input event associated with the second pointer identification; and wherein the one or more processors are further caused to: interpret the first touch down input event, the first touch move input event, and the first touch up input event as a first sequence of touch input events; interpret the second touch down input event, the second touch move input event, and the second touch up input event as a second sequence of touch input events; and interpret the first sequence of touch input events and the second sequence of touch input events as a multi-touch gesture.

60. The wireless source device of claim 54, wherein the data packet header comprises a field identifying an input category of the user input data, and the field identifies a generic input.

61. The wireless source device of claim 54, wherein the payload data comprises a length field identifying a length of an input description.

62. The wireless source device of claim 61, wherein the length of the input description is identified in units of octets.

63. The wireless source device of claim 54, wherein the payload data comprises a description field describing details of the user input data.

64. The wireless source device of claim 54, wherein the payload data identifies: a first x coordinate and a first y coordinate corresponding to coordinates at which the first touch input event occurs; and a second x coordinate and a second y coordinate corresponding to coordinates at which the second touch input event occurs.

65. The wireless source device of claim 64, wherein the first x coordinate, the first y coordinate, the second x coordinate, and the second y coordinate are based on a negotiated resolution of a video stream between the wireless sink device and the wireless source device.

66. The wireless source device of claim 54, wherein obtaining the user input data comprises capturing the user input data through an input device of the wireless sink device.

67. The wireless source device of claim 54, wherein obtaining the user input data comprises receiving forwarded user input data from another wireless sink device.

68. The wireless source device of claim 54, wherein the data packet header is an application layer packet header.

69. The wireless source device of claim 54, wherein the data packet header is for controlling audio data or video data of the wireless source device.

70. The wireless source device of claim 54, wherein the data packet is sent via TCP/IP.

71. A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform a method of receiving user input data from a wireless sink device at a wireless source device, the method comprising: receiving a data packet comprising a data packet header and payload data; parsing the payload data to identify: user input data for a first touch input event having a first pointer identification, and user input data for a second touch input event having a second pointer identification; and interpreting the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.

72. A wireless source device configured to receive user input data from a wireless sink device, the wireless source device comprising: means for receiving a data packet comprising a data packet header and payload data; means for parsing the payload data to identify: user input data for a first touch input event having a first pointer identification, and user input data for a second touch input event having a second pointer identification; and means for interpreting the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.
User input back channel for wireless display

This application claims the benefit of the following U.S. provisional applications:
U.S. Provisional Application No. 61/435,194, filed on January 21, 2011;
U.S. Provisional Application No. 61/447,592, filed on February 28, 2011;
U.S. Provisional Application No. 61/448,312, filed on March 2, 2011;
U.S. Provisional Application No. 61/450,101, filed on March 7, 2011;
U.S. Provisional Application No. 61/467,535, filed on March 25, 2011;
U.S. Provisional Application No. 61/467,543, filed on March 25, 2011;
U.S. Provisional Application No. 61/514,863, filed on August 3, 2011; and
U.S. Provisional Application No. 61/544,440, filed on October 7, 2011;
the entire contents of each of which are incorporated herein by reference.

Technical field

The present disclosure relates to techniques for transmitting data between wireless source devices and wireless sink devices.

Background

A wireless display (WD) or Wi-Fi display (WFD) system includes a wireless source device and one or more wireless sink devices. The source device and each sink device may be mobile devices or wired devices with wireless communication capabilities. For example, one or more of the source device and the sink devices may include a mobile phone, a portable computer with a wireless communication card, a personal digital assistant (PDA), a portable media player, or another such device with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, e-readers, any type of wireless display, video game devices, or other types of wireless communication devices.
One or more of the source device and the sink devices may also include wired devices with communication capabilities, such as televisions, desktop computers, monitors, projectors, and the like.

The source device sends media data, such as audio video (AV) data, to one or more of the sink devices participating in a particular media sharing session. The media data may be played back at both the source device's local display and each of the sink devices' displays. More specifically, each of the participating sink devices renders the received media data on its screen and audio equipment.

Summary

This disclosure generally describes a system in which a wireless source device can communicate with a wireless sink device. As part of the communication session, the wireless source device can send audio and video data to the wireless sink device, and the wireless sink device can send user input received at the wireless sink device back to the wireless source device. In this way, the user of the wireless sink device can control the wireless source device and can control the content transmitted from the wireless source device to the wireless sink device.

In one example, a method of sending user input data from a wireless sink device to a wireless source device includes: obtaining user input data for a multi-touch gesture; generating a data packet header; generating payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; generating a data packet comprising the data packet header and the payload data; and sending the data packet to the wireless source device.

In another example, a wireless sink device is configured to send user input data to a wireless source device. The wireless sink device includes a memory that stores instructions and one or more processors configured to execute the instructions.
Upon execution of the instructions, the one or more processors: obtain user input data for a multi-touch gesture; generate a packet header; generate payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; and generate a data packet including the packet header and the payload data. The wireless sink device further includes a transport unit configured to send the data packet to the wireless source device.

In another example, a computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform a method of transmitting user input data from a wireless sink device to a wireless source device. The method includes: obtaining user input data for a multi-touch gesture; generating a packet header; generating payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; generating a data packet including the packet header and the payload data; and sending the data packet to the wireless source device.

In another example, a wireless sink device is configured to send user input to a wireless source device.
The wireless sink device includes: a module for obtaining user input data for a multi-touch gesture; a module for generating a packet header; a module for generating payload data, wherein the payload data associates user input data for a first touch input event with a first pointer identification and associates user input data for a second touch input event with a second pointer identification; a module for generating a data packet including the packet header and the payload data; and a module for sending the data packet to the wireless source device.

In another example, a method of receiving user input data from a wireless sink device at a wireless source device includes: receiving a data packet including a data packet header and payload data; parsing the payload data to identify user input data with a first pointer identification for a first touch input event and user input data with a second pointer identification for a second touch input event; and interpreting the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.

In another example, a wireless source device is configured to receive user input data from a wireless sink device.
The wireless source device includes: a transport unit for receiving a data packet including a data packet header and payload data; a memory that stores instructions; and one or more processors configured to execute the instructions, wherein upon execution of the instructions the one or more processors: parse the payload data to identify user input data with a first pointer identification for a first touch input event and user input data with a second pointer identification for a second touch input event; and interpret the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.

In another example, a computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform a method of receiving user input data from a wireless sink device at a wireless source device. The method includes: receiving a data packet including a data packet header and payload data; parsing the payload data to identify user input data with a first pointer identification for a first touch input event and user input data with a second pointer identification for a second touch input event; and interpreting the user input data for the first touch input event and the user input data for the second touch input event as a multi-touch gesture.

In another example, a wireless source device configured to receive user input data from a wireless sink device includes: a module for receiving a data packet including a data packet header and payload data; a module for parsing the payload data to identify user input data with a first pointer identification for a first touch input event and user input data with a second pointer identification for a second touch input event; and a module for interpreting the user input data for the first touch input event and the user
input data for the second touch input event as a multi-touch gesture.

Brief description of the drawings

FIG. 1A is a block diagram showing an example of a source/sink system that can implement the techniques of this disclosure.
FIG. 1B is a block diagram showing an example of a source/sink system having two sink devices.
FIG. 2 shows an example of a source device that can implement the techniques of this disclosure.
FIG. 3 shows an example of a sink device that can implement the techniques of this disclosure.
FIG. 4 shows a block diagram of a transmitter system and a receiver system that can implement the techniques of this disclosure.
FIGS. 5A and 5B show example message transfer sequences for performing capability negotiation according to the techniques of this disclosure.
FIG. 6 shows an example data packet that can be used to deliver user input data obtained at a sink device to a source device.
FIGS. 7A and 7B are flowcharts showing techniques of this disclosure that can be used for capability negotiation between a source device and a sink device.
FIGS. 8A and 8B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets with user input data.
FIGS. 9A and 9B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets with user input data.
FIGS. 10A and 10B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets with timestamp information and user input data.
FIGS. 11A and 11B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets with timestamp information and user input data.
FIGS. 12A and 12B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets including voice commands.
FIGS. 13A and 13B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets with multi-touch user input commands.
FIGS. 14A
and 14B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets with user input data forwarded from a third-party device.
FIGS. 15A and 15B are flowcharts showing techniques of this disclosure that can be used to send and receive data packets.

Detailed description

This disclosure generally describes a system in which a wireless source device can communicate with a wireless sink device. As part of a communication session, the wireless source device can send audio and video data to the wireless sink device, and the wireless sink device can send user input received at the wireless sink device back to the wireless source device. In this way, the user of the wireless sink device can control the wireless source device, and can control the content transmitted from the wireless source device to the wireless sink device.

FIG. 1A is a block diagram illustrating an exemplary source/sink system 100 that can implement one or more of the techniques of this disclosure. As shown in FIG. 1A, the system 100 includes a source device 120 that communicates with a sink device 160 via a communication channel 150. The source device 120 may include a memory that stores audio/video (A/V) data 121, a display 122, a speaker 123, an audio/video encoder 124 (also referred to as encoder 124), an audio/video control module 125, and a transmitter/receiver (TX/RX) unit 126. The sink device 160 may include a display 162, a speaker 163, an audio/video decoder 164 (also referred to as decoder 164), a transmitter/receiver unit 166, a user input (UI) device 167, and a user input processing module (UIPM) 168. The components shown constitute only one example configuration of the source/sink system 100. Other configurations may include fewer components than those shown, or may include additional components beyond those shown.

In the example of FIG.
1A, the source device 120 may display the video portion of the audio/video data 121 on the display 122 and may output the audio portion of the audio/video data 121 on the speaker 123. The audio/video data 121 may be stored locally on the source device 120, accessed from an external storage medium (such as a file server, hard drive, external memory, Blu-ray disc, DVD, or other physical storage medium), or streamed to the source device 120 via a network connection such as the Internet. In some cases, the audio/video data 121 may be captured in real time via a camera and microphone of the source device 120. The audio/video data 121 may include multimedia content such as movies, television programs, or music, but may also include real-time content generated by the source device 120. Such real-time content may, for example, be produced by applications running on the source device 120, or be captured video data (for example, as part of a video telephony session). As will be described in more detail, such real-time content may in some cases include video frames of user input options available for a user to select. In some cases, the audio/video data 121 may include video frames that combine different types of content, such as a video frame of a movie or television program with user input options overlaid on the frame.

In addition to rendering the audio/video data 121 locally via the display 122 and the speaker 123, the audio/video encoder 124 of the source device 120 can encode the audio/video data 121, and the transmitter/receiver unit 126 can send the encoded data over the communication channel 150 to the sink device 160. The transmitter/receiver unit 166 of the sink device 160 receives the encoded data, and the audio/video decoder 164 decodes the encoded data and outputs the decoded data via the display 162 and the speaker 163.
In this way, the audio and video data presented by the display 122 and the speaker 123 can be simultaneously presented by the display 162 and the speaker 163. The audio data and video data can be arranged in frames, and when presented, the audio frames may be time-synchronized with the video frames.

The audio/video encoder 124 and the audio/video decoder 164 can implement any number of audio and video compression standards, such as the ITU-T H.264 standard (alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC)) or the emerging High Efficiency Video Coding (HEVC) standard (sometimes called the H.265 standard). Many other types of proprietary or standardized compression techniques can also be used. In general, the audio/video decoder 164 is configured to perform the reciprocal decoding operation of the audio/video encoder 124. Although not shown in FIG. 1A, in some aspects both the A/V encoder 124 and the A/V decoder 164 may be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX (multiplex-demultiplex) units or other hardware and software to handle the encoding of both audio and video in a common data stream or in separate data streams.

As will be described in more detail below, in addition to implementing the video compression standards described above, the A/V encoder 124 can perform other encoding functions. For example, before sending the A/V data 121 to the sink device 160, the A/V encoder 124 may add various types of metadata to the A/V data 121. In some cases, the A/V data 121 may be stored on the source device 120, or received at the source device 120, already in an encoded form, so that no further compression by the A/V encoder 124 is required.

Although FIG. 1A shows the communication channel 150 carrying audio payload data and video payload data separately, it should be understood that in some cases video payload data and audio payload data may be part of a common data stream.
If applicable, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol or to other protocols such as the User Datagram Protocol (UDP). The audio/video encoder 124 and the audio/video decoder 164 may each be implemented as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combination thereof. Each of the audio/video encoder 124 and the audio/video decoder 164 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC). Thus, each of the source device 120 and the sink device 160 may include a dedicated machine configured to perform one or more of the techniques of this disclosure.

The display 122 and the display 162 may include any of a variety of video output devices, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, or another type of display device. In these or other examples, the displays 122 and 162 may each be an emissive display or a transmissive display. The displays 122 and 162 may also be touch displays, such that each is simultaneously an input device and a display device. Such touch displays may be capacitive, resistive, or other types of touch panels that allow users to provide user input to their respective devices.

The speaker 123 may include any of a variety of audio output devices, such as headphones, a single-speaker system, a multi-speaker system, or a surround sound system. Furthermore, although the display 122 and the speaker 123 are shown as part of the source device 120 and the display 162 and the speaker 163 are shown as part of the sink device 160, the source device 120 and the sink device 160 may in fact each be a system of devices.
As one example, the display 162 may be a television, the speaker 163 may be a surround sound system, and the decoder 164 may be part of a separate box connected to the display 162 and the speaker 163 by wire or wirelessly. In other cases, the sink device 160 may be a single device such as a tablet computer or smartphone. In still other cases, the source device 120 and the sink device 160 are similar devices, for example both smartphones, both tablet computers, or the like. In this case, one device can operate as the source and the other as the sink, and these roles may even be reversed in subsequent communication sessions. In yet other cases, the source device may be a mobile device such as a smartphone, laptop, or tablet computer, and the sink device may be a more stationary device (for example, with an AC power cord), in which case the source device can deliver audio and video data through the sink device for presentation to a larger audience.

Both the transmitter/receiver unit 126 and the transmitter/receiver unit 166 may include various mixers, filters, amplifiers, and other components designed for signal modulation, as well as one or more antennas and other components designed for transmitting and receiving data. The communication channel 150 generally represents any suitable communication medium, or collection of different communication media, for sending video data from the source device 120 to the sink device 160. The communication channel 150 is generally a relatively short-range communication channel, similar to Wi-Fi, Bluetooth, and the like. However, the communication channel 150 is not necessarily limited in this respect, and may include any wireless or wired communication medium (such as a radio frequency (RF) spectrum or one or more physical transmission lines) or any combination of wireless and wired media.
In other examples, the communication channel 150 may even form part of a packet-based network, such as a wired or wireless local area network, a wide area network, or a global network such as the Internet. In addition, the communication channel 150 may be used by the source device 120 and the sink device 160 to create a peer-to-peer link. The source device 120 and the sink device 160 can communicate over the communication channel 150 using a standard communication protocol, such as one from the IEEE 802.11 family of standards. For example, the source device 120 and the sink device 160 may communicate according to the Wi-Fi Direct standard, such that the source device 120 and the sink device 160 communicate directly with each other without the use of an intermediary such as a wireless access point or a so-called hotspot. The source device 120 and the sink device 160 may also establish a tunneled direct link setup (TDLS) connection to avoid or reduce network congestion. The techniques of this disclosure may at times be described with reference to Wi-Fi, but it is contemplated that aspects of these techniques are also compatible with other communication protocols. By way of example and not limitation, the wireless communication between the source device 120 and the sink device may use orthogonal frequency-division multiplexing (OFDM) techniques. A wide variety of other wireless communication techniques can also be used, including but not limited to time-division multiple access (TDMA), frequency-division multiple access (FDMA), code-division multiple access (CDMA), or any combination of OFDM, FDMA, TDMA, and/or CDMA. Wi-Fi Direct and TDLS are intended to establish relatively short-range communication sessions.
In this context, a relatively short range may refer to, for example, less than 70 meters, although in noisy or obstructed environments the distance between devices may be even shorter, such as less than 35 meters.

In addition to decoding and rendering the data received from the source device 120, the sink device 160 may also receive user input from the user input device 167. The user input device 167 may be, for example, a keyboard, mouse, trackball or trackpad, touch screen, voice command recognition module, or any other such user input device. The UIPM 168 formats the user input commands received by the user input device 167 into a data packet structure that the source device 120 can interpret. These data packets are sent by the transmitter/receiver 166 over the communication channel 150 to the source device 120. The transmitter/receiver unit 126 receives the data packets, and the A/V control module 125 parses the data packets to interpret the user input commands received by the user input device 167. Based on the commands received in the data packets, the A/V control module 125 can change the content being encoded and transmitted. In this way, the user of the sink device 160 can remotely control the audio payload data and video payload data sent by the source device 120 without directly interacting with the source device 120. Examples of the types of commands the user of the sink device 160 can send to the source device 120 include commands for rewinding, fast-forwarding, pausing, and playing audio and video data, as well as commands for zooming, rotating, scrolling, and so on. The user can also, for example, make a selection from a menu of options and send that selection back to the source device 120.

In addition, the user of the sink device 160 can launch and control applications on the source device 120. For example, a user of the sink device 160 can launch a photo editing application stored on the source device 120 and use that application to edit photos stored locally on the source device 120.
The sink device 160 may present the user with a user experience that looks and feels as though the photo is being edited locally on the sink device 160, while the photo is actually being edited on the source device 120. With such a configuration, a device user can leverage the capabilities of one device from several other devices. For example, the source device 120 may be a smartphone with a large amount of memory and high-end processing capabilities. The user of the source device 120 can use the smartphone in all the settings and situations in which smartphones are normally used. However, when watching a movie, the user may wish to watch it on a device with a larger display screen, in which case the sink device 160 may be a tablet computer or an even larger display device or television. When wanting to send or reply to email, the user may wish to use a device with a keyboard, in which case the sink device 160 may be a laptop computer. In both cases, although the user interacts with the sink device, most of the processing can still be performed by the source device 120 (in this example, a smartphone). Since most of the processing in this particular operating context is performed by the source device 120, the sink device 160 can be a lower-cost device with fewer resources than would be required if it had to perform the processing being done by the source device 120. In some examples, both the source device and the sink device are capable of accepting user input (such as touch screen commands), and the techniques of this disclosure can facilitate two-way interaction by negotiating and/or identifying the capabilities of the devices in any given session.

In some configurations, the A/V control module 125 may be an operating system process executed by the operating system of the source device 120. In other configurations, however, the A/V control module 125 may be a software process of an application running on the source device 120.
In such a configuration, the user input commands may be interpreted by the software process, so that the user of the sink device 160 interacts directly with the application running on the source device 120 rather than with the operating system running on the source device 120. By interacting directly with the application rather than the operating system, the user of the sink device 160 can have access to a library of commands that are not native to the operating system of the source device 120. In addition, interacting directly with applications can make commands easier to send and process for devices running on different platforms.

The source device 120 may respond to user input applied at the wireless sink device 160. In such an interactive application setting, the user input applied at the wireless sink device 160 can be sent back to the wireless display source over the communication channel 150. In one example, a reverse channel architecture, also referred to as a user input back channel (UIBC), may be implemented to enable the sink device 160 to send the user input applied at the sink device 160 to the source device 120. The reverse channel architecture may include upper-layer messages for transporting user input and lower-layer frames for negotiating user interface capabilities at the sink device 160 and the source device 120. The UIBC may reside over the Internet Protocol (IP) transport layer between the sink device 160 and the source device 120. In this way, the UIBC can sit above the transport layer in the Open Systems Interconnection (OSI) communication model. In one example, the OSI model includes seven layers (1-physical, 2-data link, 3-network, 4-transport, 5-session, 6-presentation, and 7-application); in this example, layers 5, 6, and 7 are referred to as being above the transport layer.
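As one concrete illustration of the kind of packet structure the UIPM 168 might produce for a multi-touch gesture, the sketch below serializes a small header followed by one record per touch point, so that each touch input event is associated with its own pointer identification. The field layout, field sizes, and category code here are hypothetical, chosen only for illustration; they are not the Wi-Fi Display wire format.

```python
import struct

INPUT_CATEGORY_MULTITOUCH = 0x02  # hypothetical category code

def build_multitouch_packet(touches):
    """touches: list of (pointer_id, x, y) tuples, one per touch point."""
    # Header: a category byte plus a count of touch records.
    header = struct.pack(">BB", INPUT_CATEGORY_MULTITOUCH, len(touches))
    # Payload: each 5-byte record ties a pointer ID to its (x, y)
    # coordinates, so the source can tell the touch events apart.
    payload = b"".join(struct.pack(">BHH", pid, x, y) for pid, x, y in touches)
    return header + payload

def parse_multitouch_packet(packet):
    """Recover the category and the per-pointer touch records."""
    category, count = struct.unpack(">BB", packet[:2])
    records = [struct.unpack(">BHH", packet[2 + 5 * i:2 + 5 * (i + 1)])
               for i in range(count)]
    return category, records
```

For a two-finger gesture, the sink would pack records for pointer identifications 0 and 1, and the source would parse them back out and interpret the pair as a single multi-touch gesture.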
To promote reliable transmission and in-order delivery of data packets containing user input data, the UIBC can be configured to run on top of other packet-based communication protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP) or the User Datagram Protocol (UDP). UDP and TCP can operate in parallel in the OSI layer architecture. TCP/IP can enable the sink device 160 and the source device 120 to implement retransmission techniques in the event of packet loss.

In some cases, there may be a mismatch between the user input interfaces located at the source device 120 and the sink device 160. To resolve the potential problems created by such a mismatch and to promote a good user experience under such circumstances, user input interface capability negotiation may occur between the source device 120 and the sink device 160 before establishing a communication session, or at various times throughout a communication session. As part of this negotiation process, the source device 120 and the sink device 160 can agree on a negotiated screen resolution. When the sink device 160 sends coordinate data associated with user input, the sink device 160 may scale the coordinate data obtained from the display 162 to match the negotiated screen resolution. In one example, if the sink device 160 has a 1280x720 resolution and the source device 120 has a 1600x900 resolution, the devices may, for example, use 1280x720 as their negotiated resolution. Although the resolution of the source device 120, or some other resolution, may also be used, the negotiated resolution may be chosen based on the resolution of the sink device 160. In the example of a 1280x720 sink device, the sink device 160 may scale the obtained x coordinates by a factor of 1600/1280 before sending the coordinates to the source device 120, and similarly may scale the obtained y coordinates by a factor of 900/720 before sending them.
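The resolution scaling in the example above can be sketched as follows, assuming the negotiated resolution is the sink's 1280x720 and the source display is 1600x900 (the function name and default parameters are illustrative):

```python
def scale_for_source(x, y, negotiated_res=(1280, 720), source_res=(1600, 900)):
    """Scale sink coordinates from the negotiated resolution up to the
    source resolution: x by 1600/1280 and y by 900/720 in this example."""
    neg_w, neg_h = negotiated_res
    src_w, src_h = source_res
    return x * src_w / neg_w, y * src_h / neg_h
```

A touch at (640, 360) on the 1280x720 sink display would be reported to the 1600x900 source as (800.0, 450.0).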
In other configurations, the source device 120 may scale the obtained coordinates to the negotiated resolution. Depending on whether the sink device 160 uses a higher-resolution display than the source device 120, or vice versa, the scaling may increase or decrease the coordinate range.

Furthermore, in some examples, the resolution at the sink device 160 may change during a communication session, potentially causing a mismatch between the display 122 and the display 162. To promote a good user experience and ensure proper functionality, the source/sink system 100 may reduce or prevent user interaction mismatch by implementing techniques for screen normalization. The display 122 of the source device 120 and the display 162 of the sink device 160 may have different resolutions and/or different screen aspect ratios. In addition, in some settings, the user of the sink device 160 may have the ability to resize the display window for the video data received from the source device 120, so that the video data received from the source device 120 is rendered in a window covering less than the entire display 162 of the sink device 160. In another example setting, the user of the sink device 160 may have the option of viewing content in either landscape mode or portrait mode, each of which has unique coordinates and a different screen aspect ratio. In such cases, coordinates associated with user input received at the sink device 160 (such as the coordinates at which a mouse click or touch event occurred) may not be able to be processed by the source device 120 without modification. Accordingly, the techniques of this disclosure may include mapping the coordinates of user input received at the sink device 160 to coordinates associated with the source device 120. This mapping is also referred to herein as normalization, and will be explained in more detail below.
The mapping may be sink-based or source-based. User input received by the sink device 160 may be received by the UI module 167 (for example, at the driver level) and passed to the operating system of the sink device 160. The operating system on the sink device 160 may receive coordinates (x_SINK, y_SINK) associated with where on the display surface the user input occurred. In this example, (x_SINK, y_SINK) may be the coordinates of the display 162 at which a mouse click or touch event occurred. The display window rendered on the display 162 may have an x-coordinate length (L_DW) and a y-coordinate width (W_DW) describing the size of the display window. The display window may also have upper-left corner coordinates (a_DW, b_DW) describing its position. Based on L_DW, W_DW, and the upper-left corner coordinates (a_DW, b_DW), the portion of the display 162 covered by the display window can be determined. For example, the upper-right corner of the display window is located at coordinates (a_DW + L_DW, b_DW), the lower-left corner is located at coordinates (a_DW, b_DW + W_DW), and the lower-right corner is located at coordinates (a_DW + L_DW, b_DW + W_DW). If input is received at coordinates within the display window, the sink device 160 may process the input as a UIBC input. In other words, input associated with coordinates (x_SINK, y_SINK) can be processed as UIBC input if the following conditions are met:

a_DW ≤ x_SINK ≤ a_DW + L_DW (1)

b_DW ≤ y_SINK ≤ b_DW + W_DW (2)

After determining that a user input is a UIBC input, the UIPM 168 may normalize the coordinates associated with the input before they are sent to the source device 120. Inputs determined to be outside the display window may be processed locally by the sink device 160 as non-UIBC inputs.

As mentioned above, the normalization of input coordinates may be source-based or sink-based.
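The display-window test of conditions (1) and (2) can be sketched as follows (function and parameter names are illustrative):

```python
def is_uibc_input(x_sink, y_sink, a_dw, b_dw, l_dw, w_dw):
    """Conditions (1) and (2): return True when (x_sink, y_sink) falls
    inside the display window with upper-left corner (a_dw, b_dw),
    length l_dw, and width w_dw; such input is forwarded over the UIBC,
    while input outside the window is handled locally by the sink."""
    return (a_dw <= x_sink <= a_dw + l_dw and
            b_dw <= y_sink <= b_dw + w_dw)
```

For a 200x200 window anchored at (100, 100), an event at (150, 150) would be a UIBC input, while one at (50, 150) would be handled locally.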
When sink-based normalization is implemented, the source device 120 may send the display resolution (L_SRC, W_SRC) supported by the display 122 to the sink device 160, either together with the video data or independently of it. For example, the supported display resolution may be sent as part of a capability negotiation session, or at another time during the communication session. The sink device 160 may determine the display resolution (L_SINK, W_SINK) of the display 162, the display-window resolution (L_DW, W_DW) of the window displaying the content received from the source device 120, and the upper-left corner coordinates (a_DW, b_DW) of the display window. As described above, when the coordinates (x_SINK, y_SINK) corresponding to a user input are determined to be within the display window, the operating system of the sink device 160 can use conversion functions to map the coordinates (x_SINK, y_SINK) to source coordinates (x_SRC, y_SRC). Example conversion functions for converting (x_SINK, y_SINK) to (x_SRC, y_SRC) are as follows:

x_SRC = (x_SINK - a_DW) * (L_SRC / L_DW) (3)

y_SRC = (y_SINK - b_DW) * (W_SRC / W_DW) (4)

Thus, when sending coordinates corresponding to a received user input, the sink device 160 may send the coordinates (x_SRC, y_SRC) for a user input received at (x_SINK, y_SINK). As will be described in more detail below, the coordinates (x_SRC, y_SRC) may, for example, be sent as part of a data packet used for sending user input received at the sink device 160 over the UIBC to the source device 120.
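The sink-side conversion of equations (3) and (4) can be sketched as follows (function and parameter names are illustrative):

```python
def normalize_sink_to_source(x_sink, y_sink, a_dw, b_dw,
                             l_dw, w_dw, l_src, w_src):
    """Conversion functions (3) and (4): offset the event by the display
    window's upper-left corner (a_dw, b_dw), then rescale from the
    display-window size (l_dw, w_dw) to the source size (l_src, w_src)."""
    x_src = (x_sink - a_dw) * (l_src / l_dw)
    y_src = (y_sink - b_dw) * (w_src / w_dw)
    return x_src, y_src
```

With a 1280x720 display window at (100, 100) and a 1600x900 source display, an event at (740, 460) on the sink maps to (800.0, 450.0) on the source.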
Wherever this disclosure describes input coordinates as being included in data packets, those coordinates may be converted to source coordinates as described above when the source/sink system 100 implements sink-based normalization.

When the source/sink system 100 implements source-based normalization, for user input determined to be UIBC input rather than local input (that is, input within the display window rather than outside it), the above calculations may be performed at the source device 120 instead of at the sink device 160. To facilitate these calculations, the sink device 160 may send the values of L_DW and W_DW, position information for the display window (for example, a_DW and b_DW), and the coordinates (x_SINK, y_SINK) to the source device 120. Using these transmitted values, the source device 120 can determine the value of (x_SRC, y_SRC) according to equations (3) and (4) above.

In other implementations of source-based normalization, the sink device 160 may send user input coordinates (x_DW, y_DW) that describe where within the display window the user input event occurred, rather than where on the display 162 it occurred. In such an implementation, the coordinates (x_DW, y_DW) may be sent to the source device 120 along with the values (L_DW, W_DW).
Based on these received values, the source device 120 can determine (x_SRC, y_SRC) according to the following conversion functions:

x_SRC = x_DW * (L_SRC / L_DW)    (5)
y_SRC = y_DW * (W_SRC / W_DW)    (6)

The sink device 160 may determine x_DW and y_DW based on the following functions:

x_DW = x_SINK - a_DW    (7)
y_DW = y_SINK - b_DW    (8)

When the present disclosure describes, for example, sending coordinates associated with user input in data packets, the sending of those coordinates may include sink-based or source-based normalization as described above, and/or may include any additional information necessary for performing sink-based or source-based normalization.

The UIBC may be designed to transport various types of user input data, including cross-platform user input data. For example, the source device 120 may run one operating system, while the sink device 160 runs a different operating system. Regardless of platform, the UIPM 168 can encapsulate the received user input in a form understandable by the A/V control module 125. The UIBC may support a number of different types of user input formats so as to allow many different types of source and sink devices to exploit the protocol, regardless of whether the source and sink devices operate on different platforms. Generic input formats may be defined, and platform-specific input formats may be supported at the same time, thereby providing flexibility in the manner in which user input is communicated between the source device 120 and the sink device 160 through the UIBC.

In the example of FIG. 1A, the source device 120 may comprise a smartphone, tablet computer, laptop computer, desktop computer, Wi-Fi-enabled television, or any other device capable of transmitting audio and video data. The sink device 160 may likewise comprise a smartphone, tablet computer, laptop computer, desktop computer, Wi-Fi-enabled television, or any other device capable of receiving audio and video data and receiving user input data.
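The window-relative variant in equations (5)-(8) can be sketched as two small helpers, one applied at the sink (translation) and one at the source (scaling). The function names are hypothetical; the arithmetic follows the equations above.

```python
def sink_to_window(x_sink, y_sink, a_dw, b_dw):
    # Equations (7)-(8): translate display coordinates into the window's frame.
    return x_sink - a_dw, y_sink - b_dw

def window_to_source(x_dw, y_dw, l_src, w_src, l_dw, w_dw):
    # Equations (5)-(6): scale window-relative coordinates to the source resolution.
    return x_dw * (l_src / l_dw), y_dw * (w_src / w_dw)

# Same example values as before: the sink computes (x_DW, y_DW) and sends
# them; the source then recovers the source coordinates.
x_dw, y_dw = sink_to_window(580, 320, 100, 50)             # (480, 270)
print(window_to_source(x_dw, y_dw, 1920, 1080, 960, 540))  # (960.0, 540.0)
```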
In some cases, the sink device 160 may comprise a system of devices, such that the display 162, speaker 163, UI device 167, and A/V encoder 164 are all separate but interoperable devices. The source device 120 may likewise be a system of devices rather than a single device.

In the present disclosure, the term source device is generally used to refer to the device that transmits audio/video data, and the term sink device is generally used to refer to the device that receives the audio/video data from the source device. In many cases, the source device 120 and the sink device 160 may be similar or identical devices, with one device operating as the source and the other operating as the sink. Moreover, these roles may be reversed in different communication sessions. Thus, the sink device in one communication session may become the source device in a subsequent communication session, or vice versa.

FIG. 1B is a block diagram illustrating an exemplary source/sink system 101 that can implement the techniques of this disclosure. The source/sink system 101 includes a source device 120 and a sink device 160, each of which may operate in the manner described above for FIG. 1A. The source/sink system 101 also includes a sink device 180. The sink device 180 may receive audio and video data from the source device 120 in a manner similar to the sink device 160 described above, and may send user commands to the source device 120 over an established UIBC. In some configurations, the sink device 160 and the sink device 180 may operate independently of each other, and audio and video data output at the source device 120 may be output at the sink device 160 and the sink device 180 at the same time. In alternative configurations, the sink device 160 may be a primary sink device and the sink device 180 may be a secondary sink device.
In such an example configuration, the sink device 160 and the sink device 180 may be coupled, and the sink device 160 may display video data while the sink device 180 outputs the corresponding audio data. Additionally, in some configurations, the sink device 160 may output only the transmitted video data, while the sink device 180 outputs only the transmitted audio data.

FIG. 2 is a block diagram showing an example of a source device 220. The source device 220 may be a device similar to the source device 120 of FIG. 1A and may operate in the same manner as the source device 120. The source device 220 includes a local display 222, a local speaker 223, a processor 231, a memory 232, a transmission unit 233, and a wireless modem 234. As shown in FIG. 2, the source device 220 may include one or more processors (i.e., processor 231) that encode and/or decode A/V data for transmission, storage, and display. The A/V data may, for example, be stored at the memory 232. The memory 232 may store a complete A/V file, or may comprise a smaller buffer that stores only a portion of an A/V file (e.g., one streamed from another device or source). The transmission unit 233 can process the encoded A/V data for network transmission. For example, the encoded A/V data may be processed by the processor 231 and encapsulated by the transmission unit 233 into network access layer (NAL) units for communication across a network. The NAL units may be sent by the wireless modem 234 to a wireless sink device via a network connection. The wireless modem 234 may, for example, be a Wi-Fi modem configured to implement one of the IEEE 802.11 family of standards.

The source device 220 can also locally process and display the A/V data. In particular, a display processor 235 may process video data to be displayed on the local display 222, and an audio processor 236 may process audio data for output on the speaker 223.

As described above with reference to the source device 120 of FIG.
1A, the source device 220 may also receive user input from a sink device. In this way, the wireless modem 234 of the source device 220 receives encapsulated data packets, such as NAL units, and sends the encapsulated data units to the transmission unit 233 for decapsulation. For instance, the transmission unit 233 may extract data packets from the NAL units, and the processor 231 may parse the data packets to extract the user input commands. Based on the user input commands, the processor 231 can adjust the encoded A/V data being sent from the source device 220 to the sink device. In this way, the functionality described above with reference to the A/V control module 125 of FIG. 1A may be implemented, either fully or partially, by the processor 231.

The processor 231 of FIG. 2 generally represents any of a wide variety of processors, including but not limited to one or more digital signal processors (DSPs), general-purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), other equivalent integrated or discrete logic circuitry, or some combination thereof. The memory 232 of FIG. 2 may comprise any of a wide variety of volatile or non-volatile memory, including but not limited to random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, and the like. The memory 232 may comprise a computer-readable storage medium for storing audio/video data as well as other kinds of data. The memory 232 may additionally store instructions and program code that are executed by the processor 231 as part of performing the various techniques described in this disclosure.

FIG. 3 shows an example of a sink device 360. The sink device 360 may be a device similar to the sink device 160 of FIG. 1A and may operate in the same manner as the sink device 160.
The sink device 360 includes one or more processors (i.e., processor 331), a memory 332, a transmission unit 333, a wireless modem 334, a display processor 335, a local display 362, an audio processor 336, a speaker 363, and a user input interface 376. The sink device 360 receives, at the wireless modem 334, encapsulated data units sent from a source device. The wireless modem 334 may, for example, be a Wi-Fi modem configured to implement one or more standards from the IEEE 802.11 family of standards. The transmission unit 333 can decapsulate the encapsulated data units. For instance, the transmission unit 333 may extract encoded video data from the encapsulated data units and send the encoded A/V data to the processor 331 to be decoded and rendered for output. The display processor 335 may process decoded video data to be displayed on the local display 362, and the audio processor 336 may process decoded audio data for output on the speaker 363.

In addition to rendering audio and video data, the wireless sink device 360 can also receive user input data through the user input interface 376. The user input interface 376 can represent any of a number of user input devices, including but not limited to a touch display interface, a keyboard, a mouse, a voice command module, a gesture capture device (for example, one with camera-based input capturing capabilities), or any other of a number of user input devices. User input received through the user input interface 376 can be processed by the processor 331. This processing may include generating data packets that include the received user input commands in accordance with the techniques described in this disclosure. Once generated, the transmission unit 333 may process the data packets for network transport to a wireless source device over a UIBC.

The processor 331 of FIG.
3 may comprise one or more of a wide range of processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), other equivalent integrated or discrete logic circuitry, or some combination thereof. The memory 332 of FIG. 3 may comprise any of a wide variety of volatile or non-volatile memory, including but not limited to random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, and the like. The memory 332 may comprise a computer-readable storage medium for storing audio/video data as well as other kinds of data. The memory 332 may also store instructions and program code that are executed by the processor 331 as part of performing the various techniques described in this disclosure.

FIG. 4 shows a block diagram of an example transmitter system 410 and receiver system 450, which may be used by the transmitter/receiver 126 and the transmitter/receiver 166 of FIG. 1A for communicating over communication channel 150. At the transmitter system 410, traffic data for a number of data streams is provided from a data source 412 to a transmit (TX) data processor 414. Each data stream can be transmitted over a respective transmit antenna. The TX data processor 414 formats, codes, and interleaves the traffic data for each data stream based on a particular coding scheme selected for that data stream.

Orthogonal frequency division multiplexing (OFDM) techniques can be used to multiplex the coded data for each data stream with pilot data.
A wide variety of other wireless communication techniques may also be used, including but not limited to time division multiple access (TDMA), frequency division multiple access (FDMA), code division multiple access (CDMA), or any combination of OFDM, FDMA, TDMA and/or CDMA.

Consistent with FIG. 4, the pilot data is typically a known data pattern that is processed in a known manner and may be used at the receiver system to estimate the channel response. The multiplexed pilot and coded data for each data stream can then be modulated (e.g., symbol mapped) based on a particular modulation scheme selected for that data stream (e.g., binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), M-PSK, or M-QAM (quadrature amplitude modulation), where M may be a power of two) to provide modulation symbols. The data rate, coding, and modulation for each data stream may be determined by instructions performed by a processor 430, which may be coupled with a memory 432.

The modulation symbols for the data streams are then provided to a TX MIMO processor 420, which may further process the modulation symbols (e.g., for OFDM). The TX MIMO processor 420 can then provide NT modulation symbol streams to NT transmitters (TMTR) 422a through 422t. In certain aspects, the TX MIMO processor 420 applies beamforming weights to the symbols of the data streams and to the antenna from which each symbol is being transmitted.

Each transmitter 422 may receive and process a respective symbol stream to provide one or more analog signals, and further condition (e.g., amplify, filter, and upconvert) the analog signals to provide a modulated signal suitable for transmission over the MIMO channel.
NT modulated signals from the transmitters 422a through 422t are then transmitted from NT antennas 424a through 424t, respectively.

At the receiver system 450, the transmitted modulated signals are received by NR antennas 452a through 452r, and the received signal from each antenna 452 is provided to a respective receiver (RCVR) 454a through 454r. Each receiver 454 conditions (e.g., filters, amplifies, and downconverts) a respective received signal, digitizes the conditioned signal to provide samples, and further processes the samples to provide a corresponding "received" symbol stream.

A receive (RX) data processor 460 then receives and processes the NR received symbol streams from the NR receivers 454 based on a particular receiver processing technique to provide NT "detected" symbol streams. The RX data processor 460 then demodulates, deinterleaves, and decodes each detected symbol stream to recover the traffic data for the data stream. The processing by the RX data processor 460 is complementary to that performed by the TX MIMO processor 420 and the TX data processor 414 at the transmitter system 410.

A processor 470, which may be coupled with a memory 472, periodically determines which pre-coding matrix to use, and formulates a reverse link message. The reverse link message may comprise various types of information regarding the communication link and/or the received data stream. The reverse link message is then processed by a TX data processor 438 (which also receives traffic data for a number of data streams from a data source 436), modulated by a modulator 480, conditioned by the transmitters 454a through 454r, and transmitted back to the transmitter system 410.

At the transmitter system 410, the modulated signals from the receiver system 450 are received by the antennas 424, conditioned by the receivers 422, demodulated by a demodulator 440, and processed by a RX data processor 442 to extract the reverse link message transmitted by the receiver system 450.
The processor 430 then determines which pre-coding matrix to use for determining the beamforming weights, and then processes the extracted message.

FIG. 5A is a block diagram illustrating an example message transfer sequence between a source device 520 and a sink device 560 as part of a capability negotiation session. The capability negotiation may occur as part of a larger communication session establishment process between the source device 520 and the sink device 560. This session may, for example, be established with Wi-Fi Direct or TDLS as the underlying connectivity standard. After establishing the Wi-Fi Direct or TDLS session, the sink device 560 may initiate a TCP connection with the source device 520. As part of establishing the TCP connection, a control port running a real-time streaming protocol (RTSP) can be established to manage the communication session between the source device 520 and the sink device 560.

The source device 520 may generally operate in the same manner as described above for the source device 120 of FIG. 1A, and the sink device 560 may generally operate in the same manner as described above for the sink device 160 of FIG. 1A. After the source device 520 and the sink device 560 establish connectivity, they may determine the set of parameters to be used for their subsequent communication session as part of a capability negotiation exchange.

The source device 520 and the sink device 560 may negotiate capabilities through a sequence of messages. The messages may, for example, be real-time streaming protocol (RTSP) messages. At any stage of the negotiation, the recipient of an RTSP request message may respond with an RTSP response that includes an RTSP status code other than RTSP OK.
In this case, the message exchange can be retried with a different set of parameters, or the capability negotiation session can be ended.

The source device 520 may send a first message (an RTSP OPTIONS request message) to the sink device 560 in order to determine the set of RTSP methods that the sink device 560 supports. On receipt of the first message from the source device 520, the sink device 560 may respond with a second message (an RTSP OPTIONS response message) that lists the RTSP methods supported by the sink device 560. The second message may also include an RTSP OK status code.

After sending the second message to the source device 520, the sink device 560 may send a third message (an RTSP OPTIONS request message) in order to determine the set of RTSP methods that the source device 520 supports. On receipt of the third message from the sink device 560, the source device 520 may respond with a fourth message (an RTSP OPTIONS response message) that lists the RTSP methods supported by the source device 520. The fourth message may also include an RTSP OK status code.

After sending the fourth message, the source device 520 may send a fifth message (an RTSP GET_PARAMETER request message) to specify a list of capabilities that are of interest to the source device 520. The sink device 560 may respond with a sixth message (an RTSP GET_PARAMETER response message). The sixth message may contain an RTSP status code. If the RTSP status code is OK, then the sixth message may also include response parameters for those parameters specified in the fifth message that are supported by the sink device 560. The sink device 560 may ignore parameters in the fifth message that the sink device 560 does not support.

Based on the sixth message, the source device 520 may determine the optimal set of parameters to be used for the communication session and may send a seventh message (an RTSP SET_PARAMETER request message) to the sink device 560.
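The shape of these negotiation messages can be sketched with a toy helper for composing RTSP/1.0 requests. The method names (OPTIONS, GET_PARAMETER) are standard RTSP, and wfd-client-rtp-ports appears in this disclosure; the URI, CSeq value, and helper itself are illustrative assumptions, not a complete Wi-Fi Display implementation.

```python
def rtsp_request(method, uri, cseq, headers=None, body=""):
    """Compose a minimal RTSP/1.0 request (illustrative helper, not a full client)."""
    lines = [f"{method} {uri} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    if body:
        lines.append(f"Content-Length: {len(body)}")
    return "\r\n".join(lines) + "\r\n\r\n" + body

# Fifth message of the exchange: the source asks which capabilities the
# sink supports (the URI and CSeq value are placeholders).
body = "wfd-client-rtp-ports\r\n"
msg = rtsp_request("GET_PARAMETER", "rtsp://localhost/wfd1.0", 3,
                   {"Content-Type": "text/parameters"}, body)
print(msg.splitlines()[0])  # GET_PARAMETER rtsp://localhost/wfd1.0 RTSP/1.0
```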
The seventh message may contain the parameter set to be used during the communication session between the source device 520 and the sink device 560. The seventh message may include a wfd-presentation-url that describes the universal resource identifier (URI) to be used in the RTSP Setup request in order to set up the communication session. The wfd-presentation-url specifies the URI that the sink device 560 can use for later messages during the session establishment exchange. The wfd-url0 and wfd-url1 values specified in this parameter may correspond to the values of rtp-port0 and rtp-port1 in the wfd-client-rtp-ports parameter in the seventh message. RTP in this instance generally refers to the real-time protocol, which can run on top of UDP.

On receipt of the seventh message, the sink device 560 may respond with an eighth message containing an RTSP status code that indicates whether setting the parameters as specified in the seventh message was successful. As mentioned above, the roles of source device and sink device may be reversed or changed in different sessions. In some cases, the sequence of messages that establish a communication session may determine which device operates as the source and which device operates as the sink.

FIG. 5B is a block diagram showing another example message transfer sequence between a source device 520 and a sink device 560 as part of a capability negotiation session. The message transfer sequence of FIG. 5B is intended to provide a more detailed view of the transfer sequence described above for FIG. 5A. In FIG. 5B, the message "1b. GET_PARAMETER Response" shows an example of a message that identifies a list of supported input categories (e.g., generic and HIDC) and a plurality of lists of supported input types. Each supported input category in the list of supported input categories has an associated list of supported types (e.g., generic_cap_list and hidc_cap_list). In FIG. 5B, the message "2a.
SET_PARAMETER Request" is an example of a second message that identifies a second list of supported input categories (e.g., generic and HIDC) and a plurality of second lists of supported types. Each supported input category in the second list of supported input categories has an associated second list of supported types (e.g., generic_cap_list and hidc_cap_list). The message "1b. GET_PARAMETER Response" identifies the input categories and input types supported by the sink device 560. The message "2a. SET_PARAMETER Request" identifies the input categories and input types supported by the source device 520, but it may not be a comprehensive list of all input categories and input types supported by the source device 520. Instead, the message "2a. SET_PARAMETER Request" may identify only those input categories and input types identified in the message "1b. GET_PARAMETER Response" as being supported by the sink device 560. In this way, the input categories and input types identified in the message "2a. SET_PARAMETER Request" may constitute a subset of the input categories and input types identified in the message "1b. GET_PARAMETER Response".

FIG. 6 is a conceptual diagram showing an example of a data packet that may be generated by a sink device and sent to a source device. Aspects of the data packet 600 will be explained with reference to FIG. 1A, but the techniques discussed may be applicable to other types of source/sink systems. The data packet 600 may include a data packet header 610 followed by payload data 650. The payload data 650 may additionally include one or more payload headers (e.g., payload header 630). The data packet 600 may, for example, be sent from the sink device 160 of FIG. 1A to the source device 120, so that a user of the sink device 160 can control the audio/video data being sent by the source device 120. In such a case, the payload data 650 may include user input data received at the sink device 160.
The payload data 650 may, for example, identify one or more user commands. The sink device 160 may receive the one or more user commands and may generate the data packet header 610 and the payload data 650 based on the received commands. Based on the content of the data packet header 610 of the data packet 600, the source device 120 may parse the payload data 650 to identify the user input data received at the sink device 160. Based on the user input data contained in the payload data 650, the source device 120 may alter in some manner the audio and video data being sent from the source device 120 to the sink device 160.

As used in this disclosure, the terms "parse" and "parsing" generally refer to the process of analyzing a bitstream to extract data from the bitstream. Once extracted, the data may be processed by the source device 120, for example. Extracting data may, for example, include identifying how information in the bitstream is formatted. As will be described in more detail below, the data packet header 610 may define a standardized format that is known to both the source device 120 and the sink device 160. The payload data 650, however, may be formatted in one of many possible ways. By parsing the data packet header 610, the source device 120 can determine how the payload data 650 is formatted, and accordingly the source device 120 can parse the payload data 650 to extract from the payload data 650 one or more user input commands. This can provide flexibility in terms of the different types of payload data that are supported in source-sink communication. As will be described in more detail below, the payload data 650 may also include one or more payload headers, such as the payload header 630.
In such instances, the source device 120 may parse the data packet header 610 to determine the format of the payload header 630, and then parse the payload header 630 to determine the format of the remainder of the payload data 650.

Diagram 620 is a conceptual depiction of how the data packet header 610 may be formatted. The numbers 0-15 in row 615 are intended to identify bit locations within the data packet header 610 and are not intended to actually represent information contained within the data packet header 610. The data packet header 610 includes a version field 621, a timestamp flag 622, a reserved field 623, an input category field 624, a length field 625, and an optional timestamp field 626.

In the example of FIG. 6, the version field 621 is a 3-bit field that may indicate the version of a particular communication protocol being implemented by the sink device 160. The value in the version field 621 may inform the source device 120 how to parse the remainder of the data packet header 610 as well as how to parse the payload data 650. As a 3-bit field, the version field 621 enables unique identifiers for eight different versions. In other examples, more or fewer bits may be dedicated to the version field 621.

In the example of FIG. 6, the timestamp flag (T) 622 is a 1-bit field that indicates whether or not a timestamp field 626 is present in the data packet header 610. The timestamp field 626 is a 16-bit field containing a timestamp based on multimedia data that was generated by the source device 120 and sent to the sink device 160. The timestamp may, for example, be a sequential value assigned to frames of video by the source device 120 prior to the frames being sent to the sink device 160. The timestamp flag 622 may, for example, include a "1" to indicate that the timestamp field 626 is present and may include a "0" to indicate that the timestamp field 626 is not present.
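The bit layout described for data packet header 610 (3-bit version, 1-bit timestamp flag, 8-bit reserved field, 4-bit input category, 16-bit length, optional 16-bit timestamp) can be sketched as a pack/parse pair. The exact ordering of the fields within the first 16-bit word is an assumption for illustration; only the field widths come from the description above.

```python
import struct

def pack_header(version, category, length, timestamp=None):
    """Pack a header in the spirit of data packet header 610.

    First 16-bit word (assumed ordering): version(3) | T flag(1) |
    reserved(8) | input category(4); then a 16-bit length and an
    optional 16-bit timestamp.
    """
    t = 0 if timestamp is None else 1
    word0 = (version & 0x7) << 13 | t << 12 | (0 & 0xFF) << 4 | (category & 0xF)
    header = struct.pack(">HH", word0, length & 0xFFFF)
    if timestamp is not None:
        header += struct.pack(">H", timestamp & 0xFFFF)
    return header

def parse_header(data):
    """Recover (version, category, length, timestamp-or-None) from the header bytes."""
    word0, length = struct.unpack(">HH", data[:4])
    version, t, category = word0 >> 13, (word0 >> 12) & 1, word0 & 0xF
    timestamp = struct.unpack(">H", data[4:6])[0] if t else None
    return version, category, length, timestamp

hdr = pack_header(version=1, category=0, length=12, timestamp=0xBEEF)
print(parse_header(hdr))  # (1, 0, 12, 48879)
```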
Upon parsing the data packet header 610 and determining that the timestamp field 626 is present, the source device 120 may process the timestamp included in the timestamp field 626. Upon parsing the data packet header 610 and determining that the timestamp field 626 is not present, the source device 120 may begin parsing the payload data 650 immediately after parsing the length field 625, as no timestamp field is present in the data packet header 610.

If present, the timestamp field 626 may include a timestamp that identifies the frame of video data being displayed at the wireless sink device 160 when the user input data of the payload data 650 was obtained. The timestamp may, for example, have been added to the frame of video by the source device 120 prior to the source device 120 sending the frame of video to the sink device 160. Accordingly, the source device 120 may generate a frame of video and embed a timestamp in the video data of the frame (e.g., as metadata). The source device 120 can send the video frame with the timestamp to the sink device 160, and the sink device 160 can display the frame of video. While the frame of video is being displayed by the sink device 160, the sink device 160 can receive a user command from a user. When the sink device 160 generates a data packet to transfer the user command to the source device 120, the sink device 160 can include, in the timestamp field 626, the timestamp of the frame that was being displayed by the sink device 160 when the user command was received.

Upon receiving the data packet 600 with the timestamp field 626 present in the header, the wireless source device 120 may identify the frame of video being displayed at the sink device 160 at the time the user input data of the payload data 650 was obtained, and may process the user input data based on the content of the frame identified by the timestamp.
For example, if the user input data is a touch command applied to a touch display or a click of a mouse pointer, the source device 120 can determine the content of the frame being displayed at the time the user applied the touch command to the display or clicked the mouse. In some cases, the content of the frame may be needed to properly process the payload data. For example, user input based on a user touch or a mouse click may depend on what was being displayed on the display at the time of the touch or click. The touch or click may, for example, correspond to an icon or a menu option. In instances where the content of the display is changing, the timestamp present in the timestamp field 626 can be used by the source device 120 to match the touch or click to the correct icon or menu option.

Additionally or alternatively, the source device 120 may compare the timestamp in the timestamp field 626 to a timestamp being applied to a currently rendered frame of video. By comparing the timestamp in the timestamp field 626 to the current timestamp, the source device 120 can determine a round-trip time. The round-trip time generally corresponds to the amount of time that elapses from the point when a frame is sent by the source device 120 until the point when user input based on that frame is received back at the source device 120 from the sink device 160. The round-trip time can provide the source device 120 with an indication of system latency, and if the round-trip time is greater than a threshold value, then the source device 120 may ignore the user input data contained in the payload data 650, under the assumption that the input command was applied to an outdated display frame. When the round-trip time is less than the threshold, the source device 120 may process the user input data and adjust the audio/video content being sent in response to the user input data.
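The round-trip staleness check described above can be sketched as follows. The threshold value, and the assumption that the 16-bit frame timestamps are compared with wraparound arithmetic, are both illustrative rather than taken from this disclosure.

```python
def should_process(input_ts, current_ts, threshold=100):
    """Decide whether to process user input or drop it as stale.

    input_ts is the timestamp echoed back in timestamp field 626;
    current_ts is the timestamp of the currently rendered frame. Both
    are 16-bit counters, so the elapsed count is taken modulo 2**16
    (an assumption about how the counters are compared).
    """
    round_trip = (current_ts - input_ts) % (1 << 16)
    return round_trip <= threshold

print(should_process(input_ts=500, current_ts=530))   # True: 30 frames elapsed
print(should_process(input_ts=500, current_ts=5000))  # False: input is stale
```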
Thresholds may be programmable, and different types of devices (or different source-sink combinations) may be configured to define different acceptable threshold values for round-trip times.

In the example of FIG. 6, the reserved field 623 is an 8-bit field that does not include information used by the source device 120 in parsing the data packet header 610 and the payload data 650. Future versions of a particular protocol (as identified in the version field 621) may, however, make use of the reserved field 623, in which case the source device 120 may use information in the reserved field 623 for parsing the data packet header 610 and/or for parsing the payload data 650. The reserved field 623, in conjunction with the version field 621, potentially provides capabilities for expanding and adding features to the data packet format without fundamentally altering the format and features already in use.

In the example of FIG. 6, the input category field 624 is a 4-bit field used to identify an input category for the user input data contained in the payload data 650. The sink device 160 may categorize the user input data to determine an input category. Categorizing the user input data may, for example, be based on the device from which a command was received or based on properties of the command itself. The value of the input category field 624 (possibly in conjunction with other information of the data packet header 610) identifies to the source device 120 how the payload data 650 is formatted. Based on this formatting, the source device 120 can parse the payload data 650 to determine the user input that was received at the sink device 160.

In the example of FIG. 6, as the input category field 624 is 4 bits, sixteen different input categories can be identified.
One such input category may be a generic input format to indicate that the user input data of the payload data 650 is formatted using generic information elements defined in a protocol being executed by both the source device 120 and the sink device 160. As will be described in more detail below, the generic input format may utilize generic information elements that allow a user of the sink device 160 to interact with the source device 120 at the application level.

Another such input category may be a human interface device command (HIDC) format to indicate that the user input data of the payload data 650 is formatted based on the type of input device used to receive the input data. Examples of types of devices include keyboards, mice, touch input devices, joysticks, cameras, gesture capture devices (such as camera-based input devices), and remote controls. Other types of input categories that might be identified in the input category field 624 include a forwarding input format to indicate that the user input data in the payload data 650 did not originate at the sink device 160, an operating-system-specific format, and a voice command format to indicate that the payload data 650 includes a voice command.

The length field 625 may comprise a 16-bit field to indicate the length of the data packet 600. The length may, for example, be indicated in units of 8 bits. As the data packet 600 is parsed by the source device 120 in words of 16 bits, the data packet 600 can be padded up to an integer number of 16-bit words. Based on the length contained in the length field 625, the source device 120 can identify the end of the payload data 650 (i.e., the end of the data packet 600) and the beginning of a new, subsequent data packet.

The various sizes of the fields provided in the example of FIG. 6 are merely intended to be explanatory, and it is intended that the fields may be implemented using different numbers of bits than what is shown in FIG. 6.
In addition, it is also conceivable that the data packet header 610 may include fewer than all of the fields discussed above, or may use additional fields not discussed above. In this way, the techniques of this disclosure may be flexible in terms of the actual format used for each data field of the packet.

After parsing the data packet header 610 to determine the formatting of the payload data 650, the source device 120 may parse the payload data 650 to determine the user input command contained in the payload data 650. The payload data 650 may have its own payload header (payload header 630) indicating the content of the payload data 650. In this manner, the source device 120 may parse the payload header 630 based on the parsing of the data packet header 610, and then may parse the payload data 650 based on the parsing of the payload header 630.

For example, if the input category field 624 of the data packet header 610 indicates that generic input is present in the payload data 650, the payload data 650 may have a generic input format. The source device 120 can therefore parse the payload data 650 according to the generic input format. As part of the generic input format, the payload data 650 may include a series of one or more input events, where each input event has its own input event header. Table 1 below identifies the fields that can be included in the input event header.

Table 1

The generic input event (IE) identification (ID) field identifies the generic input event identifier used to identify the type of input. For example, the generic IE ID field may be 1 octet in length and may include an identification selected from Table 2 below. As in this example, if the generic IE ID field is 8 bits, then 256 different types of input can be identified (identified as 0-255), although not all 256 identifications necessarily require an associated input type.
Some of these 256 identifications may be reserved for future use in future versions of any protocol implemented by sink device 160 and source device 120. In Table 2, for example, generic IE IDs 9-255 do not have an associated input type but may be assigned an input type in the future.

The length field in the input event header identifies the length of the description field, and the description field includes the information elements describing the user input. The formatting of the description field may depend on the type of input identified in the generic IE ID field. Therefore, the source device 120 may parse the content of the description field based on the input type identified in the generic IE ID field. Based on the length field of the input event header, the source device 120 can determine the end of one input event in the payload data 650 and the start of a new input event. As will be explained in more detail below, a user command can be described in the payload data 650 as one or more input events.

Table 2 provides examples of input types, each of which has a corresponding generic IE ID that can be used to identify the input type.

Table 2

The description field associated with each input type may have a different format. For example, the description fields for a left mouse down/touch down event, a left mouse up/touch up event, and a mouse move/touch move event may include the information elements identified in Table 3 below, although other formats may be used in other examples.

Table 3

The number of pointers may identify the number of touches or mouse clicks associated with an input event. Each pointer can have a unique pointer ID. For example, if a multi-touch event includes a three-finger touch, the input event may have three pointers, where each pointer has a unique pointer ID. Each pointer (i.e., each finger touch) may have an x-coordinate and a y-coordinate corresponding to where the touch occurred.

A single user command can be described as a series of input events.
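Splitting a generic-format payload into its series of input events might look like the following sketch. The 1-octet generic IE ID comes from the text above; the 2-octet big-endian length field is an assumption, since Table 1's field widths are not reproduced in this excerpt.

```python
# Sketch of splitting generic-input payload data into input events.
# Assumed layout per event: 1-octet generic IE ID, 2-octet big-endian
# length, then a description field of that length (Table 1 defines the
# real field widths).
def parse_input_events(payload):
    events, offset = [], 0
    while offset < len(payload):
        ie_id = payload[offset]
        desc_len = int.from_bytes(payload[offset + 1:offset + 3], "big")
        desc = payload[offset + 3:offset + 3 + desc_len]
        events.append((ie_id, desc))
        offset += 3 + desc_len
    return events
```

The length field of each input event header is what lets the parser find the boundary between one event and the next without understanding every input type.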
For example, if a three-finger swipe is a command to close an application, the three-finger swipe may be described in the payload data 650 as a touch down event with three pointers, a touch move event with three pointers, and a touch up event with three pointers. The three pointers of the touch down event may have the same pointer IDs as the three pointers of the touch move event and the touch up event. The source device 120 may interpret the combination of these three input events as a three-finger swipe.

For example, the description field of a key down event or a key up event may include the information elements identified in Table 4 below.

Table 4

For example, the description field of a zoom event may include the information elements identified in Table 5 below.

Table 5

For example, the description field of a horizontal scroll event or a vertical scroll event may include the information elements identified in Table 6 below.

Table 6

The above examples show some exemplary ways in which payload data can be formatted for the generic input category. If the input category field 624 of the data packet header 610 indicates a different input category (such as forwarded user input), the payload data 650 may have a different input format. In the case of forwarded user input, sink device 160 may receive user input data from a third-party device and forward the input to source device 120 without interpreting the user input data. The source device 120 can thus parse the payload data 650 according to the forwarded user input format. For example, the payload header 630 of the payload data 650 may include a field for identifying the third-party device from which the user input was obtained. For example, this field may include the Internet Protocol (IP) address of the third-party device, a MAC address, a domain name, or some other such identifier.
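The three-input-event interpretation of a three-finger swipe described above can be sketched as follows. The event-type strings and the (event, pointers) representation are hypothetical conveniences for illustration, not values defined by the protocol tables.

```python
# Hypothetical sketch: recognize a three-finger swipe as a touch-down,
# touch-move, touch-up sequence whose three pointer IDs match throughout.
def is_three_finger_swipe(events):
    # events: list of (event_type, {pointer_id: (x, y)}) tuples
    if [etype for etype, _ in events] != ["touch_down", "touch_move", "touch_up"]:
        return False
    pointer_sets = [set(pointers) for _, pointers in events]
    # the same three pointer IDs must appear in every event of the gesture
    return len(pointer_sets[0]) == 3 and \
        pointer_sets[0] == pointer_sets[1] == pointer_sets[2]
```

Matching pointer IDs across the down, move, and up events is what lets the source device tie the three separate input events together into one gesture.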
The source device 120 may parse the remaining portion of the payload data based on the identifier of the third-party device. Sink device 160 may negotiate capabilities with the third-party device through a series of messages. Sink device 160 may then send a unique identifier of the third-party device to source device 120 as part of establishing a communication session with source device 120, as part of the capability negotiation process. Alternatively, the sink device 160 may send information describing the third-party device to the source device 120, and based on that information, the source device 120 may determine a unique identifier of the third-party device. The information describing the third-party device may include, for example, information for identifying the third-party device and/or information for identifying the capabilities of the third-party device. Regardless of whether the unique identifier is determined by the source device 120 or the sink device 160, when the sink device 160 transmits a data packet with user input obtained from the third-party device, the sink device 160 may include the unique identifier in the data packet (for example, in the payload header) so that the source device 120 can identify the source of the user input.

If the input category field 624 of the data packet header 610 indicates yet another input category, such as a voice command, then the payload data 650 may have yet another input format. For a voice command, the payload data 650 may include encoded audio. The codec used to encode and decode the audio of the voice command can be negotiated between the source device 120 and the sink device 160 via a series of messages. To send a voice command, the timestamp field 626 may include a voice sampling time value.
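For the forwarded input category above, one way a payload header might carry the third-party device identifier is a simple length-prefixed field. This encoding is an assumption for illustration only; the excerpt does not fix the actual format.

```python
# Hypothetical length-prefixed encoding of a third-party device
# identifier (e.g. a MAC address or IP address) in the payload header.
def build_forwarded_header(device_id: bytes) -> bytes:
    return len(device_id).to_bytes(1, "big") + device_id

def parse_forwarded_header(data: bytes):
    n = data[0]
    return data[1:1 + n], data[1 + n:]  # (identifier, rest of payload)
```

With such a prefix, the source device can strip the identifier, look up the third-party device it names, and parse the remainder of the payload accordingly.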
In this case, the timestamp flag 622 may be set to indicate that a timestamp is present, but instead of a timestamp as described above, the timestamp field 626 may include a voice sampling time value for the encoded audio of the payload data 650. In some examples, as described above, the voice command may be sent as a generic command, in which case the input category field 624 may be set to identify the generic command format, and one of the reserved generic IE IDs may be assigned to voice commands. If the voice command is sent as a generic command, the voice sampling rate may be present in the timestamp field 626 of the data packet header 610 or may be present in the payload data 650.

For captured voice command data, the voice data can be encapsulated in various ways. For example, RTP can be used to encapsulate the voice command data. RTP can provide a payload type to identify the codec and a timestamp, where the timestamp is used to identify the sampling rate. The RTP data can be encapsulated using the generic user input format described above, with or without the optional timestamp. Sink device 160 may use TCP/IP to send the generic input data carrying the voice command data to source device 120.

As previously discussed, when coordinates are included as part of a data packet (such as data packet 600), for example in payload data 650, the coordinates may be coordinates scaled based on a negotiated resolution, display window coordinates, normalized coordinates, or coordinates associated with the sink display. In some cases, additional information may be included in the data packet, or sent separately, for use by the source device in normalizing the coordinates received in the data packet.

Regardless of the input category of a particular data packet, the data packet header may be an application layer packet header, and the data packet may be sent via TCP/IP.
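One of the normalization options mentioned above, scaling sink-display coordinates to a negotiated source resolution, can be sketched as:

```python
# Sketch: scale a sink-display coordinate to the negotiated source
# resolution (integer pixel coordinates assumed for illustration).
def normalize_coords(x, y, sink_res, source_res):
    sink_w, sink_h = sink_res
    src_w, src_h = source_res
    return x * src_w // sink_w, y * src_h // sink_h
```

Whether this scaling happens at the sink before transmission or at the source after reception is exactly the kind of detail the additional normalization information mentioned above would settle.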
TCP/IP may enable sink device 160 and source device 120 to perform retransmission techniques in the event of packet loss. Data packets may be sent from the sink device 160 to the source device 120 to control audio data or video data of the source device 120, or for other purposes (such as controlling applications running on the source device 120).

FIG. 7A is a flowchart of an example method for negotiating capabilities between a sink device and a source device. The example method shown may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in one or more of the flowcharts described herein.

The method of FIG. 7A includes the sink device 160 receiving a first message from the source device 120 (701). For example, the message may include a get parameters request. In response to the first message, sink device 160 may send a second message to source device 120 (703). For example, the second message may include a get parameters response that identifies a first list of supported input categories and a plurality of first lists of supported types, where each supported input category in the first list of supported input categories has an associated first list of supported types. The supported input categories may, for example, correspond to the same categories used for the input category field 624 of FIG. 6. Table 2 above shows an example of the supported types for a specific input category (in this example, generic input). The sink device 160 may receive a third message from the source device 120 (705).
For example, the third message may include a setting parameter request that identifies a port for communication, a second list of supported input categories, and a plurality of second lists of supported types, where each supported input category in the second list of supported input categories has an associated second list of supported types, and each second list comprises a subset of the types in the corresponding first list. Sink device 160 may send a fourth message to source device 120 (707). For example, the fourth message may include a setting parameter response confirming that the types in the second lists have been enabled. The sink device 160 may receive a fifth message from the source device 120 (709). For example, the fifth message may include a second setting parameter request indicating that a communication channel between the source device 120 and the sink device 160 has been enabled. For example, the communication channel may include a user input back channel (UIBC). Sink device 160 may send a sixth message to source device 120 (711). For example, the sixth message may include a second setting parameter response confirming receipt of the second setting parameter request by the sink device 160.

FIG. 7B is a flowchart of an example method for negotiating capabilities between a sink device and a source device. The illustrated example method may be performed by the source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 7B includes the source device 120 sending a first message to the sink device 160 (702). For example, the first message may include a get parameters request.
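The subset relationship between the lists exchanged in the negotiation above can be sketched as a per-category intersection; the category and type names used here are placeholders, not values from the specification.

```python
# Illustrative sketch (not from the specification): the types enabled
# by the setting parameter exchange are, per input category, the
# intersection of what the sink advertised and what the source requested.
def intersect_capabilities(sink_caps, source_request):
    """sink_caps / source_request map input category -> set of types."""
    enabled = {}
    for category, requested in source_request.items():
        if category in sink_caps:
            enabled[category] = requested & sink_caps[category]
    return enabled
```

An intersection like this guarantees the property stated above: every enabled type in a second list is a subset of the types the sink offered in the corresponding first list.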
The source device 120 may receive a second message from the sink device 160 (704). For example, the second message may include a get parameters response that identifies a first list of supported input categories and a plurality of first lists of supported types, where each supported input category in the first list of supported input categories has an associated first list of supported types. The source device 120 may send a third message to the sink device 160 (706). For example, the third message may include a setting parameter request that identifies a port used for communication, a second list of supported input categories, and a plurality of second lists of supported types, where each supported input category in the second list of supported input categories has an associated second list of supported types, and each second list comprises a subset of the types in the corresponding first list. The source device 120 may receive a fourth message from the sink device 160 (708). For example, the fourth message may include a setting parameter response confirming that the types in the second lists have been enabled. The source device 120 may send a fifth message to the sink device 160 (710). For example, the fifth message may include a second setting parameter request indicating that a communication channel between the source device 120 and the sink device 160 has been enabled. For example, the communication channel may include a user input back channel (UIBC). The source device 120 may receive a sixth message from the sink device 160 (712). For example, the sixth message may include a second setting parameter response confirming receipt of the second setting parameter request by the sink device 160.

FIG. 8A is a flowchart of an example method of sending user input data from a wireless sink device to a wireless source device according to this disclosure. The illustrated example method may be performed by sink device 160 (FIG.
1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 8A includes obtaining user input data at a wireless sink device (such as wireless sink device 160) (801). The user input data may be obtained through a user input component of the wireless sink device 160, such as, for example, the user input interface 376 shown in connection with the wireless sink device 360. In addition, sink device 160 may classify the user input data as, for example, generic, forwarded, or operating system specific. The sink device 160 may then generate a data packet header based on the user input data (803). The data packet header may be an application layer packet header. Among other fields, the data packet header may include a field for identifying the input category corresponding to the user input data. For example, the input category may include a generic input format or a human interface device command. Sink device 160 may also generate a data packet (805), where the data packet includes the generated data packet header and payload data. In one example, the payload data may include the received user input data and may identify one or more user commands. The sink device 160 may then send the generated data packet to the wireless source device (e.g., the source device 120 of FIG. 1A or 220 of FIG. 2) (807). Sink device 160 may include components that allow the transmission of data packets, including, for example, the transmission unit 333 and the wireless modem 334 shown in FIG. 3. The sink device 160 may transfer the data packets via TCP/IP.

FIG. 8B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure.
The illustrated example method may be performed by the source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 8B includes receiving a data packet (802), where, among other things, the data packet may include a data packet header and payload data. The payload data may include, for example, user input data. The source device 120 may include communication components that allow the transmission of data packets, including, for example, the transmission unit 233 and the wireless modem 234 shown with reference to FIG. 2. The source device 120 may then parse the data packet header included in the data packet (804) to determine the input category associated with the user input data contained in the payload data. The source device 120 may process the payload data based on the determined input category (806). The data packets described with reference to FIGS. 8A and 8B can generally take the form of the data packet described with reference to FIG. 6 and can be used to control audio/video data and applications at the source device.

FIG. 9A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 9A includes obtaining user input data at a wireless sink device (such as wireless sink device 160) (901).
The user input data may be obtained through a user input component of the wireless sink device 160, such as, for example, the user input interface 376 shown with reference to FIG. 3. The sink device 160 may then generate payload data (903), where the payload data may describe the user input data. In one example, the payload data may include the received user input data and may identify one or more user commands. The sink device 160 may also generate a data packet (905), where the data packet includes a data packet header and the generated payload data. The sink device 160 may then send the generated data packet to the wireless source device (e.g., the source device 120 of FIG. 1A or 220 of FIG. 2) (907). Sink device 160 may include components that allow the transmission of data packets, such as, for example, the transmission unit 333 and the wireless modem 334. The data packets can be sent to the wireless source device via TCP/IP.

FIG. 9B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by the source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 9B includes receiving a data packet from the sink device 360 (902), where, among other things, the data packet may include a data packet header and payload data. In one example, the payload data may include, for example, data describing details of the user input (such as an input type value). The source device 120 may include communication components that allow the transmission of data packets, including, for example, the transmission unit 233 and the wireless modem 234 shown with reference to FIG. 2.
The source device 120 may then parse the data packet (904) to determine an input type value in an input type field in the payload data. The source device 120 may process the data describing the details of the user input based on the determined input type value (906). The data packets described with reference to FIGS. 9A and 9B may generally take the form of the data packet described with reference to FIG. 6.

FIG. 10A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The example method shown may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 10A includes obtaining user input data at a wireless sink device (such as wireless sink device 160) (1001). The user input data may be obtained through a user input component of the wireless sink device 160, such as, for example, the user input interface 376 shown with reference to FIG. 3. The sink device 160 may then generate a data packet header based on the user input (1003). Among other fields, the data packet header may include a timestamp flag (for example, a 1-bit field) indicating whether there is a timestamp field in the data packet header. For example, the timestamp flag may include a "1" indicating the presence of a timestamp field and a "0" indicating the absence of a timestamp field. For example, the timestamp field may be a 16-bit field containing a timestamp generated by the source device 120 and added to the video data before transmission. Sink device 160 may also generate a data packet (1005), where the data packet includes the generated data packet header and payload data.
In one example, the payload data may include the received user input data and may identify one or more user commands. The sink device 160 may then send the generated data packet to the wireless source device (e.g., the source device 120 of FIG. 1A or 220 of FIG. 2) (1007). The sink device 160 may include components that allow the transmission of data packets, including, for example, the transmission unit 333 and the wireless modem 334 shown with reference to FIG. 3. The data packets can be sent to the wireless source device via TCP/IP.

FIG. 10B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by the source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 10B includes receiving a data packet from the wireless sink device 160 (1002), where, among other things, the data packet may include a data packet header and payload data. For example, the payload data may include user input data. The source device 120 may include communication components that allow the transmission of data packets, including, for example, the transmission unit 233 and the wireless modem 234 shown with reference to FIG. 2. The source device 120 may then parse the data packet header included in the data packet (1004). The source device 120 may determine whether there is a timestamp field in the data packet header (1006). In one example, the source device 120 may make this determination based on the timestamp flag value included in the data packet header.
If the data packet header includes a timestamp field, the source device 120 may process the payload data based on the timestamp in the timestamp field (1008). The data packets described with reference to FIGS. 10A and 10B may generally take the form of the data packet described with reference to FIG. 6 and may be used to control audio/video data at the source device.

FIG. 11A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The example method shown may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 11A includes obtaining user input data at a wireless sink device (such as wireless sink device 160) (1101). The user input data may be obtained through a user input component of the wireless sink device 160, such as, for example, the user input interface 376 shown with reference to FIG. 3. The sink device 160 may then generate a data packet header based on the user input (1103). Among other fields, the data packet header may include a timestamp field. For example, the timestamp field may include a 16-bit field containing a timestamp based on multimedia data generated by the wireless source device 120 and sent to the wireless sink device 160. The timestamp may be added to a frame of video data by the wireless source device 120 before the frame is sent to the wireless sink device. For example, the timestamp field may identify the timestamp associated with the frame of video data displayed at the wireless sink device 160 when the user input data was captured.
Sink device 160 may also generate a data packet (1105), where the data packet includes the generated data packet header and payload data. In one example, the payload data may include the received user input data and may identify one or more user commands. The sink device 160 may then send the generated data packet to the wireless source device (e.g., the source device 120 of FIG. 1A or 220 of FIG. 2) (1107). The sink device 160 may include components that allow the transmission of data packets, including, for example, the transmission unit 333 and the wireless modem 334 shown with reference to FIG. 3. The data packets can be sent to the wireless source device via TCP/IP.

FIG. 11B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by the source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 11B includes receiving a data packet (1102) from a wireless sink device (such as wireless sink device 160), where, among other things, the data packet may include a data packet header and payload data. For example, the payload data may include user input data. The source device 120 may include communication components that allow the transmission of data packets, including, for example, the transmission unit 233 and the wireless modem 234 shown with reference to FIG. 2. The source device 120 can then identify the timestamp field in the data packet header (1104). The source device 120 may process the payload data based on the timestamp in the timestamp field (1106).
As part of processing the payload data, the source device 120 may identify, based on the timestamp, the frame of video data that was displayed at the wireless sink device when the user input data was obtained, and may interpret the payload data based on the content of that frame. As part of processing the payload data based on the timestamp, the source device 120 may compare the timestamp with a current timestamp of a current frame of video being sent by the source device 120, and may execute the user input command described in the payload data in response to the time difference between the timestamp and the current timestamp being less than a threshold, or may not execute the user input command described in the payload data in response to the time difference between the timestamp and the current timestamp being greater than the threshold. The data packets described with reference to FIGS. 11A and 11B may generally take the form of the data packet described with reference to FIG. 6 and may be used to control audio/video data at the source device.

FIG. 12A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The example method shown may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 12A includes obtaining user input data at a wireless sink device (such as wireless sink device 160) (1201). In one example, the user input data may be voice command data, which may be obtained through a user input component of the wireless sink device 160 (such as, for example, a voice command recognition module included in the user input interface 376 of FIG. 3).
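The threshold comparison described above for FIG. 11B might be sketched as follows. Treating the 16-bit timestamp field as wrapping modulo 2^16 is an assumption made for illustration, as is the particular threshold value used in the example.

```python
# Sketch of the round-trip check of FIG. 11B: execute a user input
# command only if the current frame's timestamp is within `threshold`
# of the timestamp echoed back with the input. Wrap-around handling
# for the 16-bit timestamp field is an assumption.
def should_apply_input(input_ts, current_ts, threshold, modulo=1 << 16):
    delta = (current_ts - input_ts) % modulo
    return delta < threshold
```

A check of this kind lets the source device drop stale input, such as a click aimed at a frame that is no longer displayed, rather than executing it against the wrong content.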
Sink device 160 may generate a data packet header based on the user input (1203). Sink device 160 may also generate payload data (1205), where the payload data may include the voice command data. In one example, the payload data may also include the received user input data and may identify one or more user commands. The sink device 160 may also generate a data packet (1207), where the data packet includes the generated data packet header and payload data. The sink device 160 may then send the generated data packet to the wireless source device (e.g., the source device 120 of FIG. 1A or 220 of FIG. 2) (1209). The sink device 160 may include components that allow the transmission of data packets, including, for example, the transmission unit 333 and the wireless modem 334 shown with reference to FIG. 3. The data packets can be sent to the wireless source device via TCP/IP.

FIG. 12B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by the source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 12B includes receiving a data packet (1202), where, among other things, the data packet may include a data packet header and payload data. For example, the payload data may include user input data such as voice command data. The source device 120 may include communication components that allow the transmission of data packets, including, for example, the transmission unit 233 and the wireless modem 234 shown with reference to FIG. 2. The source device 120 may then parse the payload data included in the data packet (1204) to determine whether the payload data includes voice command data.
The data packets described with reference to FIGS. 12A and 12B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at the source device.

FIG. 13A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 13A includes obtaining user input data at a wireless sink device, such as wireless sink device 160 (1301). In one example, the user input data may be a multi-touch gesture, which may be obtained through a user input component of the wireless sink device 160, such as, for example, UI 167 or user input interface 376 of FIG. 3. In one example, the multi-touch gesture may include a first touch input and a second touch input. Sink device 160 may generate a data packet header based on the user input (1303). Sink device 160 may also generate payload data (1305), where the payload data may associate user input data for the first touch input event with a first pointer identification and associate user input data for the second touch input event with a second pointer identification. Sink device 160 may also generate a data packet (1307), where the data packet includes the generated data packet header and the payload data. Sink device 160 may then send the generated data packet to the wireless source device, e.g., source device 120 of FIG. 1A or 220 of FIG. 2 (1309).
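The pointer-identification scheme of FIG. 13A, in which each touch event in the payload carries a pointer ID, lets the receiver regroup the events into one multi-touch gesture. A minimal sketch, with an assumed `(pointer_id, x, y)` event tuple format:

```python
# Group touch events by pointer identification so that events belonging to
# the same finger can be interpreted together as part of a multi-touch gesture.
def group_by_pointer(events):
    gesture = {}
    for pointer_id, x, y in events:
        gesture.setdefault(pointer_id, []).append((x, y))
    return gesture

# Two interleaved pointers, as would arrive in one payload.
events = [(1, 10, 10), (2, 100, 100), (1, 12, 10), (2, 98, 100)]
gesture = group_by_pointer(events)
# Two pointers moving toward each other could then be interpreted as a pinch.
```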
Sink device 160 may include components that allow transmission of data packets, including, for example, transmission unit 333 and wireless modem 334 shown with reference to FIG. 3. Data packets can be sent to the wireless source device via TCP/IP.

FIG. 13B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2). In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 13B includes receiving a data packet (1302), where, among other things, the data packet may include a data packet header and payload data. For example, the payload data may include user input data such as a multi-touch gesture. Source device 120 may include communication components that allow the reception of data packets, including, for example, transmission unit 233 and wireless modem 234 shown with reference to FIG. 2. Source device 120 may then parse the payload data included in the data packet (1304) to identify the user input data included in the payload data. In one example, the identified data may include user input data for a first touch input event identified by a first pointer identification and user input data for a second touch input event identified by a second pointer identification. Source device 120 may then interpret the user input data for the first touch input event and the user input data for the second touch input event together as a multi-touch gesture (1306). The data packets described with reference to FIGS. 13A and 13B may generally take the form of the data packets described with reference to FIG.
6 and may be used to control audio/video data at the source device.

FIG. 14A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3). In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 14A includes obtaining, at the wireless sink device 360, user input data from an external device (1401). In one example, the external device may be a third-party device connected to the sink device. Sink device 160 may generate a data packet header based on the user input (1403). In one example, the data packet header may identify the user input data as forwarded user input data. Sink device 160 may also generate payload data (1405), where the payload data may include the user input data. Sink device 160 may also generate a data packet (1407), where the data packet includes the generated data packet header and the payload data. Sink device 160 may then send the generated data packet to the wireless source device, e.g., source device 120 of FIG. 1A or 220 of FIG. 2 (1409). Sink device 160 may include components that allow transmission of data packets, including, for example, transmission unit 333 and wireless modem 334 shown with reference to FIG. 3. Data packets can be sent to the wireless source device via TCP/IP.

FIG. 14B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2).
In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 14B includes receiving a data packet (1402), where, among other things, the data packet may include a data packet header and payload data. For example, the payload data may include user input data, such as a forwarded user input command indicating that the user input data was forwarded from a third-party device. Source device 120 may include communication components that allow the reception of data packets, including, for example, transmission unit 233 and wireless modem 234 shown with reference to FIG. 2. Source device 120 may then parse the data packet header and may determine that the payload data includes a forwarded user input command (1404). Source device 120 may then parse the payload data included in the data packet (1406) to identify an identification associated with the third-party device corresponding to the forwarded user input command. Source device 120 may then process the payload data based on the identified identification of the third-party device (1408). The data packets described with reference to FIGS. 14A and 14B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at the source device.

FIG. 15A is a flowchart of an example method of transmitting user input data from a wireless sink device to a wireless source device according to this disclosure. The illustrated example method may be performed by sink device 160 (FIG. 1A) or 360 (FIG. 3).
In some examples, a computer-readable storage medium (e.g., memory 332) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 331) to perform one or more of the steps shown in the flowchart.

The method of FIG. 15A includes obtaining user input data at the wireless sink device (1501). The user input data may have associated coordinate data. For example, the associated coordinate data may correspond to the location of a mouse click event or a touch event. Sink device 160 may then normalize the associated coordinate data to generate normalized coordinate data (1503). Sink device 160 may then generate a data packet that includes the normalized coordinate data (1505). Normalizing the coordinate data may include scaling the associated coordinate data based on a ratio of the resolution of the display window to the resolution of the display of the source, such as display 22 of source device 120. The resolution of the display window may be determined by sink device 160, and the resolution of the display of the source device may be received from source device 120. Sink device 160 may then send the data packet with the normalized coordinates to the wireless source device 120 (1507). As part of the method of FIG. 15A, sink device 160 may also determine whether the associated coordinate data is within the display window for the content received from the wireless source device and, for example, may process the user input locally if the associated coordinate data is outside the display window, or may normalize the coordinates as described if the input is within the display window.

FIG. 15B is a flowchart of an example method of receiving user input data from a wireless sink device at a wireless source device according to this disclosure. The illustrated example method may be performed by source device 120 (FIG. 1A) or 220 (FIG. 2).
In some examples, a computer-readable storage medium (e.g., memory 232) may store instructions, modules, or algorithms that, when executed, cause one or more processors (e.g., processor 231) to perform one or more of the steps shown in the flowchart.

The method of FIG. 15B includes receiving a data packet at a wireless source device, where the data packet includes user input data with associated coordinate data (1502). For example, the associated coordinate data may correspond to the location of a mouse click event or touch event at the sink device. Source device 120 may then normalize the associated coordinate data to generate normalized coordinate data (1504). Source device 120 may normalize the coordinate data by scaling the associated coordinate data based on a ratio of the resolution of the display window to the resolution of the display of the source. Source device 120 may determine the resolution of the display of the source device and may receive the resolution of the display window from the wireless sink device. The source device may then process the data packet based on the normalized coordinate data (1506). The data packets described with reference to FIGS. 15A and 15B may generally take the form of the data packets described with reference to FIG. 6 and may be used to control audio/video data at the source device.

For simplicity of explanation, various aspects of the present disclosure have been described individually with reference to FIGS. 7-15. However, it is contemplated that these various aspects may be combined and used in conjunction with one another, not merely individually. In general, the functions and/or modules described herein may be implemented in both wireless source devices and wireless sink devices.
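The coordinate normalization of FIGS. 15A and 15B, scaling sink-side coordinates by the ratio of the source display resolution to the display-window resolution, can be sketched as follows. Variable names and the example resolutions are illustrative only.

```python
# Scale a coordinate captured in the sink's display window to the coordinate
# space of the source device's display, as described for FIGS. 15A and 15B.
def normalize(x, y, window_w, window_h, source_w, source_h):
    return (x * source_w / window_w, y * source_h / window_h)

# A touch at (640, 360) in a 1280x720 display window maps to (960, 540)
# on a 1920x1080 source display.
nx, ny = normalize(640, 360, 1280, 720, 1920, 1080)
```

As the text notes, either side can perform this step: the sink can normalize before sending, or the source can normalize after receiving, provided the two resolutions are exchanged beforehand.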
In this way, the user interface functions described in the present examples may be used interchangeably between the wireless source device and the wireless sink device.

The techniques of this disclosure may be implemented in a wide variety of devices and apparatuses, including wireless handheld devices and integrated circuits (ICs) or a set of ICs (i.e., a chipset). Any described components, modules, or units are provided to emphasize functional aspects and do not necessarily require realization by different hardware units.

Accordingly, the techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, any features described as modules, units, or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed in a processor, perform one or more of the methods described above. The computer-readable medium may comprise a tangible and non-transitory computer-readable storage medium and may form part of a computer program product, which may include packaging materials. The computer-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic or optical data storage media, and the like.
Additionally or alternatively, the techniques may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.

The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

Various aspects of the present disclosure have been described. These and other aspects are within the scope of the following claims.
A method for combining speech recognition with near field communication (NFC) to enable a user to enter, store, and use web addresses on portable devices. A user of a portable device having an NFC reader, a voice input interface, a speech recognition system, and memory enables the NFC reader of the portable device to touch an NFC tag or reader found on an object, the object containing information of interest to the user of the portable device. When the NFC reader and the NFC tag or reader touch, the portable device receives a URI and default keywords associated with the URI. The portable device stores the URI in a persistent storage of the portable device based on the default keywords and the date, time, and location of when and where the URI was obtained. The user of the portable device can then retrieve and use the URI at a later time using the voice input interface and the speech recognition system: when the user speaks the default keywords into the voice input interface, the speech recognition system retrieves the URI.
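The store-and-retrieve flow summarized above can be sketched in a few lines. This is a hypothetical, minimal model only; the class, field names, and example URI are illustrative and not part of the disclosure.

```python
import datetime

# Hypothetical keyword-indexed store for captured URIs, annotated with the
# date, time, and location of capture as described in the summary.
class UriStore:
    def __init__(self):
        self._records = []

    def add(self, uri, keywords, location):
        self._records.append({
            "uri": uri,
            "keywords": {k.lower() for k in keywords},
            "captured": datetime.datetime.now(),
            "location": location,
        })

    def find(self, spoken_keyword):
        # In the described system, spoken_keyword would come from the
        # speech recognition system via the voice input interface.
        word = spoken_keyword.lower()
        return [r["uri"] for r in self._records if word in r["keywords"]]

store = UriStore()
store.add("http://example.com/movie", ["JoeActor", "movie"], "theatre lobby")
```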
1. A computing device for transferring Near Field Communications information comprising:
a database to store information corresponding to a plurality of services, the information comprises a Universal Resource Identifier and content-specific keywords associated with each of the plurality of services;
a voice recognition system to receive a voice input corresponding to a name of a requested service from the plurality of services, the voice recognition system to retrieve the Universal Resource Identifier and the content-specific keywords associated with the requested service from the database; and
a Near Field Communications reader to load the Universal Resource Identifier and the content-specific keywords associated with the requested service into a Near Field Communications tag emulated by the Near Field Communications reader, the Near Field Communications reader to transfer the Universal Resource Identifier and the content-specific keywords from the Near Field Communications tag to a portable computing device in response to the Near Field Communications tag being touched by the portable computing device.

2. The computing device of claim 1, further comprising a display to display one or more of the plurality of services corresponding to the name of the requested service, the Near Field Communications reader to load the Universal Resource Identifier and the content-specific keywords associated with a service selected from the plurality of services displayed into the Near Field Communications tag emulated by the Near Field Communications reader.

3. The computing device of claim 1, wherein the plurality of services comprises one or more of a local hotel, a restaurant, or a transportation service.

4. The computing device of claim 1, wherein the information corresponding to the plurality of services comprises one or more of a phone number, a web site, or directions corresponding to one or more of the plurality of services.

5. A system for transferring Near Field Communications information
comprising:
a computing device comprising:
a database to store information corresponding to a plurality of services, the information comprises a Universal Resource Identifier and content-specific keywords associated with each of the plurality of services,
a voice input to receive a spoken name of a requested service from the plurality of services,
a voice recognition interface to retrieve the Universal Resource Identifier and the content-specific keywords associated with the requested service from the database, and
a first Near Field Communications reader to load the Universal Resource Identifier and the content-specific keywords associated with the requested service into an emulated Near Field Communications tag; and
a portable computing device comprising:
a second Near Field Communications reader to receive the Universal Resource Identifier and the content-specific keywords transferred from the first Near Field Communications reader of the computing device in response to the emulated Near Field Communications tag being touched by the second Near Field Communications reader of the portable computing device,
a memory to store the Universal Resource Identifier based on the content-specific keywords, and
a speech-recognition system to retrieve the Universal Resource Identifier from the memory of the portable computing device in response to receiving a spoken keyword, the spoken keyword corresponding to one of the content-specific keywords stored in the memory.

6. The system of claim 5, wherein the computing device further comprises a display to display one or more of the plurality of services corresponding to the name of the requested service,
wherein to load the Universal Resource Identifier and the content-specific keywords associated with the requested service into the Near Field Communications tag comprises to load, into the Near Field Communications tag, the Universal Resource Identifier and the content-specific keywords associated with a service selected from the
plurality of services displayed.

7. The system of claim 5, wherein the plurality of services comprises one or more of a local hotel, a restaurant, or a transportation service.

8. The system of claim 5, wherein the information corresponding to the plurality of services comprises one or more of a phone number, a web site, or directions corresponding to one or more of the plurality of services.

9. A method for transferring Near Field Communications information comprising:
storing, by a computing device, information corresponding to a plurality of services in a database on the computing device, the information comprises a Universal Resource Identifier and content-specific keywords associated with each of the plurality of services;
receiving, by the computing device, a voice input corresponding to a name of a requested service from the plurality of services;
retrieving, by the computing device, the Universal Resource Identifier and the content-specific keywords associated with the requested service from the database;
loading, by the computing device, the Universal Resource Identifier and the content-specific keywords associated with the requested service into a Near Field Communications tag emulated by the computing device; and
transferring, by the computing device, the Universal Resource Identifier and the content-specific keywords from the Near Field Communications tag to a portable computing device in response to the Near Field Communications tag being touched by a Near Field Communications reader of the portable computing device.

10. The method of claim 9, further comprising:
receiving, by the portable computing device, the Universal Resource Identifier and the content-specific keywords from the Near Field Communications tag emulated by the computing device using the Near Field Communications reader of the portable computing device;
storing, by the portable computing device, the Universal Resource Identifier in memory of the portable computing device based on the content-specific keywords;
and retrieving, by the portable computing device, the Universal Resource Identifier from the memory of the portable computing device in response to receiving a spoken keyword, the spoken keyword corresponding to one of the content-specific keywords stored in the memory.

11. The method of claim 9, further comprising:
displaying, by the computing device, one or more of the plurality of services corresponding to the name of the requested service; and
receiving, by the computing device, a selection of one of the plurality of services displayed,
wherein loading the Universal Resource Identifier and the content-specific keywords associated with the requested service into the Near Field Communications tag comprises loading the Universal Resource Identifier and the content-specific keywords associated with the selected service from the plurality of services displayed into the Near Field Communications tag.

12. The method of claim 9, wherein the plurality of services comprises one or more of a local hotel, a restaurant, or a transportation service.

13. The method of claim 9, wherein the information corresponding to the plurality of services comprises one or more of a phone number, a web site, or directions corresponding to one or more of the plurality of services.

14. An article comprising a storage medium having a plurality of machine accessible instructions, wherein when the instructions are executed by a processor, the instructions enable a computing device to perform a method according to any one of claims 9-13 or to operate as a computing device as claimed in any one of claims 1 to 4.
BACKGROUND OF THE INVENTION

Field of the Invention

The present invention is generally related to near field communications (NFC). More particularly, the present invention is related to a voice interface to NFC applications.

Description

Near-Field Communications (NFC) is a very short-range contactless data transfer technology related to RFID (Radio Frequency Identification). NFC has achieved commercial success in Europe and Japan for public transit payment systems and for point-of-sale purchases using cell phones with built-in NFC interfaces.

Another NFC application that has been proposed and deployed to a limited extent is to store URIs (Universal Resource Identifiers) in NFC tags attached to Smart Posters. Users with NFC-equipped cell phones can scan the NFC tag on a Smart Poster to automatically call up web content associated with the poster on their cell phones. This eliminates the need to manually enter a URI on a device with a limited keypad. However, Smart Poster scenarios typically presume that the user intends to use the URI immediately. What is not considered is the problem of retrieving or managing multiple such URIs on the portable device.

Speech recognition is another possible technology that could be used for entering web addresses on devices with limited user interfaces. US 2004/0138781 A1 discloses speech recognition software for a mobile device. However, considering how awkward it is to verbally communicate most URIs to another person, it is clear that speech recognition technology would have to become very sophisticated before it could be used for this purpose. Accurate speech recognition requires a large number of MIPS (million instructions per second), which is problematic for low-power portable devices.
Furthermore, even if the recognition engine worked perfectly, insurmountable usability obstacles surround the problem of verbally entering typical URIs such as, for example, http://www!ncbi!nlm!nih!gov/entrez/query!fcgi?cmd=Retrieve&db=PubMed&list uids=9962543&dopt=Abstract. (It should be noted that periods have been replaced with exclamation marks in the above-referenced URI to avoid inadvertent hyperlinks.)

Thus, what is needed is a technique for combining speech recognition with NFC to enable a user to enter and use web addresses on portable devices.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated herein and form part of the specification, illustrate embodiments of the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art(s) to make and use the invention. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

FIG. 1 is a block diagram illustrating an exemplary platform topology of a portable device according to an embodiment of the present invention.

FIG. 2 is a block diagram illustrating an exemplary system for combining speech recognition and NFC to enable a user to enter and use web addresses on portable devices according to an embodiment of the present invention.

FIG. 3 is a flow diagram describing an exemplary method for enabling portable devices to navigate and use Internet content according to an embodiment of the present invention.

FIG. 4 is a flow diagram describing an exemplary method for retrieving and using URIs stored on a portable device via a voice input interface according to an embodiment of the present invention.

FIG.
5 is a flow diagram 500 illustrating an exemplary method for transferring information from one NFC reader to another NFC reader according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those skilled in the relevant art(s) with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which embodiments of the present invention would be of significant utility.

Reference in the specification to "one embodiment", "an embodiment" or "another embodiment" of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment.

Embodiments of the present invention enable a portable device to process URIs (Universal Resource Identifiers), as well as the web content to which a URI refers, in an efficient manner using NFC and voice recognition technology. This is accomplished using a portable (i.e., mobile) device that includes an NFC reader, an audio input interface, and a voice recognition system. The NFC reader may be used to read URIs from "Smart Posters" and other objects in which NFC tags are located. The audio input interface may be used to further annotate the URIs retrieved by the NFC reader with user-defined keywords for managing stored URIs.
The audio input interface may also be used in conjunction with the voice recognition system as a voice-assisted lookup mechanism for retrieving stored URIs.

Embodiments of the present invention provide a flexible framework for combining voice recognition with NFC. This enables devices with limited user interface (UI) capabilities to more easily navigate and use Internet content. Embodiments of the present invention also extend a portable device's command vocabulary through meta-data associated with the URIs obtained via the NFC reader.

Embodiments of the present invention may be implemented using hardware, software, or a combination thereof and may be implemented in one or more multi-core processor platforms or single-core processing systems. FIG. 1 illustrates an exemplary platform topology of a portable device 100 according to an embodiment of the present invention. Various embodiments are described in terms of this exemplary platform topology. After reading this description, it will be apparent to a person skilled in the relevant art(s) how to implement the invention using other platform topologies and/or other computer architectures.

Portable device 100 comprises a processor 102. As previously indicated, processor 102 may be a single-core, dual-core, quad-core, or multi-core processor. Processor 102 may be an Intel® Pentium® M processor manufactured by Intel® Corporation, located in Santa Clara, CA, or any other type of processor capable of carrying out the methods disclosed herein, such as, for example, an Intel® Core™ Solo processor or an Intel® Core™ Duo processor, each manufactured by Intel® Corporation. Processor 102 may support multiple threads as well.

Processor 102 may communicate with a memory controller hub (MCH) 104, also known as a North bridge, via a front side bus 106. MCH 104 communicates with system memory 110 via a memory bus 108.
Memory 110 may be a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, or any other type of medium readable by processor 102. Memory 110 may store instructions for performing the execution of method embodiments of the present invention. Memory 110 may also store each URI and its associated data that is captured using portable device 100. MCH 104 may also communicate with an advanced graphics port (AGP) 114 via a graphics bus 112.

MCH 104 may communicate with an I/O controller hub (ICH) 118, also known as a South bridge, via a peripheral component interconnect (PCI) bus 116. ICH 118 may be coupled to one or more I/O (Input/Output) component devices, such as, but not limited to, an NFC reader 120, an audio input interface 122, a network interface controller (NIC) 124 via a PCI bus 126, a display 128 for displaying web content as well as other information, and a keyboard 130. In many instances, keyboard 130 may be a limited user interface (UI). Other types of I/O components may be used as well.

NFC reader 120 of portable device 100 may be used for URI input. For example, NFC reader 120 may be used to obtain information about an object, event, advertisement, etc. from, for example, a Smart Poster or any other object having information attached onto an NFC tag. When a user touches the NFC tag with NFC reader 120 of portable device 100, information, such as, for example, a URI, may be read by NFC reader 120. In one embodiment, keywords specific to the content of the object (for example, the Smart Poster) from which the URI is obtained may also be read by NFC reader 120 and used as default keywords when storing and retrieving the URI.
In an embodiment in which portable device 100 has wireless Internet capabilities, when NFC reader 120 of portable device 100 touches an NFC tag from a Smart Poster or other object having information attached onto the NFC tag, a web browser window may open on display 128 and portable device 100 may connect to the Internet to download the data associated with the URI read by NFC reader 120.

Audio input interface 122 may be used for classification and retrieval purposes. For example, after a URI is read by portable device 100 via NFC reader 120, the user may augment the default keywords obtained from the NFC tag by inputting user-defined keywords via audio input interface 122.

Portable device 100 further comprises a speech recognition system 132 (shown in phantom). Speech recognition system 132 may be implemented in hardware, software, or a combination thereof. If speech recognition system 132 is implemented in hardware, speech recognition system 132 may be coupled to MCH 104 via PCI bus 116. If speech recognition system 132 is implemented in software, speech recognition system 132 may reside in memory 110 (not shown). Speech recognition system 132 may be used to search for and retrieve URIs based on voice input received from audio input interface 122. Speech recognition accuracy and efficiency improve dramatically when applied to limited-vocabulary domains. In one embodiment of the present invention, speech recognition system 132 may use limited-vocabulary domains such as command-driven menus and keyword-based lookup.

Nonvolatile memory, such as a Flash memory 134, may be coupled to ICH 118 via an SPI (Serial Peripheral Interface) bus 136. In embodiments of the present invention, BIOS firmware may reside in Flash memory 134, and at boot-up of the platform, instructions stored on Flash memory 134 may be executed.
In an embodiment, Flash memory 134 may also store instructions for performing the execution of method embodiments described herein. In one embodiment, speech recognition system 132 may be implemented in software stored in Flash memory 134. In this instance, speech recognition system 132 may be initialized during system boot up of the platform when portable device 100 is turned on.

As previously indicated, embodiments of the present invention perform the complex and error-prone task of URI input on a portable device using an NFC interface combined with an audio interface and a speech recognition system. Rather than having a user enter the entire URI via voice, the user may enter the URI via NFC and optionally enter user-defined keywords via the voice input interface that may be used to retrieve and manipulate data associated with the URI.

FIG. 2 is a block diagram illustrating an exemplary system 200 for combining speech recognition and NFC to enable a user to enter and use web addresses on portable devices according to an embodiment of the present invention. System 200 comprises portable device 100, a Smart Poster 202 for a movie presently being shown at the theatre, a network, such as, for example, Internet 204, and a Web page 206. Smart Poster 202 includes an NFC tag 208 containing a URI associated with the advertised movie. As indicated above, portable device 100 includes NFC reader 120, audio input interface 122, and speech recognition system 132 (not explicitly shown).

A user 210, interested in attending the movie advertised on Smart Poster 202, enables NFC reader 120 of portable device 100 to touch NFC tag 208 to read the URI and associated default keywords into portable device 100. Associated keywords read by NFC reader 120 may be the title of the movie, the local theatres and times where and when the movie is playing, and other information about the movie.
The time, date, and location of when and where the URI is captured may also be used as an annotation for the URI. The URIs may be stored using default keywords and, if desired by the user, user-defined keywords entered by the user via the voice input interface. Once the URI is read, user 210 may verbally annotate the URI with user-defined keywords. In this example, user 210 verbally annotates the URI by saying the keyword "JoeActor" into audio input interface 122. "JoeActor" is user 210's favorite actor in the advertised movie, and therefore will be easy for user 210 to remember when attempting to retrieve the URI at a later time.

If portable device 100 includes wireless Internet connectivity, portable device 100 may load Web page 206 associated with the URI read by NFC reader 120. In addition to the primary content on Web page 206, Web page 206 may contain meta-data encoded as an embedded XML (eXtensible Markup Language) data island. The meta-data may be used to facilitate subsequent search and selection by the user. For example, the meta-data can include a set of content links optimized for different device form factors, a small graphical icon, and a set of keywords (for lookup) that may be verbally entered using audio input interface 122.

The meta-data may also include additional voice commands tied to additional related links. These commands can help accelerate navigation of the target website. For example, if the URI is associated with a restaurant near a theatre in which the movie is playing, voice command meta-data can point to internal links to provide a display of the menu from the nearby restaurant (that is, <Command word="menu"; link= http://www?Restaurant?com/menu/ >) or directions to the restaurant (that is, <Command word="restaurantdirections"; link= http://www?Restaurant?com/map/ >). The URI may also include a command for directions to the theatre (that is, <Command word="theatredirections"; link= http://www?Theatre?com/map/ >).
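The command entries shown above can be collected into a lookup table on the device so that a spoken command word maps to its target link. The sketch below assumes the illustrative <Command word=...; link=...> syntax used above (with ordinary periods restored in the URLs, which are hypothetical); a real device would parse whatever markup the Web page actually embeds.

```python
import re

# Hypothetical parser for the command meta-data entries shown above. The
# "<Command word=...; link=...>" syntax is illustrative, not a standard.
COMMAND_RE = re.compile(
    r'<Command\s+word="(?P<word>[^"]+)";\s*link=\s*(?P<link>\S+?)\s*>')

def parse_command_metadata(metadata: str) -> dict:
    """Map each spoken command word to its target link."""
    return {m.group("word"): m.group("link")
            for m in COMMAND_RE.finditer(metadata)}

meta = ('<Command word="menu"; link= http://www.Restaurant.com/menu/ >'
        '<Command word="theatredirections"; link= http://www.Theatre.com/map/ >')
commands = parse_command_metadata(meta)
```

The resulting dictionary is what the speech recognition system would consult when it is "temporarily augmented with such extended commands" for a selected URI.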
(It should be noted that periods have been replaced with question marks in the above referenced URIs to avoid inadvertent hyperlinks.) Speech recognition system 132 in portable device 100 may be temporarily augmented with such extended commands when the user selects a URI.

With embodiments of the present invention, it is not mandatory that the web content associated with the URIs captured by NFC reader 120 be viewed immediately. Simple command-oriented voice recognition processing allows the stored URIs to be retrieved and manipulated. The voice recognition system and audio input interface of the portable device together form a speech-based interface that allows the user to perform URI lookup using the default and user-defined keywords.

FIG. 3 is a flow diagram 300 describing an exemplary method for enabling portable devices to navigate and use Internet content according to an embodiment of the present invention. The invention is not limited to the embodiment described herein with respect to flow diagram 300. Rather, it will be apparent to persons skilled in the relevant art(s) after reading the teachings provided herein that other functional flow diagrams are within the scope of the invention. The process begins at block 302, where the process immediately proceeds to block 304.

In block 304, an NFC reader of a portable device touches an NFC tag found on an object, such as, for example, a Smart Poster. The process then proceeds to block 306.

In block 306, the portable device receives a URI and default keywords associated with the URI via the NFC reader. The process then proceeds to block 308.

In block 308, a user of the portable device may optionally annotate the URI with user-defined keywords via a voice input interface on the portable device. The process then proceeds to decision block 310.

In decision block 310, it is determined whether a Web page associated with the URI is to be downloaded and displayed on the portable device immediately.
If the portable device is configured and able to connect over the Internet with a server storing the Web page associated with the URI, and the user desires to view the Web page at that time, then the portable device may retrieve and display the Web page at block 312. The process then proceeds to block 314, where the user may navigate and use the Internet content as described above with reference to FIG. 2. The user may also surf the Internet in a manner well known to those skilled in the relevant art(s). The process then proceeds to block 316.

Returning to decision block 310, if it is determined that the Web page associated with the URI is not to be downloaded and displayed immediately on the portable device, the process then proceeds to block 316.

In block 316, the portable device stores the URI, keywords, an icon for lookup, and commands for voice recognition in a persistent storage of the portable device. The process then proceeds to block 318.

In block 318, the user may retrieve and use the URI at a later time using the speech-based interface.

FIG. 4 is a flow diagram 400 describing an exemplary method for retrieving and using URIs stored on a portable device via a voice input interface according to an embodiment of the present invention. The invention is not limited to the embodiment described herein with respect to flow diagram 400. Rather, it will be apparent to persons skilled in the relevant art(s) after reading the teachings provided herein that other functional flow diagrams are within the scope of the invention. The process begins at block 402, where the process immediately proceeds to block 404.

In block 404, a user may issue a voice command to retrieve a URI stored in a persistent store of a portable device. For example, the user may issue the voice command "JoeActor" to retrieve all URIs related to Joe Actor that are stored on the portable device.
The process then proceeds to block 406.

In block 406, representations of URIs matching the keyword "JoeActor" are displayed to the user. For example, a graphical icon and a short title for each URI matching the keyword "JoeActor" may be displayed. Other information associated with the URIs when they were originally acquired, such as keywords, time/date/location, etc., may also be displayed to aid the user in selecting the desired URI. The process then proceeds to decision block 408.

In decision block 408, it is determined whether the user has found the URI of interest. If the user has found the URI of interest, the process proceeds to block 410.

In block 410, the user may select the URI of interest to be displayed. The process then proceeds to block 412.

In block 412, the portable device connects to the Internet and loads the web content corresponding to the URI. If the web content contains new meta-data, the portable device may augment the stored URI reference with the new meta-data.

Returning to decision block 408, if it is determined that the user has not found the URI of interest, the process proceeds back to block 404, where the user may issue a voice command using a different keyword. In an embodiment where the keyword results in only one match, the portable device may directly connect to the Internet and load the web content corresponding to that URI.

Most NFC readers can emulate NFC tags to be read by other NFC readers. Thus, when a user's portable device has been automatically loaded with a URI from an NFC tag on an object, that portable device may also transfer the URI to other portable devices having an NFC reader. For example, a public kiosk in an airport may include a voice recognition interface coupled with a pre-loaded database of URIs of local hotels, transportation, restaurants, and other services.
A user may speak the desired service name using the voice input of the kiosk to look up matching services. Once a service is selected by the user, the kiosk can load the URI of that service into its NFC reader. The user can then touch the NFC reader of their portable device to the NFC reader of the kiosk to read the data into their portable device. In this way, associated and up-to-date contact information, such as, for example, phone numbers, web sites, directions, etc., can be easily loaded by the portable device via its Internet connection.

FIG. 5 is a flow diagram 500 illustrating an exemplary method for transferring information from one NFC reader to another NFC reader according to an embodiment of the present invention. The invention is not limited to the embodiment described herein with respect to flow diagram 500. Rather, it will be apparent to persons skilled in the relevant art(s) after reading the teachings provided herein that other functional flow diagrams are within the scope of the invention. The process begins at block 502, where the process immediately proceeds to block 504.

In block 504, a user speaks into an object having a voice recognition interface coupled with a pre-loaded database of URIs. The keyword spoken by the user identifies one of a plurality of desired services about which the object has information that may be retrieved by the user. The process proceeds to block 506.

In block 506, matching services are displayed by the object to the user. The process then proceeds to block 508.

In block 508, the user may select the service of interest. The process then proceeds to block 510.

In block 510, the object may load the URI of that service into its NFC reader.
The process then proceeds to block 512.

In block 512, the user may then enable the NFC reader of the user's portable device to touch the NFC reader of the object to read the URI of the service into the portable device.

Embodiments of the present invention may be implemented using hardware, software, or a combination thereof and may be implemented in one or more portable computer systems, as shown in FIG. 1, or other processing systems. The techniques described herein may find applicability in any computing, consumer electronics, or processing environment. The techniques may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, consumer electronics devices (including DVD (Digital Video Disc) players, personal video recorders, personal video players, satellite receivers, stereo receivers, and cable TV receivers), and other electronic devices that may include at least one processor, a storage medium accessible by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code is applied to the data entered using the input device to perform the functions described and to generate output information. The output information may be applied to one or more output devices. One of ordinary skill in the art may appreciate that the invention can be practiced with various system configurations, including multiprocessor systems, minicomputers, mainframe computers, independent consumer electronics devices, and the like. The invention can also be practiced in distributed computing environments where tasks, or portions thereof, may be performed by remote processing devices that are linked through a communications network.

Each program may be implemented in a high level procedural or object oriented programming language to communicate with a processing system.
However, programs may be implemented in assembly or machine language, if desired. In any case, the language may be compiled or interpreted.

Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include a machine accessible medium having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods. The term "machine accessible medium" used herein shall include any medium that is capable of storing or encoding a sequence of instructions for execution by the machine and that causes the machine to perform any one of the methods described herein. The term "machine accessible medium" shall accordingly include, but not be limited to, solid-state memories, optical and magnetic disks, and a carrier wave that encodes a data signal. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action or produce a result.

While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined in the appended claims.
Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined in accordance with the following claims and their equivalents.
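As a recap of the flows of FIGS. 3 and 4 above, the sketch below models block 316 (storing the URI with its default and user-defined keywords) and blocks 404-406 (retrieving stored URIs by a spoken keyword). All record fields and the case-insensitive exact-match rule are assumptions; the specification only requires that the URI, keywords, an icon, and voice commands be stored and that lookup be keyword-based.

```python
from dataclasses import dataclass, field

# Minimal sketch of the persistent store described above; names hypothetical.
@dataclass
class StoredUri:
    uri: str
    keywords: set
    icon: bytes = b""                              # icon for lookup (block 316)
    commands: dict = field(default_factory=dict)   # spoken word -> link

store: list = []

def save_uri(uri, default_keywords, user_keywords=()):
    """Block 316: persist the URI with default and user-defined keywords."""
    record = StoredUri(uri, {k.lower() for k in default_keywords} |
                            {k.lower() for k in user_keywords})
    store.append(record)
    return record

def lookup(spoken_word):
    """Blocks 404-406: return all stored URIs matching the spoken keyword."""
    return [r for r in store if spoken_word.lower() in r.keywords]

save_uri("http://www.Movie.com/", {"movie", "theatre"}, {"JoeActor"})
save_uri("http://www.Hotel.com/", {"hotel"})
matches = lookup("JoeActor")
# With exactly one match, the device may load the web content directly.
auto_load = matches[0].uri if len(matches) == 1 else None
```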
A semiconductor device (100) comprising a programming circuit (105) that includes an active device (110) on or in a substrate (117) and a programmable electronic component (115) on the substrate. The programmable electronic component includes at least one carbon nanotube (130) having a segment (135) with an adjusted diameter (140). The programmable electronic component has a value that depends upon the adjusted diameter. The programming circuit also includes interconnects that couple the active device to the programmable electronic component. The active device is configured to control a current transmitted to the programmable electronic component.
CLAIMS

What is claimed is:

1. A semiconductor device, comprising: a substrate; a circuit on said substrate, including: an active device; a programmable electronic component coupled to said active device and including at least one carbon nanotube having a segment with an adjusted diameter; said active device being configured to control a current transmitted to said programmable electronic component and said programmable electronic component having a value that depends upon said adjusted diameter.

2. The device of Claim 1, wherein said programmable electronic component is configured as a fusible link; wherein said segment is configured to open when said current, equal to a predefined level, is transmitted through said carbon nanotube; and said value is thereby configured to equal a zero or nonzero current depending on whether said segment is opened or not opened, respectively.

3. The device of Claim 1, wherein said programmable electronic component is configured as a capacitor; with said carbon nanotube being capacitively coupled to a conductive body such that a distance between said segment and said conductive body is configured to change as a function of said adjusted diameter, and said value is configured to be equal to a capacitance.

4. The device of Claim 3, wherein said conductive body comprises a second carbon nanotube.

5. The device of Claim 3, wherein said distance is within a range from about 30 to 43 nm.

6. The device of Claim 3 or 5, wherein said adjusted diameter is within a range from about 3 to 30 nm.

7. The device of Claim 3, wherein said programming circuit further includes: an inverter and a comparator, wherein said programmable electronic component is coupled to an output of said inverter; and said comparator has a first input coupled to a reference signal source, a second input coupled to an output of said programmable electronic component, and a programming output that depends upon said value.

8.
The device of Claim 7, wherein said reference signal source comprises a source of a reference voltage or of a clock signal.

9. The device of Claim 7 or 8, wherein said programming output of said comparator comprises a tripping time of said comparator, said tripping time depending upon said capacitance.

10. The device of Claim 7, further including one or more calibration circuits; wherein each said calibration circuit includes a second inverter whose output is coupled to an input of a known capacitance; and an output of said known capacitance is coupled to a second comparator; wherein said one or more calibration circuits are coupled to said programming circuit to thereby determine said capacitance.

11. The device of Claim 7, wherein said value is configured to trim an oscillator of said device, trim a voltage of said device, or to provide a unique identification code for said device.

12. A semiconductor device, comprising: a programming circuit, including: transistors located on or in a substrate; a fusible link on said substrate that includes at least one carbon nanotube having a segment with an adjusted diameter; and interconnects that couple said transistors to said programmable electronic component, wherein said transistors are configured to control a current transmitted to said fusible link such that said segment is configured to open when said current, equal to a predefined level, is transmitted through said at least one carbon nanotube, and wherein said fusible link thereby has a value that depends upon said adjusted diameter, said value being configured to equal a zero or nonzero current depending on whether said segment is opened or not opened, respectively.

13.
A semiconductor device, comprising: a programming circuit, including: transistors located on or in a substrate; a capacitor on said substrate that includes: at least one carbon nanotube having a segment with an adjusted diameter; and a conductive body capacitively coupled to said carbon nanotube, wherein a distance between said segment and said conductive body is configured to change as a function of said adjusted diameter; and interconnects that couple said transistors to said capacitor, wherein said transistors are configured to control a current transmitted to said capacitor, and said capacitor has a value that depends upon said adjusted diameter, said value configured to be equal to a capacitance.

14. A method of manufacturing a semiconductor device, comprising: fabricating a programming circuit, including: forming an active device on or in a substrate; forming a programmable electronic component, including depositing a carbon nanotube on said substrate, wherein said carbon nanotube has a segment with an adjustable diameter, and said programmable electronic component has a value that depends upon said adjustable diameter; and forming interconnects that couple said active device to said programmable electronic component.
PROGRAMMABLE CIRCUIT HAVING A CARBON NANOTUBE

The disclosure is directed, in general, to semiconductor devices, and more specifically, to a device for programming a circuit and its method of manufacture.

BACKGROUND

The programming of application-specific semiconductor devices often relies on the use of fuses as a programming component. To program an integrated circuit device, fuses in the circuit can be selectively left intact, or opened, to create circuit paths according to a predefined design. Fuses can thereby be used to implement a variety of programming functions. One problem with the use of conventional fuses, however, is that the size of fuses is not scaling down as rapidly as transistor sizes are. This can be problematic in devices that incorporate thousands of fuses to implement increasingly sophisticated circuit programming. That is, the size of fuses can limit the extent of miniaturization of semiconductor devices. Another problem is that only binary signal information is obtained from a fuse (e.g., a zero or nonzero current). Consequently, to send more complex control signals, several fuses have to be used, thereby increasing the amount of space on a circuit that is occupied by fuses. Accordingly, what is needed is a method for programming a circuit that addresses the drawbacks of the prior art methods and devices.

SUMMARY

One aspect of the disclosure is a semiconductor device. The device comprises a programming circuit that includes an active device on or in a substrate and a programmable electronic component on the substrate. The programmable electronic component includes at least one carbon nanotube having a segment with an adjusted diameter. The programmable electronic component has a value that depends upon the adjusted diameter. The programming circuit also includes interconnects that couple the active device to the programmable electronic component.
The active device is configured to control a current transmitted to the programmable electronic component.

In one embodiment of the device, the programming circuit includes transistors located on or in a substrate, a fusible link on the substrate that includes at least one of the above-described carbon nanotubes, and interconnects that couple the transistors to the programmable electronic component. The transistors are configured to control a current transmitted to the fusible link such that the segment is configured to open when the current, equal to a predefined level, is transmitted through the carbon nanotube. The fusible link thereby has a value that depends upon the adjusted diameter, the value configured to equal a zero or nonzero current depending on whether the segment is opened or not opened, respectively.

In another embodiment of the device, the programming circuit includes transistors located on or in a substrate, a capacitor on the substrate, and interconnects that couple the transistors to the capacitor. The capacitor includes at least one of the above-described carbon nanotubes having a segment with an adjusted diameter and a conductive body capacitively coupled to the carbon nanotube. A distance between the segment and the conductive body is configured to change as a function of the adjusted diameter. The transistors are configured to control a current transmitted to the capacitor, and the capacitor has a value that depends upon the adjusted diameter, the value configured to equal a capacitance.

Still another aspect of the disclosure is a method of manufacturing a semiconductor device. The method comprises fabricating a programming circuit, including forming an active device on or in a substrate and forming a programmable electronic component. Forming the programmable electronic component includes depositing the above-described carbon nanotube on the substrate.
Fabricating the programming circuit also includes forming interconnects that couple the active device to the programmable electronic component.

BRIEF DESCRIPTION OF THE DRAWINGS

Example implementations of aspects of the invention are described with reference to the accompanying drawings, wherein:

FIG. 1 shows a plan view (at the device level) of an example semiconductor device;

FIG. 2 is a cross-sectional view of the example device, taken along the line 2-2 in FIG. 1;

FIG. 3 is a cross-sectional view of the example device, taken along the line 3-3 in FIG. 1;

FIG. 4 shows a circuit diagram of an example device of the disclosure; and

FIGS. 5 to 12 illustrate cross-sectional views of selected steps in an example implementation of a method of fabricating semiconductor devices.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

The disclosure benefits from the realization that a programmable electronic component that comprises a carbon nanotube (CNT) provides several advantages over conventional fuses. CNTs are substantially smaller (by at least an order of magnitude) than conventional fuse components. Additionally, the diameter of a CNT can be adjusted after forming the CNT in a circuit. A segment of the CNT having the adjusted diameter can be used to make the CNT function as a fusible link, or as a capacitor when coupled to a conductive body. Circuitry having such a programmable electronic component can be substantially smaller than conventional fuses.

One embodiment of the invention is a semiconductor device. FIG. 1 shows a plan view of an example semiconductor device 100 of the disclosure. FIGS. 2 and 3 illustrate cross-sectional views of the device 100, along view lines 2-2 and 3-3, respectively, as depicted in FIG. 1. The device 100 comprises a programming circuit 105 that includes an active device 110 and a programmable electronic component 115 (FIG. 1).
In some embodiments, the semiconductor device 100 is or includes an integrated circuit, and the active device 110 and the programmable electronic component 115 are components of the integrated circuit. The plan view of FIG. 1 shows the device 100 at the layer in which the active device 110 and the programmable electronic component 115 are located. The active device 110 is located on or in a substrate 117. Example substrates 117 include semiconductors, such as silicon, silicon-on-insulator, or silicon germanium, or non-semiconductors, such as sapphire or quartz. Some embodiments of the active device 110 comprise one or more transistors 120 (FIG. 1). The transistors 120 can comprise an nMOS or pMOS transistor, or a combination of such transistors. The active device 110 is configured to control the amount of current 210 (FIG. 2) transmitted to the programmable electronic component 115. As illustrated in FIG. 1, the transistor 120 can comprise a gate 122 and source and drain structures 125. Additional components of the transistor 120 include gate sidewalls 215 and a doped well 220 (FIG. X). To isolate the active device 110 and the programmable electronic component 115, the device 100 can also include insulating structures 127 (e.g., field oxide or shallow trench isolation structures) in or on the substrate 117. In some embodiments, the transistor 120 can be configured as a sensor, and the gate 122 of the transistor 120 is connected to a resistor 225 (FIG. 2) that provides current control to the programmable electronic component 115.

The programmable electronic component 115 is located on the substrate 117 and includes at least one CNT 130 having a segment 135 with an adjusted diameter 140. The term adjusted diameter 140, as used herein, refers to the diameter after exposing the segment 135 to an electron beam to shrink its non-adjusted diameter 145, or after applying a current 210 sufficient to cause an open or short to occur in the segment 135.
The term CNT, as used herein, refers to a carbon-based tubular fullerene structure having a non-adjusted diameter 145 of 1 micron or less. Both multi-wall and single-wall CNTs are within the scope of the disclosure.

The device 100 further comprises interconnects 230 (e.g., lines, vias, contacts) that couple the active device 110 to the programmable electronic component 115 (FIG. X). The interconnects 230 can be patterned metal lines (e.g., tungsten), single or dual damascene metal structures (e.g., copper), or other electrically conductive materials that are patterned or deposited on the substrate 117 (e.g., polysilicon or other CNTs). As further illustrated in FIG. 2, the device 100 can further comprise insulating layers 240, such as pre-metal dielectric (PMD) or interlayer dielectric (ILD) layers. The insulating layers 240 help to electrically isolate the active device 110 and the programmable electronic component 115 from each other, or from other active structures in the device 100.

With continuing reference to FIGS. 1-3, the active device 110 is configured to control a current transmitted to the programmable electronic component 115, and the programmable electronic component 115 has a value that depends upon the adjusted diameter 140. One skilled in the art would understand how the value of the programmable electronic component 115 could be used to perform a variety of device programming functions. Examples include programming the device 100 to allow redundant components to replace defective components, adapting the device to perform a specific operation, such as trimming an oscillator of the device 100 or trimming a voltage of the device 100, or providing a unique identification code for the device 100.

In some embodiments of the device 100, the programmable electronic component 115 is configured as a fusible link. In such embodiments, the segment 135 is configured to open when a current 210, equal to a predefined level, is transmitted through the CNT 130.
E.g., in some embodiments, the active device 110 has transistors 120 that are configured to control a current 210 transmitted to the fusible link. In such embodiments, the segment 135 is configured to open when a current 210, equal to a predefined level, is transmitted through the one or more CNTs 130 of the programmable electronic component 115. That is, either the segment 135 forms an open circuit when the predefined level of current 210 is transmitted through the CNT 130, or the segment 135 remains unopened (i.e., closed) when the current 210 is less than the predefined level. The value of the programmable electronic component 115 is thereby configured to equal a zero or nonzero current, depending on whether the segment 135 is opened or not opened, respectively.

In other embodiments, however, the programmable electronic component 115 is configured as a capacitor. In such embodiments, the CNT 130 is capacitively coupled to a conductive body 150. E.g., the CNT 130 and the conductive body 150 serve as capacitor plates and together have a capacitance. In some embodiments, the active device 110 has transistors 120 that are configured to control a current 210 transmitted to the capacitor (e.g., to the CNT 130 or the conductive body 150). A distance 160 between the segment 135 and the conductive body 150 is configured to change as a function of the adjusted diameter 140, and the value is configured to be equal to a capacitance. The capacitance may have any number of discrete values that can be used by the programming circuit 105 to control other circuit components. E.g., when the programming circuit 105 has an output of a capacitance that is equal to some predefined value, the programming circuit can use the capacitance in a predetermined fashion to adjust (e.g., activate or deactivate) other circuit components in the device 100. Because the capacitance is inversely proportional to the distance 160 (FIG.
1), a larger dynamic range of discrete capacitance values can be obtained by having a large range of possible adjusted diameters 140. E.g., consider an embodiment where, prior to adjusting the segment's diameter, the diameter 145 equals about 16 nm and the distance 160 between the conductive body 150 and the segment 135 equals about 30 nm. After adjusting the diameter 140 of the segment 135 from 16 nm to 3 nm, the distance 160 increases from about 30 nm to about 36.5 nm. Consequently, the capacitance between the conductive body 150 and the CNT 130 decreases by about 18 percent. In some embodiments the diameter 140 can range from about 30 nm (prior to adjustment) to 0 nm (after adjustment), thereby providing an even larger dynamic range of capacitance values. By configuring the conductive body 150 as a second CNT, the dynamic range of capacitance values can be nearly doubled. Consider an embodiment where the conductive body 150 also comprises a second CNT and a diameter 165 of the conductive body 150 prior to its adjustment equals about 16 nm. After adjusting the diameters 140, 165 of both the CNT 130 and conductive body 150 from about 16 nm to 3 nm, the distance 160 increases from about 30 nm to about 43 nm. Consequently, the capacitance between the conductive body 150 and the CNT 130 decreases by about 30 percent. It is non-intuitive to use a CNT as a capacitor plate because CNTs are generally cylindrically shaped. Cylindrically shaped plates do not present as large a surface area as a planar surface, and therefore a lesser amount of charge can be stored, as compared to capacitor plates having a planar surface. As part of the present disclosure, it was realized that despite these shortcomings, CNTs can still be effectively employed to generate a capacitance value that is sufficient to be used by the programming circuit 105 to control other circuit components.
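The percentage changes quoted above can be checked numerically. The sketch below assumes, as a simplification of the cylindrical geometry, that the capacitance scales inversely with the surface-to-surface distance 160 and that shrinking a CNT's diameter increases that distance by the radius lost on each shrunk plate; the function names are illustrative and not taken from the disclosure.

```python
def distance_after_shrink(initial_distance_nm, d_before_nm, d_after_nm,
                          both_plates=False):
    """Surface-to-surface distance grows by the radius lost on each shrunk plate."""
    radius_lost_nm = (d_before_nm - d_after_nm) / 2.0
    plates_shrunk = 2 if both_plates else 1
    return initial_distance_nm + plates_shrunk * radius_lost_nm

def capacitance_decrease_pct(d0_nm, d1_nm):
    """Percent decrease in capacitance when distance grows from d0 to d1 (C ~ 1/d)."""
    return (1.0 - d0_nm / d1_nm) * 100.0

# Single CNT shrunk from 16 nm to 3 nm: distance 30 nm -> 36.5 nm, ~18% drop.
d_single = distance_after_shrink(30.0, 16.0, 3.0)            # 36.5 nm
drop_single = capacitance_decrease_pct(30.0, d_single)       # ~17.8%

# Both CNT 130 and conductive body 150 shrunk: distance 30 nm -> 43 nm, ~30% drop.
d_both = distance_after_shrink(30.0, 16.0, 3.0, both_plates=True)  # 43.0 nm
drop_both = capacitance_decrease_pct(30.0, d_both)           # ~30.2%
```

Under this simplified model the single-plate case yields roughly the 18 percent decrease stated above, and the two-plate case roughly the 30 percent decrease, confirming that shrinking both CNTs nearly doubles the available dynamic range.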
Importantly, because the diameter 140 of the CNT's segment 135 can be adjusted, several different control signals can be generated by the programming circuit 105. This can be an advantage over a single fuse, which is limited to producing a binary control signal (e.g., zero or nonzero current flowing through the fuse). Using the same reference numbers to show device components analogous to those depicted in FIGS. 1-3, FIG. 4 shows a circuit diagram of an example semiconductor device 100 when the programmable electronic component 115 is configured as a capacitor. In such embodiments, the programming circuit 105 further includes an inverter 410, having an output 415, and a comparator 420. The programmable electronic component 115 is connected to the output 415 of the inverter 410. The inverter 410 can comprise transistors 120, such as pMOS and nMOS transistors. The comparator 420 has a first input 425 comprising a reference signal 430 and a second input 435 comprising an output 440 of the programmable electronic component 115. In some cases the reference signal 430 comprises a voltage (e.g., a DC voltage), while in other cases the reference signal 430 comprises a clock signal (e.g., an AC voltage). A programming output 445 of the comparator 420 depends upon the value of the programmable electronic component 115, which, as noted above, can have a number of different capacitances. E.g., the programming output 445 can equal a tripping time whose value increases as the capacitance increases. In turn, the capacitance increases as the distance 160 between the segment 135 and the conductive body 150 decreases. One skilled in the art would understand that the circuit depicted in FIG. 4 is just one of many configurations of the programming circuit 105. In other embodiments, e.g., one or both of the inverter 410 and comparator 420 can be replaced with other types of circuitry configured to accomplish analogous functions.
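The relationship between capacitance and tripping time can be illustrated with a standard RC charging model. The sketch below is an assumption-laden illustration: the disclosure specifies no resistance or voltage values, so `resistance_ohms`, `vdd`, `vref`, and both function names are hypothetical. The second helper shows how a measured tripping time could be matched against the known capacitances used for calibration.

```python
import math

def tripping_time(resistance_ohms, capacitance_farads, vdd, vref):
    """Time for an RC node charging toward vdd to cross the reference vref.

    Standard RC charging: v(t) = vdd * (1 - exp(-t / RC)), solved for v = vref.
    """
    return resistance_ohms * capacitance_farads * math.log(vdd / (vdd - vref))

def nearest_known_capacitance(measured_trip_time, resistance_ohms, vdd, vref,
                              known_caps):
    """Pick the known capacitance whose predicted tripping time best matches
    the measured comparator output."""
    return min(known_caps,
               key=lambda c: abs(tripping_time(resistance_ohms, c, vdd, vref)
                                 - measured_trip_time))

# Illustrative values only: doubling the capacitance doubles the tripping time,
# so distinct capacitance values yield distinguishable programming outputs.
t_small = tripping_time(10e3, 1e-15, 1.0, 0.5)
t_large = tripping_time(10e3, 2e-15, 1.0, 0.5)
```

Because the tripping time is proportional to the capacitance in this model, a comparator observing the charging node can resolve which of several discrete capacitance values the programmable component holds.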
E.g., the inverter 410 can be replaced with switched resistors or other analog circuitry, and the comparator 420 can be replaced by other types of voltage measurement circuitry. Some embodiments of the device 100 include additional circuitry to facilitate a more accurate measurement of the capacitance of the programmable electronic component 115. E.g., in cases where the output 415 of the inverter 410 can vary from one device 100 to another, it is desirable to further include one or more calibration circuits 450, 452, which are coupled to the programming circuit 105 to thereby determine its output 440, e.g., determine the capacitance. In some cases, the calibration circuits 450, 452 can be part of the programming circuit 105, while in other cases the calibration circuits 450, 452 are separate from the programming circuit 105. Each calibration circuit 450, 452 can respectively include a second inverter 460, 462, whose output is coupled to the input of a known capacitance 470, 472. An output of the known capacitance 470, 472 is coupled to a second comparator 480, 482. Preferably, each of the known capacitances 470, 472 equals one of the target discrete values that the programmable electronic component 115 is configured to have. By comparing the programming output 445 of the comparator 420 to the analogous output of the calibration circuits 450, 452, an accurate capacitance value can be determined. Another embodiment of the disclosure is a method of manufacturing a semiconductor device. FIGS. 5 to 10 illustrate cross-sectional views, analogous to those shown in FIGS. 2 or 3, of selected steps in example methods of manufacturing a device of the disclosure. The same reference numbers are used to depict analogous features as shown in FIGS. 1-3. Manufacturing the device 100 includes fabricating a programming circuit 105, aspects of which are illustrated in FIGS. 5 to 10. FIG. 5 shows a cross-sectional view, analogous to that depicted in FIG.
2, of the device 100 after forming an active device 110 on or in the substrate 117. Forming the active device 110 can include forming one or more transistors 120. Forming the transistors 120 can include depositing and patterning dielectric and conductive layers to form a gate structure 122, depositing gate sidewalls 215, implanting and activating dopants to form source and drain structures 125 and a doped well 220 in the substrate 117, and forming insulating structures 127 (e.g., field oxide or shallow trench isolation structures) in or on the substrate 117. FIG. 6 shows the device 100 depicted in FIG. 5, at an intermediate step in forming a programmable electronic component 115 of the device 100. FIG. 7 shows the device 100 at the same stage of fabrication, but from a view analogous to that shown in FIG. 3 (view line B-B in FIG. 1). Forming the programmable electronic component 115 includes depositing a CNT 130 on the substrate 117. E.g., a multi-walled CNT 130 can be synthesized using an arc-discharge or pyrolysis method. The CNT 130 can then be dispersed in an organic liquid (e.g., ortho-dichlorobenzene or isopropyl alcohol). The CNT-containing liquid can be deposited at discrete locations on the substrate 117, after which the liquid is removed (e.g., evaporated), leaving the CNT 130 on the substrate 117. Further examples of forming and depositing CNTs are given in Wang et al., U.S. Patent Application Publication No. US2003/0190278A1, and Zettle, U.S. Patent Application Publication No. US2006/0228287A1. A variety of methods can be used to adjust a segment of the deposited CNT 130 such that its diameter is reduced and the programmable electronic component 115 is thereby configured to have a value. E.g., FIGS. 8 and 9 show different embodiments of the device 100 after adjusting a diameter 140 of a segment 135 of the CNT 130. FIG. 8 shows a cross-sectional view of one embodiment of the device 100 depicted in FIG.
7, after adjusting the diameter 140 by opening the segment 135. Creating an opening 810 in the segment 135 includes transmitting a predefined current 210 (FIG. 2) through the CNT 130, such that at least a portion of the segment 135 corresponding to the opening 810 has a diameter 140 of zero. The programmable electronic component 115 thereby has a value that is equal to a zero current. In other cases, when the predefined current 210 is not transmitted through the CNT 130 and an opening 810 is not created, the programmable electronic component 115 has a value that is equal to a nonzero current. FIG. 9 shows a cross-sectional view of another embodiment of the device 100 depicted in FIG. 7, after adjusting the diameter 140 by irradiating the segment 135 with an electron beam. In some embodiments, the electron beam has an energy ranging from about 1 to 100 keV. This energy range is conducive to shrinking certain embodiments of the CNT 130 while maintaining its tubular fullerene structure. In some cases, the electron beam can comprise the electron beam from a transmission electron microscope. In some embodiments, a potential (e.g., about 2 to 3 Volts) is applied to the segment 135 during the electron beam irradiation. Applying a potential to the segment 135 can generate a current flow that is sufficient to thermally anneal structural damage and reshape the segment 135. For other examples of irradiating CNTs with electron beams, see Yuzvinsky et al., Nanoletters 6:2718-22, 2006, incorporated herein in its entirety. The energy of the electron beam, the magnitude of the applied potential, and the durations of the electron beam irradiation and the optional simultaneously applied potential can be individually adjusted to control the shrinkage of the CNT 130 to the desired adjusted diameter 140. Because the distance 160 between the segment 135 and the conductive body 150 (FIG.
1) depends upon the adjusted diameter 140, the programmable electronic component 115 can thereby be configured to have a value equal to any number of predefined capacitances. In some cases, the process to adjust the diameter can be configured to provide one of multiple discrete diameters 140 (e.g., 3, 8, 12 and 16 nm) so as to provide discrete target capacitance values. In some embodiments, the segment 135 is irradiated with an electron beam to adjust its diameter 140 before transmitting the predefined current 210 (FIG. 2), such as described above in the context of FIG. 8. Shrinking the segment 135 so that it has a smaller adjusted diameter 140 than other portions of the CNT 130 helps to define where along the CNT 130 the opening 810 will be formed when the predefined current 210 is transmitted. FIG. 10 shows the device 100 depicted in FIG. 6, at an intermediate step in forming the programmable electronic component 115 configured as a capacitor, which includes forming a conductive body 150 close to (e.g., within about 100 nm of) the CNT 130. The CNT 130 and the conductive body 150 can thereby establish a capacitance. In embodiments where the programmable electronic component 115 is configured as a fusible link, a conductive body need not be formed. In some cases, the conductive body 150 is formed by depositing a layer of conductive material (e.g., a polysilicon layer or metal layer deposited by chemical vapor or physical vapor deposition techniques) and then patterning the layer using conventional photolithography processes. In other cases, such as depicted in FIG. 10, the conductive body 150 is formed by depositing a second CNT on the substrate 117. E.g., the second CNT can be deposited in substantially the same fashion and at the same time as the CNT 130. In such embodiments, the diameters of the CNT 130 and conductive body 150 can both be adjusted via irradiation with an electron beam, such as described in the context of FIG. 9.
E.g., FIG. 11 shows the device 100 depicted in FIG. 10 after adjusting the diameters 140, 165 of the CNT 130 and conductive body 150 (configured as a second CNT) with electron beam irradiation. FIG. 12 shows the device 100 depicted in FIG. 11, after depositing insulating layers 240 (e.g., silicon oxide or low-k dielectric material deposited as PMD or ILD layers) and after forming interconnects 230 (e.g., tungsten contacts and copper vias and lines) in or on the insulating layers 240. The interconnects 230 are configured to complete the formation of the active device 110 (e.g., by interconnecting the transistors 120 of the active device 110) and to couple the active device 110 to the programmable electronic component 115. Those skilled in the art to which the disclosure relates will appreciate that other and further additions, deletions, substitutions, and modifications may be made to the described example embodiments without departing from the scope of the claimed invention.
Disclosed herein are structures and techniques utilizing directed self-assembly for microelectronic device fabrication. For example, a microelectronic structure may include a patterned region including a first conductive line and a second conductive line, wherein the second conductive line is adjacent to the first conductive line; and an unordered region having an unordered lamellar pattern, wherein the unordered region is coplanar with the patterned region.
Claims:

1. A microelectronic structure, comprising: a patterned region including a first conductive line and a second conductive line, wherein the second conductive line is adjacent to the first conductive line, the first conductive line and the second conductive line have a pitch that is less than 30 nanometers, the first conductive line has a line edge roughness that is less than 1.2 nanometers, and the second conductive line has a line edge roughness that is less than 1.2 nanometers.

2. The microelectronic structure of claim 1, wherein the microelectronic structure further includes an unordered region having an unordered lamellar pattern, and the unordered region is coplanar with the patterned region.

3. The microelectronic structure of claim 1, wherein the microelectronic structure further includes pitch-division artifacts proximate to the patterned region.

4. The microelectronic structure of any of claims 1-3, wherein the patterned region is a first patterned region, the microelectronic structure further includes a second patterned region including a first conductive line and a second conductive line, wherein the second conductive line of the second patterned region is adjacent to the first conductive line of the second patterned region, the first conductive line of the second patterned region and the second conductive line of the second patterned region have a pitch that is greater than 24 nanometers.

5. The microelectronic structure of claim 4, wherein the first conductive line of the second patterned region has a line edge roughness that is greater than 1.2 nanometers, and the second conductive line has a line edge roughness that is greater than 1.2 nanometers.

6. The microelectronic structure of claim 4, wherein the first conductive line of the second patterned region has a line width roughness and a line edge roughness, and the line width roughness is equal to the line edge roughness multiplied by the square root of 2.

7.
The microelectronic structure of claim 4, wherein the second patterned region is coplanar with the first patterned region.

8. The microelectronic structure of claim 4, wherein the second patterned region is in a same layer of a metallization stack as the first patterned region.

9. The microelectronic structure of any of claims 1-3, wherein the first conductive line has a line width roughness, and the line width roughness of the first conductive line is not equal to the line edge roughness of the first conductive line multiplied by the square root of 2.

10. The microelectronic structure of any of claims 1-3, wherein the patterned region includes a third conductive line and a fourth conductive line, the third conductive line is between the second conductive line and the fourth conductive line, the third conductive line has a line edge roughness greater than 1.2 nanometers, and the fourth conductive line has a line edge roughness less than 1.2 nanometers.

11. A microelectronic structure, comprising: a patterned region including a first conductive line and a second conductive line, wherein the second conductive line is adjacent to the first conductive line; and an unordered region having an unordered lamellar pattern, wherein the unordered region is coplanar with the patterned region.

12. The microelectronic structure of claim 11, wherein the first conductive line includes a conductive material, and the unordered region includes a material having a same material composition as the conductive material.

13. The microelectronic structure of claim 11, wherein the patterned region includes a dielectric material, and the unordered region includes a material having a same material composition as the dielectric material.

14. The microelectronic structure of any of claims 11-13, wherein a spacing between the first conductive line and the second conductive line is less than 15 nanometers.

15.
The microelectronic structure of any of claims 11-13, wherein the microelectronic structure further includes a device layer, and the patterned region is included in an interconnect layer above or below the device layer.

16. A microelectronic structure, comprising: a first patterned region including a first conductive line; and a second patterned region including a second conductive line, wherein the second patterned region is coplanar with the first patterned region, the first conductive line has a first line width roughness and a first line edge roughness, the first line width roughness is not equal to the first line edge roughness multiplied by the square root of 2, the second conductive line has a second line width roughness and a second line edge roughness, and the second line width roughness is equal to the second line edge roughness multiplied by the square root of 2.

17. The microelectronic structure of claim 16, wherein the microelectronic structure further includes a via in conductive contact with the first conductive line.

18. The microelectronic structure of claim 17, wherein the via is in a dielectric material, and the dielectric material includes a photo acid generator.

19. The microelectronic structure of claim 18, wherein the dielectric material includes a quencher.

20. The microelectronic structure of any of claims 17-19, wherein the via has side faces that are self-aligned with side faces of the first conductive line.
DIRECTED SELF-ASSEMBLY STRUCTURES AND TECHNIQUES

Cross-Reference to Related Application

[1] This application claims priority to U.S. Provisional Patent Application No. 63/033,721, filed June 2, 2020 and titled "CHEMICAL COMPOSITIONS & METHODS OF PATTERNING MICROELECTRONIC DEVICE STRUCTURES," and U.S. Non-Provisional Patent Application No. 17/032,517, filed September 25, 2020 and titled "DIRECTED SELF-ASSEMBLY STRUCTURES AND TECHNIQUES." These priority applications are hereby incorporated herein in their entireties.

Background

[2] Conventional microelectronic fabrication techniques may not be able to reliably pattern particularly small features. Consequently, the size and performance of microelectronic devices has been limited.

Brief Description of the Drawings

[3] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, not by way of limitation, in the figures of the accompanying drawings.

[4] FIGS. 1A-1C are various views of a microelectronic structure including lines having low line edge roughness (LER), in accordance with various embodiments.

[5] FIGS. 2A-2L illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 1, in accordance with various embodiments.

[6] FIG. 3 illustrates a stage in another example process of manufacturing the microelectronic structure of FIG. 1, in accordance with various embodiments.

[7] FIGS. 4A-4B are various views of another microelectronic structure including lines having low LER, in accordance with various embodiments.

[8] FIGS. 5A-5D illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 4, in accordance with various embodiments.

[9] FIGS.
6A-6B are various views of another microelectronic structure including lines having low LER, in accordance with various embodiments.

[10] FIGS. 7A-7H illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 6, in accordance with various embodiments.

[11] FIGS. 8A-8B are various views of a microelectronic structure including lines having low LER and lines having high LER, in accordance with various embodiments.

[12] FIGS. 9A-9M illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 8, in accordance with various embodiments.

[13] FIGS. 10A-10B are various views of another microelectronic structure including lines having low LER, in accordance with various embodiments.

[14] FIGS. 11A-11H illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 10, in accordance with various embodiments.

[15] FIGS. 12A-12B are various views of another microelectronic structure including lines having low LER and lines having high LER, in accordance with various embodiments.

[16] FIGS. 13A-13P illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 12, in accordance with various embodiments.

[17] FIGS. 14A-14B are various views of another microelectronic structure including lines having low LER, in accordance with various embodiments.

[18] FIGS. 15A-15G illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 14, in accordance with various embodiments.

[19] FIGS. 16A-16B are various views of another microelectronic structure including lines having low LER, in accordance with various embodiments.

[20] FIGS. 17A-17G illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 16, in accordance with various embodiments.

[21] FIG. 18 is a top view of a microelectronic structure including lines having low LER at multiple pitches, in accordance with various embodiments.

[22] FIGS.
19A-19E illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 18, in accordance with various embodiments.

[23] FIG. 20 illustrates a stage in another example process of manufacturing the microelectronic structure of FIG. 18, in accordance with various embodiments.

[24] FIG. 21 is a side, cross-sectional view of a microelectronic structure including vias in conductive contact with lines having low LER, in accordance with various embodiments.

[25] FIGS. 22A-22F illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 21, in accordance with various embodiments.

[26] FIG. 23 is a side, cross-sectional view of another microelectronic structure including vias in conductive contact with lines with low LER, in accordance with various embodiments.

[27] FIGS. 24A-24C illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 23, in accordance with various embodiments.

[28] FIGS. 25-27 are top views of microelectronic structures including pitch-division artifacts, in accordance with various embodiments.

[29] FIG. 28 is a top view of a wafer and dies that may include any of the microelectronic structures disclosed herein.

[30] FIG. 29 is a side, cross-sectional view of a microelectronic device that may include any of the microelectronic structures disclosed herein.

[31] FIG. 30 is a side, cross-sectional view of a microelectronic package that may include any of the microelectronic structures disclosed herein.

[32] FIG. 31 is a side, cross-sectional view of a microelectronic device assembly that may include any of the microelectronic structures disclosed herein.

[33] FIG. 32 is a block diagram of an example computing device that may include any of the microelectronic structures disclosed herein.

Detailed Description

[34] Disclosed herein are structures and techniques utilizing directed self-assembly (DSA) for microelectronic device fabrication.
The structures and techniques disclosed herein may achieve fine feature sizes with low roughness and defect densities, and may be particularly suitable to accompany and improve extreme ultraviolet (EUV) lithography techniques.

[35] Existing conventional lithography techniques, such as existing conventional EUV techniques, may not be able to pattern features that are both sufficiently small and have sufficiently few defects to be used in commercial microelectronic devices. For example, conventional EUV lithography may suffer from high roughness and excessive bridging defects at tight pitches (e.g., pitches below 32 nanometers), which may limit or effectively prevent deployment of EUV patterning techniques (e.g., spacer-based pitch-division techniques having resist "backbones" defined by EUV lithography). Conventional EUV lithographic techniques also suffer from a trade-off between EUV dose and resist thickness; although higher EUV doses have the potential to pattern lines with lower roughnesses, such higher EUV doses typically require thinner resist layers in order to achieve a desired depth of focus and avoid pattern collapse, but these thinner resist layers typically cannot withstand etch transfer (i.e., the transfer of a pattern in the resist to one or more underlying layers) as well as thicker resists can. These constraints have provided significant barriers to the adoption of EUV techniques in commercial microelectronic fabrication processes.

[36] Various ones of the embodiments disclosed herein may remedy the deficiencies of conventional EUV lithographic techniques through the use of DSA operations. DSA-based techniques may utilize the propensity of some materials to self-organize into particular patterns under certain conditions, and these patterns may be utilized in various ways to fabricate small and accurate features in a microelectronic device.
For example, various ones of the embodiments disclosed herein may include lines with low line edge roughness (LER) at varying pitches that can be reliably manufactured using DSA-based techniques.

[37] In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made, without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.

[38] Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.

[39] For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The phrase "A, B, or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The drawings are not necessarily to scale.
Although many of the drawings illustrate rectilinear structures with flat walls and right-angle corners, this is simply for ease of illustration, and actual devices made using these techniques will exhibit rounded corners, surface roughness, and other features.

[40] The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, a "conductive" material refers to an electrically conductive material, unless otherwise specified. When used to describe a range of dimensions, the phrase "between X and Y" represents a range that includes X and Y. For convenience, the phrase "FIG. 1" may be used to refer to the collection of drawings of FIGS. 1A-1C, the phrase "FIG. 2" may be used to refer to the collection of drawings of FIGS. 2A-2L, etc. Although mask materials are referred to with various reference numerals repeated between different ones of the drawings (e.g., mask material 126, mask material 128, mask material 148, etc.), this is simply for ease of illustration, and a mask material having a specific reference numeral referred to in one of the drawings (e.g., the mask material 128 referred to in the drawings of FIG. 7) need not be the same mask material as the mask material having the same reference numeral referred to in another of the drawings (e.g., the mask material 128 referred to in the drawings of FIG. 9).

[41] FIGS. 1A-1C are various views of an example microelectronic structure 100 including lines 140 having low LER; such lines 140 may be referred to herein as low-LER lines 140. FIG. 1A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 1B, FIG. 1B is a top view of the microelectronic structure 100, and FIG.
1C is a detailed top view of an unordered lamellar structure 138 of a microelectronic structure 100 (discussed further below). The low-LER lines 140 of FIG. 1 may have edges 130, as shown. The term "low," when used with reference to the low-LER lines 140, is a relative one, indicating that the LER of the low-LER lines 140 is less than the LER of other "high-LER" lines (e.g., the high-LER lines 170 discussed below). The LER may measure a local deviation of a line edge from its center of mass; in some embodiments, the LER may be quantified as the root-mean-square deviation of a line edge from a best-fit straight line. In some embodiments, low-LER lines 140 may be those patterned utilizing various ones of the DSA-based techniques disclosed herein, while high-LER lines may be patterned utilizing conventional techniques (e.g., EUV lithography). In some embodiments, the LER of a low-LER line 140 may be less than 1.2 nanometers, while the LER of a high-LER line may be greater than 1.2 nanometers; in other embodiments, the LER of a low-LER line 140 may be less than 1.5 nanometers, while the LER of a high-LER line may be greater than 1.5 nanometers, but these are simply examples and other LER thresholds may apply (e.g., dependent upon pitch and process). In some embodiments, the microelectronic structure 100 of FIG. 1 may be part of an interconnect layer in a microelectronic device (e.g., as discussed below with reference to FIG. 29).

[42] The microelectronic structure 100 of FIG. 1 includes multiple low-LER lines 140 formed of parallel arrangements of line material 120 through a dielectric material 102. The line material 120 may include one or more layers of various materials, such as one or more layers of liner material and fill material.
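The root-mean-square definition of LER given above can be sketched numerically. The function below is an illustrative computation only, not a metrology procedure from the disclosure: it fits a least-squares line to sampled edge coordinates and reports the RMS residual.

```python
import math

def line_edge_roughness(x, y):
    """RMS deviation of measured edge points (x, y) from their best-fit line.

    Least-squares fit y = a*x + b, then LER = sqrt(mean(residual**2)).
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                       # best-fit slope
    b = my - a * mx                     # best-fit intercept
    residuals = [yi - (a * xi + b) for xi, yi in zip(x, y)]
    return math.sqrt(sum(r * r for r in residuals) / n)

# A perfectly straight edge has zero LER; a wavy edge does not (values in nm).
straight = line_edge_roughness([0, 1, 2, 3], [5.0, 5.0, 5.0, 5.0])
wavy = line_edge_roughness([0, 1, 2, 3], [5.0, 5.8, 4.2, 5.0])
```

Under this definition, the 1.2-nanometer and 1.5-nanometer thresholds discussed above would simply be cutoffs applied to the returned RMS value.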
In some embodiments, a liner material may include tantalum, tantalum nitride, titanium, titanium nitride, cobalt, or ruthenium (e.g., combinations thereof), and a fill material may include tungsten, cobalt (e.g., as cobalt silicide), ruthenium, molybdenum, copper, silver, nickel (e.g., as nickel silicide), gold, aluminum, other metals or alloys, or other combinations of materials. The dielectric material 102 may include any suitable dielectric material. For example, in some embodiments, the dielectric material 102 may include an inorganic dielectric material, such as silicon oxide, carbon-doped oxide, silicon nitride, silicon carbide, silicon oxynitride, silicon oxycarbide, or insulating metal oxides such as hafnium oxide and zirconium oxide. In some embodiments, the dielectric material 102 may have a porosity that is less than 50% (e.g., less than 30%) and/or may include air gaps. In some embodiments, the pitch 172 of the low-LER lines 140 may be less than 30 nanometers (e.g., less than 24 nanometers), the line width 174 of a low-LER line 140 may be less than 15 nanometers (e.g., less than 12 nanometers), and/or the spacing between adjacent low-LER lines 140 may be less than 15 nanometers (e.g., less than 12 nanometers).

[43] The low-LER lines 140 may be part of a patterned region 142, and the microelectronic structure 100 may also include one or more unpatterned regions 144. In some embodiments, when DSA-based techniques are used to manufacture the microelectronic structure 100 (e.g., as discussed below with reference to FIG. 2), the unpatterned regions 144 may include an unordered lamellar structure 138 like that illustrated in FIG. 1C.
The unordered lamellar structure 138 may include the line material 120 and the dielectric material 102 patterned according to the unordered lamellar structure of a DSA material that did not assume an ordered structure during preceding patterning operations (e.g., due to the absence of a patterned brush material over the unpatterned regions 144, as discussed below with reference to FIG. 2). The presence of an unordered lamellar structure 138, like that illustrated in FIG. 1C, in an unpatterned region 144 of a microelectronic structure 100 may be indicative of the use of a DSA-based technique during fabrication of the patterned region 142. In some embodiments, the unpatterned region 144 may be part of a transition region of a die including the microelectronic structure 100, under a guard ring of a die including the microelectronic structure 100, or in a frame of a die including the microelectronic structure 100 (e.g., any of the dies 1502 discussed below with reference to FIG. 28).[44] FIGS. 2A-2L illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 1, in accordance with various embodiments. Although the operations of the method of FIG. 2 (and others of the methods disclosed herein) may be illustrated with reference to particular embodiments of the microelectronic structures 100 disclosed herein, the method of FIG. 2 (and others of the methods disclosed herein) may be used to form any suitable microelectronic structures 100. Operations are illustrated once each and in a particular order in FIG. 2 (and others of the drawings descriptive of the methods disclosed herein), but the operations may be reordered and/or repeated as suitable (e.g., with different operations performed in parallel when manufacturing multiple microelectronic structures 100 simultaneously).[45] FIG. 2A is a side, cross-sectional view of an assembly including a dielectric material 102, a mask material 104, a mask material 106, and a mask material 108.
In some embodiments, the mask material 104 may include titanium nitride. In some embodiments, the mask material 106 may include silicon nitride, silicon oxide, or a silicon anti-reflective coating. In some embodiments, the mask material 108 may be a carbon-based hardmask or may include amorphous silicon. The particular number and arrangement of mask materials depicted in the assembly of FIG. 2A (and others of the accompanying drawings) is simply illustrative, and more or fewer mask materials may be arranged in any desired manner in accordance with the techniques disclosed herein.[46] FIG. 2B is a side, cross-sectional view of an assembly subsequent to forming an initial brush 110 on the mask material 108 of the assembly of FIG. 2A. The initial brush 110 may include a material that will serve as a template for DSA of a block copolymer (BCP), as described below, and in some embodiments, may include one or more of the components of the BCP. For ease of discussion, the DSA-based techniques disclosed herein may refer to a BCP (e.g., the BCP 114 discussed below) having two components, a first component 116 and a second component 118, but this is simply illustrative, and a BCP having more than two components may be utilized in any of the techniques disclosed herein. One example of a BCP that may serve as the BCP 114 in the operations disclosed herein is polystyrene-co-poly(methyl methacrylate) (PS-PMMA); when the BCP 114 is PS-PMMA, the first component 116 may be polystyrene (PS) while the second component 118 may be polymethyl methacrylate (PMMA). As noted above, the DSA-based techniques disclosed herein may utilize a brush 110 that includes one or more of the first component 116 and the second component 118 of a BCP 114, but this is also simply illustrative, and any suitable material or materials may be included in a brush 110 (e.g., materials that are not components of the BCP that will undergo DSA on the brush 110). FIG.
2B (and others of the accompanying drawings) may illustrate a brush 110 that includes the first component 116. Although the brush 110 is illustrated as including the first component 116, the brush 110 may include other materials as well, as suitable (e.g., the brush 110 may include the second component 118, instead of or in addition to the first component 116, or the brush 110 may include one or more materials different from the first component 116 and the second component 118). As used herein, a "brush" may refer to any material that facilitates the self-assembly of a DSA material thereon, and may include large polymers, small polymers, self-assembled monolayers (SAMs), and other suitable materials.[47] FIG. 2C is a side, cross-sectional view of an assembly subsequent to patterning the initial brush 110 of the assembly of FIG. 2B to form openings 178 in the brush 110. The locations of the openings 178 may correspond to the desired locations of low-LER lines 140 in a microelectronic structure 100, although the roughness of the edges 130 of the openings 178 in the assembly of FIG. 2C may not be "low," as discussed below. In some embodiments, the brush 110 may itself be photolithographically patterned (e.g., the brush 110 may be selectively treated to change properties of the brush 110 in accordance with a desired pattern, then portions of the brush 110 may be removed by a suitable etch or rinse to yield the desired pattern). In some such embodiments, the brush 110 may include a component that can undergo chain scission reactions upon photon or electron exposure (e.g., a PMMA resist). In other such embodiments, the brush 110 may include a surface anchoring group that may be cleaved by photon or electron exposure, or a reaction with a subsequent photo acid or base.
In other such embodiments, the brush 110 may undergo a polarity switch upon photon or electron exposure; such a polarity switch may generate either a 2-color tone or a 3-color tone brush contrast, depending on the edge broadening effect. In other embodiments, the brush 110 may be patterned by applying a photoresist material (not shown), patterning the photoresist material, transferring the pattern of the photoresist material into the brush 110, and then removing the photoresist material. FIG. 2D is a top view of the assembly of FIG. 2C, illustrating the edges 130 of the openings 178 in the brush 110. The illustration of FIG. 2C is taken through the section C-C of FIG. 2D. As noted above, the openings 178 may have highly rough edges 130; if the pattern of the openings 178 were transferred into the dielectric material 102, and the transferred openings were filled with the line material 120, the resulting lines would be similarly rough, and thus would be high-LER lines (e.g., the high-LER lines 170 discussed below).[48] FIG. 2E is a side, cross-sectional view of an assembly subsequent to depositing a BCP 114 on the assembly of FIGS. 2C and 2D. As noted above, the brush 110 and the BCP 114 may be selected so as to achieve a desired DSA behavior, as discussed below with reference to FIG. 2F. In the embodiment of FIG. 2, as noted above, the BCP 114 may include a first component 116 and a second component 118 (not shown in FIG. 2E).[49] FIG. 2F is a side, cross-sectional view of an assembly subsequent to treating the assembly of FIG. 2E to cause the BCP 114 to self-assemble in accordance with the template provided by the brush 110. In the particular embodiment of FIG. 2F, the self-assembly of the BCP 114 includes the BCP 114 self-segregating its first component 116 and second component 118 into bands, forming alternating vertically oriented regions of the first component 116 and the second component 118 in the patterned region 142.
The dimensions and spacing of the openings 178 in the brush 110 may be selected to correspond with the size and spacing of the bands of the second component 118 of the BCP 114, as shown, and the dimensions and spacing of the first component 116 in the brush 110 may be selected to correspond with the size and spacing of the bands of the first component 116 of the BCP 114, so that the brush 110 provides a "template" for the self-assembly of the BCP 114, aligning the self-assembled BCP 114 as desired with respect to the underlying brush 110. A BCP 114 may be able to "stretch" or "shrink" around a nominal "inherent" spacing of the self-assembled bands of the first component 116/second component 118, allowing a range of dimensions of the self-assembled bands of the first component 116/second component 118, as well as some tolerance to deviation in the patterning of the brush 110 from its intended pattern. The particular band-like self-assembly illustrated in FIG. 2F is one example of a pattern into which a BCP 114 may self-assemble; some BCPs 114 may self-assemble into other patterns, and various BCPs 114 may self-assemble into multiple different patterns under different conditions, as discussed below. Outside the patterned region 142, the brush 110 may not provide a surface on which the BCP 114 readily self-assembles into alternating vertically oriented regions of the first component 116 and the second component 118, and so instead, the BCP 114 in the unpatterned regions 144 may self-assemble into unordered lamellae 132 of the first component 116 and the second component 118; the unordered lamellae 132 may have a structure like that illustrated in FIG. 1C.[50] FIG. 2G is a side, cross-sectional view of an assembly subsequent to removing the second component 118 from the assembly of FIG. 2F. The first component 116 may remain in place, and thus the patterned region 142 may include a series of parallel openings 180. In some embodiments, the assembly of FIG.
2F may be treated with an ion implant technique to harden the first component 116 (e.g., PS) prior to removing the second component 118 (e.g., PMMA). In some embodiments, a suitable selective etch technique may be used to remove the second component 118 while leaving the first component 116 in place. Removing the second component 118 from the unordered lamellae 132 of the unpatterned regions 144 may result in partially etched unordered lamellae 134, which may retain a structure like that illustrated in FIG. 1C. FIG. 2H is a top view of the assembly of FIG. 2G, illustrating the edges 130 of the openings 180 in the first component 116. The illustration of FIG. 2G is taken through the section G-G of FIG. 2H. These openings 180 may have edges 130 with low LER; if the pattern of the openings 180 were transferred into the dielectric material 102 (as discussed below), and the transferred openings were filled with a line material 120 (as discussed below), the resulting lines would be similarly smooth, and thus would be low-LER lines 140. The process of performing a DSA-based technique on the "rough" openings 178 of the assembly of FIG. 2D may result in the "smooth" openings 180 of the assembly of FIG. 2H, and thus the technique of FIG. 2 (and others of the DSA-based techniques disclosed herein) may be said to "rectify" the "rough" openings 178. The ability of the DSA-based techniques disclosed herein to rectify rough lithographic features may enable the use of lower-dose EUV lithography for fabrication; since the additional roughness associated with lower-dose EUV lithography (relative to higher-dose EUV lithography) may be remedied by the DSA operations, the benefits of lower-dose EUV lithography (e.g., the ability to use thicker resist materials) may be realized without the conventionally associated roughness penalty.[51] FIG. 2I is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the openings 180 of the assembly of FIGS.
2G and 2H into the underlying mask material 108. Any suitable etch technique may be used. Transferring the pattern of the openings 180 into the mask material 108 may also result in transferring the unordered lamellar patterns of the partially etched unordered lamellae 134 into the underlying mask material 108 in the unpatterned regions 144, yielding the unordered lamellae-patterned mask material 136.[52] FIG. 2J is a side, cross-sectional view of an assembly subsequent to removing the first component 116 from the assembly of FIG. 2I. Any suitable selective etch technique may be used (e.g., when the first component 116 includes PS, an ash technique may be used).[53] FIG. 2K is a side, cross-sectional view of an assembly subsequent to performing a lateral etch on the mask material 108 of FIG. 2J to decrease the lateral size of the portions of the mask material 108 and thereby increase the distance between adjacent portions of the mask material 108. This etch may be controlled to achieve a desired distance between adjacent portions of the mask material. In some embodiments, the width 111 of a portion of mask material 108 may be between 10 nanometers and 12 nanometers.[54] FIG. 2L is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 108/unordered lamellae-patterned mask material 136 into the dielectric material 102 of the assembly of FIG. 2K (through the intermediate mask materials 104 and 106, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140. The pattern of the unordered lamellae-patterned mask material 136 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. The assembly of FIG. 2L may take the form of the microelectronic structure 100 of FIG. 1. [55] As noted above, in some embodiments, a brush 110 may include multiple different materials arranged in a desired pattern. For example, FIG.
3 illustrates an assembly subsequent to depositing the second component 118 in the openings 178 of the brush 110 of the assembly of FIG. 2C. The operations discussed above with reference to FIGS. 2E-2L may be performed on the assembly of FIG. 3 to form the microelectronic structure 100 of FIG. 1. Utilizing multiple different materials in a brush 110 may provide a stronger "template" to the BCP 114, and may thereby improve the resulting self-assembly and achieve low-LER lines 140 with even lower LER.[56] In some embodiments, spacer-based techniques may be used to further reduce the pitch 172 of low-LER lines 140 in a patterned region 142. For example, FIGS. 4A-4B are various views of another microelectronic structure 100 including low-LER lines 140, in accordance with various embodiments. FIG. 4A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 4B, and FIG. 4B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 4 may take the form illustrated in FIG. 1C. The embodiment of FIG. 4 shares a number of elements with the embodiment of FIG. 1; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the low-LER lines 140 of the embodiment of FIG. 4 may have a smaller pitch 172, smaller line width 174, and/or smaller spacing 176.[57] FIGS. 5A-5D illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 4, in accordance with various embodiments. FIG. 5A is a side, cross-sectional view of an assembly subsequent to forming spacers 124 at side faces of the patterned mask material 108 of the assembly of FIG. 2K.
The spacers 124 may include a dielectric material, and may be fabricated using any suitable spacer technique (e.g., a conformal deposition of the dielectric material, such as by atomic layer deposition (ALD), followed by a "downward" directional etch to remove the dielectric material on horizontal surfaces and leave the dielectric material in place on side faces).[58] FIG. 5B is a side, cross-sectional view of an assembly subsequent to depositing and patterning a mask material 182 on the assembly of FIG. 5A to cover the mask material 108 proximate to the unordered lamellae-patterned mask material 136, and then removing the remaining mask material 108. Any suitable mask material 182, deposition techniques, patterning techniques, and etch techniques may be used.[59] FIG. 5C is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 108/unordered lamellae-patterned mask material 136 into the dielectric material 102 of the assembly of FIG. 5B (through the intermediate mask materials 104 and 106, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140. The pattern of the unordered lamellae-patterned mask material 136 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 5D is a top view of the assembly of FIG. 5C, illustrating the edges 130 of the low-LER lines 140. The illustration of FIG. 5C is taken through the section C-C of FIG. 5D. The assembly of FIGS. 5C and 5D may take the form of the microelectronic structure 100 of FIG. 4.[60] Spacer-based techniques may be used to reduce the spacing 176 between low-LER lines 140 in a microelectronic structure 100 in other ways. For example, FIGS. 6A-6B are various views of another microelectronic structure 100 including low-LER lines 140, in accordance with various embodiments. FIG.
6A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 6B, and FIG. 6B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 6 may take the form illustrated in FIG. 1C. The embodiment of FIG. 6 shares a number of elements with preceding embodiments; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the low-LER lines 140 of the embodiment of FIG. 6 may have a smaller spacing 176.[61] FIGS. 7A-7H illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 6, in accordance with various embodiments. FIG. 7A is a side, cross-sectional view of an assembly subsequent to providing and patterning a mask material 128 in the unpatterned regions 144 of an assembly substantially similar to that of FIG. 2G, but with additional openings 180 between the outermost portions of the first component 116 and the unordered lamellae 132 for illustrative purposes. Any suitable mask material 128 may be used.[62] FIG. 7B is a side, cross-sectional view of an assembly subsequent to forming spacers 124 at side faces of the first component 116 of the assembly of FIG. 7A. The spacers 124 may take any of the forms disclosed herein.[63] FIG. 7C is a side, cross-sectional view of an assembly subsequent to depositing a mask material 126 over the assembly of FIG. 7B. In some embodiments, the mask material 126 may include amorphous silicon. The mask material 126 may fill in openings in the unordered lamellae 132, forming a lamellar material 184.[64] FIG. 7D is a side, cross-sectional view of an assembly subsequent to planarizing the assembly of FIG. 7C to remove the overburden of mask material 126. In some embodiments, a chemical mechanical polishing (CMP) technique may be used.[65] FIG.
7E is a side, cross-sectional view of an assembly subsequent to removing the first component 116 and the spacers 124 from the assembly of FIG. 7D. Any suitable selective etch technique(s) may be used (e.g., when the first component 116 includes PS, an ash technique may be used). Removing the first component 116 from the lamellar material 184 may result in the partially etched lamellar material 186, which may have a structure like that of FIG. 1C.[66] FIG. 7F is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the partially etched lamellar material 186/mask material 126 of the assembly of FIG. 7E into the mask material 108. Transferring the pattern may include transferring the unordered lamellar patterns of the partially etched lamellar material 186 into the underlying mask material 108 in the unpatterned regions 144, yielding the unordered lamellae-patterned mask material 136.[67] FIG. 7G is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 108/unordered lamellae-patterned mask material 136 of the assembly of FIG. 7F into the dielectric material 102 (through the intermediate mask materials 104 and 106, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140. The pattern of the unordered lamellae-patterned mask material 136 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 7H is a top view of the assembly of FIG. 7G, illustrating the edges 130 of the low-LER lines 140. The illustration of FIG. 7G is taken through the section G-G of FIG. 7H. The assembly of FIGS. 7G and 7H may take the form of the microelectronic structure 100 of FIG. 6.[68] In some embodiments, a microelectronic structure 100 may include low-LER lines 140 and high-LER lines 170. For example, FIGS.
8A-8B are various views of another microelectronic structure 100 including low-LER lines 140, in accordance with various embodiments. FIG. 8A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 8B, and FIG. 8B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 8 may take the form illustrated in FIG. 1C. The embodiment of FIG. 8 shares a number of elements with preceding embodiments; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the microelectronic structure 100 of FIG. 8 includes a first patterned region 142-1 including low-LER lines 140 and a second patterned region 142-2 including one (as shown) or more high-LER lines 170.[69] As noted above, in some embodiments, lines or other features patterned by DSA-based techniques may be distinguished from lines or other features patterned by lithographic techniques (e.g., EUV lithographic techniques) by their LER; in particular, lines or other features patterned by the DSA-based techniques disclosed herein may have lower LER than lines or other features patterned by lithographic techniques. Other markers may distinguish lines or other features patterned by the DSA-based techniques disclosed herein from lines or other features patterned by lithographic techniques.
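One such marker, elaborated in the paragraph that follows, is the relationship between line width roughness (LWR) and LER for conventionally patterned lines. A hypothetical sketch of such a check (the function name, tolerance, and sample values are assumptions for illustration, not part of the disclosure):

```python
import math

def appears_lithographically_patterned(lwr_nm, ler_nm, rel_tol=0.05):
    """Check the 'lithographic property' LWR = sqrt(2) * LER, which may hold
    for conventionally patterned lines but not for DSA-rectified ones
    (hypothetical classifier; the tolerance is an assumed value)."""
    return math.isclose(lwr_nm, math.sqrt(2.0) * ler_nm, rel_tol=rel_tol)

# A line whose LWR is ~sqrt(2) times its LER looks lithographically patterned;
# a line whose LWR falls well below that ratio does not.
litho_like = appears_lithographically_patterned(lwr_nm=2.12, ler_nm=1.5)
dsa_like = appears_lithographically_patterned(lwr_nm=1.6, ler_nm=1.5)
```

In this sketch, `litho_like` is true (2.12 ≈ √2 × 1.5) while `dsa_like` is false, mirroring the distinction the text draws.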
For example, in some embodiments, lines or other features patterned by conventional lithographic techniques (e.g., EUV lithographic techniques) may have a line width roughness (LWR) that is equal to the LER of those lines or other features, multiplied by the square root of 2. This "lithographic property" may not hold for lines or other features patterned by the DSA-based techniques disclosed herein, and thus the presence of this lithographic property may indicate whether a feature was patterned using conventional lithographic techniques or the DSA-based techniques disclosed herein.[70] FIGS. 9A-9M illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 8, in accordance with various embodiments. FIG. 9A is a side, cross-sectional view of an assembly like that of FIG. 2C, including a patterned initial brush 110. The assembly of FIG. 9A may be formed in accordance with any of the fabrication techniques discussed herein with reference to FIG. 2C. Like the assembly of FIG. 2C, openings 178 may be patterned into the first patterned region 142-1 of the first component 116 using lithographic techniques, and thus may have highly rough edges.[71] FIG. 9B is a side, cross-sectional view of an assembly subsequent to depositing a BCP 114 on the assembly of FIG. 9A. The brush 110 and the BCP 114 may be selected so as to achieve a desired behavior when the BCP 114 self-assembles on the brush 110. In the embodiment of FIG. 9, the BCP 114 may include a first component 116 and a second component 118.[72] FIG. 9C is a side, cross-sectional view of an assembly subsequent to treating the assembly of FIG. 9B to cause the BCP 114 to self-assemble in accordance with the template provided by the brush 110. As discussed above with reference to FIG. 2, the resulting assembly may include alternating vertically oriented regions of the first component 116 and the second component 118 in the first patterned region 142-1.
Outside the first patterned region 142-1, the brush 110 may not provide a surface on which the BCP 114 readily self-assembles into alternating vertically oriented regions of the first component 116 and the second component 118, and so instead, the BCP 114 in the unpatterned region 144 and the second patterned region 142-2 may self-assemble into unordered lamellae 132 of the first component 116 and the second component 118; the unordered lamellae 132 may have a structure like that illustrated in FIG. 1C.[73] FIG. 9D is a side, cross-sectional view of an assembly subsequent to removing the second component 118 from the assembly of FIG. 9C. The first component 116 may remain in place, and thus the first patterned region 142-1 may include a series of parallel openings 180. In some embodiments, the assembly of FIG. 9C may be treated with an ion implant technique to harden the first component 116 (e.g., PS) prior to removing the second component 118 (e.g., PMMA). In some embodiments, a suitable selective etch technique may be used to remove the second component 118. Removing the second component 118 from the unordered lamellae 132 may result in the partially etched unordered lamellae 134, which may retain a structure like that illustrated in FIG. 1C. As discussed above with reference to FIGS. 2G and 2H, the process of performing a DSA operation on the "rough" openings 178 of the assembly of FIG. 9A may result in the "smooth" openings 180 of the assembly of FIG. 9D, and thus the technique of FIG. 9 (and others of the DSA-based techniques disclosed herein) may be said to "rectify" the "rough" openings 178.[74] FIG. 9E is a side, cross-sectional view of an assembly subsequent to depositing and patterning a mask material 148 on the assembly of FIG. 9D so as to cover the partially etched unordered lamellae 134 in the second patterned region 142-2. Any suitable mask material 148, and any suitable patterning technique, may be used.[75] FIG.
9F is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the openings 180 of the assembly of FIG. 9E into the underlying mask material 108. Any suitable etch technique may be used. Transferring the pattern of the openings 180 into the mask material 108 may also result in transferring the unordered lamellar patterns of the exposed partially etched unordered lamellae 134 into the underlying mask material 108 in the unpatterned region 144, yielding the unordered lamellae-patterned mask material 136.[76] FIG. 9G is a side, cross-sectional view of an assembly subsequent to removing the mask material 148 from the assembly of FIG. 9F, and then removing the first component 116. Any suitable selective etch techniques may be used (e.g., when the first component 116 includes PS, an ash technique may be used).[77] FIG. 9H is a side, cross-sectional view of an assembly subsequent to depositing a mask material 128 on the assembly of FIG. 9G. Any suitable mask material 128 may be used.[78] FIG. 9I is a side, cross-sectional view of an assembly subsequent to patterning the mask material 128 in the second patterned region 142-2 to form an opening 188 that will correspond to the high-LER line 170 of FIG. 8. In some embodiments, the opening 188 may be formed using lithographic techniques, and thus may have rough edges. [79] FIG. 9J is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 128 of the assembly of FIG. 9I into the mask material 108 and the mask material 106. Any suitable etch techniques may be used.[80] FIG. 9K is a side, cross-sectional view of an assembly subsequent to removing the mask material 128 from the assembly of FIG. 9J. Any suitable etch technique may be used.[81] FIG.
9L is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 108/unordered lamellae-patterned mask material 136 into the dielectric material 102 (through the intermediate mask materials 104 and 106, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140 and the high-LER line 170. The pattern of the unordered lamellae-patterned mask material 136 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 9M is a top view of the assembly of FIG. 9L, illustrating the edges 130 of the low-LER lines 140 and the high-LER line 170. The illustration of FIG. 9L is taken through the section L-L of FIG. 9M. The assembly of FIGS. 9L and 9M may take the form of the microelectronic structure 100 of FIG. 8.[82] In some embodiments, the spacing 176 between adjacent low-LER lines 140 may be increased by selective depopulation using a DSA-based technique. For example, FIGS. 10A-10B are various views of another microelectronic structure 100 including low-LER lines 140, in accordance with various embodiments. FIG. 10A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 10B, and FIG. 10B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 10 may take the form illustrated in FIG. 1C. The embodiment of FIG. 10 shares a number of elements with preceding embodiments; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the microelectronic structure 100 of FIG. 10 includes smaller inter-line spaces 150-1 and larger inter-line spaces 150-2 between adjacent low-LER lines 140.
The particular arrangement of smaller inter-line spaces 150-1 and larger inter-line spaces 150-2 is simply illustrative, and any desired arrangement may be included in a microelectronic structure 100 in accordance with the techniques disclosed herein.[83] FIGS. 11A-11H illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 10, in accordance with various embodiments. FIG. 11A is a side, cross-sectional view of an assembly like that of FIGS. 2C and 9A, including a patterned initial brush 110. The assembly of FIG. 11A may be formed in accordance with any of the fabrication techniques discussed herein with reference to FIG. 2C. Like the assembly of FIG. 2C, openings 178 may be patterned into the first component 116 using lithographic techniques, and thus may have highly rough edges.[84] FIG. 11B is a side, cross-sectional view of an assembly subsequent to depositing a BCP (e.g., a BCP 114 as discussed above, not shown) on the assembly of FIG. 11A, and then treating the resulting assembly in order to cause the BCP to self-assemble in accordance with the template provided by the brush 110. The resulting assembly includes alternating vertically oriented regions of the first component 116 and the second component 118 in the patterned region 142. Outside the patterned region 142, the brush 110 may not provide a surface on which the BCP readily self-assembles into alternating vertically oriented regions of the first component 116 and the second component 118, and so instead, the BCP 114 in the unpatterned region 144 may self-assemble into unordered lamellae 132 of the first component 116 and the second component 118; the unordered lamellae 132 may have a structure like that illustrated in FIG. 1C.[85] FIG. 11C is a side, cross-sectional view of an assembly subsequent to removing the second component 118 from the assembly of FIG. 11B.
The first component 116 may remain in place, and thus the patterned region 142 may include a series of parallel openings 180. In some embodiments, the assembly of FIG. 11B may be treated with an ion implant technique to harden the first component 116 (e.g., PS) prior to removing the second component 118 (e.g., PMMA). In some embodiments, a suitable selective etch technique may be used to remove the second component 118. Removing the second component 118 from the unordered lamellae 132 may result in the partially etched unordered lamellae 134, which may retain a structure like that illustrated in FIG. 1C. As discussed above with reference to FIGS. 2G and 2H, the process of performing a DSA operation on the "rough" openings 178 of the assembly of FIG. 11A may result in the "smooth" openings 180 of the assembly of FIG. 11C, and thus the technique of FIG. 11 (and others of the DSA-based techniques disclosed herein) may be said to "rectify" the "rough" openings 178.[86] FIG. 11D is a side, cross-sectional view of an assembly subsequent to depositing a mask material 128 over the assembly of FIG. 11C. Any suitable mask material 128 may be used.[87] FIG. 11E is a side, cross-sectional view of an assembly subsequent to patterning the mask material 128 of the assembly of FIG. 11D to cover the openings 180 in a region that will correspond to the larger inter-line space 150-2 between adjacent low-LER lines 140. Any suitable patterning technique may be used.[88] FIG. 11F is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the exposed openings 180 of the assembly of FIG. 11E into the underlying mask material 108. Any suitable etch technique may be used.
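The "rectification" of rough lithographic edges by DSA, noted above, is typically quantified as a reduction in line-edge roughness (LER), commonly reported as three times the standard deviation of edge-position deviations. The following is a minimal illustrative sketch of that metric; the edge-position samples and the damping factor are assumptions for illustration, not values from this disclosure.

```python
# Illustrative sketch (assumed values, not from this disclosure): LER is
# commonly reported as 3 sigma of edge-position deviations. This toy model
# compares a lithographically "rough" opening edge with a DSA-"rectified"
# edge whose deviations have been damped toward the BCP's intrinsic profile.
import statistics

def ler_3sigma(edge_positions):
    """Return 3x the population standard deviation of edge-position samples (nm)."""
    return 3 * statistics.pstdev(edge_positions)

# Hypothetical edge-position samples (nm) along a line, relative to the mean edge.
rough_edge = [1.8, -2.1, 0.9, -1.4, 2.3, -0.7, 1.1, -1.9]   # as patterned
rectified_edge = [x * 0.25 for x in rough_edge]              # after DSA damps deviations

assert ler_3sigma(rectified_edge) < ler_3sigma(rough_edge)
```

Under this assumed damping, the rectified edge's 3-sigma roughness scales down by the same factor as the deviations themselves.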
Transferring the pattern of the openings 180 into the mask material 108 may also result in transferring the unordered lamellar patterns of the exposed partially etched unordered lamellae 134 into the underlying mask material 108 in the unpatterned region 144, yielding the unordered lamellae-patterned mask material 136.[89] FIG. 11G is a side, cross-sectional view of an assembly subsequent to removing the mask material 128 from the assembly of FIG. 11F, transferring the pattern of the mask material 108/unordered lamellae-patterned mask material 136 into the dielectric material 102 (through the intermediate mask materials 104 and 106, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140. The pattern of the unordered lamellae-patterned mask material 136 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 11H is a top view of the assembly of FIG. 11G, illustrating the edges 130 of the low-LER lines 140, the smaller inter-line spaces 150-1, and a larger inter-line space 150-2. The illustration of FIG. 11G is taken through the section G-G of FIG. 11H. The assembly of FIGS. 11G and 11H may take the form of the microelectronic structure 100 of FIG. 10.[90] FIGS. 1-24 illustrate example microelectronic structures 100 and examples of methods of manufacture of such microelectronic structures 100. Any of the features discussed with reference to any of FIGS. 1-24 herein may be combined with any other features to form a microelectronic structure 100. For example, FIGS. 3 and 4 illustrate an embodiment in which spacer-based techniques are used to reduce the pitch of low-LER lines 140, FIGS. 8 and 9 illustrate an embodiment including both low-LER lines 140 and high-LER lines 170, and FIGS. 10 and 11 illustrate an embodiment in which the spacing between various pairs of adjacent low-LER lines 140 is increased by selective depopulation.
These features of FIGS. 3, 4, 8, 9, 10, and 11 may be combined so that a microelectronic structure 100 includes reduced-pitch low-LER lines 140, both low-LER lines 140 and high-LER lines 170, and increased spacing between various pairs of low-LER lines 140. Such an embodiment of a microelectronic structure 100 is illustrated in FIG. 12, and a method of manufacturing the microelectronic structure 100 of FIG. 12 is illustrated in FIG. 13. However, this particular combination is simply an example, and any combination may be used.[91] As noted above, FIGS. 12A-12B are various views of another microelectronic structure 100 including low-LER lines 140 and high-LER lines 170, in accordance with various embodiments. FIG. 12A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 12B, and FIG. 12B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 12 may take the form illustrated in FIG. 1C. The embodiment of FIG. 12 shares a number of elements with preceding embodiments; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the microelectronic structure 100 includes reduced-pitch low-LER lines 140, both low-LER lines 140 and high-LER lines 170, and increased spacing 176 between various pairs of low-LER lines 140.[92] FIGS. 13A-13P illustrate stages in an example process of manufacturing the microelectronic structure of FIG. 12, in accordance with various embodiments. FIG. 13A is a side, cross-sectional view of an assembly like that of FIGS. 2C, 9A, and 11A, including a patterned initial brush 110. The assembly of FIG. 13A may be formed in accordance with any of the fabrication techniques discussed herein with reference to FIG. 2C. Like the assembly of FIG.
2C, openings 178 may be patterned into the first component 116 using lithographic techniques, and thus may have highly rough edges.[93] FIG. 13B is a side, cross-sectional view of an assembly subsequent to depositing a BCP (e.g., a BCP 114 as discussed above, not shown) on the assembly of FIG. 13A, and then treating the resulting assembly in order to cause the BCP to self-assemble in accordance with the template provided by the brush 110. The resulting assembly includes alternating vertically oriented regions of the first component 116 and the second component 118 in the patterned region 142. Outside the patterned region 142, the brush 110 may not provide a surface on which the BCP readily self-assembles into alternating vertically oriented regions of the first component 116 and the second component 118, and so instead, the BCP 114 in the unpatterned region 144 may self-assemble into unordered lamellae 132 of the first component 116 and the second component 118; the unordered lamellae 132 may have a structure like that illustrated in FIG. 1C.[94] FIG. 13C is a side, cross-sectional view of an assembly subsequent to removing the second component 118 from the assembly of FIG. 13B. The first component 116 may remain in place, and thus the patterned region 142 may include a series of parallel openings 180. In some embodiments, the assembly of FIG. 13B may be treated with an ion implant technique to harden the first component 116 (e.g., PS) prior to removing the second component 118 (e.g., PMMA). In some embodiments, a suitable selective etch technique may be used to remove the second component 118. Removing the second component 118 from the unordered lamellae 132 may result in the partially etched unordered lamellae 134, which may retain a structure like that of FIG. 1C. As discussed above with reference to FIGS. 2G and 2H, the process of performing a DSA operation on the "rough" openings 178 of the assembly of FIG.
13A may result in the "smooth" openings 180 of the assembly of FIG. 13C, and thus the technique of FIG. 13 (and others of the DSA-based techniques disclosed herein) may be said to "rectify" the "rough" openings 178.[95] FIG. 13D is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the openings 180 of the assembly of FIG. 13C into the underlying mask material 108. Any suitable etch technique may be used. Transferring the pattern of the openings 180 into the mask material 108 may also result in transferring the unordered lamellar patterns of the partially etched unordered lamellae 134 into the underlying mask material 108 in the unpatterned region 144, yielding the unordered lamellae-patterned mask material 136.[96] FIG. 13E is a side, cross-sectional view of an assembly subsequent to removing the first component 116 (and thus the partially etched unordered lamellae 134) from the assembly of FIG. 13D. Any suitable etch technique may be used (e.g., an ash technique when the first component 116 includes PS).[97] FIG. 13F is a side, cross-sectional view of an assembly subsequent to depositing a mask material 128 over the assembly of FIG. 13E, and patterning the mask material 128 to cover the mask material 108 (and thus the unordered lamellae-patterned mask material 136) in the unpatterned region 144 and the first patterned region 142-1, while exposing the mask material 108 in the second patterned region 142-2. Any suitable mask material 128 may be used.[98] FIG. 13G is a side, cross-sectional view of an assembly subsequent to removing the exposed mask material 108 (in the second patterned region 142-2) from the assembly of FIG. 13F. Any suitable etch technique may be used.[99] FIG. 13H is a side, cross-sectional view of an assembly subsequent to removing the mask material 128 from the assembly of FIG. 13G, and forming spacers 124 at the side faces of the remaining patterned mask material 108.
The spacers 124 may include a dielectric material, and may be fabricated using any known spacer technique (e.g., a conformal deposition of the dielectric material, followed by a "downward" directional etch to remove the dielectric material on horizontal surfaces and leave the dielectric material in place on side faces).[100] FIG. 13I is a side, cross-sectional view of an assembly subsequent to depositing and patterning a mask material 148 on the assembly of FIG. 13H to selectively cover desired portions of the mask material 108, spaces in between the mask material 108, and portions of the mask material 106 in the second patterned region 142-2, as shown. Any suitable mask material 148 and selective etch technique may be used. The openings 188 in the mask material 148 in the second patterned region 142-2 will correspond to the high-LER lines 170 of FIG. 12. In some embodiments, the openings 188 may be formed using lithographic techniques, and thus may have rough edges.[101] FIG. 13J is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 108/spacer material 124/mask material 148 of the assembly of FIG. 13I into the underlying mask material 106. Any suitable selective etch technique may be used. The pattern of the unordered lamellae-patterned mask material 136 may be transferred into the mask material 106 in the unpatterned region 144, yielding the unordered lamellae-patterned mask material 146.[102] FIG. 13K is a side, cross-sectional view of an assembly subsequent to depositing a mask material 182 on the assembly of FIG. 13J, and then recessing the mask material 182 to expose the top surfaces of the mask material 108 (and therefore the unordered lamellae-patterned mask material 136) and the spacers 124. Any suitable mask material 182 and recess technique may be used.[103] FIG.
13L is a side, cross-sectional view of an assembly subsequent to removing the exposed mask material 108 (and therefore the unordered lamellae-patterned mask material 136) from the assembly of FIG. 13K, selectively exposing the underlying mask material 106. Any suitable selective etch technique may be used.[104] FIG. 13M is a side, cross-sectional view of an assembly subsequent to removing the exposed mask material 106 from the assembly of FIG. 13L. Any suitable selective etch technique may be used.[105] FIG. 13N is a side, cross-sectional view of an assembly subsequent to removing the spacers 124 and the mask material 182 from the assembly of FIG. 13M. Any suitable selective etch techniques may be used.[106] FIG. 13O is a side, cross-sectional view of an assembly subsequent to transferring the pattern of the mask material 106/unordered lamellae-patterned mask material 146 of the assembly of FIG. 13N into the dielectric material 102 (through the intermediate mask material 104, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140 and the high-LER lines 170. The pattern of the unordered lamellae-patterned mask material 146 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 13P is a top view of the assembly of FIG. 13O, illustrating the edges 130 of the low-LER lines 140 and the high-LER lines 170, as well as the selectively variable inter-line spaces. The illustration of FIG. 13O is taken through the section O-O of FIG. 13P. The assembly of FIGS. 13O and 13P may take the form of the microelectronic structure 100 of FIG. 12.[107] As noted above, a BCP may be capable of self-assembling into multiple different arrangements. For example, a BCP may be capable of forming both the vertically oriented repeating structures illustrated in various ones of the preceding drawings, as well as horizontally oriented repeating structures.
Whether such a BCP forms a vertically oriented repeating structure, a horizontally oriented repeating structure, or an unordered structure may depend on the pattern of the underlying brush 110, the composition of the BCP, and the conditions under which the BCP undergoes DSA; these variables may be adjusted to achieve a desired result. The opportunity to form horizontally oriented repeating structures may be utilized to manufacture low-LER lines 140 having different line widths 174. For example, FIGS. 14A-14B are various views of another microelectronic structure 100 including low-LER lines 140 having different line widths 174, in accordance with various embodiments. FIG. 14A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 14B, and FIG. 14B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 14 may take the form illustrated in FIG. 1C. The embodiment of FIG. 14 shares a number of elements with preceding embodiments; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the microelectronic structure 100 of FIG. 14 includes low-LER lines 140 having different line widths 174 (i.e., with the middle low-LER line 140 having a greater line width 174 than the adjacent low-LER lines 140).[108] FIGS. 15A-15G illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 14, in accordance with various embodiments. FIG. 15A is a side, cross-sectional view of an assembly like that of FIG. 2C, including a patterned initial brush 110; the initial brush 110 of FIG. 15A may include the second component 118, instead of the first component 116. The assembly of FIG.
15A may be formed in accordance with any of the fabrication techniques discussed herein with reference to FIG. 2C. Like the assembly of FIG. 2C, the openings 178 in the brush 110 may be patterned using lithographic techniques, and thus may have highly rough edges. Note that the central portion of the second component 118 of the brush 110 is wider than other portions of the second component 118 in the patterned region 142.[109] FIG. 15B is a side, cross-sectional view of an assembly subsequent to "filling in" the openings 178 in the brush 110 of FIG. 15A with the first component 116 to "complete" the brush 110, as discussed above with reference to FIG. 3. In other embodiments, this operation is not performed before proceeding to subsequent operations.[110] FIG. 15C is a side, cross-sectional view of an assembly subsequent to depositing a BCP 114 on the assembly of FIG. 15B. As noted above, the brush 110 and the BCP 114 may be selected so as to achieve a desired DSA behavior. In the embodiment of FIG. 15, the BCP 114 may include a first component 116 and a second component 118.[111] FIG. 15D is a side, cross-sectional view of an assembly subsequent to treating the assembly of FIG. 15C to cause the BCP 114 to self-assemble in accordance with the template provided by the brush 110. The resulting assembly includes alternating vertically oriented regions of the first component 116 and the second component 118, as well as a horizontally oriented region of the first component 116 (formed over the "wider" portion of the second component 118 in the patterned region 142).
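The orientation behavior described above (vertical lamellae over brush stripes commensurate with the BCP's natural half-pitch, and a horizontally oriented region over a "wider" stripe) can be sketched as a crude commensurability check. The natural period `L0`, the tolerance, and the width threshold below are illustrative assumptions, not values from this disclosure.

```python
# Illustrative sketch (assumed values, not from this disclosure): classify a
# brush stripe by comparing its width to multiples of the BCP's natural
# half-pitch L0/2. Commensurate narrow stripes favor vertical lamellae;
# much wider stripes favor a horizontally oriented region.

L0 = 20.0  # assumed BCP natural lamellar period, nm

def likely_orientation(stripe_width_nm, tolerance=0.15):
    """Return 'vertical', 'horizontal', or 'unordered' for a brush stripe width."""
    half_pitch = L0 / 2
    ratio = stripe_width_nm / half_pitch
    nearest = round(ratio)
    if nearest >= 1 and abs(ratio - nearest) <= tolerance and stripe_width_nm < 2 * L0:
        return "vertical"
    if stripe_width_nm >= 2 * L0:
        return "horizontal"
    return "unordered"

assert likely_orientation(10.0) == "vertical"    # commensurate stripe
assert likely_orientation(50.0) == "horizontal"  # "wider" central stripe
```

This is only a heuristic model of the qualitative dependence described in the text; the actual orientation also depends on BCP composition and anneal conditions, as the disclosure notes.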
Outside the patterned region 142, the brush 110 may not provide a surface on which the BCP 114 readily self-assembles into alternating vertically oriented regions of the first component 116 and the second component 118 (or into alternating horizontally oriented regions of the first component 116 and the second component 118), and so instead, the BCP 114 in the unpatterned region 144 may self-assemble into unordered lamellae 132 of the first component 116 and the second component 118; the unordered lamellae 132 may have a structure like that illustrated in FIG. 1C.[112] FIG. 15E is a side, cross-sectional view of an assembly subsequent to planarizing the assembly of FIG. 15D to remove the upper portion of the first component 116, second component 118, and the unordered lamellae 132 (e.g., using a CMP technique).[113] FIG. 15F is a side, cross-sectional view of an assembly subsequent to removing the second component 118 from the assembly of FIG. 15E (e.g., using a suitable selective etch technique) to form openings in the first component 116 that are "smoother" than the rough openings 178, transferring the pattern of the first component 116 into the dielectric material 102 (through the intermediate mask materials 108, 106, and 104, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140. The pattern of the unordered lamellae 132 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 15G is a top view of the assembly of FIG. 15F, illustrating the edges 130 of the low-LER lines 140. The illustration of FIG. 15F is taken through the section F-F of FIG. 15G. The assembly of FIGS. 15F and 15G may take the form of the microelectronic structure 100 of FIG.
14.[114] The opportunity to form horizontally oriented repeating structures may be utilized to manufacture low-LER lines 140 having different spacings 176 (instead of or in addition to different line widths 174, as discussed above with reference to FIGS. 14 and 15). For example, FIGS. 16A-16B are various views of another microelectronic structure 100 including low-LER lines 140 having different spacings 176 therebetween, in accordance with various embodiments. FIG. 16A is a side, cross-sectional view of the microelectronic structure 100 through the section A-A of FIG. 16B, and FIG. 16B is a top view of the microelectronic structure 100; the unordered lamellar structure 138 of the microelectronic structure 100 of FIG. 16 may take the form illustrated in FIG. 1C. The embodiment of FIG. 16 shares a number of elements with preceding embodiments; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. Relative to the embodiment of FIG. 1, the microelectronic structure 100 of FIG. 16 includes low-LER lines 140 having different spacings 176 (i.e., with the middle spacing 176 greater than the adjacent spacings 176).[115] FIGS. 17A-17G illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 16, in accordance with various embodiments. FIG. 17A is a side, cross-sectional view of an assembly like that of FIG. 2C, including a patterned initial brush 110 of the first component 116. The assembly of FIG. 17A may be formed in accordance with any of the fabrication techniques discussed herein with reference to FIG. 2C. Like the assembly of FIG. 2C, the openings 178 in the brush 110 may be patterned using lithographic techniques, and thus may have highly rough edges.
Note that the central portion of the first component 116 of the brush 110 is wider than other portions of the first component 116 in the patterned region 142.[116] FIG. 17B is a side, cross-sectional view of an assembly subsequent to "filling in" the openings 178 in the brush 110 of FIG. 17A with the second component 118 to "complete" the brush 110, as discussed above with reference to FIG. 3. In other embodiments, this operation is not performed before proceeding to subsequent operations.[117] FIG. 17C is a side, cross-sectional view of an assembly subsequent to depositing a BCP 114 on the assembly of FIG. 17B. As noted above, the brush 110 and the BCP 114 may be selected so as to achieve a desired DSA behavior. In the embodiment of FIG. 17, the BCP 114 may include a first component 116 and a second component 118.[118] FIG. 17D is a side, cross-sectional view of an assembly subsequent to treating the assembly of FIG. 17C in order to cause the BCP 114 to self-assemble in accordance with the template provided by the brush 110. The resulting assembly includes alternating vertically oriented regions of the first component 116 and the second component 118, as well as a horizontally oriented region of the second component 118 (formed over the "wider" portion of the first component 116 in the patterned region 142). Outside the patterned region 142, the brush 110 may not provide a surface on which the BCP 114 readily self-assembles into alternating vertically (or horizontally) oriented regions of the first component 116 and the second component 118, and so instead, the BCP 114 in the unpatterned region 144 may self-assemble into unordered lamellae 132 of the first component 116 and the second component 118; the unordered lamellae 132 may have a structure like that of FIG. 1C.[119] FIG. 17E is a side, cross-sectional view of an assembly subsequent to planarizing the assembly of FIG.
17D to remove the upper portion of the first component 116, second component 118, and the unordered lamellae 132 (e.g., using a CMP technique).[120] FIG. 17F is a side, cross-sectional view of an assembly subsequent to removing the second component 118 from the assembly of FIG. 17E (e.g., using a suitable selective etch technique) to form openings in the first component 116 that are "smoother" than the rough openings 178, transferring the pattern of the first component 116 into the dielectric material 102 (through the intermediate mask materials 108, 106, and 104, subsequently removed), and then providing line material 120 in the openings of the dielectric material 102 to form the low-LER lines 140. The pattern of the unordered lamellae 132 may be transferred into the dielectric material 102 to form the unordered lamellar structure 138. FIG. 17G is a top view of the assembly of FIG. 17F, illustrating the edges 130 of the low-LER lines 140. The illustration of FIG. 17F is taken through the section F-F of FIG. 17G. The assembly of FIGS. 17F and 17G may take the form of the microelectronic structure 100 of FIG. 16.[121] In some embodiments, a BCP used in a DSA-based technique may be "stretchable" in that it is capable of self-assembling into repeating patterns having variable size (e.g., around a nominal size), depending upon the dimensions and structure of the underlying brush. For example, FIG. 18 is a top view of a microelectronic structure 100 including low-LER lines 140 at multiple pitches (including variable line widths and line spacings), in accordance with various embodiments. The microelectronic structure 100 of FIG. 18 includes a first set of low-LER lines 140-1 and a second set of low-LER lines 140-2, and corresponding inter-line spaces 150-1 and 150-2, respectively. The widths of the low-LER lines 140 are shown as superimposed over the low-LER lines 140 (e.g., 1x, 1.5x, 2x) and the widths of the inter-line spaces 150 are shown adjacent to the inter-line spaces 150 (e.g., 1x, 1.2x, 3x).
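The multi-pitch labeling described above lends itself to a small bookkeeping sketch: the lateral extent of a run of lines and spaces is the sum of the labeled multiples times the nominal "1x" width. The 10 nm nominal width and the specific multiples below are assumptions for illustration, not values from this disclosure.

```python
# Illustrative sketch (assumed values, not from this disclosure): total lateral
# extent of a run of lines and inter-line spaces whose widths are expressed as
# multiples of a nominal "1x" feature width, as in the labels of FIG. 18.

NOMINAL_NM = 10.0  # hypothetical "1x" width, nm

def run_extent_nm(line_multiples, space_multiples):
    """Sum line widths and inter-line space widths, each given as a 1x multiple."""
    total_multiple = sum(line_multiples) + sum(space_multiples)
    return total_multiple * NOMINAL_NM

# A run with line widths 1x, 1.5x, 2x and spaces 1x, 1.2x, 3x:
extent = run_extent_nm([1.0, 1.5, 2.0], [1.0, 1.2, 3.0])
assert abs(extent - 97.0) < 1e-9  # (1 + 1.5 + 2 + 1 + 1.2 + 3) * 10 nm
```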
The use of a stretchable BCP in a DSA-based technique, such as those discussed below with reference to FIGS. 19 and 20, to form a microelectronic structure 100 may result in features having roughnesses that increase with the feature size; further, those features may not have the lithographic property, as discussed above, and thus the use of a stretchable BCP in the fabrication of a microelectronic structure 100 may be detected in the microelectronic structure 100.[122] FIGS. 19A-19E illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 18, in accordance with various embodiments. FIG. 19A is a top view of an assembly including a patterned metal 152 on top of a mask material 108. Additional mask materials (e.g., the mask materials 104 and 106, not shown) may underlie the mask material 108, and a dielectric material 102 (not shown) may underlie the additional mask materials. In some embodiments, the metal 152 may include titanium nitride or a metal oxide. The metal 152 may be patterned using a lithographic technique (and thus may have rough edges).[123] FIG. 19B is a top view of an assembly subsequent to providing a brush 110 on the metal 152 of the assembly of FIG. 19A. In some embodiments, the brush 110 may be a material that selectively deposits and adheres to the metal 152 in order to replicate the pattern of the metal 152.[124] FIG. 19C is a top view of an assembly subsequent to depositing a BCP (e.g., the BCP 114, not shown), treating the resulting assembly in order to cause the BCP to self-assemble in accordance with the template provided by the brush 110, and then removing some of the assembled BCP to leave behind the BCP component 154. The BCP component 154 may be a "stretchable" component, as it is able to assemble into vertically oriented bands having different widths (e.g., 1x and 1.2x) depending upon the dimensions of the underlying brush 110.
In some embodiments, a "stretchable" BCP may include a triblock copolymer, such as PMMA-b-PS-b-PMMA, PS-b-PMMA-b-PS, PS-b-poly(ethylene oxide) (PS-b-PEO), PS-b-PEO-b-PS, PEO-b-PS-b-PEO, poly(styrene-b-2-vinylpyridine) (PS-b-P2VP), PS-b-P2VP-b-PS, P2VP-b-PS-b-P2VP, PS-b-P4VP, PS-b-P4VP-b-PS, P4VP-b-PS-b-P4VP, polystyrene-block-polydimethylsiloxane (PS-b-PDMS), PDMS-b-PS-b-PDMS, or PS-b-PDMS-b-PS.[125] FIG. 19D is a top view of an assembly subsequent to removing the exposed mask material 108 from the assembly of FIG. 19C (e.g., by an appropriate selective etch), patterning the underlying dielectric material 102 of the resulting assembly in accordance with the pattern of the BCP component 154 and the brush 110, and then removing the BCP component 154 and the brush 110 (e.g., by appropriate selective etches). The openings 190 in the dielectric material 102 may have "smooth" edges.[126] FIG. 19E is a top view of an assembly subsequent to filling the openings 190 of the assembly of FIG. 19D with line material (e.g., line material 120) to form the low-LER lines 140 and inter-line spaces 150. The assembly of FIG. 19E may take the form of the microelectronic structure 100 of FIG. 18. Note that the drawings of FIGS. 18 and 19 are simply examples, and the components thereof may take any suitable form. For example, the dielectric material 102 may be a multi-layer dielectric and/or the inter-line spaces 150 may be provided by dielectric spacers (e.g., including silicon oxynitride, silicon oxycarbide, aluminum oxide, silicon nitride, or silicon oxide) on an intervening dielectric material (e.g., a carbon-doped oxide).[127] In some embodiments, a "stretchable" BCP may be utilized in a DSA-based technique that does not utilize an underlying metal 152 on which a brush 110 may be replicated. Instead, the brush 110 may be patterned using other techniques (e.g., lithography). For example, FIG. 20 is a top view of an assembly including a patterned brush 110 on top of a mask material 108.
An assembly like that of FIG. 20 may be utilized as discussed above with reference to FIGS. 19C-19E to form the microelectronic structure 100 of FIG. 18.[128] In some embodiments, low-LER lines 140 may be included in a metallization stack, as discussed below with reference to FIG. 29. For example, low-LER lines 140 in accordance with any of the embodiments disclosed herein may be part of the M0, M1, M2, or other interconnect layers of a metallization stack. In some embodiments, low-LER lines 140 may be contacted by vias in a metallization stack. In some such embodiments, the vias may be formed using conventional techniques, such as by forming openings that land on the low-LER lines 140 using lithographic techniques, and filling those openings with conductive material. In other embodiments, such vias may be formed using self-alignment techniques to reduce the misalignment that may occur when conventional approaches are used. For example, FIG. 21 is a side, cross-sectional view of a microelectronic structure 100 including vias 166 in conductive contact with low-LER lines 140, in accordance with various embodiments. FIG. 21 (and others of the accompanying drawings) illustrates the vias 166 as including the line material 120, but the vias 166 may include any suitable fill and/or liner materials.[129] In the microelectronic structure 100 of FIG. 21, the vias 166 include a lower portion that extends through a second replication brush component 158 (discussed further below), and an upper portion that extends through a photoresist 162. The photoresist 162 may be a dielectric material that includes cross-linking elements that may be selectively activated by EUV exposure, as discussed further below. An unpatterned region 144 of the microelectronic structure 100 of FIG. 21 may include an unordered lamellar structure 138, as discussed above, and an unordered dielectric material 160 on the unordered lamellar structure 138.[130] FIGS.
22A-22F illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 21, in accordance with various embodiments. FIG. 22A is a side, cross-sectional view of an assembly including a patterned region 142 having one or more low-LER lines 140 in a dielectric material 102, and an unpatterned region 144 having an unordered lamellar structure 138. The assembly of FIG. 22A may take the form of any of the microelectronic structures 100 discussed above with reference to FIGS. 1-20.[131] FIG. 22B is a side, cross-sectional view of an assembly subsequent to forming a replication brush 192 on the assembly of FIG. 22A. The replication brush 192 may include a first replication brush component 156 and a second replication brush component 158. The first replication brush component 156 may preferentially attach to the line material 120 of the low-LER lines 140 and the second replication brush component 158 may preferentially attach to the dielectric material 102 to form a self-assembled replication brush 192. The replication brush 192 may also include the unordered dielectric material 160, which may not have a self-assembled structure, or may have an unordered lamellar structure like that of FIG. 1C. In some embodiments, the first replication brush component 156 (a metal-selective brush material) may have a surface anchoring group including phosphines, thiol, thiolate, thioacetate, disulfide, alkyl azide, aryl azide, nitrile, phosphate, silyl, alkyl and other phosphonate ester, phosphonamide, sulfonamides, sulfonate, sulfinate, boronic acid, phosphonic acids, carboxylic acids, phosphorous dichloride, alkene, or alkyne material. In some embodiments, the second replication brush component 158 (a dielectric-selective brush material) may have a surface anchoring group of hydroxyl, amines, or a carboxylic acid group.[132] FIG. 22C is a side, cross-sectional view of an assembly subsequent to depositing a photoresist 162 on the assembly of FIG.
22B. The photoresist 162 may include cross-linking elements that, upon activation by EUV exposure, cross-link when in the presence of the first replication brush component 156, and do not cross-link otherwise. In some embodiments, the photoresist 162 and/or the first replication brush component 156 may include photoacid generator (PAG) molecules that, upon ultraviolet (UV) exposure (e.g., EUV exposure), generate acid to cause cross-linking of the photoresist 162; the cross-linked photoresist 164, discussed below, may then be selectively removed. In some embodiments, the photoresist 162 and/or the second replication brush component 158 may include quencher molecules that, upon UV exposure, cause acid generated by the photoresist 162 to be quenched in the areas above the second replication brush component 158, preventing cross-linking of the photoresist 162 in those areas; the cross-linked photoresist 164, discussed below, may then be selectively removed. More generally, the first replication brush component 156, the second replication brush component 158, and/or the photoresist 162 may include catalysts that can selectively localize the cross-linking of the photoresist 162 upon exposure to UV radiation.[133] FIG. 22D is a side, cross-sectional view of an assembly subsequent to exposing the photoresist 162 of the assembly of FIG. 22C to EUV radiation (e.g., an EUV "flood"), forming cross-linked photoresist 164 in the volumes of the photoresist 162 proximate to the first replication brush component 156.[134] FIG. 22E is a side, cross-sectional view of an assembly subsequent to removing the cross-linked photoresist 164 from the assembly of FIG. 22D (e.g., using a suitable selective etch technique).[135] FIG. 22F is a side, cross-sectional view of an assembly subsequent to removing the first replication brush component 156 from the assembly of FIG.
22E (e.g., using a suitable selective etch technique), and then filling the openings with the line material 120 to form the vias 166. The assembly of FIG. 22F may take the form of the microelectronic structure 100 of FIG. 21.[136] FIGS. 21 and 22 illustrate a microelectronic structure 100 that may include vias 166 patterned by a technique that includes an EUV flood. In other embodiments, the vias 166 may be patterned using selective application of UV radiation. For example, FIG. 23 is a side, cross-sectional view of another microelectronic structure 100 including vias 166 in conductive contact with low-LER lines 140, in accordance with various embodiments. The microelectronic structure 100 of FIG. 23 shares many elements in common with the microelectronic structure 100 of FIG. 21; for ease of discussion, a description of these elements is not repeated, and these elements may take the form of any of the embodiments of these elements disclosed herein. In the embodiment of FIG. 23, however, the vias 166 may not be centered over the low-LER lines 140, but may instead be formed at the intersection between the volume above the low-LER lines 140 and the area to which EUV radiation is selectively applied, as discussed below.[137] FIGS. 24A-24C illustrate stages in an example process of manufacturing the microelectronic structure 100 of FIG. 23, in accordance with various embodiments. FIG. 24A is a side, cross-sectional view of an assembly subsequent to exposing the photoresist 162 of the assembly of FIG. 22C to patterned EUV radiation (with the regions of EUV radiation indicated by the dotted lines), forming cross-linked photoresist 164 in the intersection between the EUV radiation volumes and the volumes of the photoresist 162 proximate to the first replication brush component 156.[138] FIG. 24B is a side, cross-sectional view of an assembly subsequent to removing the cross-linked photoresist 164 from the assembly of FIG. 
24A (e.g., using a suitable selective etch technique).[139] FIG. 24C is a side, cross-sectional view of an assembly subsequent to removing the first replication brush component 156 from the assembly of FIG. 24B (e.g., using a suitable selective etch technique), and then filling the openings with the line material 122 to form the vias 166. The assembly of FIG. 24C may take the form of the microelectronic structure 100 of FIG. 23.[140] The fabrication processes discussed above with reference to FIGS. 4-5, 6-7, and 12-13 include spacer-based pitch-division techniques. The particular pitch-division techniques of FIGS. 5, 7, and 13 are pitch-halving techniques (utilizing one round of spacer formation), but in other embodiments, a pitch-quartering technique (using two rounds of spacer formation) may be used instead to obtain smaller feature sizes. The use of such pitch-division techniques in the process of forming low-LER lines 140 in a patterned region 142 may be evidenced in a microelectronic structure 100 by the presence of pitch-division artifacts in the microelectronic structure 100. For example, because of the manner in which the widths of various elements propagate through the pitch-division technique to the line widths 174 and inter-line spacings 176, the line widths 174 and the inter-line spacings 176 may exhibit a periodicity across multiple ones of the low-LER lines 140. Such periodicity may serve as a pitch-division artifact in the microelectronic structure 100 that provides evidence of the use of a pitch-division technique during fabrication. Another example of a pitch-division artifact that may appear in a microelectronic structure 100 is nested and/or rounded, half-ring patterns in the dielectric material 102 that correspond to the ends of the spacers 124. FIGS. 25, 26, and 27 are top views of the microelectronic structures 100 of FIGS.
4, 6, and 12, respectively, illustrating such nested and rounded patterns 168 proximate to a perimeter of the patterned regions 142; in embodiments in which a pitch-quartering technique is used instead of a pitch-halving technique, more "half-rings" may be part of the patterns 168. The presence of such nested and/or rounded patterns may serve as a pitch-division artifact in the microelectronic structure 100 that provides evidence of the use of a pitch-division technique during fabrication. Other pitch-division artifacts may be present instead of or in addition to one or more of these artifacts. For example, spacer-based pitch-division, as discussed above, may have a single size of a feature (either a line width or a width of a space between lines) that is defined by ALD spacer deposition. The thickness of the ALD spacer deposition may determine this size.[141] The microelectronic structures 100 disclosed herein may be included in any suitable electronic component. FIGS. 28-32 illustrate various examples of apparatuses that may include any of the microelectronic structures 100 disclosed herein.[142] FIG. 28 is a top view of a wafer 1500 and dies 1502 that may include one or more microelectronic structures 100 in accordance with any of the embodiments disclosed herein. The wafer 1500 may be composed of semiconductor material and may include one or more dies 1502 having microelectronic structures formed on a surface of the wafer 1500. Each of the dies 1502 may be a repeating unit of a semiconductor product that includes any suitable microelectronic structure. After the fabrication of the semiconductor product is complete, the wafer 1500 may undergo a singulation process in which the dies 1502 are separated from one another to provide discrete "chips" of the semiconductor product. The die 1502 may include one or more microelectronic structures 100 (e.g., as discussed below with reference to FIG. 29), one or more transistors (e.g., some of the transistors 1640 of FIG.
29, discussed below) and/or supporting circuitry to route electrical signals to the transistors, as well as any other circuit components. In some embodiments, the wafer 1500 or the die 1502 may include a memory device (e.g., a random access memory (RAM) device, such as a static RAM (SRAM) device, a magnetic RAM (MRAM) device, a resistive RAM (RRAM) device, a conductive-bridging RAM (CBRAM) device, etc.), a logic device (e.g., an AND, OR, NAND, or NOR gate), or any other suitable circuit element. Multiple ones of these devices may be combined on a single die 1502. For example, a memory array formed by multiple memory devices may be formed on a same die 1502 as a processing device (e.g., the processing device 1802 of FIG. 32) or other logic that is configured to store information in the memory devices or execute instructions stored in the memory array.[143] FIG. 29 is a side, cross-sectional view of a microelectronic device 1600 that may include one or more microelectronic structures 100 in accordance with any of the embodiments disclosed herein. One or more of the microelectronic devices 1600 may be included in one or more dies 1502 (FIG. 28). The microelectronic device 1600 may be formed on a substrate 1602 (e.g., the wafer 1500 of FIG. 28) and may be included in a die (e.g., the die 1502 of FIG. 28). The substrate 1602 may be a semiconductor substrate composed of semiconductor material systems including, for example, n-type or p-type materials systems (or a combination of both). The substrate 1602 may include, for example, a crystalline substrate formed using a bulk silicon or a silicon-on-insulator (SOI) substructure. In some embodiments, the substrate 1602 may be formed using alternative materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide.
Further materials classified as group II-VI, III-V, or IV may also be used to form the substrate 1602. Although a few examples of materials from which the substrate 1602 may be formed are described here, any material that may serve as a foundation for a microelectronic device 1600 may be used. The substrate 1602 may be part of a singulated die (e.g., the dies 1502 of FIG. 28) or a wafer (e.g., the wafer 1500 of FIG. 28).[144] The microelectronic device 1600 may include one or more device layers 1604 disposed on the substrate 1602. The device layer 1604 may include features of one or more transistors 1640 (e.g., metal oxide semiconductor field-effect transistors (MOSFETs)) formed on the substrate 1602. The device layer 1604 may include, for example, one or more source and/or drain (S/D) regions 1620, a gate 1622 to control current flow in the transistors 1640 between the S/D regions 1620, and one or more S/D contacts 1624 to route electrical signals to/from the S/D regions 1620. The transistors 1640 may include additional features not depicted for the sake of clarity, such as device isolation regions, gate contacts, and the like. The transistors 1640 are not limited to the type and configuration depicted in FIG. 29 and may include a wide variety of other types and configurations such as, for example, planar transistors, non-planar transistors, or a combination of both. Planar transistors may include bipolar junction transistors (BJT), heterojunction bipolar transistors (HBT), or high-electron-mobility transistors (HEMT). Non-planar transistors may include FinFET transistors, such as double-gate transistors or tri-gate transistors, and wrap-around or all-around gate transistors, such as nanoribbon and nanowire transistors.[145] Each transistor 1640 may include a gate 1622 formed of at least two layers, a gate dielectric and a gate electrode. The gate dielectric may include one layer or a stack of layers.
The one or more layers may include silicon oxide, silicon dioxide, silicon carbide, and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric to improve its quality when a high-k material is used.[146] The gate electrode may be formed on the gate dielectric and may include at least one p-type work function metal or n-type work function metal, depending on whether the transistor 1640 is to be a p-type metal oxide semiconductor (PMOS) or an n-type metal oxide semiconductor (NMOS) transistor. In some implementations, the gate electrode may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as a barrier layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, conductive metal oxides (e.g., ruthenium oxide), and any of the metals discussed below with reference to an NMOS transistor (e.g., for work function tuning).
For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide), and any of the metals discussed above with reference to a PMOS transistor (e.g., for work function tuning).[147] In some embodiments, when viewed as a cross-section of the transistor 1640 along the source-channel-drain direction, the gate electrode may consist of a U-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In other embodiments, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.[148] In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from materials such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride.
Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In some embodiments, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.[149] The S/D regions 1620 may be formed within the substrate 1602 adjacent to the gate 1622 of each transistor 1640. The S/D regions 1620 may be formed using an implantation/diffusion process or an etching/deposition process, for example. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the substrate 1602 to form the S/D regions 1620. An annealing process that activates the dopants and causes them to diffuse farther into the substrate 1602 may follow the ion-implantation process. In the latter process, the substrate 1602 may first be etched to form recesses at the locations of the S/D regions 1620. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the S/D regions 1620. In some implementations, the S/D regions 1620 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In some embodiments, the S/D regions 1620 may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy. In further embodiments, one or more layers of metal and/or metal alloys may be used to form the S/D regions 1620.[150] Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from the devices (e.g., the transistors 1640) of the device layer 1604 through one or more interconnect layers disposed on the device layer 1604 (illustrated in FIG. 29 as interconnect layers 1606-1610).
For example, conductive features of the device layer 1604 (e.g., the gate 1622 and the S/D contacts 1624) may be electrically coupled with the interconnect structures 1628 of the interconnect layers 1606-1610. The one or more interconnect layers 1606-1610 may form a metallization stack (also referred to as an "ILD stack") 1619 of the microelectronic device 1600. Any of the microelectronic structures 100 disclosed herein may be included in any of the interconnect layers of a metallization stack 1619.[151] The interconnect structures 1628 may be arranged within the interconnect layers 1606-1610 to route electrical signals according to a wide variety of designs (in particular, the arrangement is not limited to the particular configuration of interconnect structures 1628 depicted in FIG. 29). Although a particular number of interconnect layers 1606-1610 is depicted in FIG. 29, embodiments of the present disclosure include microelectronic devices having more or fewer interconnect layers than depicted.[152] In some embodiments, the interconnect structures 1628 may include lines 1628a and/or vias 1628b filled with a conductive material such as a metal. The lines 1628a may be arranged to route electrical signals in a direction of a plane that is substantially parallel with a surface of the substrate 1602 upon which the device layer 1604 is formed. For example, the lines 1628a may route electrical signals in a direction in and out of the page from the perspective of FIG. 29. Any of the lines 1628a in a metallization stack 1619 may take the form of the low-LER lines 140 disclosed herein; for example, one or more of the lines 1628a in an interconnect layer of a metallization stack 1619 may be low-LER lines 140. The vias 1628b may be arranged to route electrical signals in a direction of a plane that is substantially perpendicular to the surface of the substrate 1602 upon which the device layer 1604 is formed.
In some embodiments, the vias 1628b may electrically couple lines 1628a of different interconnect layers 1606-1610 together. Any of the vias 1628b in a metallization stack 1619 may take the form of the vias 166 disclosed herein.[153] The interconnect layers 1606-1610 may include a dielectric material 1626 disposed between the interconnect structures 1628, as shown in FIG. 29. In some embodiments, the dielectric material 1626 disposed between the interconnect structures 1628 in different ones of the interconnect layers 1606-1610 may have different compositions; in other embodiments, the composition of the dielectric material 1626 between different interconnect layers 1606-1610 may be the same.[154] A first interconnect layer 1606 may be formed above the device layer 1604. In some embodiments, the first interconnect layer 1606 may include lines 1628a and/or vias 1628b, as shown. The lines 1628a of the first interconnect layer 1606 may be coupled with contacts (e.g., the S/D contacts 1624) of the device layer 1604. The first interconnect layer 1606 may be referred to as the "M0" interconnect layer, and in some embodiments, the M0 interconnect layer may include any of the low-LER lines 140 disclosed herein. In some embodiments, the M0 interconnect layer may include any suitable portion of any of the microelectronic structures 100 disclosed herein.[155] A second interconnect layer 1608 may be formed above the first interconnect layer 1606. In some embodiments, the second interconnect layer 1608 may include vias 1628b to couple the lines 1628a of the second interconnect layer 1608 with the lines 1628a of the first interconnect layer 1606.
Although the lines 1628a and the vias 1628b are structurally delineated with a line within each interconnect layer (e.g., within the second interconnect layer 1608) for the sake of clarity, the lines 1628a and the vias 1628b may be structurally and/or materially contiguous (e.g., simultaneously filled during a dual-damascene process) in some embodiments. The second interconnect layer 1608 may be referred to as the "M1" interconnect layer, and in some embodiments, the M1 interconnect layer may include any of the low-LER lines 140 disclosed herein. In some embodiments, the M1 interconnect layer may include any suitable portion of any of the microelectronic structures 100 disclosed herein.[156] A third interconnect layer 1610 (and additional interconnect layers, as desired) may be formed in succession on the second interconnect layer 1608 according to similar techniques and configurations described in connection with the second interconnect layer 1608 or the first interconnect layer 1606. The third interconnect layer 1610 may be referred to as the "M2" interconnect layer, and in some embodiments, the M2 interconnect layer may include any of the low-LER lines 140 disclosed herein. In some embodiments, the M2 interconnect layer may include any suitable portion of any of the microelectronic structures 100 disclosed herein. In some embodiments, the interconnect layers that are "higher up" in the metallization stack 1619 in the microelectronic device 1600 (i.e., farther away from the device layer 1604) may be thicker.[157] The microelectronic device 1600 may include a solder resist material 1634 (e.g., polyimide or similar material) and one or more conductive contacts 1636 formed on the interconnect layers 1606-1610. In FIG. 29, the conductive contacts 1636 are illustrated as taking the form of bond pads.
The conductive contacts 1636 may be electrically coupled with the interconnect structures 1628 and configured to route the electrical signals of the transistor(s) 1640 to other external devices. For example, solder bonds may be formed on the one or more conductive contacts 1636 to mechanically and/or electrically couple a chip including the microelectronic device 1600 with another component (e.g., a circuit board). The microelectronic device 1600 may include additional or alternate structures to route the electrical signals from the interconnect layers 1606-1610; for example, the conductive contacts 1636 may include other analogous features (e.g., posts) that route the electrical signals to external components.[158] FIG. 30 is a side, cross-sectional view of an example microelectronic package 1650 that may include one or more microelectronic structures 100 in accordance with any of the embodiments disclosed herein. In some embodiments, the microelectronic package 1650 may be a system-in-package (SiP).[159] The package substrate 1652 may be formed of a dielectric material (e.g., a ceramic, a buildup film, an epoxy film having filler particles therein, glass, an organic material, an inorganic material, combinations of organic and inorganic materials, embedded portions formed of different materials, etc.), and may have conductive pathways extending through the dielectric material between the face 1672 and the face 1674, or between different locations on the face 1672, and/or between different locations on the face 1674.
These conductive pathways may take the form of any of the interconnects 1628 discussed above with reference to FIG. 29.[160] The package substrate 1652 may include conductive contacts 1663 that are coupled to conductive pathways (not shown) through the package substrate 1652, allowing circuitry within the dies 1656 and/or the interposer 1657 to electrically couple to various ones of the conductive contacts 1664 (or to other devices included in the package substrate 1652, not shown).[161] The microelectronic package 1650 may include an interposer 1657 coupled to the package substrate 1652 via conductive contacts 1661 of the interposer 1657, first-level interconnects 1665, and the conductive contacts 1663 of the package substrate 1652. The first-level interconnects 1665 illustrated in FIG. 30 are solder bumps, but any suitable first-level interconnects 1665 may be used. In some embodiments, no interposer 1657 may be included in the microelectronic package 1650; instead, the dies 1656 may be coupled directly to the conductive contacts 1663 at the face 1672 by first-level interconnects 1665. More generally, one or more dies 1656 may be coupled to the package substrate 1652 via any suitable structure (e.g., a silicon bridge, an organic bridge, one or more waveguides, one or more interposers, wirebonds, etc.).[162] The microelectronic package 1650 may include one or more dies 1656 coupled to the interposer 1657 via conductive contacts 1654 of the dies 1656, first-level interconnects 1658, and conductive contacts 1660 of the interposer 1657. The conductive contacts 1660 may be coupled to conductive pathways (not shown) through the interposer 1657, allowing circuitry within the dies 1656 to electrically couple to various ones of the conductive contacts 1661 (or to other devices included in the interposer 1657, not shown). The first-level interconnects 1658 illustrated in FIG. 30 are solder bumps, but any suitable first-level interconnects 1658 may be used.
As used herein, a "conductive contact" may refer to a portion of conductive material (e.g., metal) serving as an interface between different components; conductive contacts may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or socket).[163] In some embodiments, an underfill material 1666 may be disposed between the package substrate 1652 and the interposer 1657 around the first-level interconnects 1665, and a mold compound 1668 may be disposed around the dies 1656 and the interposer 1657 and in contact with the package substrate 1652. In some embodiments, the underfill material 1666 may be the same as the mold compound 1668. Example materials that may be used for the underfill material 1666 and the mold compound 1668 are epoxy mold materials, as suitable. Second-level interconnects 1670 may be coupled to the conductive contacts 1664. The second-level interconnects 1670 illustrated in FIG. 30 are solder balls (e.g., for a ball grid array arrangement), but any suitable second-level interconnects 1670 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). The second-level interconnects 1670 may be used to couple the microelectronic package 1650 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another microelectronic package, as known in the art and as discussed below with reference to FIG. 31.[164] The dies 1656 may take the form of any of the embodiments of the die 1502 discussed herein (e.g., may include any of the embodiments of the microelectronic device 1600). In embodiments in which the microelectronic package 1650 includes multiple dies 1656, the microelectronic package 1650 may be referred to as a multi-chip package (MCP). The dies 1656 may include circuitry to perform any desired functionality.
For example, one or more of the dies 1656 may be logic dies (e.g., silicon-based dies), and one or more of the dies 1656 may be memory dies (e.g., high bandwidth memory).[165] Although the microelectronic package 1650 illustrated in FIG. 30 is a flip chip package, other package architectures may be used. For example, the microelectronic package 1650 may be a ball grid array (BGA) package, such as an embedded wafer-level ball grid array (eWLB) package. In another example, the microelectronic package 1650 may be a wafer-level chip scale package (WLCSP) or a panel fanout (FO) package. Although two dies 1656 are illustrated in the microelectronic package 1650 of FIG. 30, a microelectronic package 1650 may include any desired number of dies 1656. A microelectronic package 1650 may include additional passive components, such as surface-mount resistors, capacitors, and inductors disposed on the first face 1672 or the second face 1674 of the package substrate 1652, or on either face of the interposer 1657. More generally, a microelectronic package 1650 may include any other active or passive components known in the art.[166] FIG. 31 is a side, cross-sectional view of a microelectronic device assembly 1700 that may include one or more microelectronic packages or other electronic components (e.g., a die) including one or more microelectronic structures 100 in accordance with any of the embodiments disclosed herein. The microelectronic device assembly 1700 includes a number of components disposed on a circuit board 1702 (which may be, e.g., a motherboard). The microelectronic device assembly 1700 includes components disposed on a first face 1740 of the circuit board 1702 and an opposing second face 1742 of the circuit board 1702; generally, components may be disposed on one or both faces 1740 and 1742.
Any of the microelectronic packages discussed below with reference to the microelectronic device assembly 1700 may take the form of any of the embodiments of the microelectronic package 1650 discussed above with reference to FIG. 30 (e.g., may include one or more microelectronic structures 100 in a die).[167] In some embodiments, the circuit board 1702 may be a printed circuit board (PCB) including multiple metal layers separated from one another by layers of dielectric material and interconnected by conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 1702. In other embodiments, the circuit board 1702 may be a non-PCB substrate.[168] The microelectronic device assembly 1700 illustrated in FIG. 31 includes a package-on-interposer structure 1736 coupled to the first face 1740 of the circuit board 1702 by coupling components 1716. The coupling components 1716 may electrically and mechanically couple the package-on-interposer structure 1736 to the circuit board 1702, and may include solder balls (as shown in FIG. 31), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.[169] The package-on-interposer structure 1736 may include a microelectronic package 1720 coupled to a package interposer 1704 by coupling components 1718. The coupling components 1718 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 1716. Although a single microelectronic package 1720 is shown in FIG. 31, multiple microelectronic packages may be coupled to the package interposer 1704; indeed, additional interposers may be coupled to the package interposer 1704.
The package interposer 1704 may provide an intervening substrate used to bridge the circuit board 1702 and the microelectronic package 1720. The microelectronic package 1720 may be or include, for example, a die (e.g., the die 1502 of FIG. 28), a microelectronic device (e.g., the microelectronic device 1600 of FIG. 29), or any other suitable component. Generally, the package interposer 1704 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the package interposer 1704 may couple the microelectronic package 1720 (e.g., a die) to a set of BGA conductive contacts of the coupling components 1716 for coupling to the circuit board 1702. In the embodiment illustrated in FIG. 31, the microelectronic package 1720 and the circuit board 1702 are attached to opposing sides of the package interposer 1704; in other embodiments, the microelectronic package 1720 and the circuit board 1702 may be attached to a same side of the package interposer 1704. In some embodiments, three or more components may be interconnected by way of the package interposer 1704.[170] In some embodiments, the package interposer 1704 may be formed as a PCB, including multiple metal layers separated from one another by layers of dielectric material and interconnected by conductive vias. In some embodiments, the package interposer 1704 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, an epoxy resin with inorganic fillers, a ceramic material, or a polymer material such as polyimide. In some embodiments, the package interposer 1704 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The package interposer 1704 may include metal lines 1710 and vias 1708, including but not limited to through-silicon vias (TSVs) 1706.
The package interposer 1704 may further include embedded devices 1714, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio frequency devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the package interposer 1704. The package-on-interposer structure 1736 may take the form of any of the package-on-interposer structures known in the art.[171] The microelectronic device assembly 1700 may include a microelectronic package 1724 coupled to the first face 1740 of the circuit board 1702 by coupling components 1722. The coupling components 1722 may take the form of any of the embodiments discussed above with reference to the coupling components 1716, and the microelectronic package 1724 may take the form of any of the embodiments discussed above with reference to the microelectronic package 1720.[172] The microelectronic device assembly 1700 illustrated in FIG. 31 includes a package-on-package structure 1734 coupled to the second face 1742 of the circuit board 1702 by coupling components 1728. The package-on-package structure 1734 may include a microelectronic package 1726 and a microelectronic package 1732 coupled together by coupling components 1730 such that the microelectronic package 1726 is disposed between the circuit board 1702 and the microelectronic package 1732. The coupling components 1728 and 1730 may take the form of any of the embodiments of the coupling components 1716 discussed above, and the microelectronic packages 1726 and 1732 may take the form of any of the embodiments of the microelectronic package 1720 discussed above.
The package-on-package structure 1734 may be configured in accordance with any of the package-on-package structures known in the art.[173] FIG. 32 is a block diagram of an example computing device 1800 that may include one or more microelectronic structures 100 in accordance with any of the embodiments disclosed herein. For example, any suitable ones of the components of the computing device 1800 may include one or more of the microelectronic device assemblies 1700, microelectronic packages 1650, microelectronic devices 1600, or dies 1502 disclosed herein. A number of components are illustrated in FIG. 32 as included in the computing device 1800, but any one or more of these components may be omitted or duplicated, as suitable for the application. In some embodiments, some or all of the components included in the computing device 1800 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system-on-a-chip (SoC) die.[174] Additionally, in various embodiments, the computing device 1800 may not include one or more of the components illustrated in FIG. 32, but the computing device 1800 may include interface circuitry for coupling to the one or more components. For example, the computing device 1800 may not include a display device 1806, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1806 may be coupled. In another set of examples, the computing device 1800 may not include an audio input device 1824 or an audio output device 1808, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1824 or audio output device 1808 may be coupled.[175] The computing device 1800 may include a processing device 1802 (e.g., one or more processing devices).
As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 1802 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices. The computing device 1800 may include a memory 1804, which may itself include one or more memory devices such as volatile memory (e.g., dynamic random access memory (DRAM)), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 1804 may include memory that shares a die with the processing device 1802. This memory may be used as cache memory and may include embedded dynamic random access memory (eDRAM) or spin transfer torque magnetic random access memory (STT-MRAM).[176] In some embodiments, the computing device 1800 may include a communication chip 1812 (e.g., one or more communication chips). For example, the communication chip 1812 may be configured for managing wireless communications for the transfer of data to and from the computing device 1800. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium.
The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.[177] The communication chip 1812 may implement any of a number of wireless standards or protocols, including but not limited to Institute for Electrical and Electronic Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultra mobile broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 1812 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 1812 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 1812 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 1812 may operate in accordance with other wireless protocols in other embodiments.
The computing device 1800 may include an antenna 1822 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).[178] In some embodiments, the communication chip 1812 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., the Ethernet). As noted above, the communication chip 1812 may include multiple communication chips. For instance, a first communication chip 1812 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1812 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 1812 may be dedicated to wireless communications, and a second communication chip 1812 may be dedicated to wired communications.[179] The computing device 1800 may include battery/power circuitry 1814. The battery/power circuitry 1814 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the computing device 1800 to an energy source separate from the computing device 1800 (e.g., AC line power).[180] The computing device 1800 may include a display device 1806 (or corresponding interface circuitry, as discussed above). The display device 1806 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display.[181] The computing device 1800 may include an audio output device 1808 (or corresponding interface circuitry, as discussed above).
The audio output device 1808 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds.[182] The computing device 1800 may include an audio input device 1824 (or corresponding interface circuitry, as discussed above). The audio input device 1824 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).[183] The computing device 1800 may include a GPS device 1818 (or corresponding interface circuitry, as discussed above). The GPS device 1818 may be in communication with a satellite-based system and may receive a location of the computing device 1800, as known in the art.[184] The computing device 1800 may include an other output device 1810 (or corresponding interface circuitry, as discussed above). Examples of the other output device 1810 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.[185] The computing device 1800 may include an other input device 1820 (or corresponding interface circuitry, as discussed above).
Examples of the other input device 1820 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.[186] The computing device 1800 may have any desired form factor, such as a handheld or mobile computing device (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra mobile personal computer, etc.), a desktop computing device, a server computing device or other networked computing component, a vehicle computing device (e.g., a vehicle control unit), a laptop computing device, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a digital video recorder, or a wearable computing device. In some embodiments, the computing device 1800 may be any other electronic device that processes data. [187] The following paragraphs provide various examples of the embodiments disclosed herein. 
Example 1 is a microelectronic structure, including: a patterned region including a first conductive line and a second conductive line, wherein the second conductive line is adjacent to the first conductive line, the first conductive line and the second conductive line have a pitch that is less than 30 nanometers, the first conductive line has a line edge roughness that is less than 1.2 nanometers, and the second conductive line has a line edge roughness that is less than 1.2 nanometers.[188] Example 2 includes the subject matter of Example 1, and further specifies that the microelectronic structure further includes an unordered region having an unordered lamellar pattern, and the unordered region is coplanar with the patterned region.[189] Example 3 includes the subject matter of Example 2, and further specifies that the microelectronic structure is part of a die, and the unordered region is part of a transition region of the die, under a guard ring of the die, or in a frame of the die.[190] Example 4 includes the subject matter of any of Examples 2-3, and further specifies that the first conductive line includes a conductive material, and the unordered region includes a material having a same material composition as the conductive material.[191] Example 5 includes the subject matter of any of Examples 2-4, and further specifies that the patterned region includes a dielectric material, and the unordered region includes a material having a same material composition as the dielectric material.[192] Example 6 includes the subject matter of any of Examples 1-5, and further specifies that a spacing between the first conductive line and the second conductive line is less than 15 nanometers.[193] Example 7 includes the subject matter of any of Examples 1-6, and further specifies that a spacing between the first conductive line and the second conductive line is less than 12 nanometers.[194] Example 8 includes the subject matter of any of Examples 1-6, and further specifies 
that the first conductive line has a width that is less than 15 nanometers.[195] Example 9 includes the subject matter of any of Examples 1-8, and further specifies that the first conductive line has a width that is less than 12 nanometers.[196] Example 10 includes the subject matter of any of Examples 1-9, and further specifies that the second conductive line has a width that is less than 15 nanometers.[197] Example 11 includes the subject matter of any of Examples 1-10, and further specifies that the second conductive line has a width that is less than 12 nanometers.[198] Example 12 includes the subject matter of any of Examples 1-11, and further specifies that the first conductive line and the second conductive line are part of a set of conductive lines, the set of conductive lines includes more than two conductive lines, and the pitch of the first conductive line and the second conductive line is the same as a pitch between adjacent ones of the conductive lines in the set of conductive lines.[199] Example 13 includes the subject matter of any of Examples 1-12, and further specifies that the microelectronic structure further includes pitch-division artifacts proximate to the patterned region. 
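The dimensional bounds recited in Examples 1 and 6-11 are linked by the usual line-grating identity, pitch = line width + spacing. A minimal sketch (the helper name and all numeric inputs are hypothetical, not taken from the disclosure) that checks a candidate geometry against those bounds:

```python
# Check a hypothetical line/space geometry against the bounds recited in
# Examples 1 and 6-11: pitch < 30 nm, width < 15 nm, spacing < 15 nm.
# For a regular grating, pitch is the sum of line width and spacing.

def check_grating(width_nm: float, spacing_nm: float) -> dict:
    pitch_nm = width_nm + spacing_nm
    return {
        "pitch_ok": pitch_nm < 30.0,
        "width_ok": width_nm < 15.0,
        "spacing_ok": spacing_nm < 15.0,
    }

# A 12 nm line on a 24 nm pitch satisfies all three bounds.
print(check_grating(12.0, 12.0))
```
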
[200] Example 14 includes the subject matter of Example 13, and further specifies that the pitch-division artifacts include one or more half-ring patterns in a dielectric material.[201] Example 15 includes the subject matter of any of Examples 1-14, and further specifies that widths of at least some of the conductive lines in the patterned region are periodic across the conductive lines.[202] Example 16 includes the subject matter of any of Examples 1-15, and further specifies that the second conductive line has a width that is greater than a width of the first conductive line.[203] Example 17 includes the subject matter of Example 16, and further specifies that the line edge roughness of the second conductive line is greater than the line edge roughness of the first conductive line.[204] Example 18 includes the subject matter of Example 17, and further specifies that the patterned region includes an other conductive line, a width of the other conductive line is greater than a width of the second conductive line, and a line edge roughness of the other conductive line is greater than the line edge roughness of the second conductive line.[205] Example 19 includes the subject matter of any of Examples 16-18, and further specifies that the width of the second conductive line is at least three times greater than a width of the first conductive line.[206] Example 20 includes the subject matter of any of Examples 1-15, and further specifies that the first conductive line has a width that is greater than a width of the second conductive line.[207] Example 21 includes the subject matter of Example 20, and further specifies that the line edge roughness of the first conductive line is greater than the line edge roughness of the second conductive line.[208] Example 22 includes the subject matter of Example 21, and further specifies that the patterned region includes an other conductive line, a width of the other conductive line is greater than a width of the first 
conductive line, and a line edge roughness of the other conductive line is greater than the line edge roughness of the first conductive line.[209] Example 23 includes the subject matter of Example 22, and further specifies that the width of the first conductive line is at least three times greater than a width of the second conductive line.[210] Example 24 includes the subject matter of any of Examples 1-23, and further specifies that a spacing between the first conductive line and the second conductive line is greater than a spacing between the second conductive line and a conductive line adjacent to the second conductive line.[211] Example 25 includes the subject matter of any of Examples 1-23, and further specifies that a spacing between the first conductive line and the second conductive line is less than a spacing between the second conductive line and a conductive line adjacent to the second conductive line.[212] Example 26 includes the subject matter of any of Examples 1-25, and further specifies that the patterned region is a first patterned region, the microelectronic structure further includes a second patterned region including a first conductive line and a second conductive line, wherein the second conductive line of the second patterned region is adjacent to the first conductive line of the second patterned region, the first conductive line of the second patterned region and the second conductive line of the second patterned region have a pitch that is greater than 24 nanometers. 
[213] Example 27 includes the subject matter of Example 26, and further specifies that the first conductive line of the second patterned region and the second conductive line of the second patterned region have a pitch that is greater than 30 nanometers.[214] Example 28 includes the subject matter of any of Examples 26-27, and further specifies that the first conductive line of the second patterned region has a line edge roughness that is greater than 1.2 nanometers, and the second conductive line has a line edge roughness that is greater than 1.2 nanometers.[215] Example 29 includes the subject matter of any of Examples 26-28, and further specifies that the first conductive line of the second patterned region has a line width roughness and a line edge roughness, and the line width roughness is equal to the line edge roughness multiplied by the square root of 2.[216] Example 30 includes the subject matter of any of Examples 26-29, and further specifies that the second conductive line of the second patterned region has a line width roughness and a line edge roughness, and the line width roughness is equal to the line edge roughness multiplied by the square root of 2.[217] Example 31 includes the subject matter of any of Examples 26-30, and further specifies that the second patterned region is coplanar with the first patterned region.[218] Example 32 includes the subject matter of any of Examples 26-31, and further specifies that the second patterned region is in a same layer of a metallization stack as the first patterned region.[219] Example 33 includes the subject matter of any of Examples 1-32, and further specifies that the first conductive line has a line width roughness, and the line width roughness of the first conductive line is not equal to the line edge roughness of the first conductive line multiplied by the square root of 2.[220] Example 34 includes the subject matter of any of Examples 1-33, and further specifies that the second conductive line has a 
line width roughness, and the line width roughness of the second conductive line is not equal to the line edge roughness of the second conductive line multiplied by the square root of 2.[221] Example 35 includes the subject matter of any of Examples 1-34, and further specifies that the patterned region includes a third conductive line and a fourth conductive line, the third conductive line is between the second conductive line and the fourth conductive line, the third conductive line has a line edge roughness greater than 1.2 nanometers, and the fourth conductive line has a line edge roughness less than 1.2 nanometers.[222] Example 36 includes the subject matter of any of Examples 1-35, and further specifies that the microelectronic structure further includes a via in conductive contact with the first conductive line.[223] Example 37 includes the subject matter of Example 36, and further specifies that the via is in a dielectric material, and the dielectric material includes a photo acid generator.[224] Example 38 includes the subject matter of any of Examples 36-37, and further specifies that the dielectric material includes a quencher.[225] Example 39 includes the subject matter of any of Examples 36-38, and further specifies that the via has side faces that are self-aligned with side faces of the first conductive line.[226] Example 40 includes the subject matter of any of Examples 36-38, and further specifies that the via does not contact dielectric material adjacent to the first conductive line in the patterned region. 
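Examples 29-30 and 33-34 turn on whether line width roughness (LWR) equals line edge roughness (LER) multiplied by the square root of 2 — the relationship expected when the two edges of a line vary independently, since the variance of the width is then the sum of the two edge variances. A minimal Monte Carlo sketch of that relationship (the 3-sigma quoting convention and all numeric values are assumptions for illustration, not taken from the disclosure):

```python
import math
import random

random.seed(0)
N = 100_000
sigma = 0.4  # nm, per-edge standard deviation (hypothetical value)

# Simulate two statistically independent edges of a nominally 10 nm wide line.
left = [random.gauss(0.0, sigma) for _ in range(N)]
right = [random.gauss(10.0, sigma) for _ in range(N)]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

ler = 3 * std(left)  # LER conventionally quoted as 3-sigma of edge position
widths = [r - l for l, r in zip(left, right)]
lwr = 3 * std(widths)  # LWR: 3-sigma of the local line width

# For uncorrelated edges, LWR approaches sqrt(2) * LER; correlated edges
# (as in Examples 33-34) would break this equality.
print(round(lwr / ler, 2))
```
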
[227] Example 41 includes the subject matter of any of Examples 1-40, and further specifies that the microelectronic structure further includes a via in conductive contact with the second conductive line.[228] Example 42 includes the subject matter of Example 41, and further specifies that the via is in a dielectric material, and the dielectric material includes a photo acid generator.[229] Example 43 includes the subject matter of any of Examples 41-42, and further specifies that the dielectric material includes a quencher.[230] Example 44 includes the subject matter of any of Examples 41-43, and further specifies that the via has side faces that are self-aligned with side faces of the first conductive line.[231] Example 45 includes the subject matter of any of Examples 41-43, and further specifies that the via does not contact dielectric material adjacent to the second conductive line in the patterned region.[232] Example 46 includes the subject matter of any of Examples 1-45, and further specifies that the microelectronic structure further includes a device layer, and the patterned region is included in an interconnect layer above or below the device layer.[233] Example 47 includes the subject matter of Example 46, and further specifies that the microelectronic structure further includes conductive contacts, and the patterned region is between the conductive contacts and the device layer.[234] Example 48 includes the subject matter of any of Examples 1-47, and further specifies that the patterned region is included in an M0 interconnect layer.[235] Example 49 includes the subject matter of any of Examples 1-47, and further specifies that the patterned region is included in an M1 interconnect layer.[236] Example 50 includes the subject matter of any of Examples 1-47, and further specifies that the patterned region is included in an M2 interconnect layer.[237] Example 51 includes the subject matter of any of Examples 1-50, and further specifies that the first conductive line is parallel to the 
second conductive line.[238] Example 52 is a microelectronic structure, including: a patterned region including a first conductive line and a second conductive line, wherein the second conductive line is adjacent to the first conductive line; and an unordered region having an unordered lamellar pattern, wherein the unordered region is coplanar with the patterned region.[239] Example 53 includes the subject matter of Example 52, and further specifies that the first conductive line includes a conductive material, and the unordered region includes a material having a same material composition as the conductive material.[240] Example 54 includes the subject matter of any of Examples 52-53, and further specifies that the patterned region includes a dielectric material, and the unordered region includes a material having a same material composition as the dielectric material.[241] Example 55 includes the subject matter of any of Examples 52-54, and further specifies that a spacing between the first conductive line and the second conductive line is less than 15 nanometers. 
[242] Example 56 includes the subject matter of any of Examples 52-55, and further specifies that a spacing between the first conductive line and the second conductive line is less than 12 nanometers.[243] Example 57 includes the subject matter of any of Examples 52-55, and further specifies that the first conductive line has a width that is less than 15 nanometers.[244] Example 58 includes the subject matter of any of Examples 52-57, and further specifies that the first conductive line has a width that is less than 12 nanometers.[245] Example 59 includes the subject matter of any of Examples 52-58, and further specifies that the second conductive line has a width that is less than 15 nanometers.[246] Example 60 includes the subject matter of any of Examples 52-59, and further specifies that the second conductive line has a width that is less than 12 nanometers.[247] Example 61 includes the subject matter of any of Examples 52-60, and further specifies that the first conductive line and the second conductive line are part of a set of conductive lines, the set of conductive lines includes more than two conductive lines, and a pitch of the first conductive line and the second conductive line is the same as a pitch between adjacent ones of the conductive lines in the set of conductive lines.[248] Example 62 includes the subject matter of any of Examples 52-61, and further specifies that the microelectronic structure further includes pitch-division artifacts proximate to the patterned region.[249] Example 63 includes the subject matter of Example 62, and further specifies that the pitch-division artifacts include one or more half-ring patterns in a dielectric material.[250] Example 64 includes the subject matter of any of Examples 52-63, and further specifies that widths of at least some of the conductive lines in the patterned region are periodic across the conductive lines.[251] Example 65 includes the subject matter of any of Examples 52-64, and further specifies 
that the second conductive line has a width that is greater than a width of the first conductive line.[252] Example 66 includes the subject matter of Example 65, and further specifies that a line edge roughness of the second conductive line is greater than a line edge roughness of the first conductive line.[253] Example 67 includes the subject matter of Example 66, and further specifies that the patterned region includes an other conductive line, a width of the other conductive line is greater than a width of the second conductive line, and a line edge roughness of the other conductive line is greater than the line edge roughness of the second conductive line.[254] Example 68 includes the subject matter of any of Examples 65-67, and further specifies that the width of the second conductive line is at least three times greater than a width of the first conductive line.[255] Example 69 includes the subject matter of any of Examples 52-64, and further specifies that the first conductive line has a width that is greater than a width of the second conductive line.[256] Example 70 includes the subject matter of Example 69, and further specifies that a line edge roughness of the first conductive line is greater than a line edge roughness of the second conductive line.[257] Example 71 includes the subject matter of Example 70, and further specifies that the patterned region includes an other conductive line, a width of the other conductive line is greater than a width of the first conductive line, and a line edge roughness of the other conductive line is greater than the line edge roughness of the first conductive line.[258] Example 72 includes the subject matter of Example 71, and further specifies that the width of the first conductive line is at least three times greater than a width of the second conductive line.[259] Example 73 includes the subject matter of any of Examples 52-72, and further specifies that a spacing between the first conductive line and the second 
conductive line is greater than a spacing between the second conductive line and a conductive line adjacent to the second conductive line.[260] Example 74 includes the subject matter of any of Examples 52-72, and further specifies that a spacing between the first conductive line and the second conductive line is less than a spacing between the second conductive line and a conductive line adjacent to the second conductive line.[261] Example 75 includes the subject matter of any of Examples 52-74, and further specifies that the patterned region is a first patterned region, the microelectronic structure further includes a second patterned region including a first conductive line and a second conductive line, wherein the second conductive line of the second patterned region is adjacent to the first conductive line of the second patterned region, the first conductive line of the second patterned region and the second conductive line of the second patterned region have a pitch that is greater than 24 nanometers.[262] Example 76 includes the subject matter of Example 75, and further specifies that the first conductive line of the second patterned region and the second conductive line of the second patterned region have a pitch that is greater than 30 nanometers.[263] Example 77 includes the subject matter of any of Examples 75-76, and further specifies that the first conductive line of the second patterned region has a line edge roughness that is greater than 1.2 nanometers, and the second conductive line has a line edge roughness that is greater than 1.2 nanometers.[264] Example 78 includes the subject matter of any of Examples 75-77, and further specifies that the first conductive line of the second patterned region has a line width roughness and a line edge roughness, and the line width roughness is equal to the line edge roughness multiplied by the square root of 2.[265] Example 79 includes the subject matter of any of Examples 75-78, and further specifies that the 
second conductive line of the second patterned region has a line width roughness and a line edge roughness, and the line width roughness is equal to the line edge roughness multiplied by the square root of 2.[266] Example 80 includes the subject matter of any of Examples 75-79, and further specifies that the second patterned region is coplanar with the first patterned region.[267] Example 81 includes the subject matter of any of Examples 75-80, and further specifies that the second patterned region is in a same layer of a metallization stack as the first patterned region.[268] Example 82 includes the subject matter of any of Examples 52-81, and further specifies that the first conductive line has a line width roughness, and the line width roughness of the first conductive line is not equal to the line edge roughness of the first conductive line multiplied by the square root of 2. [269] Example 83 includes the subject matter of any of Examples 52-82, and further specifies that the second conductive line has a line width roughness, and the line width roughness of the second conductive line is not equal to the line edge roughness of the second conductive line multiplied by the square root of 2.[270] Example 84 includes the subject matter of any of Examples 52-83, and further specifies that the patterned region includes a third conductive line and a fourth conductive line, the third conductive line is between the second conductive line and the fourth conductive line, the third conductive line has a line edge roughness greater than 1.2 nanometers, and the fourth conductive line has a line edge roughness less than 1.2 nanometers.[271] Example 85 includes the subject matter of any of Examples 52-84, and further specifies that the microelectronic structure further includes a via in conductive contact with the first conductive line.[272] Example 86 includes the subject matter of Example 85, and further specifies that the via is in a dielectric material, and the dielectric 
material includes a photo acid generator.[273] Example 87 includes the subject matter of any of Examples 85-86, and further specifies that the dielectric material includes a quencher.[274] Example 88 includes the subject matter of any of Examples 85-87, and further specifies that the via has side faces that are self-aligned with side faces of the first conductive line.[275] Example 89 includes the subject matter of any of Examples 85-87, and further specifies that the via does not contact dielectric material adjacent to the first conductive line in the patterned region.[276] Example 90 includes the subject matter of any of Examples 52-89, and further specifies that the microelectronic structure further includes a via in conductive contact with the second conductive line.[277] Example 91 includes the subject matter of Example 90, and further specifies that the via is in a dielectric material, and the dielectric material includes a photo acid generator.[278] Example 92 includes the subject matter of any of Examples 90-91, and further specifies that the dielectric material includes a quencher.[279] Example 93 includes the subject matter of any of Examples 90-92, and further specifies that the via has side faces that are self-aligned with side faces of the first conductive line.[280] Example 94 includes the subject matter of any of Examples 90-92, and further specifies that the via does not contact dielectric material adjacent to the second conductive line in the patterned region.[281] Example 95 includes the subject matter of any of Examples 52-94, and further specifies that the microelectronic structure further includes a device layer, and the patterned region is included in an interconnect layer above or below the device layer.[282] Example 96 includes the subject matter of Example 95, and further specifies that the microelectronic structure further includes conductive contacts, and the patterned region is between the conductive contacts and the device layer.[283] Example 97 includes the subject
matter of any of Examples 52-96, and further specifies that the patterned region is included in an M0 interconnect layer.[284] Example 98 includes the subject matter of any of Examples 52-96, and further specifies that the patterned region is included in an M1 interconnect layer.[285] Example 99 includes the subject matter of any of Examples 52-96, and further specifies that the patterned region is included in an M2 interconnect layer.[286] Example 100 includes the subject matter of any of Examples 52-99, and further specifies that the first conductive line is parallel to the second conductive line.[287] Example 101 is a microelectronic structure, including: a first patterned region including a first conductive line; and a second patterned region including a second conductive line, wherein the second patterned region is coplanar with the first patterned region, the first conductive line has a first line width roughness and a first line edge roughness, the first line width roughness is not equal to the first line edge roughness multiplied by the square root of 2, the second conductive line has a second line width roughness and a second line edge roughness, and the second line width roughness is equal to the second line edge roughness multiplied by the square root of 2.[288] Example 102 includes the subject matter of Example 101, and further specifies that the microelectronic structure further includes an unordered region having an unordered lamellar pattern, and the unordered region is coplanar with the first patterned region.[289] Example 103 includes the subject matter of Example 102, and further specifies that the microelectronic structure is part of a die, and the unordered region is part of a transition region of the die, under a guard ring of the die, or in a frame of the die.[290] Example 104 includes the subject matter of any of Examples 102-103, and further specifies that the first conductive line includes a conductive material, and the unordered region
includes a material having a same material composition as the conductive material.[291] Example 105 includes the subject matter of any of Examples 102-104, and further specifies that the first patterned region includes a dielectric material, and the unordered region includes a material having a same material composition as the dielectric material.[292] Example 106 includes the subject matter of any of Examples 101-105, and further specifies that a pitch of conductive lines in the first patterned region is less than 30 nanometers.[293] Example 107 includes the subject matter of any of Examples 101-106, and further specifies that a pitch of conductive lines in the first patterned region is less than 24 nanometers.[294] Example 108 includes the subject matter of any of Examples 101-106, and further specifies that the first conductive line has a width that is less than 15 nanometers.[295] Example 109 includes the subject matter of any of Examples 101-108, and further specifies that the first conductive line has a width that is less than 12 nanometers.[296] Example 110 includes the subject matter of any of Examples 101-109, and further specifies that the microelectronic structure further includes pitch-division artifacts proximate to the first patterned region.[297] Example 111 includes the subject matter of Example 110, and further specifies that the pitch-division artifacts include one or more half-ring patterns in a dielectric material.
[298] Example 112 includes the subject matter of any of Examples 101-111, and further specifies that widths of at least some conductive lines in the first patterned region are periodic across the at least some conductive lines.[299] Example 113 includes the subject matter of any of Examples 101-112, and further specifies that the second conductive line has a width that is greater than a width of the first conductive line.[300] Example 114 includes the subject matter of Example 113, and further specifies that the second line edge roughness is greater than the first line edge roughness.[301] Example 115 includes the subject matter of Example 114, and further specifies that the second line edge roughness is greater than 1.2 nanometers.[302] Example 116 includes the subject matter of any of Examples 113-115, and further specifies that the first line edge roughness is less than 1.2 nanometers.[303] Example 117 includes the subject matter of any of Examples 101-112, and further specifies that the first conductive line has a width that is greater than a width of the second conductive line.[304] Example 118 includes the subject matter of any of Examples 101-117, and further specifies that a pitch of conductive lines in the second patterned region is greater than 30 nanometers.[305] Example 119 includes the subject matter of any of Examples 101-118, and further specifies that the second patterned region is coplanar with the first patterned region.[306] Example 120 includes the subject matter of any of Examples 101-119, and further specifies that the second patterned region is in a same layer of a metallization stack as the first patterned region.[307] Example 121 includes the subject matter of any of Examples 101-120, and further specifies that the microelectronic structure further includes a via in conductive contact with the first conductive line.[308] Example 122 includes the subject matter of Example 121, and further specifies that the via is in a dielectric material,
and the dielectric material includes a photo acid generator.[309] Example 123 includes the subject matter of any of Examples 121-122, and further specifies that the dielectric material includes a quencher.[310] Example 124 includes the subject matter of any of Examples 121-123, and further specifies that the via has side faces that are self-aligned with side faces of the first conductive line.[311] Example 125 includes the subject matter of any of Examples 121-123, and further specifies that the via does not contact dielectric material adjacent to the first conductive line in the first patterned region.[312] Example 126 includes the subject matter of any of Examples 101-125, and further specifies that the microelectronic structure further includes a device layer, and the first patterned region is included in an interconnect layer above or below the device layer.[313] Example 127 includes the subject matter of Example 126, and further specifies that the microelectronic structure further includes conductive contacts, and the first patterned region is between the conductive contacts and the device layer.[314] Example 128 includes the subject matter of any of Examples 101-127, and further specifies that the first patterned region is included in an M0 interconnect layer.
[315] Example 129 includes the subject matter of any of Examples 101-127, and further specifies that the first patterned region is included in an M1 interconnect layer.[316] Example 130 includes the subject matter of any of Examples 101-127, and further specifies that the first patterned region is included in an M2 interconnect layer.[317] Example 131 includes the subject matter of any of Examples 101-130, and further specifies that the first conductive line is parallel to the second conductive line.[318] Example 132 is a computing device, including: a die including any of the microelectronic structures of any of Examples 1-131; and a circuit board, wherein the die is communicatively coupled to the circuit board.[319] Example 133 includes the subject matter of Example 132, and further specifies that the die is included in a package, and the package is communicatively coupled to the circuit board.[320] Example 134 includes the subject matter of Example 133, and further specifies that the package is communicatively coupled to the circuit board by solder.[321] Example 135 includes the subject matter of any of Examples 132-134, and further specifies that the circuit board is a motherboard.[322] Example 136 includes the subject matter of any of Examples 132-135, and further specifies that the die is part of a processing device or a memory device.[323] Example 137 includes the subject matter of any of Examples 132-136, and further specifies that the computing device is a mobile computing device.[324] Example 138 includes the subject matter of any of Examples 132-136, and further specifies that the computing device is a laptop computing device.[325] Example 139 includes the subject matter of any of Examples 132-136, and further specifies that the computing device is a desktop computing device.[326] Example 140 includes the subject matter of any of Examples 132-136, and further specifies that the computing device is a wearable computing device.[327] Example 141 includes the
subject matter of any of Examples 132-136, and further specifies that the computing device is a server computing device.[328] Example 142 includes the subject matter of any of Examples 132-136, and further specifies that the computing device is a vehicle computing device.[329] Example 143 includes the subject matter of any of Examples 132-142, and further specifies that the computing device further includes a display communicatively coupled to the circuit board.[330] Example 144 includes the subject matter of any of Examples 132-143, and further specifies that the computing device further includes an antenna communicatively coupled to the circuit board.[331] Example 145 includes the subject matter of any of Examples 132-144, and further specifies that the computing device further includes a housing around the die and the circuit board.[332] Example 146 includes the subject matter of Example 145, and further specifies that the housing includes a plastic material.[333] Example 147 includes any of the manufacturing methods disclosed herein.
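The √2 relationship between line width roughness (LWR) and line edge roughness (LER) invoked throughout the examples above follows from treating the two edges of a line as statistically independent, so the width variance is the sum of the two edge variances: Var(width) = 2·LER², hence LWR = LER·√2. A minimal numerical sketch of that relation (illustrative only; the Gaussian edge model and all numbers are assumptions, not values from this disclosure):

```python
import math
import random

random.seed(42)
n = 200_000
ler = 1.0  # per-edge roughness (standard deviation), arbitrary units

# Model the two edges of a line as independent Gaussian deviations
# sampled at many points along the line (nominal width 20 units).
widths = [random.gauss(20.0, ler) - random.gauss(0.0, ler) for _ in range(n)]

mean_w = sum(widths) / n
lwr = math.sqrt(sum((w - mean_w) ** 2 for w in widths) / n)

# Independent edges: Var(width) = Var(left) + Var(right) = 2 * ler**2,
# so lwr / ler converges to sqrt(2) ~ 1.414.
print(round(lwr / ler, 2))
```

Correlated edges (as produced, for example, by spacer-based pitch division, where one spacer defines both edges of a line) break this equality, which is why an LWR not equal to LER·√2 can serve as a fingerprint of the patterning technique, as in Examples 82-83 and 101.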
An integrated circuit (IC) (100) includes a substrate (102) having functional circuitry (106) for realizing at least one circuit function, configured together with at least one high-voltage isolation component including a top metal feature (132) above the substrate (102). A crack suppressing dielectric structure (155), including at least a crack-resistant dielectric layer, is on at least a top of the top metal feature (132). At least one dielectric passivation overcoat (PO) layer (160) is on an outer portion of the top metal feature (132).
1. A method of manufacturing an integrated circuit (IC), comprising:
providing a substrate having functional circuitry for realizing at least one circuit function, the functional circuitry having at least one high-voltage isolation component including a top metal feature above the substrate;
depositing a crack suppression dielectric structure including at least one crack-resistant dielectric layer on the top metal feature;
patterning and etching at least the top metal feature;
depositing at least one dielectric passivation overcoat (PO) layer on at least a top of the top metal feature; and
planarizing the dielectric PO layer.
2. The method of claim 1, wherein the crack suppression dielectric structure is deposited on the top metal feature before the patterning and etching of the top metal feature.
3. The method of claim 1, wherein the crack suppression dielectric structure is deposited after the patterning and etching of the top metal feature, such that the crack suppression dielectric structure is also positioned on sidewalls of the top metal feature.
4. The method of claim 1, wherein the crack-resistant dielectric layer includes a silicon nitride (SiN) layer or a silicon carbide (SiC) layer.
5. The method of claim 4, wherein the SiN layer or the SiC layer has a thickness of 200 to 800 Å and a compressive stress of 50 to 500 MPa.
6. The method of claim 1, wherein the crack-resistant dielectric layer comprises a silicon nitride (SiN) layer deposited by a plasma-enhanced chemical vapor deposition process.
7. The method of claim 6, wherein depositing the crack suppression dielectric structure further comprises depositing a bottom silicon oxide layer before depositing the SiN layer, and depositing a top silicon oxide layer after depositing the SiN layer.
8. The method of claim 1, wherein planarizing the dielectric PO layer includes chemical mechanical polishing (CMP).
9. The method of claim 1, wherein
the high-voltage isolation component includes a high-voltage capacitor.
10. The method of claim 1, wherein the high-voltage isolation component includes a transformer, and wherein the top metal feature includes a top electrode that is inductively coupled to an externally positioned inductor.
11. The method of claim 1, further comprising etching openings through the dielectric PO layer and through the crack suppression dielectric structure to reach the top metal feature.
12. An integrated circuit (IC), comprising:
a substrate having functional circuitry for realizing at least one circuit function, the functional circuitry having at least one high-voltage isolation component including a top metal feature above the substrate;
a crack suppression dielectric structure including at least a crack-resistant dielectric layer on at least a top of the top metal feature; and
at least one dielectric passivation overcoat (PO) layer on an outer portion of the top metal feature.
13. The IC of claim 12, wherein the crack suppression dielectric structure is also located on sidewalls of the top metal feature.
14. The IC of claim 12, wherein the crack-resistant dielectric layer comprises a silicon nitride (SiN) layer or a silicon carbide (SiC) layer.
15. The IC of claim 14, wherein the SiN layer or the SiC layer has a thickness of 200 to 800 Å and a compressive stress of 50 to 500 MPa.
16. The IC of claim 12, wherein the crack-resistant dielectric layer includes a silicon nitride (SiN) layer between a top silicon oxide layer and a bottom silicon oxide layer.
17. The IC of claim 12, wherein the high-voltage isolation component includes a high-voltage capacitor.
18. The IC of claim 12, wherein the high-voltage isolation component includes a transformer, and wherein the top metal feature includes a top electrode that is inductively coupled to an externally positioned inductor.
Crack Suppression Structures for High-Voltage Isolation Components

Technical Field
The present disclosure generally relates to the manufacture of integrated circuit (IC) devices having high-voltage components, such as capacitors or transformers, that include crack suppression structures.

Background
Some ICs include high-voltage (HV) isolation components, such as capacitors, or transformers that typically include a first spiral inductor and a second spiral inductor, where the first spiral inductor functions to magnetically excite the second spiral inductor. The HV isolation component is located above the semiconductor surface within the metal stack, usually with a top metal feature at the top metal layer directly below the passivation layer(s).
Chemical mechanical polishing/planarization (CMP) is a widely used process that combines chemical and mechanical forces to smooth a surface. The CMP process uses an abrasive and corrosive chemical slurry, together with a polishing pad and a retaining ring, which are usually larger than the diameter of the wafer. The pad and the wafer are pressed together by a dynamic polishing head and held in place by the retaining ring. The dynamic polishing head rotates about different axes of rotation to remove material from the surface of the wafer, and tends to even out any irregular topography, thereby planarizing the wafer.
For example, when CMP is used to planarize a passivation layer stack that typically includes silicon oxide (e.g., silicon oxynitride on silicon oxide), CMP can create cracks in the polished dielectric layer. CMP is usually performed on the silicon oxide, and then the silicon oxynitride portion of the passivation stack is deposited.
The CMP process conditions can be changed to minimize the occurrence of such silicon oxide layer cracks.

Summary
This Summary is provided to introduce, in simplified form, a brief selection of disclosed concepts that are further described below in the Detailed Description including the provided drawings. This Summary is not intended to limit the scope of the claimed subject matter.
Disclosed aspects include an IC that includes a substrate with functional circuitry for implementing at least one circuit function, the functional circuitry having at least one high-voltage isolation component including a top metal feature above the substrate. A crack suppression dielectric structure including at least a crack-resistant dielectric layer is on at least a top of the top metal feature. At least one dielectric passivation overcoat (PO) layer is on an outer portion of the top metal feature.

Brief Description of the Drawings
Reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, in which:
FIG. 1A depicts a cross-sectional view of a portion of an example IC with a high-voltage isolation capacitor (HV ISO capacitor) that includes the disclosed dielectric crack suppression structure on top of the top plate of the HV ISO capacitor.
FIG. 1B depicts a cross-sectional view of a portion of an example IC with an HV ISO capacitor including the disclosed dielectric crack suppression structure on the top of the top plate and along the sidewalls of the HV ISO capacitor.
FIG. 1C shows an example 3-layer crack suppression structure.
FIGS. 2A to 2F are cross-sectional views illustrating steps in a process of an example method for forming an IC having an HV ISO capacitor, according to example aspects.
FIG. 2G is a cross-sectional view of a step in a process, corresponding to FIG. 2F for the HV ISO capacitor, of an example method for forming an IC having an HV transformer, according to an example aspect.

Detailed Description
Example aspects are described with reference to the drawings, in which like reference numerals are used to indicate similar or equivalent elements. The illustrated ordering of acts or events should not be considered limiting, as certain acts or events may occur in a different order and/or concurrently with other acts or events. In addition, certain illustrated acts or events may not be required to implement a method according to the present disclosure.
Also, the terms "coupled to" or "coupled with" (and the like) as used herein are intended to describe either an indirect or direct electrical connection, without further qualification. Thus, if a first device "couples" to a second device, the connection can be through a direct electrical connection with only parasitics in the path, or through an indirect electrical connection via intervening items including other devices and connections. For indirect coupling, the intervening item generally does not modify the information of a signal, but may adjust its current level, voltage level, and/or power level.
Although effective to a degree, it is recognized that CMP process solutions aimed at reducing dielectric cracks largely cannot eliminate the cracks caused by the CMP process. It is also recognized that dielectric cracks generated during CMP can extend to and terminate at the underlying metal layer, which can cause device failure or performance degradation. One example of device failure occurs when the dilute hydrofluoric acid (HF) clean after CMP penetrates a dielectric crack in the passivation layer, thereby eroding the underlying top metal and forming a void in the top metal.
Such voids can cause device failures, including field failures (such as reliability failures), or IC performance degradation.
The present disclosure adds a dielectric crack suppression structure, which includes a crack-resistant dielectric layer between the passivation layer(s) and the top metal layer of the HV isolation component to be crack-protected; the crack-resistant dielectric layer significantly reduces the incidence of dielectric cracks reaching the top metal that can cause IC failure or performance degradation. HV isolation components are usually designed to withstand a voltage of at least 100 volts. For example, the crack-resistant dielectric layer may include a silicon nitride (SiN) layer, which may be deposited on top of the top metal before forming the passivation layer(s), and which acts as a crack-arresting layer to protect the top metal of the HV isolation component from chemical attack when cracks form during CMP of the passivation layer(s).
FIG. 1A depicts a cross-sectional view of a portion of an example IC 100 having an HV ISO capacitor 104 that includes the disclosed dielectric crack suppression structure 155, shown by way of example as a single crack-resistant dielectric layer on the outer portion of the top of the top plate 132 of the HV ISO capacitor 104. The dielectric crack suppression structure 155 is not shown in the inner window opened through the dielectric passivation overcoat (PO) layer 160 because it is removed during the etching of the PO layer 160.
The crack-resistant dielectric layer usually includes SiN, having, for example, a thickness of 300 to 500 Å and a compressive stress of 100 to 200 MPa. The crack-resistant dielectric layer may also include other crack-resistant materials, such as SiC. The dielectric crack suppression structure 155 may also include two or more layers, such as the three-layer crack suppression structure shown in FIG. 1C described below, in which the crack-resistant dielectric layer 155b is located between a top layer 155c and a bottom layer 155a. The top plate 132 may include TiN, aluminum (Al), or TaN. The bottom plate of the HV ISO capacitor 104 is shown by reference 130. The bottom plate 130 may include, for example, aluminum or copper or alloys thereof.
The IC 100 may be provided as a standalone IC or as part of a system on chip (SOC) or the like. Other configurations of the IC 100 (for example, a hybrid circuit) are within the scope of this example. The IC 100 is formed on a substrate 102 such as a silicon wafer. The HV ISO capacitor 104 is configured to provide galvanic isolation between two voltage domains of an IC or system with different voltage levels. For example, a low-voltage component such as metal-oxide-semiconductor (MOS) transistor 106, which can operate at a voltage of about 24 volts or less, is depicted as having a gate dielectric layer 110, typically less than 70 nanometers thick, with a gate electrode 113 thereon. The MOS transistor 106 is a circuit element of functional circuitry formed in and on the substrate 102 that, together with the HV ISO capacitor 104, is configured to implement at least one circuit function of an analog (for example, an amplifier, a power converter, or a power field-effect transistor (FET)), radio-frequency, digital, or memory function, the functional circuitry including transistors, and usually diodes, resistors, capacitors, etc.
A field oxide (FOX) layer or region 112 may be formed in the substrate 102 (e.g., near or adjacent to the top surface of the substrate) to laterally electrically isolate the components of the IC 100. A pre-metal dielectric (PMD) layer 114 is formed over the substrate 102, including any FOX layers or regions, before subsequent metal layers (e.g., metal levels 118-1 to 118-N) are deposited.
Filled vias 116 may be provided through the PMD layer 114 to provide electrical connections for low-voltage components such as the MOS transistor 106 and other components or circuit parts (not specifically shown in FIG. 1A) of the microelectronic device 100A.
A plurality of metal levels 118-1 (the bottom or "first" metal level) to 118-N (the top metal level) are arranged above the PMD layer 114 and may include metal interconnects 120 connected to the MOS transistor 106 and any additional components, devices, or circuit parts. Inter-level dielectric (ILD) layers 122a, 122b, 122c (for example, a dielectric material or composition based on silicon dioxide, etc.) are disposed between the metal interconnects 120 in each metal level. Respective via levels 124 are disposed between the metal levels 118-1 to 118-N, where an example via level 124 may include metal vias 126 connecting the metal interconnects 120. In one arrangement, similar materials can be used to form the various dielectric layers in similar process flows. It should be understood that other dielectric materials for the ILD layers 122a, 122b, 122c, such as low-dielectric-constant (low-k) materials, are within the scope of this embodiment, for example, FSG (fluorinated silicate glass, k = 3.6), OSG (organosilicate glass, k = 2.9), and ULK (ultra-low-k dielectric material, k = 2.5). The ILD layers may include cap layers of different dielectric materials (for example, SiN) and etch stop layers.
The bottom plate 130 of the HV ISO capacitor 104 is disposed in one of the metal levels, for example, in the first metal level 118-1 as depicted in FIG. 1A.
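As a rough illustration of the tradeoff a vertically built isolation capacitor in such a metal/ILD stack faces, a parallel-plate estimate relates the dielectric thickness between the plates to capacitance and ideal breakdown voltage. All numbers in this sketch (plate size, relative permittivity, breakdown field) are illustrative textbook-typical assumptions for a silicon-dioxide-based dielectric, not values from this disclosure:

```python
# Back-of-envelope parallel-plate model of an HV isolation capacitor.
# Illustrative assumptions only, not figures from this disclosure.
EPS0 = 8.854e-12   # F/m, vacuum permittivity
K_OX = 3.9         # relative permittivity of silicon dioxide (typical)
E_BD = 5e8         # V/m, assumed intrinsic breakdown field (~5 MV/cm)

def iso_cap(plate_side_um: float, dielectric_um: float):
    """Return (capacitance in fF, ideal breakdown voltage in V)."""
    area_m2 = (plate_side_um * 1e-6) ** 2
    d_m = dielectric_um * 1e-6
    cap_fF = EPS0 * K_OX * area_m2 / d_m * 1e15
    return cap_fF, E_BD * d_m

# Thicker dielectric trades capacitance for standoff voltage.
for d in (2.0, 10.0):
    c, v = iso_cap(plate_side_um=100.0, dielectric_um=d)
    print(f"d = {d:4.1f} um -> C = {c:6.1f} fF, ideal breakdown = {v:5.0f} V")
```

Practical designs derate far below the intrinsic breakdown field to meet surge and lifetime requirements, which is consistent with the much thicker dielectrics (several μm and more) described in this disclosure for high operating voltages.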
The top plate 132 of the HV ISO capacitor 104, formed in the top metal level 118-N, is below the PO layer 160, such as a PO silicon oxynitride layer on another PO layer 156 (e.g., a PO silicon oxide layer, which is depicted as being above the ILD layer 122c). The thickness of the PO layer 156 after CMP is typically 1.0 to 2.0 μm, for example 1.5 μm as measured above the top metal level 118-N. The PO layer 160 is generally 2.5 μm to 3.0 μm, for example about 2.8 μm, and includes silicon oxynitride. A single PO layer is also possible, but a PO including only silicon oxide cannot provide a moisture barrier, and when the top metal level 118-N includes aluminum, a PO including only silicon nitride may impose excessive stress.
The bottom plate 130 and top plate 132 of the HV ISO capacitor 104 are arranged vertically together to operate as an HV capacitor, for example, in the example embodiment of the IC 100, to provide suitable galvanic isolation with desired breakdown characteristics; according to some embodiments, a single capacitor has a typical surge capability up to a 10 kV peak, and series capacitors (reinforced isolation) have a surge capability up to a 17-24 kV peak. The dielectric of the HV ISO capacitor 104, including the ILD layers 122a, 122b, and 122c, may be formed to have a total thickness of at least 2 micrometers (μm), determined by the desired operating voltage of the HV ISO capacitor 104 between its plates 130, 132 and possibly the substrate 102. For example, an embodiment of the HV ISO capacitor 104 in which the top plate 132 is designed to operate at 750 volts may have a capacitor dielectric with a thickness of 8 μm to 14 μm.
FIG. 1B depicts a cross-sectional view of a portion of an example IC 150 having an HV ISO capacitor 104' that includes the disclosed dielectric crack suppression structure 155' on the top of the top plate 132 and along the sidewalls of the HV ISO capacitor 104'. Like the HV ISO capacitor 104 in FIG. 1A, the dielectric crack suppression structure 155' is not present in the inner window opened through the PO layer 160 because it is removed during the etching of the PO layer 160. The dielectric crack suppression structure may be deposited on the top metal feature (the top plate 132 in FIG. 1B) before patterning and etching of the top metal feature. In this case, a single mask can be used; the crack-resistant dielectric layer (e.g., SiN) and top metal etches will generally use different chemistries, and the crack suppression structure 155' will then be only on the top of the top metal feature (as shown in FIG. 1A). The crack suppression dielectric structure can also be deposited after the patterning and etching of the top metal feature. In this case, the crack suppression dielectric structure is also positioned on the top of the top metal feature, on its sidewalls, and above the layer 122c between the metal features.
FIG. 1C shows an example 3-layer crack suppression structure, shown as a top layer 155c on a crack-resistant dielectric layer 155b on a bottom layer 155a. The 3-layer crack suppression structure may include a silicon oxide layer used as the adhesion (bottom) layer 155a, SiN used as the crack-resistant dielectric layer 155b, and a cap oxide layer 155c to provide a hydrophilic surface for metal patterning. A specific example 3-layer dielectric crack suppression structure stack includes the layer 155a as a 50 Å silicon oxide layer, the crack-resistant dielectric layer 155b as a 300 to 500 Å SiN layer, and the layer 155c as a 50 Å silicon oxide layer.
FIG. 2A illustrates, in a cross-sectional view, the structure of the in-process HV ISO capacitor on the IC, shown at the beginning of the formation of the bottom plate 130.
FIG. 2A depicts the semiconductor substrate 102; processing layers 212 thereon, representing a plurality of layers formed during front-end processing in previously performed conventional semiconductor processing steps; the PMD layer 114 on the processing layers; and the metal level 118-1 on the PMD layer 114. The filled vias 116 are formed through the PMD layer 114. In the final HV ISO capacitor, the metal level 118-1 will be patterned into the bottom plate 130. In the processing layers 212, previous processing steps, such as photolithography, etching, ion implantation, and diffusion, are used to form various devices (not shown for simplicity) in the substrate 102, and these devices can be interconnected, such as transistors (including MOS transistors, bipolar transistors, or FETs other than MOS), diodes, resistors, inductors, capacitors, etc.
The metal level 118-1 may be, for example, aluminum or copper or an alloy thereof, the metal being one used in the particular semiconductor manufacturing process. Single and dual damascene copper or copper alloy materials can be used to form the metal level 118-1. However, FIGS. 2B to 2G show the use of a non-damascene metal layer that can be made of an aluminum metal layer, which, unlike copper, can be directly etched.
FIG. 2B shows the in-process HV ISO capacitor on the IC after patterning the metal level 118-1, including the formation of the bottom plate 130 of the HV ISO capacitor, followed by the deposition and subsequent planarization of the ILD layer 122a. FIG. 2C shows the in-process HV ISO capacitor on the IC after forming several more metal interconnection levels (including the formation of filled vias 116 in the ILD layer, followed by metal patterning, etc.), the additional metal interconnection levels shown as 118-2, 118-3 separated by the ILDs 122a, 122b, 122c.
In the area where the HV ISO capacitor is formed above the bottom plate 130, only dielectric is present, as shown by the ILD layers 122a, 122b, and 122c. The metal level of the bottom plate 130 is shown as 118-1.

FIG. 2D shows the in-process HV ISO capacitor on the IC after the top metal level 118-4, including the top plate 132, is formed and patterned. FIG. 2E shows the in-process HV ISO capacitor on the IC after a dielectric crack suppression structure, shown as the crack-resistant dielectric layer 155b, is formed on top of the top plate 132 and along the sidewalls of the HV ISO capacitor. The dielectric layer(s) used for the dielectric crack suppression structure may be deposited by a chemical vapor deposition process, such as plasma enhanced CVD (PECVD), or by high density plasma (HDP) deposition. FIG. 2F shows the HV ISO capacitor 104" on the IC after the PO layer 160 over the crack-resistant dielectric layer 155b is formed and then planarized with chemical mechanical planarization (CMP).

The crack 291 shown extends from the surface of the PO layer 160 (and may be due to the CMP process), and it stops at the crack-resistant dielectric layer 155b. As described above, during the etching of the PO layer 160, the crack-resistant dielectric layer 155b, or the corresponding layer of a dielectric crack suppression structure including two or more layers, is generally removed in the inner window opened in the PO layer 160. Not shown is a hole etched through the PO layer 160 over a portion of the top plate 132 to allow a bonding wire to be bonded thereto. Although the contact with the bottom plate 130 is not shown, the contact is usually made by a metal interconnect 120 extending from above the bottom plate to nearby circuit elements (such as digitizers or modulators).
The connection to the bottom plate 130 can be an input node or an output node, depending on whether the HV ISO capacitor is in the transmitter channel or the receiver channel.

FIG. 2G is a cross-sectional view, corresponding to the step shown in FIG. 2F for the HV ISO capacitor, of a step in an example method for forming an IC having an HV transformer 250, according to an example aspect. The HV transformer 250 includes a top electrode 132a, a top inductor coil 133, a bottom electrode 130a, and a bottom inductor coil 133'. In the case of a magnetic sensor, only one inductor coil is needed on the top.

The disclosed aspects can be used to form semiconductor dies that can be integrated into various assembly flows to form a variety of different devices and related products. The semiconductor die may include various elements and/or layers thereon, including barrier layers, dielectric layers, device structures, active elements, and passive elements, including source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, etc. In addition, semiconductor dies can be formed by a variety of processes, including bipolar, insulated gate bipolar transistor (IGBT), CMOS, BiCMOS, and MEMS processes.

Those skilled in the art to which the present disclosure relates will understand that many other aspects are possible within the scope of the claimed invention, and that further additions, deletions, substitutions, and modifications can be made to the described aspects without departing from the scope of the present disclosure.
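As a rough illustration of the parallel-plate structure described above, the capacitance of an HV ISO capacitor formed by the top plate 132 and bottom plate 130 across the ILD stack can be estimated with the ideal parallel-plate formula. The plate area, ILD thickness, and dielectric constant below are invented for illustration and are not taken from the text:

```python
# Hedged, illustrative estimate: ideal parallel-plate capacitance
# C = k * eps0 * A / d, applied to a capacitor whose plates are the
# top and bottom metal levels separated by the ILD stack.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_c(k, area_m2, separation_m):
    """Capacitance of an ideal parallel-plate capacitor, in farads."""
    return k * EPS0 * area_m2 / separation_m

# Assumed example: a 100 um x 100 um plate over ~8 um of SiO2-based
# ILD (k ~ 4.0); fringing fields are ignored.
c = parallel_plate_c(4.0, 100e-6 * 100e-6, 8e-6)
print(f"C ~ {c * 1e15:.1f} fF")
```

A thicker ILD stack between the plates lowers the capacitance but raises the breakdown voltage, which is the design trade-off for a high-voltage isolation capacitor.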
A semiconductor device includes a first metallization level, a first diffusion barrier layer, a first etch stop layer, a second etch stop layer, a dielectric layer, and an opening extending through the dielectric layer, the first and second etch stop layers, and the first diffusion barrier layer. The first diffusion barrier layer is disposed over the first metallization level. The second etch stop layer is disposed over the first diffusion barrier layer, and the first etch stop layer is disposed on the second etch stop layer with a first interface therebetween. The dielectric layer is disposed over the first etch stop layer. The opening can also have rounded corners. A sidewall diffusion barrier layer can be disposed on sidewalls of the opening, and the sidewall diffusion barrier layer is formed from the same material as the first diffusion barrier layer. The first etch stop layer and the diffusion barrier layers can be formed from silicon nitride, and the second etch stop layer can be formed from silicon oxide. Metal within the opening forms a second metal feature, and the metal can comprise copper or a copper alloy. A method of manufacturing the semiconductor device is also disclosed.
What is claimed is:

1. A semiconductor device, comprising:

a first metallization level, said first metallization level including a first metal feature;

a first diffusion barrier layer, comprising a first material, disposed over said first metallization level;

a second etch stop layer, comprising a second material, disposed over said first diffusion barrier layer;

a first etch stop layer disposed on said second etch stop layer with a first interface therebetween;

a dielectric layer disposed over said first etch stop layer;

an opening having side surfaces extending through said dielectric layer, said first and second etch stop layers, and said first diffusion barrier layer to said first metal feature;

metal within said opening forming a second metal feature; and

a second metallization level above said dielectric layer,

wherein said first material is different from said second material, said first diffusion barrier layer, said second etch stop layer, said first etch stop layer and said dielectric layer form a first interlayer level, said first metallization level and said second metallization level form adjacent metallization levels, and said first interlayer level separates said adjacent metallization levels.

2. The semiconductor device according to claim 1, wherein said opening is a via opening, a trench, or a dual damascene opening comprising a lower via opening in communication with an upper trench; and wherein said second metal feature comprises a via, a line, or a combination of a lower via in contact with an upper line, respectively.

3. The semiconductor device according to claim 1, wherein said dielectric layer has a dielectric constant less than about 3.5.

4. The semiconductor device according to claim 1, further comprising a sidewall diffusion barrier layer disposed on said side surfaces.

5.
The semiconductor device according to claim 4, further comprising a second diffusion barrier layer disposed on said sidewall diffusion barrier layer with a second interface therebetween and on said first metal feature.

6. The semiconductor device according to claim 4, wherein said sidewall diffusion barrier layer is formed from the same material as said first diffusion barrier layer.

7. The semiconductor device according to claim 1, wherein said first material is silicon nitride.

8. The semiconductor device according to claim 1, wherein said second material is silicon oxide.

9. The semiconductor device according to claim 8, wherein said first etch stop layer comprises silicon nitride.
RELATED APPLICATION

This application contains subject matter related to the subject matter disclosed in U.S. patent application Ser. Nos. 09/776,750 and 09/776,747, both filed on Feb. 6, 2001.

1. Field of the Invention

The present invention relates to the manufacturing of semiconductor devices, and more particularly, to copper and copper alloy metallization in semiconductor devices.

2. Background of the Invention

The escalating requirements for high density and performance associated with ultra large scale integration (ULSI) semiconductor device wiring are difficult to satisfy in terms of providing sub-micron-sized, low resistance-capacitance (RC) metallization patterns. This is particularly applicable when the sub-micron features, such as vias, contact areas, lines, trenches, and other shaped openings or recesses, have high aspect ratios (depth-to-width) due to miniaturization.

Conventional semiconductor devices typically comprise a semiconductor substrate, usually of doped monocrystalline silicon (Si), and a plurality of sequentially formed inter-metal dielectric layers and electrically conductive patterns. An integrated circuit is formed therefrom containing a plurality of patterns of conductive lines separated by interwiring spacings, and a plurality of interconnect lines, such as bus lines, bit lines, word lines and logic interconnect lines. Typically, the conductive patterns of vertically spaced metallization levels are electrically interconnected by vertically oriented conductive plugs filling via holes formed in the inter-metal dielectric layer separating the metallization levels, while other conductive plugs filling contact holes establish electrical contact with active device regions, such as a source/drain region of a transistor, formed in or on a semiconductor substrate. Conductive lines formed in trench-like openings typically extend substantially parallel to the semiconductor substrate.
Semiconductor devices of such type according to current technology may comprise five or more levels of metallization to satisfy device geometry and microminiaturization requirements.

A commonly employed method for forming conductive plugs for electrically interconnecting vertically spaced metallization levels is known as "damascene"-type processing. Generally, this process involves forming a via opening in the inter-metal dielectric layer or interlayer dielectric (ILD) between vertically spaced metallization levels which is subsequently filled with metal to form a via electrically connecting the vertically spaced apart metal features. The via opening is typically formed using conventional lithographic and etching techniques. After the via opening is formed, the via is filled with a conductive material, such as tungsten (W), using conventional techniques, and the excess conductive material on the surface of the inter-metal dielectric layer is then typically removed by chemical mechanical planarization (CMP).

A variant of the above-described process, termed "dual damascene" processing, involves the formation of an opening having a lower contact or via opening section which communicates with an upper trench section. The opening is then filled with a conductive material to simultaneously form a contact or via in contact with a conductive line. Excess conductive material on the surface of the inter-metal dielectric layer is then removed by CMP. An advantage of the dual damascene process is that the contact or via and the upper line are formed simultaneously.

High performance microprocessor applications require rapid speed of semiconductor circuitry, and the integrated circuit speed varies inversely with the resistance and capacitance of the interconnection pattern.
As integrated circuits become more complex and feature sizes and spacings become smaller, the integrated circuit speed becomes less dependent upon the transistor itself and more dependent upon the interconnection pattern. If the interconnection node is routed over a considerable distance, e.g., hundreds of microns or more, as in submicron technologies, the interconnection capacitance limits the circuit node capacitance loading and, hence, the circuit speed. As integration density increases and feature size decreases, in accordance with submicron design rules, the rejection rate due to integrated circuit speed delays significantly reduces manufacturing throughput and increases manufacturing costs.

One way to increase the circuit speed is to reduce the resistance of a conductive pattern. Conventional metallization patterns are typically formed by depositing a layer of conductive material, notably aluminum (Al) or an alloy thereof, and etching, or by damascene techniques. Al is conventionally employed because it is relatively inexpensive, exhibits low resistivity and is relatively easy to etch. However, as the size of openings for vias/contacts and trenches is scaled down to the submicron range, step coverage problems result from the use of Al. Poor step coverage causes high current density and enhanced electromigration. Moreover, low dielectric constant polyimide materials, when employed as inter-metal dielectric layers, create moisture/bias reliability problems when in contact with Al, and these problems have decreased the reliability of interconnections formed between various metallization levels.

One approach to improved interconnection paths in vias involves the use of completely filled plugs of a metal, such as W. Accordingly, many current semiconductor devices utilizing VLSI (very large scale integration) technology employ Al for the metallization level and W plugs for interconnections between the different metallization levels.
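The resistance argument above can be made concrete with a back-of-envelope calculation. The resistivity values below are commonly cited bulk figures (thin-film values run somewhat higher); the line dimensions and lumped capacitance are illustrative assumptions, not taken from the text:

```python
# Illustrative sketch: interconnect line resistance R = rho * L / (w * t)
# and a simple lumped RC delay estimate, comparing Al, Cu, and W.
RHO = {"Al": 2.7e-8, "Cu": 1.7e-8, "W": 5.6e-8}  # bulk resistivity, ohm-m

def line_resistance(metal, length_m, width_m, thickness_m):
    """Resistance of a rectangular interconnect line, R = rho*L/(w*t)."""
    return RHO[metal] * length_m / (width_m * thickness_m)

# Assumed example line: 500 um long, 0.25 um wide, 0.5 um thick.
L, w, t = 500e-6, 0.25e-6, 0.5e-6
for metal in ("Al", "Cu", "W"):
    R = line_resistance(metal, L, w, t)
    # Assume 100 fF of total line capacitance for the lumped RC estimate.
    delay_ps = R * 100e-15 * 1e12
    print(f"{metal}: R = {R:.0f} ohm, RC ~ {delay_ps:.1f} ps")
```

The ordering of the results (Cu lowest, W highest) reflects the point made above: replacing W plugs and Al lines with Cu lowers the R term of the RC product.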
The use of W, however, is attendant with several disadvantages. For example, most W processes are complex and expensive. Furthermore, W has a high resistivity, which decreases circuit speed. Moreover, Joule heating may enhance electromigration of adjacent Al wiring. Still a further problem is that W plugs are susceptible to void formation, and the interface with the metallization level usually results in high contact resistance.

Another attempted solution for the Al plug interconnect problem involves depositing Al using chemical vapor deposition (CVD) or physical vapor deposition (PVD) at elevated temperatures. The use of CVD for depositing Al is expensive, and hot PVD Al deposition requires very high process temperatures incompatible with manufacturing integrated circuitry.

Copper (Cu) and Cu-based alloys are particularly attractive for use in VLSI and ULSI semiconductor devices, which require multi-level metallization levels. Cu and Cu-based alloy metallization systems have very low resistivities, which are significantly lower than W and even lower than those of previously preferred systems utilizing Al and its alloys. Additionally, Cu has a higher resistance to electromigration. Furthermore, Cu and its alloys enjoy a considerable cost advantage over a number of other conductive materials, notably silver (Ag) and gold (Au).
Also, in contrast to Al and refractory-type metals (e.g., titanium (Ti), tantalum (Ta) and W), Cu and its alloys can be readily deposited at low temperatures by well-known "wet" plating techniques, such as electroless and electroplating techniques, at deposition rates fully compatible with the requirements of manufacturing throughput.

Electroless plating of Cu generally involves the controlled auto-catalytic deposition of a continuous film of Cu or an alloy thereof on a catalytic surface by the interaction of at least a Cu-containing salt and a chemical reducing agent contained in a suitable solution, whereas electroplating comprises employing electrons supplied to an electrode (comprising the surface(s) to be plated) from an external source (i.e., a power supply) for reducing Cu ions in solution and depositing reduced Cu metal atoms on the plating surface(s). In either case, a nucleation/seed layer is required for catalysis and/or deposition on the types of substrates contemplated herein. Finally, while electroplating requires a continuous nucleation/seed layer, very thin and discontinuous islands of a catalytic metal may be employed with electroless plating.

Another technique to increase the circuit speed is to reduce the capacitance of the inter-metal dielectric layers. Dielectric materials such as silicon oxide (SiO2) have been commonly used to electrically separate and isolate or insulate conductive elements of the integrated circuit from one another. However, as the spacing between these conductive elements in the integrated circuit structure has become smaller, the capacitance between such conductive elements because of the dielectric being formed from silicon oxide is more of a concern.
This capacitance negatively affects the overall performance of the integrated circuit because of increased power consumption, reduced speed of the circuitry, and cross-coupling between adjacent conductive elements.

A response to the problem of capacitance between adjacent conductive elements caused by use of silicon oxide dielectrics has led to the use of other dielectric materials, commonly known as low-k dielectrics. Whereas silicon oxide has a dielectric constant of approximately 4.0, many low-k dielectrics have dielectric constants less than 3.5. Examples of low-k dielectric materials include organic or polymeric materials. Another example is porous, low density materials in which a significant fraction of the bulk volume contains air, which has a dielectric constant of approximately 1. The properties of these porous materials are proportional to their porosity. For example, at a porosity of about 80%, the dielectric constant of a porous silica film, i.e., porous SiO2, is approximately 1.5. Still another example of a low-k dielectric material is carbon doped silicon oxide wherein at least a portion of the oxygen atoms bonded to the silicon atoms are replaced by one or more organic groups such as, for example, an alkyl group such as a methyl (CH3-) group.

A problem associated with the use of many low-k dielectric materials is that these materials can be damaged by exposure to oxidizing or "ashing" systems, which remove a resist mask used to form openings, such as vias, in the low-k dielectric material. This damage can cause the surface of the low-k dielectric material to become a water absorption site, if and when the damaged surface is exposed to moisture. Subsequent processing, such as annealing, can result in water vapor formation, which can interfere with subsequent filling with a conductive material of a via/opening or a damascene trench formed in the dielectric layer.
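The ~1.5 dielectric constant quoted above for an 80%-porous silica film is roughly what a simple volume-weighted mixing rule predicts. The linear rule below is one common approximation, not the model used in the text; real films also depend on pore shape and connectivity:

```python
# Illustrative rule-of-mixtures estimate: effective dielectric constant of
# a porous film as a volume-weighted average of the matrix and the air
# (k ~ 1) filling its pores.
K_AIR = 1.0

def porous_k_linear(k_matrix, porosity):
    """Linear mixing: k_eff = porosity*k_air + (1 - porosity)*k_matrix."""
    return porosity * K_AIR + (1.0 - porosity) * k_matrix

# Porous SiO2 (k ~ 4.0 when dense) at 80% porosity comes out near the
# ~1.5 value quoted in the text.
for p in (0.0, 0.4, 0.8):
    print(f"porosity {p:.0%}: k_eff ~ {porous_k_linear(4.0, p):.2f}")
```

The linear rule gives 1.6 at 80% porosity; more refined mixing models (e.g., Bruggeman-type) predict slightly lower values, consistent with the quoted ~1.5.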
For this reason, the upper surface of the low-k dielectric material is typically protected from damage during removal of the resist mask by a capping layer, such as silicon oxide, disposed over the upper surface.

A number of different variations of a damascene process using low-k dielectrics have been employed during semiconductor manufacturing. With reference to FIGS. 1A-1H, an example of a damascene process for forming vias between vertically spaced metallization levels, according to conventional techniques, will be described. This process can be repeated to form multiple metallization levels, i.e., two or more, stacked one on top of another.

In FIG. 1A, a first etch stop layer 12 is deposited over a first metallization level 10. The first etch stop layer 12 acts as a passivation layer that protects the first metallization level 10 from oxidation and contamination and prevents the material of the metallization level 10 from diffusing into a subsequently formed dielectric layer. The first etch stop layer 12 also acts as an etch stop during subsequent etching of the dielectric layer. A typical material used as an etch stop is silicon nitride, and approximately 500 angstroms of silicon nitride is typically deposited over the metallization level 10 to form the first etch stop layer 12. An illustrative process used for depositing silicon nitride is plasma enhanced CVD (PECVD).

In FIG. 1B, a first low-k dielectric layer 14 is deposited over the first etch stop layer 12. The majority of low-k dielectric materials used for a dielectric layer are based on organic or inorganic polymers. The liquid dielectric material is typically spun onto the surface under ambient conditions to a desired depth.
This is typically followed by a heat treatment to evaporate solvents present within the liquid dielectric material and to cure the film to form the first low-k dielectric layer 14.

After formation of the first low-k dielectric layer 14, a capping layer 13 can be formed over the first low-k dielectric layer 14. The function of the capping layer 13 is to protect the first low-k dielectric layer 14 from the process that removes a subsequently formed resist layer. The capping layer 13 can also be used as a mechanical polishing stop to prevent damage to the first low-k dielectric layer 14 during subsequent polishing away of conductive material that is deposited over the first low-k dielectric layer 14 and in a subsequently formed via. Examples of materials used as a capping layer 13 include silicon oxide and silicon nitride.

In FIG. 1C, vias 16 are formed in the first low-k dielectric layer 14 using conventional lithographic and etch techniques. The lithographic process involves depositing a resist 17 over the capping layer 13 and exposing and developing the resist 17 to form the desired patterns of the vias 16. The first etch, which is highly selective to the material of the first low-k dielectric layer 14 and the capping layer 13, removes the capping layer 13 and the first low-k dielectric layer 14 until the etchant reaches the first etch stop layer 12. The first etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the first low-k dielectric layer 14 directly below the opening in the resist 17. By using an anisotropic etch, the via 16 can be formed with substantially perpendicular sidewalls.

In FIG. 1D, a second etch, which is highly selective to the material of the first etch stop layer 12, removes the first etch stop layer 12 until the etchant reaches the first metallization level 10. The second etch is also typically an anisotropic etch.

In FIG.
1E, the corners 18 of the vias 16 can be rounded using a reverse physical sputtering process. The corners 18 of the vias 16 are rounded to prevent problems of void creation associated with subsequent deposition of the conductive plug, and if necessary, a barrier layer. The reverse sputtering process can also be used to clean the first metallization level 10 at the bottom of the via 16. Incomplete etching of the first etch stop layer 12 can leave a portion of the first etch stop layer 12 over the first metallization level 10, and this material can prevent good ohmic contact between the material of the conductive plug and the material of the first metallization level 10. Use of the reverse sputtering process, however, can remove any remaining material of the first etch stop layer 12 and any other contaminants on the first metallization level 10.

In FIG. 1F, an adhesion/barrier material, such as tantalum, titanium, tungsten, tantalum nitride, or titanium nitride, is deposited. The combination of the adhesion and barrier material is collectively referred to as a second diffusion barrier layer 20. The second diffusion barrier layer 20 acts to prevent diffusion into the first low-k dielectric layer 14 of the conductive material subsequently deposited into the via 16.

In FIG. 1G, a layer 22 of a conductive material, for example, a Cu or Cu-based alloy, is deposited into the via 16 and over the dielectric layer 14. A typical process initially involves depositing a "seed" layer on the second diffusion barrier layer 20 subsequently followed by conventional plating techniques, e.g., electroless or electroplating techniques, to fill the via 16. So as to ensure complete filling of the via 16, the Cu-containing conductive layer 22 is deposited as a blanket (or "overburden") layer 24 so as to overfill the via 16 and cover the upper surface 26 of the capping layer 13.

In FIG.
1H, the entire excess thickness of the metal overburden layer 24 over the upper surface 26 of the capping layer 13 is removed using a CMP process. A typical CMP process utilizes an alumina (Al2O3)-based slurry and leaves a conductive plug in the via 16. The conductive plug has an exposed upper surface 30, which is substantially co-planar with the surface 26 of the capping layer 13.

A number of different variations of a dual damascene process using low-k dielectrics have been employed during semiconductor manufacturing. With reference to FIGS. 2A-2L, a dual damascene process for forming vias and a second metallization level over a first metallization level, according to conventional techniques, will be described. This process can be repeated to form multiple metallization levels, i.e., two or more, stacked one on top of another.

In FIG. 2A, a second etch stop layer 12 is deposited over a first metallization level 10. The second etch stop layer 12 acts as a passivation layer that protects the first metallization level 10 from oxidation and contamination and prevents diffusion of material from the metallization level 10 into a subsequently formed dielectric layer. The second etch stop layer 12 also acts as an etch stop during subsequent etching of the dielectric layer. A typical material used as an etch stop is silicon nitride, and approximately 500 angstroms of silicon nitride is typically deposited over the metallization level 10 to form the second etch stop layer 12. An illustrative process used for depositing silicon nitride is PECVD.

In FIG. 2B, a first low-k dielectric layer 14 is deposited over the second etch stop layer 12. The majority of low-k dielectric materials used for a dielectric layer are based on organic or inorganic polymers. The liquid dielectric material is typically spun onto the surface under ambient conditions to a desired depth.
This is typically followed by a heat treatment to evaporate solvents present within the liquid dielectric material and to cure the film to form the first low-k dielectric layer 14.

In FIG. 2C, a first etch stop layer 40 is deposited over the first low-k dielectric layer 14. The first etch stop layer 40 acts as an etch stop during etching of a dielectric layer subsequently formed over the first etch stop layer 40. As with the second etch stop layer 12, a material typically used as an etch stop is silicon nitride, and approximately 500 angstroms of silicon nitride is typically deposited over the first low-k dielectric layer 14 to form the first etch stop layer 40. An illustrative process used for depositing silicon nitride is PECVD.

In FIG. 2D, a second low-k dielectric layer 42 is deposited over the first etch stop layer 40. After formation of the second low-k dielectric layer 42, a capping layer 13 can be formed over the second low-k dielectric layer 42. The function of the capping layer 13 is to protect the second low-k dielectric layer 42 from the process that removes a subsequently formed resist layer. The capping layer 13 can also be used as a mechanical polishing stop to prevent damage to the second low-k dielectric layer 42 during subsequent polishing away of conductive material that is deposited over the second low-k dielectric layer 42 and in a subsequently formed via and trench. Examples of materials used as a capping layer 13 include silicon oxide and silicon nitride.

In FIG. 2E, the pattern of the vias is formed in the second low-k dielectric layer 42 and capping layer 13 using conventional lithographic and etch techniques. The lithographic process involves depositing a resist 44 over the capping layer 13 and exposing and developing the resist 44 to form the desired pattern of the vias.
The first etch, which is highly selective to the material of the second low-k dielectric layer 42 and capping layer 13, removes the capping layer 13 and the second low-k dielectric layer 42 until the etchant reaches the first etch stop layer 40. The first etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the second low-k dielectric layer 42 directly below the opening in the resist 44.

In FIG. 2F, a second etch, which is highly selective to the material of the first etch stop layer 40, removes the first etch stop layer 40 until the etchant reaches the first low-k dielectric layer 14. The second etch is also typically an anisotropic etch.

In FIG. 2G, the vias 16 are formed in the first low-k dielectric layer 14 and the trenches 46 of the second metallization level are formed in the second low-k dielectric layer 42 using conventional lithographic and etch techniques. The lithographic process involves depositing a resist 50 over the capping layer 13 and exposing and developing the resist 50 to form the desired pattern of the trenches 46. The third etch, which is highly selective to the material of the first and second dielectric layers 14, 42, removes the first low-k dielectric layer 14 until the etchant reaches the second etch stop layer 12 and removes the second low-k dielectric layer 42 until the etchant reaches the first etch stop layer 40. The third etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the first low-k dielectric layer 14 directly below the opening in the first etch stop layer 40 and the exposed portions of the second low-k dielectric layer 42 directly below the opening in the resist 50. By using an anisotropic etch, the via 16 and the trench 46 can be formed with substantially perpendicular sidewalls.

In FIG.
2H, a fourth etch, which is highly selective to the material of the first and second etch stop layers 40, 12, then removes the second etch stop layer 12 until the etchant reaches the first metallization level 10 and removes the first etch stop layer 40 until the etchant reaches the first low-k dielectric layer 14. The fourth etch is also typically an anisotropic etch.

In FIG. 2I, the corners 18 of the vias 16 and trenches 46 can be rounded using a reverse sputtering process. The corners 18 of the vias 16 and trenches 46 are rounded to prevent problems of void creation associated with subsequent deposition of the conductive plug and second metallization level, and if necessary, a barrier layer. The reverse sputtering process can also be used to clean the first metallization level 10 at the bottom of the via 16. Incomplete etching of the second etch stop layer 12 can leave a portion of the second etch stop layer 12 over the first metallization level 10, and this material can prevent good ohmic contact between the material of the conductive plug and the material of the first metallization level 10. Use of the reverse sputtering process, however, can remove any remaining material of the second etch stop layer 12 and any other contaminants on the first metallization level 10.

In FIG. 2J, an adhesion/barrier material, such as tantalum, titanium, tungsten, tantalum nitride, or titanium nitride, is deposited. The combination of the adhesion and barrier material is collectively referred to as a second diffusion barrier layer 20. The second diffusion barrier layer 20 acts to prevent diffusion into the first and second dielectric layers 14, 42 of the conductive material subsequently deposited into the via 16 and trench 46.

In FIG. 2K, a layer 22 of a conductive material, for example, a Cu or Cu-based alloy, is deposited in the via 16 and trench 46 and over the capping layer 13.
A typical process initially involves depositing a "seed" layer on the barrier layer 20 subsequently followed by conventional plating techniques, e.g., electroless or electroplating techniques, to fill the via 16 and trench 46. So as to ensure complete filling of the via 16 and trench 46, the Cu-containing conductive layer 22 is deposited as a blanket (or "overburden") layer 24 so as to overfill the trench 46 and cover the upper surface 52 of the capping layer 13.

In FIG. 2L, the entire excess thickness of the metal overburden layer 24 over the upper surface 52 of the capping layer 13 is removed using a CMP process. A typical CMP process utilizes an alumina (Al2O3)-based slurry, which leaves a conductive plug in the via 16 and a second metallization level in the trench 46. The second metallization level has an exposed upper surface 58, which is substantially co-planar with the upper surface 52 of the capping layer 13.

A problem that can be caused by the use of Cu and Cu-based alloys results from Cu having a relatively large diffusion coefficient into silicon oxide and silicon. Once Cu has diffused into these materials, Cu can cause the dielectric strength of these materials to decrease and cause a lack of uniformity in the overall properties of the semiconductor device produced. This problem is particularly prevalent if the dielectric layer has a high porosity, as copper can more easily leach, or migrate, into the pores of the dielectric layer. If Cu from the plug or the metallization level diffuses into the dielectric layer, the layer can become more conductive, and this increase in conductivity can cause short circuits between adjacent conductive regions. These short circuits can therefore result in failure of the semiconductor device.
For this reason, Cu conductors are encapsulated by at least one diffusion barrier to prevent diffusion of the Cu into the silicon oxide layer.

The above-described processes, however, can still result in copper contamination as a result of the use of reverse physical sputtering or sputter etching to clean the first metallization level and to round the corners of the trenches and vias. Reverse physical sputtering or sputter etching is a process by which atoms or molecules from the surface of a material are dislocated or removed by the impact energy of gas ions, which are accelerated in an electric field. This process involves the creation of a glow discharge or plasma between an anode and a cathode, such as a semiconductor device, wherein the current therebetween is composed of electron flow to the anode and positive ion flow to the cathode. The ions are created by the ionization of gas molecules, such as argon, existing within the glow discharge region between the anode and cathode. The ionization results from the collision of gas particles with the electron flow from the cathode to the anode. A focused beam of these ions can be directed to a very small point on a semiconductor device and then scanned, raster fashion, over a surface where material is to be removed. As an ion impinges on the semiconductor device surface, momentum is transferred from the ion to the impact surface, resulting in the removal of one or more surface atoms.

The problem of copper contamination as a result of reverse sputtering is illustrated in FIG. 3. The reverse physical sputtering process rounds the corners 18 of the vias 16 and trenches 46 as a result of ionized argon impacting the corners 18 and dislodging atoms. As the atoms of argon are impacting the corners 18, the atoms of argon are also impacting all the other exposed surfaces, such as the Cu of the first metallization level 10.
Thus, the impact of the argon atoms onto the first metallization level 10 also dislodges atoms of Cu, and the dislodged atoms of Cu are free to be redeposited on other surfaces. In particular, the dislodged Cu atoms can be deposited onto the exposed sidewall surfaces 15 of the first and second low-k dielectric layers 14, 42. Once the Cu is deposited on the first and second low-k dielectric layers 14, 42, the Cu can then diffuse into the first and second low-k dielectric layers 14, 42. As previously stated, the diffusion of Cu into a low-k dielectric layer 14, 42 causes detrimental effects that can result in the failure of the semiconductor device. The problem of Cu diffusion into the dielectric layers 14, 42 is particularly pronounced when the low-k dielectric material is porous.

Another problem associated with the above-identified processes is the limited choice of materials for the etch stop layers. A commonly used material as an etch stop is silicon nitride, which has a dielectric constant of about 7.0. However, the use of a thick etch stop layer of silicon nitride with a low-k dielectric layer partially negates the benefits obtained by use of a low-k dielectric material because of the increased combined capacitance of the etch stop layer and dielectric layer. Accordingly, a need exists for an improved method of forming copper plugs and copper metallization with low-k dielectric layers that allows for use of reverse sputtering to round corners of vias, so as to minimize the problem of void creation, yet still prevents the low-k dielectric layers from being contaminated with Cu.

SUMMARY OF THE INVENTION

This and other needs are met by embodiments of the present invention, which provide a semiconductor device that includes a first metallization level, a first diffusion barrier layer, a second etch stop layer, a first etch stop layer, a dielectric layer, and an opening.
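The combined-capacitance penalty of a silicon nitride etch stop noted in the Background can be illustrated with a rough series-capacitor estimate. The sketch below is illustrative only: the silicon nitride dielectric constant of about 7.0 comes from the text above, but the layer thicknesses and the low-k value are assumed for the example.

```python
# Rough illustration of the combined-capacitance penalty: a silicon nitride
# etch stop (k ~ 7.0, per the text) in series with a low-k dielectric raises
# the effective dielectric constant of the stack.  Thicknesses and the low-k
# value below are assumptions for illustration, not values from this document.

def effective_k(layers):
    """Effective dielectric constant of a vertical stack of (thickness, k)
    layers, modeled as capacitors in series (per unit area)."""
    total_t = sum(t for t, _ in layers)
    return total_t / sum(t / k for t, k in layers)

low_k_only = effective_k([(5000, 2.5)])                # 5000 A of low-k alone
with_nitride = effective_k([(500, 7.0), (5000, 2.5)])  # plus a 500 A SiN etch stop

print(round(low_k_only, 2))    # 2.5
print(round(with_nitride, 2))  # 2.66: the nitride raises the effective k
```

Even a modest nitride etch stop noticeably raises the effective dielectric constant of the stack, which is why thin etch stop layers are preferred alongside low-k dielectrics.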
The first diffusion barrier layer is formed from a first material disposed over the first metallization level. The second etch stop layer is formed from a second material disposed over the first diffusion barrier layer, and the first material is different from the second material. The first etch stop layer is disposed on the second etch stop layer with a first interface therebetween, and the dielectric layer is disposed over the first etch stop layer. The opening has side surfaces and extends through the dielectric layer, the first and second etch stop layers, and the first diffusion barrier layer, and the opening can also have rounded corners. A sidewall diffusion barrier layer can also be disposed on sidewalls of the opening, and the sidewall diffusion barrier layer is formed from the same material as the first diffusion barrier layer. The first metallization level includes a first metal feature, and metal within the opening forms a second metal feature.

By providing a first diffusion barrier layer over the material of the metallization level, the material of the first diffusion barrier layer can be subsequently sputtered onto the sidewalls of the opening. The material deposited on the sidewalls forms a new sidewall diffusion barrier layer that prevents contamination of the dielectric layer caused by the material of the metallization level being deposited on the sidewalls when this material is subsequently sputtered off. The sputtering process also advantageously provides the opening with round corners, which reduce the formation of voids in the opening.

In another aspect of the invention, the dielectric layer is formed from a low-k dielectric material, and this low-k dielectric material can have a dielectric constant of less than about 3.5. Furthermore, the low-k dielectric material can be formed with a porous material. Additionally, the semiconductor device can further comprise a capping layer disposed over the dielectric layer.
By providing a dielectric layer formed from a low-k dielectric material, the capacitance of the dielectric layer is reduced as compared to dielectric layers formed using conventional dielectric materials.

In a further aspect of the invention, the material of the first diffusion barrier layer can include silicon nitride, and the thickness of the first diffusion barrier layer can be from about 80 angstroms to about 120 angstroms. The material of the second etch stop layer can include silicon oxide, and the material of the first etch stop layer can include silicon nitride. The thickness of the first etch stop layer can be from about 400 angstroms to about 600 angstroms. The metal and the first metallization level can comprise copper or a copper alloy. A second diffusion barrier layer can also be disposed over the sidewall diffusion barrier layer with a second interface therebetween.

In still another aspect of the invention, the opening can be a via opening, a trench, or a dual damascene opening. The dual damascene opening can comprise a lower via opening in communication with an upper trench. Also, the second metal feature can be a via, a line, or a combination of a lower via in contact with an upper line.

In an additional embodiment of the present invention, a semiconductor device comprises a first metallization level; a dielectric layer disposed over the first metallization level; a first sidewall diffusion barrier layer formed on sidewalls of an opening; a second diffusion barrier layer disposed on the first sidewall diffusion barrier layer with an interface therebetween; and a conductive plug within the opening. The opening extends through the dielectric layer to the first metallization level and can have rounded corners.
The first sidewall diffusion barrier layer is formed by sputtering a first diffusion barrier layer disposed over the first metallization level.

In a further embodiment of the present invention, a method of manufacturing a semiconductor device is also disclosed. The method of manufacturing includes forming a first diffusion barrier layer over a first metallization level; forming a second etch stop layer over the first diffusion barrier layer; forming a first etch stop layer on the second etch stop layer with a first interface therebetween; depositing a dielectric layer over the first etch stop layer; etching the dielectric layer to form an opening through the dielectric layer and the first etch stop layer; and sputtering the first diffusion barrier layer and the second etch stop layer. The sputtering rounds corners of the opening and deposits material of the first diffusion barrier layer onto sidewalls of the opening to form a sidewall diffusion barrier layer.

In an additional aspect of the invention, the method can further include the step of depositing a conductive material within the opening. Also, the dielectric layer can be formed from a low-k dielectric material, and the first metallization level and the conductive material can comprise copper or a copper alloy. The material of the first diffusion barrier layer and the first etch stop layer can include silicon nitride, and the material of the second etch stop layer can include silicon oxide.

In still another embodiment of the present invention, an additional method of manufacturing a semiconductor device is disclosed.
The method of manufacturing includes forming a first metallization level; forming a first diffusion barrier layer over the first metallization level; forming a second etch stop layer over the first diffusion barrier layer; forming a first etch stop layer on the second etch stop layer with a first interface therebetween; forming a dielectric layer over the first etch stop layer; depositing a capping layer over the dielectric layer; depositing a resist over the capping layer; patterning the resist; etching the capping layer and the dielectric layer with a first etchant; etching the first etch stop layer with a second etchant; sputtering the first diffusion barrier layer and the second etch stop layer; depositing a conductive material in an opening and over a sidewall diffusion barrier layer; and planarizing a top surface of the capping layer. The etching of the capping layer, dielectric layer, and first etch stop layer forms the opening. The sputtering exposes the first metallization level, rounds corners of the opening, and also deposits material of the first diffusion barrier layer onto sidewalls of the opening to form the sidewall diffusion barrier layer.

In another aspect of the invention, the dielectric layer is formed from a low-k dielectric material, and this low-k dielectric material can have a dielectric constant of less than about 3.5. The material of the first diffusion barrier layer and the first etch stop layer can include silicon nitride, and the material of the second etch stop layer can include silicon oxide.

In still a further embodiment of the present invention, a semiconductor device comprises a first metallization level, a first diffusion barrier layer, a third etch stop layer, a second etch stop layer, a first dielectric layer, a first etch stop layer, a second dielectric layer, a trench, and a via opening. The first diffusion barrier layer is formed from a first material disposed over the first metallization level.
The third etch stop layer is formed from a second material disposed over the first diffusion barrier layer, and the first material is different from the second material. The second etch stop layer is disposed on the third etch stop layer with a first interface therebetween, and the first dielectric layer is disposed over the second etch stop layer. The first etch stop layer is disposed over the first dielectric layer, and the second dielectric layer is disposed over the first etch stop layer. The trench extends through the second dielectric layer and the first etch stop layer, and the via opening extends from the trench through the first dielectric layer, the second and third etch stop layers, and the first diffusion barrier layer to the first metallization level. The via opening and the trench can also have rounded corners. A sidewall diffusion barrier layer can be disposed on sidewalls of the via opening and trench, and the sidewall diffusion barrier layer is formed from the same material as the first diffusion barrier layer. The first metallization level includes a first metal feature. Metal within the via opening and the trench can respectively form a lower via and an upper line.

By providing a first diffusion barrier layer over the material of the first metallization level, the material of the first diffusion barrier layer can be subsequently sputtered onto the sidewalls of the via opening and the trench. The material deposited on the sidewalls forms a new sidewall diffusion barrier layer that prevents contamination of the dielectric layers caused by the material of the first metallization level being deposited on the sidewalls when this material is subsequently sputtered off.
The sputtering process also advantageously provides the via opening and trench with round corners, which reduce the formation of voids in the via opening and trench.

In another aspect of the invention, the dielectric layers are formed from a low-k dielectric material, and this low-k dielectric material can have a dielectric constant of less than about 3.5. Furthermore, the low-k dielectric material can be formed with a porous material. Additionally, the semiconductor device can further comprise a capping layer disposed over the second dielectric layer. By providing dielectric layers formed from a low-k dielectric material, the capacitance of the dielectric layers is reduced as compared to dielectric layers formed with conventional dielectric materials.

In a further aspect of the invention, the material of the first diffusion barrier layer can include silicon nitride, and the thickness of the first diffusion barrier layer can be from about 80 angstroms to about 120 angstroms. The material of the third etch stop layer can include silicon oxide, and the material of the first and second etch stop layers can include silicon nitride. The thickness of the first and second etch stop layers can be from about 400 angstroms to about 600 angstroms. The metal and the first metallization level can comprise copper or a copper alloy. A second diffusion barrier layer can also be disposed over the sidewall diffusion barrier layer with a second interface therebetween.

In yet another embodiment of the present invention, a semiconductor device comprises a first metallization level; a first dielectric layer disposed over the first metallization level; a second dielectric layer disposed over the first dielectric layer; a first sidewall diffusion barrier layer disposed on sidewalls of a via opening and trench; a second diffusion barrier layer disposed on the first sidewall diffusion barrier layer with an interface therebetween; and conductive material within the via opening and trench.
The trench extends through the second dielectric layer to the first dielectric layer, and the via opening extends from the trench through the first dielectric layer to the first metallization level. The via opening and trench can also have rounded corners. The first sidewall diffusion barrier layer is formed by sputtering a first diffusion barrier layer disposed over the first metallization level.

In a further embodiment of the present invention, a method of manufacturing a semiconductor device is also disclosed. The method of manufacturing includes forming a first diffusion barrier layer over a first metallization level; forming a third etch stop layer over the first diffusion barrier layer; forming a second etch stop layer on the third etch stop layer with a first interface therebetween; forming a first dielectric layer over the second etch stop layer; forming a second dielectric layer over the first dielectric layer; etching the first and second dielectric layers to form a via opening and a trench; and sputtering the first diffusion barrier layer and the third etch stop layer. The trench is formed through the second dielectric layer and to the first dielectric layer, and the via opening is formed from the trench through the first dielectric layer, the second and third etch stop layers, and the first diffusion barrier layer to the first metallization level. Also, the sputtering rounds corners of the via and trench and deposits material of the first diffusion barrier layer onto sidewalls of the via opening and trench to form a sidewall diffusion barrier layer.

In an additional aspect of the invention, the method can further include the steps of depositing a first etch stop layer between the first dielectric layer and the second dielectric layer and etching the first etch stop layer during etching of the second etch stop layer.
The material of the first diffusion barrier layer can include silicon nitride, the material of the third etch stop layer can include silicon oxide, and the material of the first and second etch stop layers can include silicon nitride. A second diffusion barrier layer can also be deposited over the sidewall diffusion barrier layer with a second interface therebetween, and a conductive material can then be deposited within the via opening and trench. The dielectric layers can also be formed from a low-k dielectric material.

In still another embodiment of the present invention, an additional method of manufacturing a semiconductor device is disclosed. The method of manufacturing includes forming a first metallization level; forming a first diffusion barrier layer over the first metallization level; forming a third etch stop layer over the first diffusion barrier layer; forming a second etch stop layer on the third etch stop layer with an interface therebetween; forming a first dielectric layer over the second etch stop layer; forming a first etch stop layer over the first dielectric layer; forming a second dielectric layer over the first etch stop layer; forming a capping layer over the second dielectric layer; forming a first resist over the capping layer; patterning the first resist; etching the capping layer and the second dielectric layer with a first etch; etching the first etch stop layer with a second etch; forming a second resist over the capping layer; patterning the second resist; etching the capping layer and first and second dielectric layers with a third etch; etching the first and second etch stop layers with a fourth etch; sputtering the first diffusion barrier layer and the third etch stop layer; depositing a conductive material in a via opening and a trench; and planarizing a top surface of the capping layer.
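The dual damascene method enumerated above can be summarized as an ordered step list. This is a sketch for orientation only; the step names paraphrase the specification and are not process code.

```python
# Ordered sketch of the dual damascene flow enumerated above.  Step names
# paraphrase the specification; this is an ordering aid, not process code.
DUAL_DAMASCENE_FLOW = [
    "form first metallization level",
    "form first diffusion barrier layer",   # e.g. silicon nitride
    "form third etch stop layer",           # e.g. silicon oxide
    "form second etch stop layer",          # e.g. silicon nitride
    "form first dielectric layer",          # low-k
    "form first etch stop layer",
    "form second dielectric layer",         # low-k
    "form capping layer",
    "pattern first resist; etch via opening (first and second etches)",
    "pattern second resist; etch trench (third and fourth etches)",
    "sputter first diffusion barrier layer and third etch stop layer",
    "deposit conductive material in via opening and trench",
    "planarize top surface of capping layer (CMP)",
]

# Sanity check: the diffusion barrier is formed before every etch stop layer,
# so it is the last layer removed (by sputtering) above the metallization.
barrier = DUAL_DAMASCENE_FLOW.index("form first diffusion barrier layer")
assert all(barrier < DUAL_DAMASCENE_FLOW.index(s)
           for s in DUAL_DAMASCENE_FLOW
           if s.startswith("form") and "etch stop" in s)
```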
The etchings form the trench through the capping layer, the second dielectric layer, and the first etch stop layer to the first dielectric layer and form the via opening from the trench through the first dielectric layer and the second etch stop layer to the third etch stop layer. The sputtering exposes the first metallization level, rounds corners of the via and trench, and deposits material of the first diffusion barrier layer onto sidewalls of the via opening and trench to form a sidewall diffusion barrier layer. The conductive layer is also deposited over the sidewall diffusion barrier layer. The material of the first and second etch stop layers and the first diffusion barrier layer can include silicon nitride, and the material of the third etch stop layer can include silicon oxide. Also, the dielectric layers can be formed from a low-k dielectric material.

Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout, and wherein:

FIGS. 1A-1H schematically illustrate sequential phases of a conventional single damascene process.

FIGS. 2A-2L schematically illustrate sequential phases of a conventional dual damascene process.

FIG. 3 illustrates a conventional via and trench during a sputtering process.

FIGS.
4A-4J schematically illustrate sequential phases of a single damascene process according to an embodiment of the present invention.

FIGS. 5A-5N schematically illustrate sequential phases of a dual damascene process according to an additional embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention addresses and solves the problem of contamination during single damascene processing from copper being deposited onto a silicon oxide dielectric layer as a result of reverse physical sputtering, which is used to round corners of a via and to clean contaminants on the copper metallization level below the via. This is achieved, in part, by providing a first etch stop layer, a second etch stop layer, and a diffusion barrier layer below the second etch stop layer. Advantageously, after the first etch stop layer is removed using conventional etching techniques, the second etch stop layer and the diffusion barrier layer are sputtered off during the reverse physical sputtering process. Importantly, the material of the diffusion barrier layer that is sputtered off is then deposited onto the exposed portions of the dielectric layer and creates a sidewall diffusion barrier. This is accomplished before the copper from the copper layer is sputtered off onto the dielectric layer. Thus, once the copper layer is reached during the sputtering process and copper is then sputtered off, the copper will be deposited on a diffusion barrier layer and not on the dielectric layer.

Furthermore, the present invention addresses problems associated with the high capacitance of inter-metal dielectric layers. This is achieved, in part, by providing a dielectric layer formed from a low-k dielectric material. As used herein, the term low-k dielectric means a dielectric having a dielectric constant of less than about 3.5, e.g., less than about 2.5.

An embodiment of the present invention is illustrated in FIGS. 4A-4J. As illustrated in FIG.
4A, a first diffusion barrier layer 111 is formed over a first metallization level 110. The first diffusion barrier layer 111 can be formed from any material that prevents diffusion of the material from the metallization level 110 into a subsequently formed dielectric layer. For example, in current embodiments of the invention, the first metallization level 110 is formed from a Cu or Cu-based alloy. As such, the preferred first diffusion barrier layer 111 for use with Cu or Cu-based alloys acts as a diffusion barrier to Cu. The first diffusion barrier layer 111 can also act as a passivation layer that protects the first metallization level 110 from oxidation and contamination.

The thickness of the first diffusion barrier layer 111 depends upon several factors, which include the depth of a subsequently formed via in the dielectric layer over the first diffusion barrier layer 111 and the percentage of the material of the first diffusion barrier layer 111 that is deposited onto the sidewalls of the dielectric layer. As such, the thickness of the first diffusion barrier layer 111 must be sufficient so that when the first diffusion barrier layer 111 is subsequently sputtered off, enough of the material of the first diffusion barrier layer 111 is deposited on the sidewalls of the dielectric layer to form an effective diffusion barrier against the material of the first metallization level 110. Also, the thickness of the first diffusion barrier layer 111 is preferably sufficient to act as an etch stop and not allow the etchant of the first etch stop layer to reach the first metallization level 110. In current embodiments of the invention, the thickness of the first diffusion barrier layer 111 is at least 50 angstroms and is preferably from about 100 to about 200 angstroms.

In an aspect of the invention, the first diffusion barrier layer 111 is formed from silicon nitride, although the invention is not limited in this manner.
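The thickness budgeting described above can be sketched with a toy mass-balance estimate: material sputtered off the first diffusion barrier layer 111 at the via bottom partly redeposits on the via sidewalls. The via geometry and redeposition fraction below are illustrative assumptions, not values from the specification; only the barrier thickness is taken from the 100-200 angstrom range given above.

```python
# Toy mass-balance sketch of the sidewall redeposition argument above: a
# fraction of the material sputtered off the first diffusion barrier layer
# at the via bottom lands on the via sidewalls.  Via geometry and the
# redeposition fraction are illustrative assumptions, not from the text.
import math

def sidewall_coverage(barrier_thickness_A, via_diameter_A, via_depth_A,
                      redeposition_fraction):
    """Average sidewall film thickness (angstroms) if `redeposition_fraction`
    of the sputtered barrier volume at the via bottom redeposits uniformly
    on the cylindrical via sidewall."""
    bottom_area = math.pi * (via_diameter_A / 2) ** 2
    sidewall_area = math.pi * via_diameter_A * via_depth_A
    sputtered_volume = bottom_area * barrier_thickness_A
    return redeposition_fraction * sputtered_volume / sidewall_area

# A 150 A barrier at the bottom of a 2000 A wide, 5000 A deep via, with an
# assumed 30% of the sputtered material redepositing on the sidewalls:
t = sidewall_coverage(150, 2000, 5000, 0.30)
print(round(t, 1))  # 4.5 (angstroms, under these assumptions)
```

The estimate shows the scaling the text relies on: sidewall coverage grows linearly with barrier thickness and shrinks with via depth, which is why a deeper via calls for a thicker first diffusion barrier layer.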
Silicon nitride advantageously acts as a diffusion barrier to copper and also as a passivation layer. Furthermore, silicon nitride acts as an etch stop to an etchant that etches silicon oxide. Any process capable of depositing the first diffusion barrier layer 111 is acceptable for use with the invention, and an illustrative process for depositing silicon nitride is PECVD.

In FIG. 4B, a second etch stop layer 113 is deposited over the first diffusion barrier layer 111. The second etch stop layer 113 acts as an etch stop during etching of a subsequently formed first etch stop layer. The thickness of the second etch stop layer 113 is preferably sufficient to act as an etch stop and not allow the etchant of the first etch stop layer to reach the first diffusion barrier layer 111. In an aspect of the invention, the thickness of the second etch stop layer 113 is at least 50 angstroms, and in another aspect of the invention, the thickness of the second etch stop layer 113 is from about 80 to about 120 angstroms.

In current embodiments of the invention, the second etch stop layer 113 is formed from silicon oxide, although the invention is not limited in this manner. Silicon oxide advantageously acts as an etch stop to an etchant that etches silicon nitride. Any process capable of depositing the second etch stop layer 113 is acceptable for use with the invention, and an illustrative process for depositing silicon oxide is CVD.

In FIG. 4C, a first etch stop layer 112 is deposited over the second etch stop layer 113. The first etch stop layer 112 acts as an etch stop during subsequent etching of the dielectric layer formed over the first etch stop layer 112. In an aspect of the invention, the first etch stop layer 112 is formed from silicon nitride, although the invention is not limited in this manner.
Silicon nitride, however, has the advantage of acting as an etch stop to many etchants used to etch low-k dielectric materials.

The thickness of the first etch stop layer 112 is preferably sufficient to act as an etch stop during etching of the dielectric layer. In an aspect of the invention, the thickness of the first etch stop layer 112 is at least 50 angstroms, and in another aspect of the invention, the thickness of the first etch stop layer 112 is from about 400 to about 600 angstroms. Any process capable of depositing the first etch stop layer 112 is acceptable for use with the invention, and an illustrative process for depositing silicon nitride is PECVD.

In FIG. 4D, a first dielectric layer 114 is deposited over the first etch stop layer 112. The first dielectric layer 114 can be formed from any material capable of acting as a dielectric, and illustrative materials include silicon oxide and silicon nitride. In one aspect of the invention, the first dielectric layer 114 is formed from a low-k dielectric material. Illustrative examples of low-k dielectric materials include fluorosilicate glass (FSG or SiOF), hydrogenated diamond-like carbon (DLC), polystyrene, fluorinated polyimides, parylene (AF-4), polyarylene ether, and polytetrafluoroethylene. In another aspect of the invention, the first dielectric layer 114 is formed from a porous low-k dielectric material, such as siloxanes, silsesquioxanes, aerogels, and xerogels. These low-k dielectric materials can be applied via conventional spin-coating, dip-coating, spraying, and meniscus coating methods, in addition to other coating methods that are well known in the art.

After formation of the first dielectric layer 114, a capping layer 115 can be formed over the first dielectric layer 114. The function of the capping layer 115 is to protect the first dielectric layer 114 from the process that removes a subsequently formed resist layer, and any material so capable is acceptable for use with the invention.
The capping layer 115 can also be used as a mechanical polishing stop to prevent damage to the first dielectric layer 114 during subsequent polishing away of conductive material that is deposited over the first dielectric layer 114 and in a subsequently formed via. Examples of materials used as a capping layer 115 include silicon oxide and silicon nitride. In an aspect of the invention, the capping layer 115 is formed from silicon oxide and has a thickness of at least 50 angstroms. In another aspect of the invention, the thickness of the capping layer 115 is from about 400 to about 600 angstroms.

In FIG. 4E, vias 116 are formed in the first dielectric layer 114 using conventional lithographic techniques, for example, optical lithography (including, for example, I-line and deep-UV), X-ray, and E-beam lithography, followed by etching. The lithographic process involves depositing a resist 117 over the first dielectric layer 114 and exposing and developing the resist 117 to form the desired pattern of the vias 116. The first etch, which is highly selective to the material of the first dielectric layer 114 and capping layer 115, removes the capping layer 115 and the first dielectric layer 114 until the etchant reaches the first etch stop layer 112. The first etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the first dielectric layer 114 directly below the opening in the resist 117. By using an anisotropic etch, the via 116 can be formed with substantially perpendicular sidewalls.

In FIG. 4F, a second etch, which is highly selective to the material of the first etch stop layer 112, then removes the first etch stop layer 112 until the etchant reaches the second etch stop layer 113. The second etch is also typically an anisotropic etch.

In FIG.
4G, a reverse sputtering process etches through the second etch stop layer 113 and the first diffusion barrier layer 111 to expose the first metallization level 110. After the second etch stop layer 113 has been removed, the sidewalls of the via 116 include material from the second etch stop layer 113. When the first dielectric layer 114 is formed from a porous material, the material of the second etch stop layer 113 fills the exposed pores on the sidewalls of the via. These pores would otherwise act as entry points for material to contaminate the first dielectric layer 114. Filling these pores advantageously reduces contamination of the first dielectric layer 114 during sputtering of the first metallization level 110, which deposits material of the first metallization level 110 onto the sidewalls of the via 116.

During the sputtering of the first diffusion barrier layer 111, material of the first diffusion barrier layer 111 liberated during the sputtering process is deposited on the sidewalls of the via 116. The material of the first diffusion barrier layer 111 deposited on the sidewalls of the via 116 forms a sidewall diffusion barrier layer 119. This sidewall diffusion barrier layer 119 acts as a diffusion barrier that prevents the material of the first metallization level 110 from diffusing into the first dielectric layer 114 after the sputtering process reaches the first metallization level 110 and the material of the first metallization level 110 is sputtered off.

The reverse sputtering process also advantageously rounds the corners 118 of the via 116. The corners 118 of the via 116 are rounded to prevent problems associated with subsequent deposition of the conductive plug, and if necessary, a barrier layer. For example, when the material of the conductive plug or the barrier layer is deposited in a via 116 having sharp corners 118, the material tends to build up more quickly at the corners 118 than at the vertical sidewalls of the via 116.
Consequently, the material at opposing corners 118 can form cantilevered bridges that eventually meet in the middle of the via 116. When this occurs, the via 116 is blocked and further deposition of material within the via 116 is prevented, thereby leaving a void in the via 116. The creation of such a void can disadvantageously cause a malfunction in the semiconductor device. However, by rounding the corners 118 of the vias 116, excess buildup of material at the corners 118 is counteracted and the problem of void creation is reduced.

The reverse sputtering process can also be used to clean the first metallization level 110 at the bottom of the via 116. As such, any dielectric material or contaminants formed over the first metallization level 110 can be removed by the reverse sputtering process to allow for good ohmic contact between the material of the conductive plug and the material of the first metallization level 110.

In FIG. 4H, an adhesion/barrier material, such as tantalum, titanium, tungsten, tantalum nitride, or titanium nitride, is deposited in the via 116 and over the sidewall diffusion barrier layer 119. The combination of the adhesion and barrier material is collectively referred to as a second diffusion barrier layer 120. The second diffusion barrier layer 120 acts to prevent diffusion into the first dielectric layer 114 of the conductive material subsequently deposited into the via 116.

In FIG. 4I, a layer 122 of a conductive material is deposited into the via 116 and over the capping layer 115. In an aspect of the invention, the conductive material is a Cu or Cu-based alloy, and any process capable of depositing Cu into the via 116 is acceptable for use with this invention. An illustrative example of a process acceptable for use with this invention involves depositing a "seed" layer on the second diffusion barrier layer 120.
After the seed layer has been formed, conventional plating techniques, e.g., electroless or electroplating techniques, are used to fill the via 116. So as to ensure complete filling of the via 116, the Cu-containing conductive layer 122 is deposited as a blanket (or "overburden") layer 124 so as to overfill the via 116 and cover the upper surface 126 of the capping layer 115.

In FIG. 4J, the entire excess thickness of the metal overburden layer 124 over the upper surface 126 of the capping layer 115 is removed using a CMP process. A typical CMP process utilizes an alumina (Al2O3)-based slurry and leaves a conductive plug in the via 116. The conductive plug has an exposed upper surface 130, which is preferably substantially co-planar with the surface 126 of the capping layer 115.

By providing a barrier layer above a copper metallization level, the material of the barrier layer can be subsequently sputtered onto the sidewalls of a via. The barrier material deposited on the sidewalls during sputtering forms a new barrier layer that advantageously prevents copper contamination of the dielectric layer caused by copper being deposited on the sidewalls when copper from the copper metallization level is also subsequently sputtered off. The sputtering process also advantageously provides a via with round corners, which reduce the formation of voids in the via.

In an additional embodiment, the present invention addresses and solves the problem of contamination during dual damascene processing from copper being deposited onto silicon oxide dielectric layers as a result of reverse physical sputtering, which is used to round corners of vias and trenches and to clean contaminants on the copper metallization level below the via. This is achieved, in part, by providing a second etch stop layer, a third etch stop layer, and a diffusion barrier layer below the third etch stop layer.
Advantageously, after the second etch stop layer is removed using conventional etching techniques, the third etch stop layer and the diffusion barrier layer are sputtered off during the reverse physical sputtering process. Importantly, the material of the diffusion barrier layer that is sputtered off is then deposited onto the exposed portions of the dielectric layers and creates a sidewall diffusion barrier. This is accomplished before the copper from the copper layer is sputtered off onto the dielectric layers. Thus, once the copper layer is reached during the sputtering process and copper is then sputtered off, the copper will be deposited on a diffusion barrier layer and not on the dielectric layers. Furthermore, the present invention addresses problems associated with the high capacitance of inter-metal dielectric layers. This is achieved, in part, by providing first and second dielectric layers formed from low-k dielectric materials.

The additional embodiment of the present invention is illustrated in FIGS. 5A-5N. The dual damascene process to be described is illustrative of one sequence of steps that can be used to practice the invention. In particular, the process provides a dual damascene structure, which includes a first metallization level, over which first and second dielectric layers are disposed, and the first and second dielectric layers respectively include a via and trench filled with a conductive material. However, the invention is not limited to the particular sequence of steps described to provide the dual damascene structure, as other sequences of steps capable of providing the dual damascene structure can be used to practice the invention.

As illustrated in FIG. 5A, a first diffusion barrier layer 111 is formed over a first metallization level 110. The first diffusion barrier layer 111 can be formed from any material that prevents diffusion of the material from the metallization level 110 into a subsequently formed dielectric layer.
For example, in current embodiments of the invention, the first metallization level 110 is formed from a Cu or Cu-based alloy. As such, the preferred first diffusion barrier layer 111 for use with Cu or Cu-based alloys acts as a diffusion barrier to Cu. The first diffusion barrier layer 111 can also act as a passivation layer that protects the first metallization level 110 from oxidation and contamination.

The thickness of the first diffusion barrier layer 111 depends upon several factors, which include the depth of a subsequently formed via and trench in the dielectric layers over the first diffusion barrier layer 111 and the percentage of the material of the first diffusion barrier layer 111 that is deposited onto the sidewalls of the dielectric layers. As such, the thickness of the first diffusion barrier layer 111 must be enough so that when the first diffusion barrier layer 111 is subsequently sputtered off, enough of the material of the first diffusion barrier layer 111 is deposited on the sidewalls of the dielectric layers to form an effective diffusion barrier from the material of the first metallization level 110. Also, the thickness of the first diffusion barrier layer 111 is preferably sufficient to act as an etch stop and not allow the etchant of the second etch stop layer to reach the first metallization level 110. In current embodiments of the invention, the thickness of the first diffusion barrier layer 111 is at least 50 angstroms and is preferably from about 100 to about 200 angstroms.

In an aspect of the invention, the first diffusion barrier layer 111 is formed from silicon nitride although the invention is not limited in this manner. Silicon nitride advantageously acts as a diffusion barrier to copper and also as a passivation layer. Furthermore, silicon nitride acts as an etch stop to an etchant that etches silicon oxide.
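The thickness budget described above, in which the barrier thickness must account for via depth and the percentage of sputtered material that redeposits on the sidewalls, can be sketched as a back-of-the-envelope calculation. The cylindrical geometry model, redeposition fraction, and via dimensions below are illustrative assumptions, not values from this disclosure:

```python
import math

def sidewall_coverage_angstroms(barrier_thickness_a, redeposition_fraction,
                                via_depth_a, via_diameter_a):
    """Estimate sidewall barrier coverage from a sputtered-off bottom layer.

    Models the via as a cylinder: barrier material sputtered from the via
    bottom (area ~ pi*r^2) is assumed to redistribute uniformly over the
    via sidewall (area ~ pi*d*h). All lengths are in angstroms.
    """
    bottom_area = math.pi * (via_diameter_a / 2.0) ** 2
    sidewall_area = math.pi * via_diameter_a * via_depth_a
    redeposited_volume = (barrier_thickness_a * bottom_area
                          * redeposition_fraction)
    return redeposited_volume / sidewall_area

# Example: a 150-angstrom barrier (mid-range of the 100-200 angstroms cited
# above), with a hypothetical 50% of the sputtered material landing on the
# sidewalls of a 5000-angstrom-deep, 2500-angstrom-wide via.
coverage = sidewall_coverage_angstroms(150.0, 0.5, 5000.0, 2500.0)
```

Under these assumptions the sidewall receives roughly 9 angstroms of barrier material; deeper vias or lower redeposition fractions would call for a thicker starting layer, consistent with the need to size the barrier thickness to the via geometry.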
Any process capable of depositing the first diffusion barrier layer 111 is acceptable for use with the invention, and an illustrative process for depositing silicon nitride is PECVD.

In FIG. 5B, a third etch stop layer 113 is deposited over the first diffusion barrier layer 111. The third etch stop layer 113 acts as an etch stop during etching of a second etch stop layer subsequently formed over the third etch stop layer 113. The thickness of the third etch stop layer 113 is preferably sufficient to act as an etch stop and not allow the etchant of the second etch stop layer to reach the first diffusion barrier layer 111. In an aspect of the invention, the thickness of the third etch stop layer 113 is at least 50 angstroms, and in an additional aspect of the invention, the thickness of the third etch stop layer 113 is preferably from about 80 to about 120 angstroms.

In current embodiments of the invention, the third etch stop layer 113 is formed from silicon oxide although the invention is not limited in this manner. Silicon oxide advantageously acts as an etch stop to an etchant that etches silicon nitride. Any process capable of depositing the third etch stop layer 113 is acceptable for use with the invention, and an illustrative process for depositing silicon oxide is CVD.

In FIG. 5C, a second etch stop layer 112 is deposited over the third etch stop layer 113. The second etch stop layer 112 acts as an etch stop during subsequent etching of the dielectric layer formed above the second etch stop layer 112. In an aspect of the invention, the second etch stop layer 112 is formed from silicon nitride although the invention is not limited in this manner. Silicon nitride, however, has the advantage of acting as an etch stop to many etchants used to etch low-k dielectric materials.

The thickness of the second etch stop layer 112 is preferably sufficient to act as an etch stop during etching of the dielectric layer.
In an aspect of the invention, the thickness of the second etch stop layer 112 is at least 50 angstroms, and in another aspect of the invention, the thickness of the second etch stop layer 112 is from about 400 to about 600 angstroms. Any process capable of depositing the second etch stop layer 112 is acceptable for use with the invention, and an illustrative process for depositing silicon nitride is PECVD.

In FIG. 5D, a first dielectric layer 114 is deposited over the second etch stop layer 112. The first dielectric layer 114 can be formed from any material capable of acting as a dielectric, and illustrative materials include silicon oxide and silicon nitride. In one aspect of the invention, the first dielectric layer 114 is formed from a low-k dielectric material. Illustrative examples of low-k dielectric materials include fluorosilicate glass (FSG or SiOF), hydrogenated diamond-like carbon (DLC), polystyrene, fluorinated polyimides, parylene (AF-4), polyarylene ether, and polytetrafluoroethylene. In another aspect of the invention, the first dielectric layer 114 is formed from a porous low-k dielectric material, such as siloxanes, silsesquioxanes, aerogels, and xerogels. These low-k dielectric materials can be applied via conventional spin-coating, dip coating, spraying, and meniscus coating methods, in addition to other coating methods that are well known in the art.

In FIG. 5E, a first etch stop layer 140 is deposited over the first dielectric layer 114. The first etch stop layer 140 acts as an etch stop during subsequent etching of the dielectric layer formed above the first etch stop layer 140. In an aspect of the invention, the first etch stop layer 140 is formed from silicon carbide although the invention is not limited in this manner.
However, as with the second etch stop layer 112, the dielectric constant of silicon carbide is lower than the dielectric constant of other etch stop materials, such as silicon nitride, and thereby lowers the combined capacitance of the inter-metal dielectric layers.

The thickness of the first etch stop layer 140 is preferably sufficient to act as an etch stop during etching of the dielectric layer formed above the first etch stop layer 140. In one aspect of the invention, the thickness of the first etch stop layer 140 is at least 50 angstroms and is preferably from about 400 to about 600 angstroms. Any process capable of depositing the first etch stop layer 140 is acceptable for use with the invention, and an illustrative deposition process is PECVD.

In FIG. 5F, a second dielectric layer 142 is deposited over the first etch stop layer 140. As with the first dielectric layer 114, the second dielectric layer 142 can be formed from any material suitable for use as a dielectric. In one aspect of the invention, however, the second dielectric layer 142 is formed from a low-k dielectric material, and in another aspect of the invention, the second dielectric layer 142 is formed from a porous low-k dielectric material.

After formation of the second dielectric layer 142, a capping layer 115 can be formed over the second dielectric layer 142. The function of the capping layer 115 is to protect the second dielectric layer 142 from the process that removes a subsequently formed resist layer, and any material so capable is acceptable for use with the invention. The capping layer 115 can also be used as a mechanical polishing stop to prevent damage to the second dielectric layer 142 during subsequent polishing away of conductive material that is deposited over the second dielectric layer 142 and in a subsequently formed via and trench. Examples of materials used as a capping layer 115 include silicon oxide and silicon nitride.
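The capacitance benefit of a lower-k etch stop, noted above for silicon carbide versus silicon nitride, can be illustrated with a simple series-capacitor estimate: dielectric layers stacked between metallization levels combine like capacitors in series, so t_total / k_eff = sum(t_i / k_i). The thicknesses and dielectric constants below are illustrative, textbook-typical assumptions rather than values from this disclosure:

```python
def effective_k(layers):
    """Effective dielectric constant of stacked layers (series capacitors).

    layers: list of (thickness_angstroms, dielectric_constant) tuples.
    """
    total_thickness = sum(t for t, _ in layers)
    return total_thickness / sum(t / k for t, k in layers)

LOW_K = 2.7  # hypothetical low-k inter-metal dielectric

# A 500-angstrom etch stop sandwiched between two 5000-angstrom low-k layers.
stack_with_nitride = [(5000.0, LOW_K), (500.0, 7.0), (5000.0, LOW_K)]
stack_with_carbide = [(5000.0, LOW_K), (500.0, 4.5), (5000.0, LOW_K)]

k_nitride = effective_k(stack_with_nitride)
k_carbide = effective_k(stack_with_carbide)
```

With these assumed numbers the silicon carbide stack yields a slightly lower effective k (about 2.75 versus 2.78 for nitride); the gap widens as the etch stop becomes a larger fraction of the total stack thickness.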
In an aspect of the invention, the capping layer 115 is formed from silicon oxide and has a thickness of at least 50 angstroms. In another aspect of the invention, the thickness of the capping layer is from about 400 to about 600 angstroms.

In FIG. 5G, the pattern of the vias is formed in the second dielectric layer 142 using conventional lithographic techniques, for example, optical lithography (including, for example, I-line and deep-UV), X-ray, and E-beam lithography, followed by etching. The lithographic process involves depositing a resist 144 over the second dielectric layer 142 and exposing and developing the resist 144 to form the desired pattern of the vias. The first etch, which is highly selective to the material of the second dielectric layer 142 and capping layer 115, removes the capping layer 115 and second dielectric layer 142 until the etchant reaches the first etch stop layer 140. The first etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the second dielectric layer 142 directly below the opening in the resist 144.

In FIG. 5H, a second etch, which is highly selective to the material of the first etch stop layer 140, removes the first etch stop layer 140 until the etchant reaches the first dielectric layer 114. The second etch is also typically an anisotropic etch.

In FIG. 5I, the vias 116 are formed in the first dielectric layer 114 and the trenches 146 of the second metallization level are formed in the second dielectric layer 142 using conventional lithographic and etch techniques. The lithographic process involves depositing a resist 150 over the second dielectric layer 142 and exposing and developing the resist 150 to form the desired pattern of the trenches 146.
The third etch, which is highly selective to the material of the capping layer 115 and first and second dielectric layers 114, 142, removes the first dielectric layer 114 until the etchant reaches the second etch stop layer 112 and removes the second dielectric layer 142 until the etchant reaches the first etch stop layer 140. The third etch is typically an anisotropic etch, such as a reactive ion plasma dry etch, that removes only the exposed portions of the first dielectric layer 114 directly below the opening in the first etch stop layer 140 and the exposed portions of the second dielectric layer 142 directly below the opening in the resist 150. By using an anisotropic etch, the via 116 and the trench 146 can be formed with substantially perpendicular sidewalls.

In FIG. 5J, a fourth etch, which is highly selective to the material of the first and second etch stop layers 140, 112, removes the second etch stop layer 112 until the etchant reaches the third etch stop layer 113 and removes the first etch stop layer 140 until the etchant reaches the first dielectric layer 114. The fourth etch is also typically an anisotropic etch.

In FIG. 5K, a reverse sputtering process etches through the third etch stop layer 113 and the first diffusion barrier layer 111 to expose the first metallization level 110. After the third etch stop layer 113 has been removed, the sidewalls of the via 116 and trench 146 include material from the third etch stop layer 113. When the first and second dielectric layers 114, 142 are formed from a porous material, the material of the third etch stop layer 113 fills the exposed pores on the sidewalls of the via. These pores would otherwise act as entry points for material to contaminate the first and second dielectric layers 114, 142.
Filling these pores advantageously reduces contamination of the first and second dielectric layers 114, 142 during sputtering of the first metallization level 110, which deposits material of the first metallization level 110 onto the sidewalls of the via 116 and trench 146.

During the sputtering of the first diffusion barrier layer 111, material of the first diffusion barrier layer 111 liberated during the sputtering process is deposited on the sidewalls of the via 116 and trench 146. The material of the first diffusion barrier layer 111 deposited on the sidewalls of the via 116 and trench 146 forms a sidewall diffusion barrier layer 119. This sidewall diffusion barrier layer 119 acts as a diffusion barrier that prevents the material of the first metallization level 110 from diffusing into the first and second dielectric layers 114, 142 after the sputtering process reaches the first metallization level 110 and the material of the first metallization level 110 is sputtered off.

The reverse sputtering process also advantageously rounds the corners 118 of the via 116 and trench 146. The corners 118 of the via 116 and trench 146 are rounded to prevent problems associated with subsequent deposition of the conductive plug and second metallization level, and if necessary, a barrier layer. For example, when the material of the conductive plug or the barrier layer is deposited in a via 116 or trench 146 having sharp corners 118, the material tends to build up more quickly at the corners 118 than at the vertical sidewalls of the via 116 and trench 146. Consequently, the material at opposing corners 118 can form cantilevered bridges that eventually meet in the middle of the via 116 or trench 146. When this occurs, the via 116 or trench 146 is blocked and further deposition of material within the via 116 or trench 146 is prevented, thereby leaving a void in the via 116 or trench 146. The creation of such a void can disadvantageously cause a malfunction in the semiconductor device.
However, by rounding the corners 118 of the via 116 and trench 146, excess buildup of material at the corners 118 is counteracted and the problem of void creation is reduced.

The reverse sputtering process can also be used to clean the first metallization level 110 at the bottom of the via 116. As such, any dielectric material or contaminants formed over the first metallization level 110 can be removed by the reverse sputtering process to allow for good ohmic contact between the material of the conductive plug and the material of the first metallization level 110.

In FIG. 5L, an adhesion/barrier material, such as tantalum, titanium, tungsten, tantalum nitride, or titanium nitride, is deposited in the via 116 and trench 146 and over the sidewall diffusion barrier layer 119. The combination of the adhesion and barrier material is collectively referred to as a second diffusion barrier layer 120. The second diffusion barrier layer 120 acts to prevent diffusion into the first and second dielectric layers 114, 142 of the conductive material subsequently deposited into the via 116 and trench 146.

In FIG. 5M, a layer 122 of a conductive material is deposited into the via 116 and trench 146 and over the capping layer 115. In current embodiments of the invention, the conductive material is a Cu or Cu-based alloy, and any process capable of depositing Cu into the via 116 and trench 146 is acceptable for use with this invention. An illustrative example of a process acceptable for use with this invention involves depositing a "seed" layer on the second diffusion barrier layer 120. After the seed layer has been formed, conventional plating techniques, e.g., electroless or electroplating techniques, are used to fill the via 116 and trench 146.
So as to ensure complete filling of the via 116 and trench 146, the Cu-containing conductive layer 122 is deposited as a blanket (or "overburden") layer 124 so as to overfill the trench 146 and cover the upper surface 152 of the capping layer 115.

In FIG. 5N, the entire excess thickness of the metal overburden layer 124 over the upper surface 152 of the capping layer 115 is removed using a CMP process. A typical CMP process utilizes an alumina (Al2O3)-based slurry, which leaves a conductive plug in the via 116 and a second metallization level in the trench 146. The second metallization level has an exposed upper surface 158, which is substantially co-planar with the upper surface 152 of the capping layer 115.

By providing a barrier layer above a copper metallization level, the material of the barrier layer can be subsequently sputtered onto the sidewalls of a via and trench. The barrier material deposited onto the sidewalls during sputtering forms a new barrier layer that advantageously prevents copper contamination of the dielectric layers caused by copper being deposited onto the sidewalls when copper from the copper metallization level is also subsequently sputtered off. The sputtering process also advantageously provides a via and trench with round corners, which reduce the formation of voids in the via or trench.

The present invention can be practiced by employing conventional materials, methodology and equipment. Accordingly, the details of such materials, equipment and methodology are not set forth herein in detail. In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, it should be recognized that the present invention can be practiced without resorting to the details specifically set forth.
In other instances, well known processing structures have not been described in detail, in order not to unnecessarily obscure the present invention.

Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other combinations and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein.
A semiconductor device which includes a power electrode on a surface thereof, a solderable body on the power electrode and a passivation body spaced from but surrounding the solderable body.
WHAT IS CLAIMED IS:

1. A semiconductor device comprising: a semiconductor die having a first major surface and an opposing second major surface; a first power electrode on said first major surface having at least one solderable body formed on a portion thereof; a control electrode on said first major surface having at least one solderable body formed on a portion thereof; and a passivation body formed on said first power electrode and including an opening to expose said at least one solderable body on said first power electrode, said opening being wider than said at least one solderable body whereby said at least one solderable body is spaced from said passivation by a gap which surrounds said at least one solderable body on said first power electrode.

2. A semiconductor device according to claim 1, wherein said passivation body includes another opening to expose said at least one solderable body on said control electrode.

3. A semiconductor device according to claim 1, further comprising a plurality of solderable bodies formed on said first power electrode, and a plurality of openings in said passivation body, each said opening exposing a respective solderable body on said first power electrode, and being wider than said respective solderable body whereby said respective solderable body is spaced from said passivation by a gap which surrounds said respective solderable body on said first power electrode.

4. A semiconductor device according to claim 1, wherein said passivation body is thicker than said at least one solderable body on said first power electrode whereby said at least one solderable body does not extend beyond said passivation body.

5. A semiconductor device according to claim 1, wherein said at least one solderable body on said first electrode includes silver.

6.
A semiconductor device according to claim 1, wherein said at least one solderable body on said first electrode is comprised of a solderable trimetal, a top portion of said trimetal being composed of silver.

7. A semiconductor device according to claim 1, further comprising a second power electrode on said second major surface, and a conductive clip, said second power electrode being electrically connected to said conductive clip by a conductive adhesive.

8. A semiconductor device according to claim 7, wherein said conductive clip includes silver on an exterior surface thereof.

9. A semiconductor device according to claim 7, wherein said conductive clip is cup-shaped.

10. A semiconductor device according to claim 1, further comprising a second power electrode on said first major surface, and at least one solderable body on said second power electrode; wherein said passivation includes an opening to expose said solderable body on said second electrode, said opening being wider than said at least one solderable body whereby said at least one solderable body on said second power electrode is spaced from said passivation by a gap which surrounds said at least one solderable body on said second power electrode.

11. A semiconductor device according to claim 1, wherein said semiconductor die is a power MOSFET, said first power electrode is a source electrode and said control electrode is a gate electrode.

12. A semiconductor device according to claim 1, wherein said passivation is comprised of epoxy-based passivation.

13.
A semiconductor device comprising: a semiconductor die having one side thereof configured for direct connection to a conductive pad with a conductive adhesive, said one side including at least one power electrode, a passivation body formed on said at least one electrode, an opening in said passivation body exposing said at least one electrode, a solderable body formed on said at least one electrode, said solderable body being less wide than said opening whereby a gap exists between said passivation and said solderable body.

14. A semiconductor device according to claim 13, wherein said one side further includes a control electrode, and a solderable body formed over said control electrode, wherein said passivation body includes an opening exposing said solderable body on said control electrode.

15. A semiconductor device according to claim 13, wherein said one side further includes another power electrode, and a solderable body on said another power electrode, wherein said passivation body includes an opening exposing said solderable body on said another power electrode, said solderable body being less wide than said opening whereby a gap exists between said passivation and said solderable body on said another power electrode.

16. A semiconductor device according to claim 13, wherein said semiconductor die is a diode.

17. A semiconductor device according to claim 13, wherein said semiconductor die is a power MOSFET.

18. A semiconductor device according to claim 13, further comprising a plurality of solderable bodies on said at least one power electrode and spaced from one another, wherein said passivation includes a plurality of openings each being wider than and exposing a respective solderable body whereby a gap exists between each respective solderable body and said passivation.

19. A semiconductor device according to claim 13, wherein said solderable body includes silver.

20.
A semiconductor device according to claim 13, wherein said passivation is comprised of an epoxy.
PREPARATION OF FRONT CONTACT FOR SURFACE MOUNTING

RELATED APPLICATION

[0001] This application is based on and claims benefit of United States Provisional Application No. 60/575,656, filed on May 28, 2004, entitled Preparation of Front Contact for Surface Mounting, to which a claim of priority is hereby made and the disclosure of which is incorporated by reference.

BACKGROUND OF THE INVENTION

[0002] The present invention relates to semiconductor devices. Chip-scale packaging is a concept driven by the idea of devising a semiconductor package which is nearly the size of the die contained therein. U.S. Patent No. 6,624,522 illustrates several chip-scale packages, each of which includes a power semiconductor die, such as a power MOSFET, with at least one power electrode configured for direct electrical and mechanical connection to conductive pads on a substrate, such as a circuit board, by a conductive adhesive body such as solder, conductive epoxy or the like. To facilitate such a direct connection, a solderable body is formed on the power electrode in contact with a passivation body, which itself resides over the power electrode. It has been found that some metals in the solderable body, such as silver, form dendrites after a period of use. The dendrites damage the passivation body, and in some cases may undesirably short the power electrode to a nearby conductive body. For example, in a power semiconductor package having a die disposed within a conductive clip, the dendrites may grow long enough to short the power electrode to the conductive clip. This condition may be worse when the conductive clip also includes a metal that exhibits a tendency to form dendrites, such as silver. It is desirable to avoid the damage in order to ensure longer service life for the power semiconductor device.
SUMMARY OF THE INVENTION

[0006] In a device according to the present invention a gap exists between the passivation and the solderable body in order to prevent the formation of dendrites, and thus improve the service life of the device. Specifically, a semiconductor device according to the present invention includes a semiconductor die having one side thereof configured for direct connection to a conductive pad with a conductive adhesive, the one side including at least one power electrode, a passivation body formed on the at least one electrode, an opening in the passivation body exposing the at least one electrode, and a solderable body formed on the at least one electrode, the solderable body being less wide than the opening whereby a gap exists between the passivation and the solderable body.

The preferred embodiment of the present invention includes: a semiconductor die having a first major surface and an opposing second major surface; a first power electrode on the first major surface having at least one solderable body formed on a portion thereof; a control electrode on the first major surface having at least one solderable body formed on a portion thereof; and a passivation body formed on the first power electrode and including an opening to expose the at least one solderable body on the first power electrode, the opening being wider than the at least one solderable body whereby the at least one solderable body is spaced from the passivation by a gap which surrounds the at least one solderable body on the first power electrode.

Other features and advantages of the present invention will become apparent from the following description of the invention which refers to the accompanying drawings.

BRIEF DESCRIPTION OF THE FIGURES

[0010] Figure 1 shows a top plan view of a semiconductor device according to the first embodiment of the present invention.
Figure 2 shows a cross-sectional view of a device according to the first embodiment of the present invention along line 2-2 and viewed in the direction of the arrows. Figure 3 shows a top plan view of a semiconductor device according to the second embodiment of the present invention. Figure 4 shows a top plan view of a semiconductor device according to the third embodiment of the present invention. Figure 5 shows a top plan view of a package according to the present invention. Figure 6 shows a bottom plan view of a package according to the present invention. Figure 7 shows a cross-sectional view of a package according to the present invention along line 7-7 and viewed in the direction of the arrows as mounted on conductive pads of a substrate. Figure 8 shows a top plan view of a wafer having a plurality of die. Figure 9 shows a top plan view of a wafer having a plurality of die after electrodes have been formed thereon. Figure 10 shows portion 5-5 of the wafer in Figure 4 after formation of a plurality of solderable layers. Figure 11 shows portion 5-5 after formation of a passivation. Figure 12 shows portion 5-5 of the wafer after openings have been formed in the passivation over each solderable layer. DETAILED DESCRIPTION OF THE EMBODIMENTS OF THE INVENTION [0022] Referring to Figures 1 and 2, a semiconductor device according to the present invention includes a semiconductor die 10 having first power electrode 12 and control electrode 14 on a first major surface thereof. According to a first embodiment of the present invention at least one solderable body 16 is formed on first power electrode 12 and at least one solderable body 16 is formed on control electrode 14.
Furthermore, in a device according to the present invention, a passivation body 18, which is formed preferably from an epoxy that can also function as a solder resist, is disposed on first power electrode 12 and control electrode 14, and includes opening 20 to expose solderable body 16 on first power electrode 12 and opening 22 to expose solderable body 16 on control electrode 14. In the preferred embodiment, electrodes 12, 14 are formed from aluminum or aluminum silicon, and solderable bodies 16 are formed from a trimetal stack or any solderable material that may tend to form dendrites. The trimetal stack may include a silver layer at the top thereof, such as a Ti/Pd/Ag trimetal stack. According to an aspect of the present invention, opening 20 is wider than solderable body 16. As a result, solderable body 16 is spaced from passivation 18 by a gap 24 which surrounds solderable body 16. It should be noted that in the preferred embodiment, opening 22 is also wider than solderable body 16 on control electrode 14, whereby gap 26 is created between passivation body 18 and solderable body 16 on control electrode 14. In the preferred embodiment, passivation body 18 includes a plurality of openings 20, each being wider than and exposing a respective solderable body 16 on first power electrode 12, whereby a respective gap 24 is formed between each solderable body 16 and passivation body 18. Also, in the preferred embodiment, passivation body 18 is thicker than solderable bodies 16. As a result, solderable bodies 16 do not extend beyond passivation body 18. That is, each solderable body 16 is preferably disposed at the bottom of its respective opening 20 and does not reach the top thereof. A semiconductor device according to the embodiment shown by Figures 1 and 2 can be of a vertical conduction variety and thus includes second power electrode 28 on a second major surface thereof opposite to the first major surface.
For example, a device according to the embodiment shown by Figures 1 and 2 can be a power MOSFET in which first power electrode 12 is the source electrode, second power electrode 28 is the drain electrode, and control electrode 14 is the gate electrode. A device according to the present invention is not limited to vertical conduction type devices. Referring to Figure 3, in which like numerals identify like features, a device according to the second embodiment may be of the flip-chip variety, in which case first power electrode 12, second power electrode 28, and control electrode 14 are disposed on a common surface of die 10. A device according to the second embodiment may be a power device such as a power MOSFET, in which case first power electrode 12 is the source electrode, second power electrode 28 is the drain electrode and control electrode 14 is the gate electrode. Referring next to Figure 4, in which like numerals identify like elements, a semiconductor device according to the third embodiment includes only a single power electrode 30 on a major surface thereof, and unlike the first embodiment and the second embodiment does not include a control electrode. A device according to the third embodiment can be, for example, a vertical conduction type diode in which one of its power electrodes (i.e., either the anode electrode or the cathode electrode) includes passivation body 18 on a surface thereof with openings over solderable bodies 16, each opening being wider than the respective solderable body 16 that it surrounds, and passivation 18 being preferably thicker than solderable bodies 16. All three embodiments are similar in that in each case all of the electrodes on one side are configured for direct connection with a conductive adhesive such as solder or conductive epoxy to a conductive pad on a substrate such as a circuit board.
That is, solderable bodies 16 are provided on all electrodes on the same surface to allow for direct connection to a conductive pad on a substrate, while advantageously a gap 24 between each solderable body 16 and passivation body 18 prevents the formation of dendrites. Referring next to Figures 5, 6 and 7, a semiconductor device according to the present invention can be packaged using a conductive clip 32 according to the concept shown by U.S. Patent No. 6,624,522. For example, a semiconductor device according to the first embodiment can have its second power electrode 28 electrically connected to the web portion 34 of a cup-shaped or can-shaped conductive clip 32 by a conductive adhesive 44 such as solder or conductive epoxy. Thus, conductive clip 32 can act as an electrical connector for external electrical connection to second power electrode 28. Conductive clip 32 is preferably made from copper or an alloy of copper and may include gold or silver on its exterior surface. Preferably, conductive clip 32 includes a rim 36 which is integral with web portion 34 and defines an interior space within which a semiconductor device according to the present invention is received. Note that rim 36 acts as an electrical connector between web portion 34 (which is electrically connected to second power electrode 28) and preferably two terminal connection surfaces 38. Connection surfaces 38 serve to electrically connect conductive clip 32 to conductive pads 40 on a substrate 42 such as a circuit board. Note that connection surfaces 38 are electrically connected to pads 40 by a conductive adhesive 44 such as solder or a conductive epoxy. Also, as explained above, a semiconductor device according to the present invention is configured in order to have the electrodes on one side thereof directly electrically connected to the conductive pads of a substrate.
Thus, as seen in Figure 7, first power electrode 12 is electrically connectable to a respective conductive pad 46 by a conductive adhesive 44 such as solder or a conductive epoxy, and control electrode 14 is similarly electrically connectable to a respective conductive pad 48 on substrate 42. A semiconductor device according to the present invention may be manufactured according to the following process. Referring to Figure 8, first a plurality of die 10 are formed in a wafer 50 in a conventional manner. Thus, for example, in the preferred embodiment, a plurality of vertical conduction type power MOSFETs are formed in any known manner in a silicon wafer. Next, a contact metal layer is deposited and patterned in any known conventional manner. Thus, in the preferred embodiment a front metal layer is deposited over wafer 50 in which the MOSFETs are formed, and patterned to form first power electrode 12 (hereafter source contact or source electrode) and control electrode 14 (hereafter gate contact or gate electrode) for each die 10 as shown by Figure 9. A suitable front metal for this purpose may be Al or AlSi. Next, a solderable front metal is deposited over the contact metal layer. The solderable front metal may be any suitable metal combination such as the trimetal combination Ti/Pd/Ag. In the preferred embodiment, the solderable front metal layer includes a top layer of silver. Thereafter, the solderable front metal layer is patterned, leaving at least one solderable body 16 over each contact, e.g., source contact 12, as illustrated by Figure 10. Thus, in the preferred embodiment, the solderable front metal is patterned to result in at least one solderable body 16 on gate electrode 14 and source electrode 12, or preferably a plurality of solderable bodies 16 over source electrode 12.
Thereafter, a back metal contact (not shown) is deposited over the back of the wafer 50 if such is required for a second power electrode for each die. Thus, for example, in the preferred embodiment, a drain back metal is formed on the back of the wafer. The drain back metal may be formed of Al or AlSi and further processed to include a solderable trimetal combination. Next, a passivation body 18 is formed over the front side of wafer 50 as illustrated in Figure 11 by slanted lines. Passivation body 18 may be any suitable epoxy passivation which may also be able to act as a solder resist. The epoxy passivation may be screen printed. Thus, in the preferred embodiment, a suitable epoxy passivation may be formed over source electrodes 12 and gate electrodes 14. Thereafter, passivation 18 is removed from the top of each solderable body 16 over each contact. The removal of passivation 18 creates openings 20, 22 that extend to the contact layer below. Thus, in the preferred embodiment of the present invention, an opening is created in passivation 18 over each source electrode 12 and an opening is created over gate electrode 14, exposing respective solderable bodies thereon as seen in Figure 12. According to an aspect of the present invention, openings 20 and preferably openings 22 are created wide enough so that each solderable body 16 may be spaced from passivation 18 by a respective gap. Next, each die is singulated by any known method, such as sawing. Each singulated die may then be packaged in a conductive clip 32 to obtain a semiconductor package as described herein. Although the present invention has been described in relation to particular embodiments thereof, many other variations and modifications and other uses will become apparent to those skilled in the art. It is preferred, therefore, that the present invention be limited not by the specific disclosure herein, but only by the appended claims.
The application provides techniques for the analysis and debugging of graphics applications. For instance, an apparatus may include a graphics application program interface (API), a graphics engine, and a graphics analysis tool. The graphics analysis tool may receive multiple draw calls issued to the graphics API, and arrange the draw calls into multiple sequences, each sequence corresponding to a particular render target. From this information various analysis tasks may be performed. For instance, overdraw images may be generated and such overdraw images may be enhanced to improve their dynamic range. Also, pixel histories may be generated based on corresponding pixel selections and the effect of draw calls on selected pixels may also be determined. These tasks may also be performed on a per render target basis.
CLAIMS 1. A method comprising: generating a pixel history for a pixel, the pixel history including a sequence of graphics API calls; and determining a number of times a draw call within the sequence of graphics API calls causes the pixel to be written to. 2. The method of claim 1, wherein the sequence of graphics API calls corresponds to a render target. 3. The method of claim 2, further comprising receiving a user selection of the render target. 4. The method of claim 1, further comprising: creating an overdraw image based on a further sequence of graphics application program interface (API) draw calls; and based on the overdraw image, selecting the pixel. 5. The method of claim 4, wherein said creating an overdraw image comprises incrementing a pixel hit count each time the pixel is written to. 6. The method of claim 1, wherein said determining is performed with one or more graphics pipeline tests disabled. 7. An apparatus, comprising: a graphics application program interface (API); a graphics engine; and a graphics analysis tool to receive a plurality of draw calls issued to the graphics API, arrange the plurality of draw calls into a plurality of draw call sequences, each of the plurality of sequences corresponding to one of a plurality of render targets, select one of the plurality of render targets, and create an overdraw image based on the sequence of draw calls corresponding to the selected render target. 8. The apparatus of claim 7, wherein the graphics analysis tool is to: based on the overdraw image, select a pixel within the selected render target; and generate a pixel history for the selected pixel. 9. The apparatus of claim 7, further comprising a graphics application to generate the plurality of draw calls. 10. The apparatus of claim 7, wherein the graphics engine comprises a graphics pipeline. 11. The apparatus of claim 7, wherein the graphics analysis tool comprises a database to store the plurality of draw calls. 12. 
The apparatus of claim 7, wherein the graphics analysis tool comprises a database to store a rendered frame. 13. The apparatus of claim 8, wherein the pixel history includes a sequence of draw calls that affect the selected pixel. 14. A method, comprising: storing a plurality of graphics application program interface (API) draw calls; arranging the plurality of graphics API draw calls into a plurality of sequences of draw calls, each of the plurality of sequences corresponding to one of a plurality of render targets; receiving a user selection of one of the plurality of render targets; and creating an overdraw image based on the sequence of draw calls corresponding to the selected render target. 15. The method of claim 14, further comprising: based on the overdraw image, receiving a user selection of a pixel within the selected render target; and generating a pixel history for the selected pixel. 16. A method, comprising: receiving a user selection of one of a plurality of render targets; receiving a user selection of a pixel within the selected render target; receiving a user selection of a graphics application program interface (API) draw call corresponding to the selected pixel; and determining a number of times the draw call causes the selected pixel to be written to. 17. The method of claim 16, further comprising: generating an image indicating the number of times the draw call causes the selected pixel to be written to. 18. The method of claim 16, wherein said determining is performed with one or more graphics pipeline tests disabled. 19. An article comprising a machine-accessible medium having stored thereon instructions that, when executed by a machine, cause the machine to: generate a pixel history for a pixel, the pixel history including a sequence of graphics API calls; and determine a number of times a draw call within the sequence of graphics API calls causes the pixel to be written to. 20. 
The article of claim 19, wherein the sequence of graphics API calls corresponds to a render target.
GRAPHICS ANALYSIS TECHNIQUES BACKGROUND The graphics employed by computer applications are becoming increasingly complex. For example, gaming applications commonly provide three-dimensional graphics that are animated on a real-time basis. Such graphics continue to become progressively more realistic. As the intricacy of such graphics increases, so does the challenge for application developers. For instance, developers must debug graphics renderings that do not operate or appear correctly. Also, developers must deal with limited processing capacities. Therefore, the processing loads imposed by graphics renderings need to be analyzed and often improved to fit within such limited capacities. Tools to test, analyze, and debug graphics are important for the development of graphics applications. BRIEF DESCRIPTION OF THE DRAWINGS In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the reference number. The present invention will be described with reference to the accompanying drawings, wherein: FIG. 1 is a diagram of an exemplary operational environment; FIG. 2 is a diagram of an implementation that may be included in a graphics analysis tool; FIG. 3 is a diagram of an exemplary user interface; FIGs. 4-9 are logic flow diagrams; and FIG. 10 is a diagram of an exemplary platform. DETAILED DESCRIPTION Embodiments provide techniques for the analysis of graphics applications. For instance, an apparatus may include a graphics application program interface (API), a graphics engine, and a graphics analysis tool. The graphics analysis tool may receive multiple draw calls issued to the graphics API, and arrange the draw calls into multiple sequences, each sequence corresponding to a particular render target. From this information various analysis tasks may be performed. 
For instance, overdraw images may be generated. Also, pixel histories may be generated based on corresponding pixel selections. Further, the effect of draw calls on selected pixels may also be determined. Moreover, such tasks may be performed on a per render target basis. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. FIG. 1 is a diagram of an exemplary operational environment 100, which may employ the techniques described herein. Environment 100 may include various elements. For instance, FIG. 1 shows environment 100 including a graphics application 102, a graphics application program interface (API) 104, a graphics engine 106, and a graphics analysis tool 110. These elements may be implemented in any combination of hardware and/or software. Moreover, embodiments are not limited to the elements of FIG. 1. In embodiments, graphics application 102 is a computer application (e.g., a user application) that employs graphics (e.g., three-dimensional graphics) that are output on one or more displays. Exemplary user applications include (but are not limited to) video games and simulation tools. Graphics API 104 provides graphics application 102 with services of graphics engine 106. In embodiments, this may be provided through various routines, data structures, object classes, and/or protocols. Such employments of graphics API 104 are referred to herein as "draw calls". Embodiments may employ (but are not limited to) commercially available APIs. 
Examples of such APIs include OpenGL, DirectX, and others. In general operation, graphics application 102 may employ multiple render targets in the generation of graphics. A render target is a buffer (maintained by graphics engine 106) affected by drawing operations that graphics application 102 initiates through graphics API 104. Multiple render targets may be employed in various ways. For instance, graphics application 102 may employ multiple render targets in which each render target corresponds to a particular effect. Exemplary effects include (but are not limited to) shadowing, fog, lighting, blurring associated with motion, and so forth. Additionally or alternatively, each of multiple render targets may correspond to one or more rendered objects or primitives. Embodiments, however, are not limited to these exemplary uses of render targets. Graphics engine 106 performs graphics operations for graphics application 102. As described above, such operations may be performed in response to draw calls received and processed through graphics API 104. Exemplary operations include rendering and outputting images (frames) to a display device. Thus, graphics engine 106 employs a graphics pipeline. As described above, graphics engine 106 may be implemented in any combination of hardware and/or software. Thus, in embodiments, graphics engine 106 includes a graphics processing unit (GPU). FIG. 1 shows that graphics analysis tool 110 is coupled to both graphics API 104 and graphics engine 106. Graphics analysis tool 110 may perform operations involving the analysis of graphics applications. To do this, graphics analysis tool 110 may obtain draw calls made by graphics application 102. Based on such draw calls, graphics analysis tool 110 may utilize graphics engine 106 to generate operational information regarding the draw calls. Such information may include (but is not limited to) overdraw images and pixel histories. 
Also, graphics analysis tool 110 may obtain (or capture) frames rendered by graphics engine 106. Also, graphics analysis tool 110 may obtain draw calls related to such captured frames. Moreover, graphics analysis tool 110 may control the rate at which frames corresponding to application 102 are generated. Through such control, frames may be stepped through at a desired pace. Such a pace may be established by a user. As described above with reference to FIG. 1, graphics analysis tool 110 may perform operations involving the analysis of graphics applications. FIG. 2 is a diagram of an exemplary implementation 200 that may be included in graphics analysis tool 110. FIG. 2 shows implementation 200 including a graphics API interceptor module 202, a graphics API call log database 204, a reconstruction module 206, an overdraw analysis module 207, a pixel history analysis module 208, a playback module 210, and a user interface module 212. These elements may be implemented in any combination of hardware and/or software. Graphics API interceptor module 202 copies graphics API operations (referred to herein as draw calls) that a graphics application (such as graphics application 102) generates. Further, graphics API interceptor module 202 forwards the copied draw calls to graphics API call log database 204. In turn, graphics API call log database 204 stores these received draw calls. Graphics API call log database 204 may store received draw calls in various ways. For example, graphics API call log database 204 may store draw calls chronologically. Further, such chronological storage may be arranged by draw calls for each of multiple render targets. Reconstruction module 206 may generate various images (frames) based on draw calls stored in API call log database 204. Such reconstructions involve the employment of a graphics engine, such as graphics engine 106. 
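The chronological, per-render-target storage described above can be sketched as a small in-memory log. This is an illustrative Python sketch, not the specification's implementation; the class and method names (`DrawCallLog`, `record`, `sequence_for`) and the dictionary representation of a draw call are assumptions made for the example.

```python
from collections import defaultdict

class DrawCallLog:
    """Minimal sketch of a draw-call log database (module 204): stores
    intercepted draw calls chronologically, grouped per render target."""

    def __init__(self):
        self._calls = []                      # global chronological order
        self._by_target = defaultdict(list)   # render target -> its calls

    def record(self, render_target, api_name, args):
        # Each intercepted call keeps its global position so the original
        # ordering can be replayed later by a reconstruction step.
        call = {"index": len(self._calls),
                "target": render_target,
                "api": api_name,
                "args": args}
        self._calls.append(call)
        self._by_target[render_target].append(call)

    def sequence_for(self, render_target):
        """Chronological sub-sequence of calls for one render target."""
        return list(self._by_target[render_target])

# Example: interleaved draw calls against two hypothetical render targets
log = DrawCallLog()
log.record("rt0", "DrawIndexed", {"vertices": 36})
log.record("rt1", "DrawIndexed", {"vertices": 6})
log.record("rt0", "Draw", {"vertices": 3})
print(len(log.sequence_for("rt0")))  # 2
```

Keeping both the global order and the per-target grouping mirrors the two access patterns the tool needs: full-frame replay and per-render-target analysis.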
Moreover, in generating such images, reconstruction module 206 may direct the graphics engine to employ certain settings. For example, settings may be altered so that rendered images indicate the number of particular events (e.g., overdraws) that occurred. Also, reconstruction module 206 may activate or deactivate various pipeline tests within the graphics engine. Overdraw analysis module 207 determines (on a per render target basis) overdraws associated with particular draw calls. This may involve directing reconstruction module 206 to generate various overdraw images based on selected (either user selected or automatically selected) draw calls and render targets. Exemplary operations regarding such features are described below with reference to FIG. 4. Pixel history analysis module 208 determines a sequence of draw calls that cause a pixel to be written to ("touched"). This may involve directing reconstruction module 206 to generate particular images in accordance with various settings. Exemplary operations regarding such features are described below with reference to FIGs. 5, 6, and 7. User interface module 210 provides for user interaction with graphics analysis tool 110. For instance, in embodiments, user interface module 210 provides a graphical user interface that provides for efficient user operation of the techniques described herein. Frame storage module 212 stores one or more frames (images) generated by a graphics engine (e.g., graphics engine 106). These frames comprise multiple pixels corresponding to positions that may be output on a display device (not shown). The frames may be in various formats. Exemplary formats include (but are not limited to) various RGB and CMYK formats. Also, frame storage module 212 may employ various compression and/or encoding schemes to store frame(s). In embodiments, these frames may be generated in accordance with techniques described herein. 
For instance, frame storage module 212 may store frames generated through normal operation of application 102, graphics API 104, and graphics engine 106. Also, frame storage module 212 may store frames comprising overdraw image frames, pixel history frames, and other types of frames. Embodiments provide a user interface that displays graphics-related information to a user. FIG. 3 is a diagram of an exemplary user interface (also referred to as an interface console) 300. Interface console 300 allows a user to view various information associated with graphics operations. As shown in FIG. 3, interface console 300 includes a draw call field 302, a render target field 304, an image rendering field 306, and a pixel history field 308. Render target field 304 indicates multiple render targets associated with a particular image. In embodiments, such render targets may be indicated as icons or other graphical representations. A user may select a particular render target through GUI interaction (e.g., by cursor positioning and double clicking). In embodiments, draw call field 302 indicates draw calls (e.g., draw calls corresponding to a render target selected at field 304) in a chronological (e.g., from left to right) bar chart form. Thus, each bar corresponds to a particular draw call. Moreover, the height of each bar indicates the number of graphics engine operations (e.g., pipeline operations) caused by the corresponding draw call. In embodiments, a user may select a particular draw call through GUI interaction (e.g., by cursor positioning and double clicking). Based on a selected draw call, the user may view an overdraw image in image rendering field 306. Features involving overdraw images are described below with reference to FIG. 4. From such overdraw images, the user may select a particular pixel for analysis. From such pixel selections, pixel history images may be displayed in image rendering field 306. 
Moreover, a corresponding pixel history of draw calls may be displayed in pixel history field 308. Exemplary details regarding pixel histories are provided below with reference to FIGs. 5-7. Operations for various embodiments may be further described with reference to the following figures and accompanying examples. Some of the figures may include a logic flow. Although such figures presented herein may include a particular logic flow, it can be appreciated that the logic flow merely provides an example of how the general functionality described herein may be implemented. Further, the given logic flow does not necessarily have to be executed in the order presented unless otherwise indicated. In addition, the given logic flow may be implemented by a hardware element, a software element executed by one or more processors, or any combination thereof. Embodiments are not limited to this context. Embodiments provide various techniques for analyzing graphics applications. Such techniques involve the analysis of overdraws, pixel histories, draw calls, and so forth. Further, such techniques may employ alternate image reconstructions and representations. These alternate reconstructions and representations may advantageously allow performance-relevant information to be more quickly extracted from a graphics scene. An example of such alternate reconstructions and representations involves overdraw analysis being performed on a per render target basis. From this analysis, corresponding pixel histories may be generated. These features may advantageously provide for the extraction of otherwise hidden information from a sequence of graphics operations. An example involving such overdraw analysis and pixel history generation is described below with reference to FIG. 4. FIG. 4 illustrates a logic flow 400, which may be representative of operations executed by one or more embodiments. This flow is described in the context of FIGs. 1 and 2. 
However, this flow may be employed in other contexts. Although FIG. 4 shows a particular sequence, other sequences may be employed. Also, the depicted operations may be performed in various parallel and/or sequential combinations. At a block 402, a sequence of graphics API calls (draw calls) is stored. This sequence may correspond to a particular image (frame). In the context of FIG. 2, this may involve API interceptor module 202 intercepting these draw calls, and graphics API call log database 204 storing them. At a block 404, the stored sequence of API calls is sorted into sequences for each of multiple render targets. With reference to FIG. 2, this may involve reconstruction module 206 determining render targets from each of the API calls. In embodiments, reconstruction module 206 may employ a graphics engine (e.g., graphics engine 106) to do this. For each of these render target groups, an overdraw image is created at a block 406. Referring again to FIG. 2, this may involve reconstruction module 206 directing a graphics engine (e.g., graphics engine 106) to perform draw calls for each of the render targets. However, instead of operating normally, the graphics engine is configured in a way such that pixel values are accumulated. More particularly, the graphics engine may be configured so that draw calls cause corresponding render target pixel values to accumulate (incrementing a pixel hit count) each time they are written to ("touched"). Thus, in each render target, a count is generated for each pixel. This count indicates how many times the pixel was touched. At a block 408, one of the render targets is selected. This selection may be by a user or automatic. In embodiments, this render target may be selected from a listing of multiple render targets provided on a display (e.g., as icons or other graphical representations). At a block 409, the overdraw image for the selected render target may be visually enhanced. 
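The accumulation described at block 406 can be illustrated in a few lines: rather than writing color values, each draw call increments a per-pixel hit count. This Python sketch is illustrative only; representing each draw call as a set of covered pixel coordinates is an assumption made for the example (a real engine would derive coverage from rasterization).

```python
def build_overdraw_image(width, height, draw_calls):
    """Sketch of overdraw accumulation: each draw call increments the
    hit count of every pixel it covers, so the resulting image records
    how many times each pixel was touched."""
    counts = [[0] * width for _ in range(height)]
    for covered_pixels in draw_calls:
        for (x, y) in covered_pixels:
            counts[y][x] += 1   # this pixel was "touched" once more
    return counts

# Two overlapping hypothetical draw calls on a 4x4 render target
calls = [{(0, 0), (1, 0), (1, 1)}, {(1, 1), (2, 2)}]
img = build_overdraw_image(4, 4, calls)
print(img[1][1])  # 2 -> pixel (1, 1) was touched by both draw calls
```

Displaying `counts` as brightness then makes high-complexity pixels visually apparent, as the flow describes at block 410.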
This enhancement may involve increasing the overdraw image's dynamic range. Details regarding such enhancements are described below with reference to FIG. 8. At a block 410, the overdraw image is displayed for the selected render target. As described above, the overdraw images represent the number of times pixels have been touched. In embodiments, these numbers may be represented by pixel brightness in the displayed overdraw image. However, embodiments are not limited to this feature. One or more corresponding pixels of this overdraw image may be selected at a block 412. This selection may be by a user or through automatic techniques. User selection may involve interacting with the overdraw image through a graphical user interface (GUI). For example, the user may select the pixel through a cursor, through graphical cross-hairs, through the entry of numerical coordinates, and/or through other techniques. Alternatively, automatic selection may involve automatically selecting pixel(s) based on their overdraw values. For example, pixels indicating the highest number of touches may be automatically selected. At a block 414, a pixel history for each of the one or more selected pixels is obtained for the user. The pixel history may comprise each draw call that affected (or caused a writing to) the pixel. In embodiments, these draw calls may be constrained to only those that affect the render target selected at block 408. However, embodiments may additionally or alternatively provide corresponding draw calls affecting other render targets. This pixel history may be displayed at a block 416. In embodiments, this display may be in the form of a sequence of icons that each represent a draw call. A user may select (e.g., double click) on such an icon to view detailed information regarding the draw call. However, other techniques of displaying pixel histories (e.g., text listings) may be employed. The flow of FIG. 
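Blocks 412 and 414 above (automatic selection of a high-complexity pixel, then gathering the draw calls that touched it) can be sketched as follows. The function names and the pairing of a call label with its covered-pixel set are assumptions made for this illustration, not the specification's data model.

```python
def most_complex_pixel(counts):
    """Automatic selection (block 412): pick the pixel with the highest
    overdraw count from a row-major count grid."""
    return max(
        ((x, y) for y in range(len(counts)) for x in range(len(counts[0]))),
        key=lambda p: counts[p[1]][p[0]],
    )

def pixel_history(draw_calls, pixel):
    """Pixel history (block 414): the ordered sub-sequence of draw calls
    that touched the selected pixel."""
    return [label for label, covered in draw_calls if pixel in covered]

# Hypothetical per-render-target draw calls and their pixel coverage
calls = [("call_0", {(0, 0), (1, 1)}),
         ("call_1", {(1, 1)}),
         ("call_2", {(2, 0)})]
counts = [[0, 0, 0], [0, 0, 0]]
for _, covered in calls:
    for (x, y) in covered:
        counts[y][x] += 1
px = most_complex_pixel(counts)
print(px, pixel_history(calls, px))  # (1, 1) ['call_0', 'call_1']
```

Because the history is just a filter over the chronological log, constraining it to one render target (as the flow allows) amounts to filtering the per-target sequence instead of the full log.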
4 advantageously provides a direct way to extract information from the most complex pixels in the scene. For example, from the overdraw image displayed at block 410, a pixel may be selected that has been touched a relatively large number of times (also referred to as a pixel having a high complexity). As indicated above, such pixels may be made apparent through their displayed brightness. Upon selecting a pixel having high complexity, the user may view the API calls that affected the pixel (the pixel history). Additionally, the user may view a normal representation (not an overdraw representation) of the image. Through analyzing the draw calls, the user may determine whether a more efficient set of API calls can be made. Thus, through the identification of such pixels, developers may improve applications to reduce the number of times pixels are drawn. This may advantageously increase graphics application performance. As described above, embodiments provide for the extraction of information regarding pixels. Also, embodiments may inform users about the number of times draw calls cause pixels to be touched. A single draw call can touch a particular pixel more than once. For instance, a draw call instructing a graphics engine (e.g., graphics engine 106) to render a three dimensional object may entail the graphics engine rendering surfaces that have overlapping pixel locations. As an example, a rendered cube may have overlapping front and back sides. Embodiments may (for a particular render target) detect and indicate each time a pixel was touched by a particular draw call. An example involving such features is described below with reference to FIG. 5. FIG. 5 illustrates a logic flow 500, which may be representative of operations executed by one or more embodiments. This flow is described in the context of FIGs. 1 and 2. However, this flow may be employed in other contexts. Although FIG. 5 shows a particular sequence, other sequences may be employed. 
Also, the depicted operations may be performed in various parallel and/or sequential combinations. At a block 502, a render target is selected. This selection may be by a user. Alternatively, this selection may be automatic. Also, at a block 504, one or more pixels are selected. This pixel may be selected in various ways. For example, this pixel may be user selected or automatically selected based on overdraws, as described above with reference to FIG. 4. Embodiments, however, are not limited to this context. At a block 506, the user selects a draw call corresponding to the selected pixel. For example, the user may select this draw call from multiple draw calls (e.g., a sequence of draw calls) identified in a pixel history, such as a pixel history generated at block 414 of FIG. 4. Accordingly, if this is the first selection in a sequence, embodiments may designate this as an initially selected draw call at a block 507. At this point, information may be determined at a block 509 regarding the number of times the selected pixel (in the selected render target) was touched. In embodiments, this determination may be for the selected draw call. Alternatively, this determination may be for a sequence of draw calls starting with the initial draw call designated at block 507 and ending with the draw call selected at block 506. As shown in FIG. 5, such determinations may involve multiple passes. For example, a first pass may involve performing the draw call(s) without any graphics pipeline tests activated. Also, a second pass may involve performing the draw call(s) with particular graphics pipeline tests activated. Further, a third pass may involve performing the draw call(s) with alpha blending. These three passes within block 509 are described for purposes of illustration, and not limitation. Accordingly, embodiments may employ other combinations of passes and/or other sequences. Further details regarding these passes are provided below with reference to FIG. 6. 
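As an illustration of how such pass results can differ, the following minimal sketch counts touches of a single pixel with the depth (z) test disabled or enabled. This is not the source's implementation; the function name and the depth-list representation are assumptions for illustration only.

```python
# Illustrative sketch (not from the source): count how many times a single
# pixel is "touched" by draw calls at the given depths, with the z test
# either disabled (every covering write counts) or enabled (only writes
# that pass the depth comparison count).
def count_touches(depths_in_draw_order, z_test_enabled):
    touches = 0
    z_buffer = float("inf")  # smaller value = closer to the viewer
    for z in depths_in_draw_order:
        if not z_test_enabled or z < z_buffer:
            z_buffer = min(z_buffer, z)
            touches += 1     # the draw call wrote ("touched") the pixel
    return touches
```

With the z test enabled the count depends on draw order (furthest-first yields more touches than closest-first), whereas with it disabled the count is simply the number of covering writes, which is the difference the first and second passes are meant to expose.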
At a block 510, a user may select a next draw call. This next draw call may be in a chronological sequence of draw calls that affect the selected pixel in the selected render target. For instance, this next draw call may be a draw call following the draw call that was most recently selected (either at block 506 or block 510). As shown in FIG. 5, if a next draw call is selected, then operation returns to block 509. Thus, the features of FIG. 5 allow a user to select one or more draw calls of interest, and analyze its (or their) impact on particular pixels. Such features advantageously allow the scene's complexity at any pixel of a particular render target to be evaluated. This information may advantageously assist users to decide whether to decrease the complexity of a particular scene or to modify application settings to increase the performance of particular 3D operations. As described above, determinations regarding the number of times a selected pixel (in the selected render target) was touched may be performed in multiple passes. An example of such multiple passes is provided by FIG. 6. In particular, FIG. 6 is a flow diagram 600 that includes a first pass 602, a second pass 604, and a third pass 606. Each of these passes involves multiple operations. For instance, FIG. 6 shows first pass 602 including blocks 610-614. At block 610, particular graphics pipeline tests (e.g., scissor test, rectangle test, Z test, stencil test, and/or alpha test) are disabled. At a block 612, it is determined how many times the draw call(s) touched the selected pixel while these tests are disabled. This may involve performing the draw call(s) with modified rendering operations. For example, pixel shader code may be modified so that the selected pixel's shading is increased (e.g., incremented by one) each time the pixel is touched by the draw call. Embodiments, however, are not limited to this technique. The results of this determination are output to the user at block 614. 
These results may be output in graphical form (e.g., displayed at the selected pixel location as a corresponding brightness, shading, or transparency level). Alternatively, these results may be output in text form. FIG. 6 shows second pass 604 including blocks 620-632. At block 620, the user may activate a particular test. For example, the user may activate a ranged scissor test, a z test, or a stencil test. At a block 622, it is determined whether a test was selected. If so, operation proceeds to a block 624. Otherwise, operation proceeds to a third pass at a block 640. At block 624, the selected render target is cleared. Following this, the selected test is enabled in the graphics engine at a block 626. At a block 628, the draw call(s) are performed. The results of these draw call(s) are output to the user at a block 630. These results may be output in graphical form (e.g., displayed at the selected pixel location as a corresponding brightness or transparency level). Alternatively, these results may be output in text form. If the result is zero (no pixel value), this indicates that the selected test failed each time it was invoked. However, if the result is non-zero, then the selected test passed at least one of the times it was invoked. As indicated by a block 632, the user may select another test. If so, then operation returns to block 620. Otherwise, operation proceeds to the third pass at block 640. FIG. 6 shows the third pass including a block 640. At this block, a normal rendering is performed to determine the output color of the pixel. Further to the flow of FIG. 6, various non-limiting illustrative examples are now provided regarding the number of times a pixel has been touched. For instance, the number of "touches" may be the number of times a rendered geometry intersects the pixel. 
For instance, when the geometry being rendered is a sphere containing two sided triangles, then the center pixels will have an overdraw count of two: once for front, once for back. (z-test is disabled in this case). Further, if the geometry being rendered is a sphere containing one sided triangles, then center pixels will have an overdraw count of one: once for front, zero for the back (since it was back-face culled (culling enabled in this case)). Moreover, the number of "touches" may be the number of times the pixel was actually written to the buffer (discarding the times that the geometry was z-test rejected). For instance, when rendering three triangles above each other from the viewer's perspective, the number of touches would depend upon the draw order. For example, if the furthest is drawn first, then the middle, and then the closest, then the count would be 3 for pixels in which all three triangles overlap. However, if the closest is drawn first, then the middle, and then the furthest, the count would be one for the same pixels. This is because the latter two would have been z-test rejected. As described above, embodiments allow users to determine the effect of draw calls on particular pixels. Further, embodiments allow the disabling of draw calls. For example, a user may disable one or more draw calls and determine the impact of this disabling on pixel processing. Thus, computational differences may be determined. For example, it can be determined whether a scene rendering became faster. Also, the user can determine the visual impact of this disabling on the rendered image. For instance, a user may determine whether disabling the draw call(s) made the scene look unacceptable. An example involving such features is described below with reference to FIG. 7. FIG. 7 illustrates a logic flow 700, which may be representative of operations executed by one or more embodiments. This flow is described in the context of FIGs. 1 and 2. 
However, this flow may be employed in other contexts. Although FIG. 7 shows a particular sequence, other sequences may be employed. Also, the depicted operations may be performed in various parallel and/or sequential combinations. As shown in FIG. 7, a render target is selected at a block 702. This selection may be by a user or through automatic techniques. Also, at a block 704, a pixel is selected. This pixel may be selected in various ways. For example, the pixel may be user selected or automatically selected, as described above with reference to FIG. 4. Embodiments, however, are not limited to this context. A user selects one or more draw calls to be disabled at a block 706. This selection may be from a sequence of draw calls. At a block 708, pixel history is determined for the sequence of draw calls. In embodiments, this may involve performing operations described above with reference to FIG. 5. Thus, the number of times a pixel was touched with the disabled draw call(s) may be determined. As described above with reference to FIG. 5, such determinations may be made with various graphics pipeline test(s) (e.g., scissor test, rectangle test, Z test, stencil test, and/or alpha test) activated or deactivated. Because the disabled draw call(s) are not rendered, such tests for subsequent draw calls may have different test outcomes. Thus, disabling draw calls provides a dynamic pixel history. At a block 710, an image is rendered for the sequence of draw calls with the selected draw call(s) disabled. This rendered image is displayed to the user at a block 712. As described above, embodiments provide overdraw images. Moreover, embodiments provide the ability to enable or disable any user selected draw call to break down the composition of an overdraw image and more deeply examine the rendered pixels. Overdraw images may contain subtle differences that are hard to visually identify. 
For example, one region may be slightly darker than a neighboring region (thus indicating a difference in the number of times the regions have been touched). Embodiments may visually enhance these differences. For example, at block 409 of FIG. 4, an overdraw image is enhanced. Such enhancements may increase an overdraw image's dynamic range. As a result, subtle differences in the overdraw images become significantly easier to see. Various techniques may be employed to provide such increases in dynamic range. For instance, embodiments may employ clamping operations. FIG. 8 illustrates a logic flow 800, which may be representative of operations executed by one or more embodiments. Although FIG. 8 shows a particular sequence, other sequences may be employed. Also, the depicted operations may be performed in various parallel and/or sequential combinations. At a block 802, an overdraw image is provided. At a block 804, minimum and maximum values in the overdraw image are determined. At a block 806, a matrix is generated based on the minimum and maximum values. The overdraw image is then processed with the matrix at a block 808. This processing produces an enhanced overdraw image, which may be displayed at a block 810. An example of this technique is now provided. This example involves a 5x5 matrix. This is a standard technique to perform linear operations on image data. Assuming that the pixel values are in the range [0, 255], and the matrix values are floating point scale factors, the following formula is employed:

[Or Og Ob Oa 1] = [Ir Ig Ib Ia 1] x [Mrr Mrg Mrb Mra 0]
                                    [Mgr Mgg Mgb Mga 0]
                                    [Mbr Mbg Mbb Mba 0]
                                    [Mar Mag Mab Maa 0]
                                    [Tr  Tg  Tb  Ta  1]

This is a basic matrix multiply operation, where original RGBA image colors (Ir, Ig, Ib, Ia) are multiplied by the 5x5 matrix M to compute the adjusted RGBA image colors (Or, Og, Ob, Oa). By adjusting the values of the 20 variable elements of M, many color adjustment operations may be performed. 
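The matrix operation described above can be sketched as follows. This is a minimal illustration, not the source's implementation; the function name and the list-of-rows representation of M are assumptions.

```python
# Minimal sketch (assumed names): multiply an RGBA color, extended with a
# fifth component of 1, by a 5x5 color matrix M to get the adjusted color.
def apply_color_matrix(color, M):
    """color = (Ir, Ig, Ib, Ia); M is a list of 5 rows of 5 values."""
    vec = list(color) + [1]  # row vector [Ir Ig Ib Ia 1]
    out = [sum(vec[i] * M[i][j] for i in range(5)) for j in range(5)]
    return tuple(out[:4])    # (Or, Og, Ob, Oa); the fifth component is dropped
```

With M set to an identity-like matrix the color passes through unchanged; with the scale-and-translate matrix discussed next, a narrow band of overdraw values is stretched across the full brightness range.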
In the case of overdraw images, the matrix values are adjusted to scale the overdraw image to expand the range of interesting data. As an example, suppose every pixel in the entire render target was written to 20 times (suppose full render target quads were all blended on top of each other), and then suppose 3 more small triangles were written to the render target, two of which have some overlap. In this example, most pixels will have an overdraw of 20, a few will have an overdraw of 21 (the ones that were touched by the small triangles), and a few will have an overdraw of 22 (the ones that were in the intersection of the two small triangles that overlapped). If a user viewed this image normally, it would look dark, and it would be extremely difficult to see the portions of the frame having overdraw values of 21 and 22. In this embodiment, the user views an adjusted image, where each of the pixels in the normal image is multiplied by the matrix M to compute the overdraw image's adjusted colors. Here are the values used, assuming 8 bit color channels (each of the RGBA values in the original and overdraw images has values in [0, 255]), but this technique would not be limited to this case and would work if different color depths were used:

let min = 20                 // the minimum value in the overdraw image
let max = 22                 // the maximum value in the overdraw image
let delta = (min - max)      // negative delta
if (delta == 0) delta = -1;  // ensure delta is non-zero
let s = (-255 / delta) / 3;  // scale
let t = min / delta;         // translate

set M to:

[s s s 0 0]
[s s s 0 0]
[s s s 0 0]
[0 0 0 0 0]
[t t t 1 1]

Given this example, and following the matrix computation, there are three unique original values in the image:

[20/255 20/255 20/255 1 1]
[21/255 21/255 21/255 1 1]
[22/255 22/255 22/255 1 1]

These go through the following transformations by the matrix multiply. For the lowest (20) values:

3*(20/255)*s = 3 * (20/255) * ((-255 / -2) / 3) = (20) * (1 / 2) = 10
t = 20 / -2 = -10

so each color channel becomes 10 + t = 10 - 10 = 0. Thus:

[20/255 20/255 20/255 1 1] x M = [0 0 0 1 1]

For the middle (21) values, the following is performed:

3*(21/255)*s = 3 * (21/255) * ((-255 / -2) / 3) = (21) * (1 / 2) = 10.5
t = 20 / -2 = -10

so each color channel becomes 10.5 - 10 = 0.5. Accordingly:

[21/255 21/255 21/255 1 1] x M = [0.5 0.5 0.5 1 1]

For the highest (22) values, the following is performed:

3*(22/255)*s = 3 * (22/255) * ((-255 / -2) / 3) = (22) * (1 / 2) = 11
t = 20 / -2 = -10

so each color channel becomes 11 - 10 = 1. Thus:

[22/255 22/255 22/255 1 1] x M = [1 1 1 1 1]

The above technique advantageously allows a small range of values to be expanded to a maximum range, so that visibility is maximized. This technique is provided for illustration, and not limitation. Accordingly, further techniques may be employed to increase the visibility of images. As described above, elements of embodiments (e.g., elements of FIGs. 1 and 2) may be implemented in any combination of hardware and/or software. Accordingly, FIG. 10 is a diagram of an exemplary platform 1002 in which functionality of the present invention as described herein may be implemented. As described above, embodiments provide techniques for capturing frames. In addition, embodiments provide techniques for single stepping through graphics frames (e.g., a single graphics frame at a time) that are generated by an application. Such features advantageously allow an application's workload to be captured "on the fly", thus enabling an exact frame of interest to be captured. In embodiments, such single stepping may be controlled by user interaction. For instance, a user may be provided with a pause button, a step button, and a capture button. 
Such buttons may be provided through a graphical user interface (e.g., interface console 300 of FIG. 3). Embodiments, however, are not limited to this context. Moreover, embodiments are not limited to the employment of graphical user interfaces. The pause button feature allows the user to remotely pause an application (e.g., graphics application 102) at its current frame. In turn, the step button feature allows the user to single step forward exactly one frame. The capture button feature allows the user to decide to save the frame data (as well as its corresponding draw calls) for analysis. FIG. 9 illustrates a logic flow 900, which may be representative of operations involving such stepping and frame capturing features. This flow is described in the context of FIGs. 1 and 2. However, this flow may be employed in other contexts. Also, although FIG. 9 shows a particular sequence, other sequences may be employed. Also, the depicted operations may be performed in various parallel and/or sequential combinations. At a block 902, a graphics application (e.g., a game) is commenced. For example, in the context of FIG. 1, this may comprise starting execution of graphics application 102. At a block 904, the user activates a pause button. As a result, the graphics application halts operation upon the completion of its current frame at a block 906. This feature may involve graphics analysis tool 110 awaiting application 102 making a "present call" (through graphics API 104). Such a call indicates that its draw calls for the current frame are complete and the current frame is ready for rendering by graphics engine 106. Once such a call is placed, then graphics analysis tool 110 may pause graphics application 102. This may be performed, for example, through one or more operating system calls. For instance, graphics analysis tool 110 may halt all running CPU software threads associated with the graphics application. Upon this pausing, the current frame may be displayed. 
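The pause/step/capture interaction described above can be sketched as a simple loop over frame boundaries. This is a hypothetical illustration only: the function names, the frame sequence, and the button-polling callback are assumptions, not part of the source.

```python
# Hypothetical sketch (assumed names): the tool pauses at each completed
# frame and acts on the user's button press. "step" advances one frame,
# "capture" stores the frame (and would store its draw calls) before
# advancing, and a second "pause" press resumes normal operation.
def run_stepper(frames, get_button):
    captured = []
    for frame in frames:           # application paused at each frame boundary
        button = get_button()
        if button == "capture":
            captured.append(frame)  # save frame data for later analysis
        elif button == "pause":
            break                   # resume normal (unpaused) operation
        # "step" simply advances to the next frame
    return captured
```

For example, pressing step, then capture, then pause would store exactly the second frame and then let the application run freely.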
At a block 907, the user activates a button. If it is the step button, then operation proceeds to a block 909, where the graphics application is resumed and the next frame becomes the current frame. In the context of FIG. 1, this may involve graphics analysis tool 110 placing one or more operating system calls. Following this, operation returns to block 906 and the graphics application is again paused once it completes the frame. However, if the user activates the capture button, the frame generated by the graphics engine (e.g., graphics engine 106) is stored at a block 910. With reference to FIG. 2, this may comprise storing the current frame's pixel data in frame storage module 212. Also, at a block 912, additional information related to the current frame is stored. Such information may include (but is not limited to) the draw calls corresponding to the current frame, state information involving the graphics engine (e.g., pipeline state information), the graphics API, and/or the graphics application. Also, such information may include resources provided by the application (e.g., world model information, vertex information, shading information, texture information, and so forth). The embodiments are not limited to these examples. In the context of FIG. 2, some of such information may be intercepted and identified by graphics API draw call interceptor module 202 and stored in graphics API call log database 204. FIG. 9 shows that following blocks 910 and 912, operation may proceed to block 909, where the graphics application is resumed, and the next frame becomes the current frame. Following this, operation returns to block 906, and the graphics application is again paused once it completes the frame. As a further alternative, FIG. 9 shows that if the user activates the pause button, then normal operation of the graphics application is resumed at a block 914. In the context of FIG. 
1, this may involve graphics analysis tool 110 placing one or more operating system calls. Following this, a block 916 indicates that operation may return to block 904 to perform further frame stepping and frame capture operations. Thus, these frame stepping and frame capture techniques allow a user to have fine grained control over which exact frame to debug and profile. As a result, a user may find and fix bottlenecks, and/or remove redundant or insignificant graphics operations. The flow of FIG. 9 is described in the context of GUI buttons. This is for illustration and not limitation. Accordingly, other user interaction techniques/devices may be employed. Thus, embodiments may provide various user controls for the aforementioned pausing, stepping, capturing, and resuming features. In embodiments, platform 1002 may comprise a CPU 1012, a GPU 1013, one or more drivers 1014, one or more network connections 1015, an operating system 1016, storage 1018, and a display device 1019. CPU 1012 may comprise one or more processors such as dual-core processors. Examples of dual-core processors include the Pentium® D processor and the Pentium® processor Extreme Edition, both made by Intel® Corporation, which may be referred to as the Intel Core Duo® processors, for example. GPU 1013 may comprise various graphics processors, such as a peripheral component interconnect (PCI) Express graphics card. Embodiments, however, are not limited to this example. With reference to FIG. 1, GPU 1013 may provide features of graphics engine 106. In one embodiment, network connections 1015 may comprise the PRO/1000 PM or PRO/100 VE/VM network connection, both made by Intel® Corporation. In embodiments, operating system 1016 may comprise the Windows® XP Media Center made by Microsoft® Corporation. In further embodiments, operating system 1016 may comprise Linux®, as well as other types of operating systems. In one embodiment, storage 1018 may comprise various tangible storage media. 
Such storage media may include data (e.g., images or frames) as well as instructions or control logic (e.g., software) to implement various features described herein (e.g., elements of FIGs. 1 and 2). Such instructions or control logic may be executed by CPU 1012 and/or GPU 1013. Examples of storage media are described further below. Display device 1019 may output information to a user. In addition, display device 1019 may allow user interaction, as described herein. For instance, such user interaction may be through exemplary user interface 300. As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Some embodiments may be implemented, for example, using a tangible machine-readable medium (storage medium) or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. 
Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium (storage medium) or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. Some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not in limitation. 
Accordingly, it will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
An architecture for storing, addressing and retrieving graphics data from one of multiple memory controllers. In a first embodiment of the invention, one of the memory controllers having an accelerated graphics port (AGP) includes a set of registers defining a range of addresses handled by the memory controller that are preferably to be used for all AGP transactions. The AGP uses a graphics address remapping table (GART) for mapping memory. The GART includes page table entries having translation information to remap virtual addresses falling within the GART range to their corresponding physical addresses. In a second embodiment of the invention, a plurality of the memory controllers have an AGP, wherein each of the plurality of the memory controllers supplies a set of registers defining a range of addresses that is preferably used for AGP transactions. In a third embodiment of the invention, a plurality of memory controllers implemented on a single chip each contain an AGP and a set of configuration registers identifying a range of addresses that are preferably used for AGP transactions.
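The GART translation described above, in which a virtual address's first portion selects a page table entry and its second portion is combined with the PTE's information to form the physical address, can be sketched as follows. This is a minimal illustration assuming 4 KiB pages and a dictionary-based page table; the names and representation are not from the source.

```python
# Illustrative sketch (assumed names, 4 KiB pages): remap a virtual address
# in the GART range to a physical address via a page-table lookup.
PAGE_SHIFT = 12                    # 4 KiB pages assumed for illustration
PAGE_MASK = (1 << PAGE_SHIFT) - 1

def gart_translate(virtual_addr, gart_base, page_table):
    """gart_base is assumed page-aligned; page_table maps virtual page
    numbers to physical page frame numbers."""
    vpn = (virtual_addr - gart_base) >> PAGE_SHIFT  # first portion: selects the PTE
    offset = virtual_addr & PAGE_MASK               # second portion: carried through
    physical_page = page_table[vpn]                 # remapped physical frame
    return (physical_page << PAGE_SHIFT) | offset
```

Because the offset passes through unchanged, the remapping operates at page granularity: contiguous virtual pages in the GART range may map to scattered physical pages in main memory.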
What is claimed is: 1. A method of manufacturing a multiple memory controller computer comprising:providing at least two memory controllers for controlling a main memory; connecting a first of the at least two memory controllers to an accelerated graphics processor via a dedicated point-to-point connection, wherein the point-to-point connection is configured to exclusively transfer graphics related information; directly connecting at least a first of the at least two memory controllers to a central processing unit bus, a bus supporting a peripheral component, and a main memory; and connecting at least one configuration register to the first of the at least two memory controllers, wherein the at least one configuration register defines a group of addresses in the main memory that are preferentially used over other addresses for storage of graphics data for use with the point-to-point connection. 2. The method of claim 1, further comprising the act of providing in the main memory a graphical address remapping table having at least one page table entry (PTE) which provides information for a translation of a virtual address to a physical address, wherein the virtual address includes a first portion and a second portion, the first portion corresponding to a PTE in the graphical address remapping table and wherein the second portion and information provided by the PTE are combined to provide the physical address.3. The method of claim 2, further comprising the act of allocating a virtual page number field to the first portion.4. The method of claim 2, further comprising the act of allocating an offset field to the second portion.5. The method of claim 2, further comprising the act of configuring said at least one configuration register to receive data during boot up of a computer system.6. 
6. The method of claim 5, further comprising the act of configuring said at least one configuration register to receive data in a base address register defining the starting point of memory preferentially used over other addresses for storage of graphics data for accelerated graphics port transactions.

7. The method of claim 5, further comprising the act of configuring said at least one configuration register to receive data setting a boundary address register defining the lowest address of a graphical address remapping table.

8. The method of claim 5, further comprising the act of configuring said at least one configuration register to receive data in a range register defining the amount of memory that is preferentially used over other addresses for storage of graphics data for transactions via the point-to-point connection.

9. The method of claim 5, further comprising the act of configuring said at least one configuration register to receive data from an initialization BIOS.

10. The method of claim 5, further comprising the act of configuring said at least one configuration register to receive data from the operating system API.

11. The method of claim 1, further comprising the act of manufacturing said at least two memory controllers and said main memory on a single semiconductor chip.

12. A method of using a multiple memory controller system comprising:
storing a graphical address remapping table in a main memory on a computer system having at least two memory controllers for controlling the main memory, wherein at least a first of the memory controllers is directly connected to a central processing unit bus, a bus supporting a peripheral component, and a main memory;
loading at least one configuration register, the configuration register stored in the first of the memory controllers, with data that defines a group of addresses in the main memory that are preferentially used over other addresses for storage of graphics data for transactions via a dedicated point-to-point connection between the first of the memory controllers and an accelerated graphics processor;
sending a memory request from the accelerated graphics processor to the first memory controller; and
if the main memory requested by the memory request is within the group of addresses that is preferentially used for transactions via the dedicated point-to-point connection, accessing graphics data stored in the main memory through the first memory controller.

13. The method of claim 12, wherein the step of storing the graphical address remapping table further comprises storing at least one page table entry (PTE) which provides information for a translation of a virtual address to a physical address, wherein the virtual address includes a first portion and a second portion, the first portion corresponding to a PTE in the graphical address remapping table and wherein the second portion and information provided by the PTE are combined to provide the physical address.

14. The method of claim 13, further comprising the act of storing in the first portion a virtual page number field.

15. The method of claim 13, further comprising the act of storing in the second portion an offset field.

16. The method of claim 13, further comprising the act of loading said at least one configuration register during boot up of a computer system.

17. The method of claim 13, further comprising the act of including in said at least one configuration register a base address of a graphical address remapping table.

18. The method of claim 12, further comprising the act of storing a boundary address in the at least one configuration register to define the lowest address of a graphical address remapping table range.

19. The method of claim 12, further comprising the act of defining in said at least one configuration register the amount of main memory that is preferentially used over other addresses for storage of graphics data for transactions via said dedicated point-to-point connection.

20. The method of claim 12, further comprising the act of loading said at least one configuration register by an initialization BIOS.

21. The method of claim 12, further comprising the act of loading said at least one configuration register by an operating system API.

22. A method of using a multiple memory controller system having a main memory, at least two memory controllers for controlling a main memory, and an accelerated graphics processor connected to a first of the memory controllers via a dedicated point-to-point connection, wherein at least a first of the memory controllers is directly connected to a central processing unit bus, a bus that supports a peripheral component, and a main memory, the method comprising:
storing a graphical address remapping table in the main memory;
providing the first of the memory controllers with a base register and a range register, wherein the base register defines the starting address of main memory that is available for preferential use over other addresses for storage of graphics data for transactions via the point-to-point connection, and wherein the range register defines a group of addresses following the address referenced by the base register that are available for transactions via the point-to-point connection; and
programming an operating system to preferentially use addresses in the group defined by the base and range registers over other addresses for storage of graphics data when allocating main memory space for transactions via the point-to-point connection.

23. The method of claim 22, additionally comprising:
sending a memory request from the accelerated graphics processor to the first memory controller; and
accessing the main memory through the first memory controller if the main memory requested by the memory request is within the group of addresses that is preferentially used for accelerated graphics port transactions.

24. A method of manufacturing a multiple memory controller computer, the method comprising:
providing at least two memory controllers and at least two main memories;
directly connecting a first of the at least two memory controllers to a first of the at least two main memories;
directly connecting a second of the at least two memory controllers to a second of the at least two main memories;
connecting the first of the at least two memory controllers to an accelerated graphics processor via a dedicated point-to-point connection;
connecting each of the at least two memory controllers separately and directly to a central processing unit bus and a bus that supports a peripheral component;
providing at least one configuration register connected to the first of the at least two memory controllers, wherein the configuration register defines a group of addresses that are available for transactions via the point-to-point connection; and
configuring the at least two main memories as non-symmetric, with the first of the at least two main memories connected to the first of the at least two memory controllers preferentially used for storing data for use in transactions via the point-to-point connection.
CROSS REFERENCE TO RELATED APPLICATIONS

This patent application is a continuation of and incorporates by reference, in its entirety, U.S. patent application Ser. No. 09/000,517, filed on Dec. 30, 1997, now U.S. Pat. No. 6,157,398. The patents and patent applications listed below are related to the present application, and are each hereby incorporated by reference in their entirety.

SYSTEM FOR ACCELERATED GRAPHICS PORT ADDRESS REMAPPING INTERFACE TO MAIN MEMORY, U.S. Pat. No. 6,073,198, filed on Jun. 25, 1997; issued on May 30, 2000.

ACCELERATED GRAPHICS PORT FOR MULTIPLE MEMORY CONTROLLER COMPUTER SYSTEM, U.S. patent application Ser. No. 09/000,511, filed on Dec. 30, 1997, now U.S. Pat. No. 6,252,612.

APPARATUS FOR GRAPHIC ADDRESS REMAPPING, U.S. patent application Ser. No. 08/882,054, filed on Jun. 25, 1997.

METHOD FOR PERFORMING GRAPHIC ADDRESS REMAPPING, U.S. patent application Ser. No. 08/882,327, filed on Jun. 25, 1997, now U.S. Pat. No. 6,282,625.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to computer systems and, more particularly, to a method of using a second memory controller having an accelerated graphics port.

2. Description of the Related Technology

As shown in FIG. 1, a conventional computer system architecture 100 includes a processor 102, system logic 104, main memory 106, a system bus 108, a graphics accelerator 110 communicating with a local frame buffer 112, and a plurality of peripherals 114. The processor 102 communicates with main memory 106 through a memory management unit (MMU) in the processor 102. Peripherals 114 and the graphics accelerator 110 communicate with main memory 106 and system logic 104 through the system bus 108. The standard system bus 108 is currently the Peripheral Component Interconnect (PCI). The original personal computer bus, the Industry Standard Architecture (ISA), is capable of a peak data transfer rate of 8 megabytes/sec and is still used for low-bandwidth peripherals, such as audio.
On the other hand, PCI supports multiple peripheral components and add-in cards at a peak bandwidth of 132 megabytes/sec. Thus, PCI is capable of supporting full motion video playback at 30 frames/sec, true color high-resolution graphics and 100 megabits/sec Ethernet local area networks. However, the emergence of high-bandwidth applications, such as three dimensional (3D) graphics applications, threatens to overload the PCI bus.For example, a 3D graphics image is formed by taking a two dimensional image and applying, or mapping, it as a surface onto a 3D object. The major kinds of maps include texture maps, which deal with colors and textures, bump maps, which deal with physical surfaces, reflection maps, refraction maps and chrome maps. Moreover, to add realism to a scene, 3D graphics accelerators often employ a z-buffer for hidden line removal and for depth queuing, wherein an intensity value is used to modify the brightness of a pixel as a function of distance. A z-buffer memory can be as large or larger than the memory needed to store two dimensional images. The graphics accelerator 110 retrieves and manipulates image data from the local frame buffer 112, which is a type of expensive high performance memory. For example, to transfer an average 3D scene (polygon overlap of three) in 16-bit color at 30 frames/sec at 75 Hz screen refresh, estimated bandwidths of 370 megabytes/sec to 840 megabytes/sec are needed for screen resolutions from 640*480 resolution (VGA) to 1024*768 resolution (XGA). Thus, rendering of 3D graphics on a display requires a large amount of bandwidth between the graphics accelerator 110 and the local frame buffer 112, where 3D texture maps and z-buffer data typically reside.In addition, many computer systems use virtual memory systems to permit the processor 102 to address more memory than is physically present in the main memory 106. 
A virtual memory system allows addressing of very large amounts of memory as though all of that memory were a part of the main memory of the computer system. A virtual memory system allows this even though actual main memory may consist of some substantially lesser amount of storage space than is addressable. For example, main memory may include sixteen megabytes (16,777,216 bytes) of random access memory while a virtual memory addressing system permits the addressing of four gigabytes (4,294,967,296 bytes) of memory.Virtual memory systems provide this capability using a memory management unit (MMU) to translate virtual memory addresses into their corresponding physical memory addresses, where the desired information actually resides. A particular physical address holding desired information may reside in main memory or in mass storage, such as a tape drive or hard disk. If the physical address of the information is in main memory, the information is readily accessed and utilized. Otherwise, the information referenced by the physical address is in mass storage and the system transfers this information (usually in a block referred to as a page) to main memory for subsequent use. This transfer may require the swapping of other information out of main memory into mass storage in order to make room for the new information. If so, the MMU controls the swapping of information to mass storage.Pages are the usual mechanism used for addressing information in a virtual memory system. Pages are numbered, and both physical and virtual addresses often include a page number and an offset into the page. Moreover, the physical offset and the virtual offset are typically the same. In order to translate between the virtual and physical addresses, a basic virtual memory system creates a series of lookup tables, called page tables, stored in main memory. These page tables store the virtual address page numbers used by the computer. 
Stored with each virtual address page number is the corresponding physical address page number which must be accessed to obtain the information. Often, the page tables are so large that they are paged themselves. The page number of any virtual address presented to the memory management unit is compared to the values stored in these tables in order to find a matching virtual address page number for use in retrieving the corresponding physical address page number.There are often several levels of tables, and the comparison uses a substantial amount of system clock time. For example, to retrieve a physical page address using lookup tables stored in main memory, the typical MMU first looks to a register for the address of a base table which stores pointers to other levels of tables. The MMU retrieves this pointer from the base table and places it in another register. The MMU then uses this pointer to go to the next level of table. This process continues until the physical page address of the information sought is recovered. When the physical address is recovered, it is combined with the offset furnished as a part of the virtual address and the processor uses the result to access the particular information desired. Completion of a typical lookup in the page tables may take from ten to fifteen clock cycles at each level of the search. Such performance is unacceptable in processing graphical applications.One solution to facilitate the processing of graphical data includes having a point to point connection between the memory controller and a graphics accelerator. Such an architecture is defined by the Accelerated Graphics Port Interface Specification, Revision 1.0, (Jul. 31, 1996) released by Intel Corporation. However, one problem with these systems is that the PCI bus acts as a bottleneck for all memory transactions. Computer manufacturers are in need of a system to eliminate this bottleneck.Other solutions to facilitate the access of memory exist. The U.S. Pat. No. 
4,016,545 to Lipovski teaches the use of multiple memory controllers. However, Lipovski does not describe a point-to-point connection between a memory controller and a graphics accelerator. Such a connection is needed for the high-speed processing of graphics data.

Additionally, U.S. Pat. No. 4,507,730 to Johnson teaches the use of multiple memory controllers. However, Johnson uses multiple memory controllers for fault tolerance. In Johnson, once a memory controller is found to be faulty, it is switched off line and another memory controller is activated in its place. The memory controllers in Johnson do not facilitate the efficient transfer of memory for graphics applications.

In view of the limitations discussed above, computer manufacturers require an architecture with improved methods for storing, addressing and retrieving graphics data from main memory. Moreover, to address the needs of high-bandwidth graphics applications without substantial increases in system cost, computer manufacturers require improved technology to overcome current system bus bandwidth limitations.

SUMMARY OF THE INVENTION

One embodiment of the invention is a method of manufacturing a multiple memory controller computer comprising connecting at least two memory controllers to at least one processing unit; and connecting at least one configuration register to one of the at least two memory controllers, wherein the at least one configuration register defines a range of addresses that are available for accelerated graphics port transactions.

Yet another embodiment of the invention is a method of using a multiple memory controller system, comprising storing a graphical address remapping table in a memory on a computer system having at least two memory controllers; connecting a graphics accelerator to a memory controller which has at least one configuration register that defines a range of addresses that are available for accelerated graphics port transactions; and storing a graphics address
relocation table in a memory connected to said memory controller having at least one configuration register.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating the architecture of a prior art computer system.

FIG. 2 is a block diagram illustrating one embodiment of a computer system of the invention.

FIG. 3 is a block diagram illustrating the address space of a processor of one embodiment of the invention.

FIG. 4 is a block diagram illustrating a second embodiment of the invention.

FIG. 5 is a block diagram illustrating the translation of a virtual address to a physical address of one embodiment of the invention.

FIG. 6 is an illustration of a page table entry of the graphics address remapping table of one embodiment of the invention.

FIG. 7 is a block diagram illustrating the generation of a translation lookaside buffer entry of one embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

The following detailed description presents a description of certain specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings wherein like parts are designated with like numerals throughout.

FIG. 2 is a block diagram illustrating a computer system of one embodiment of the invention. This computer 150 includes at least one processor 152 connected to a first memory controller 154 and a second memory controller 155 by a processor or host bus. The computer 150 also has a first main memory 156 and a second main memory 157 connected to the first memory controller 154 and the second memory controller 155, respectively. A graphics accelerator 160 communicates with a local frame buffer 162 and the first memory controller 154 through an accelerated graphics port (AGP) 166.
The AGP 166 is not a bus, but is a point-to-point connection between an AGP-compliant target, which is the first memory controller 154, and an AGP-compliant master, which is the graphics accelerator 160. The AGP 166 point-to-point connection enables data transfer on both the rising and falling clock edges, improves data integrity, simplifies AGP protocols and eliminates bus arbitration overhead. AGP provides a protocol enhancement enabling pipelining for read and write accesses to the main memory 156. The first memory controller 154 and the second memory controller 155 also accept memory requests from a PCI bus 158.

As noted above, the embodiment of FIG. 2 enables the graphics accelerator 160 to access both the first main memory 156 and the local frame buffer 162. From the perspective of the graphics accelerator 160, the main memory 156 and the local frame buffer 162 are logically equivalent. Thus, to optimize system performance, graphics data may be stored in either the first main memory 156 or the local frame buffer 162. In contrast to the direct memory access (DMA) model, where graphics data is copied from the main memory 156 into the local frame buffer 162 by a long sequential block transfer prior to use, the graphics accelerator 160 of the present invention can also use, or "execute," graphics data directly from the memory in which it resides (the "execute" model).

The interface between the first memory controller 154 and the graphics accelerator 160 is defined by the Accelerated Graphics Port Interface Specification, Revision 1.0 (Jul. 31, 1996), released by Intel Corporation and available from Intel in Adobe(R) Acrobat(R) format on the World Wide Web at the URL: developer.intel.com/pc-supp/platform/agfxport/INDEX.htm. This document is hereby incorporated by reference.

FIG. 3 illustrates an embodiment of the address space 180 of the computer system 150 (FIG. 2) of the invention. For example, a 32 bit processor 152 (FIG.
2) has an address space 180 including 2^32 (or 4,294,967,296) different addresses. A computer system 150 (FIG. 2) typically uses different ranges of the address space 180 for different devices and system agents. In one embodiment, the address space 180 includes a graphics address remapping table (GART) range 184 and a main memory range 186.

The first memory controller 154 provides a set of registers to define the range of addresses available for AGP transactions. A base register 165 is used to define the base address of the AGP addresses. A range register 166 is used to establish the amount of memory following the base address that is dedicated to AGP transactions. Alternatively, a lower and upper address register may be used to define the AGP address range. An operating system provided with these values will attempt to allocate GART pages within this memory range. In contrast to prior art systems, the operating system attempts to first remap the addresses falling within the GART range 184 to the first memory controller 154.

By employing a first and second main memory 156, 157, respectively, and two memory controllers 154, 155, faster transaction processing is realized than in those prior art systems employing a single system memory and a single memory controller. In particular, two memory transactions can be executed simultaneously by executing one transaction using the first memory controller 154 while another transaction is being executed by the second memory controller 155. Graphics data typically is read many times without ever being changed or written to. Read and write delays are reduced by storing the graphics data in the first main memory 156, while storing other data in the second main memory 157.

Referring again to FIG. 3, the computer 150 has 64 megabytes of main memory 218 encompassing physical addresses 0 through 0x03FFFFFF.
32 megabytes of memory are assigned to the first memory controller 154 and 32 megabytes are assigned to the second memory controller 155. Using the base 165 and range 166 registers provided by the first memory controller 154, the operating system has placed the AGP-related data in the lower 32 megabytes of the first main memory 156, referenced by physical addresses 0x00000000 through 0x01FFFFFF. For example, if the GART Range 184 begins at the 256 megabyte virtual address boundary 0x10000000, the invention enables translation of virtual addresses within the GART Range 184 to physical addresses in the lower 32 megabytes of the first main memory 156, corresponding to physical addresses in the range 0x00000000 through 0x01FFFFFF.

Upon a request from the graphics accelerator 160, the first memory controller 154 analyzes the address in the request to identify whether the address is in the first main memory 156. If the address is not within the first main memory 156, the first memory controller 154 re-routes the request to the second memory controller 155. By having the GART tables and their referenced memory located on the first memory controller 154 having the AGP, the re-routing of memory requests to the other memory controller 155 is minimized.

In one embodiment, a hardware abstraction layer (HAL) directs the operating system to place the GART table and texture memory in the first memory controller 154. The HAL is a small layer of software that presents the rest of the computer system with an abstract model of any hardware that is not part of the processors 152. The HAL hides platform-specific details from the rest of the system and removes the need to have different versions of the operating system for platforms from different vendors.

Referring to FIG. 4, a second embodiment of the invention is illustrated. This second embodiment has a second memory controller 190 also having an accelerated graphics port 192 for use by a graphics accelerator 170.
Each of the memory controllers 154, 190 provides a set of registers defining a range of addresses that are used by the operating system for accelerated graphics port transactions. In a third embodiment of the invention, a single chip contains a plurality of memory controllers, each memory controller having an AGP and a set of configuration registers identifying a range of addresses that are used for AGP transactions.

FIG. 5 illustrates the translation of a virtual address 200 to a physical address 202 in one embodiment of the invention. As discussed previously, in one embodiment, the operating system attempts to allocate those virtual addresses falling within the GART range 184 (FIG. 3) to the first main memory 156 (FIG. 3).

A virtual address 200 includes a virtual page number field 204 and an offset field 206. Translation of the contents of the virtual page number field 204 occurs by finding a page table entry (PTE) corresponding to the virtual page number field 204 among the plurality of GART PTEs 208 in the GART table 210. To identify the appropriate PTE having the physical address translation, the GART base address 212 is combined at a state 213 with the contents of the virtual page number field 204 to obtain a PTE address 214. The contents referenced by the PTE address 214 provide the physical page number 216 corresponding to the virtual page number 204. The physical page number 216 is then combined at a state 217 with the contents of the offset field 206 to form the physical address 202. The physical address 202 in turn references a location in the first main memory 156 having the desired information.

The GART table 210 may include a plurality of PTEs 208 having a size corresponding to the memory page size used by the processors 152 (FIG. 2). For example, an Intel(R) Pentium(R) or Pentium(R) Pro processor operates on memory pages having a size of 4K. Thus, a GART table 210 adapted for use with these processors may include PTEs referencing 4K pages.
In one embodiment, the virtual page number field 204 comprises the upper 20 bits and the offset field 206 comprises the lower 12 bits of a 32 bit virtual address 200. Thus, each page includes 2^12 = 4096 (4K) addresses, and the lower 12 bits of the offset field 206 locate the desired information within a page referenced by the upper 20 bits of the virtual page number field 204.

FIG. 6 illustrates one possible format for a GART PTE 220. The GART PTE 220 includes a feature bits field 222 and a physical page translation (PPT) field 224. In contrast to prior art systems where hardwired circuitry defines a page table format, the GART table 210 (FIG. 5) may include PTEs of configurable length, enabling optimization of table size and the use of feature bits defined by software. The PPT field 224 includes PPTSize bits to generate a physical address 202 (FIG. 5). The PPTSize defines the number of translatable addresses.

In one embodiment, an initialization BIOS implements the GART table 210 (FIG. 5) by loading configuration registers in the first memory controller 154 (FIG. 2) during system boot up. In another embodiment, the operating system implements the GART table 210 (FIG. 5) using an API to load the configuration registers in the first memory controller 154 (FIG. 3) during system boot up.

As noted earlier, a GART table 210 includes multiple PTEs, each having physical page translation information 224 and software feature bits 222. The GART table 210 may be located at any physical address in the main memory 218, such as the 2 megabyte physical address 0x00200000. The operating system attempts to place the GART table 210 in the memory range provided by the registers 165, 166 in the first memory controller 154 if space is available. By placing the GART table 210 in this memory range, fewer memory requests from the graphics accelerator 160 need to travel over the PCI bus 158 to the second memory controller 155 as compared to traditional systems.
For a system having a 4K memory page size and a GART PTE 220 of 8 byte length, the GART table 210 is configured as follows:

PhysBase := 0x00000000 (start of remapped physical addresses)
PhysSize := 32 megabytes (size of remapped physical addresses)
AGPAperture := 0x10000000 (start address of GART Range)
GARTBase := 0x00200000 (start address of GART table)
2^PTESize := 8 bytes (size of each GART Page Table Entry)
PageSize := 4 kilobytes (memory page size)

To determine the number of PTEs in the GART table 210, the size of the physical address space in main memory 218 allocated to AGP related data, 32 megabytes = 33,554,432 bytes, is divided by the memory page size, 4K = 4096 bytes, to obtain 8192 PTEs. Since there are 8 bytes in each PTE, the GART table consists of 65,536 bytes (8192*8). Note that 8192 = 2^13 = 2^PPTSize and thus, PPTSize = 13. Using the values supplied by the base and range registers, the operating system programs the configuration registers with the following values to set up the GART table 210:

PhysBase := 0x00000000 (start of remapped physical addresses)
AGPAperture := 0x10000000 (start address of GART Range)
GARTBase := 0x00000000 (start address of GART table)
PTESize := 3 (2^PTESize is the size in bytes of each PTE)
PPTSize := 13 (number of PPT bits in each PTE)
Base Register 165 := 0x00000000 (starting point of memory in the first memory controller 154)
Range Register 166 := 0x01FFFFFF (range of memory available for AGP transactions)

Note that the operating system chose to set up the GARTBase and PhysBase in the range of addresses suggested by the base register 165 and range register 166 located in the first memory controller 154.

FIG. 7 illustrates the translation of a virtual address 200 to a physical address 202 (FIG.
5) using a translation lookaside buffer (TLB) 240. As before, a virtual address 200 includes a virtual page number field 204 and an offset field 206. Translation of the virtual page number field 204 occurs by finding a PTE of the GART table 210 corresponding to the contents of the virtual page number field 204. The GART base address 212 is combined at 213 with the contents of the virtual page number field 204 to obtain a PTE address 214. The PTE address 214 in turn provides the physical page number 216 corresponding to the virtual page number 204. At this point, a TLB entry 242 is formed having a virtual page field 246, its corresponding physical page field 244, a least recently used (LRU) counter 250 to determine the relative age of the TLB entry 242, and a status indicator 248 to determine when the TLB 240 has valid information. The TLB entry 242 is stored in a TLB 240 having a plurality of TLB entries 252. In one embodiment, there is a sufficient quantity of TLB entries 252 to cover all of the translatable addresses in the entire GART range 184 (FIG. 3). In this embodiment, the first memory controller 154 (FIG. 2) includes a block of registers to implement the TLB 240. In another embodiment, the first memory controller 154 (FIG. 2) includes a fast memory portion, such as cache SRAM, to implement the TLB 240.

The invention advantageously overcomes several limitations of existing technologies and alternatives. For example, the AGP connection can support data transfers over 500 megabytes a second. By defining a set of memory that is available for AGP transactions, operating systems can optimize system performance by keeping the graphics data on the memory controller with the accelerated graphics port.
The memory controller having the accelerated graphics port handles memory transactions concurrently with transactions being processed by the other memory controller. Additionally, the invention enables storing, addressing and retrieving graphics data from relatively inexpensive main memory without the bandwidth limitations of current system bus designs. It is to be noted that in an alternative embodiment of the invention, the memory controllers may be on the same semiconductor chip as the memory that they control.

In contrast to the conventional computer system architecture 100 (FIG. 1), embodiments of the invention enable relocation of a portion of the 3D graphics data, such as the texture data, from the local frame buffer to main memory connected to a dedicated memory controller to reduce the size, and thus the cost, of the local frame buffer and to improve system performance. For example, as texture data is generally read only, moving it to main memory does not cause coherency or data consistency problems.

Moreover, as the complexity and quality of 3D images has increased, leaving 3D graphics data in the local frame buffer 112 has served to increase the computer system cost over time. By moving 3D graphics data to a memory controller with its main memory, the architecture of the invention reduces the total system cost, since it is less expensive to increase main memory 156 with a second controller 154 than to increase local frame buffer memory 112.

The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiment is to be considered in all respects only as illustrative and not restrictive, and the scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Various embodiments are generally directed to providing mutual authentication and secure distributed processing of multi-party data. In particular, an experiment may be submitted that includes the distributed processing of private data owned by multiple mutually distrustful entities. Private data providers may authorize the experiment and securely transfer the private data for processing by trusted computing nodes in a pool of trusted computing nodes.
1. An apparatus comprising: logic, at least a portion of which is implemented in hardware, the logic comprising: a secure channel interface to establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; and an experiment processor to: receive encrypted private data from the private data provider via the secure channel; and apply one or more processes to the encrypted private data.

2. The apparatus of claim 1, the experiment processor to receive an encryption key and to decrypt the encrypted private data based on the encryption key.

3. The apparatus of claim 2, wherein the secure channel is a first secure channel, the secure channel interface to establish a second secure channel with an experiment coordinator, the experiment processor to receive the encryption key from the experiment coordinator.

4. The apparatus of claim 1, comprising an attestation engine to send, to an experiment coordinator, an information element including an indication of a root of trust of the apparatus.

5. The apparatus of claim 1, the apparatus being a trusted computing node in a pool of distributed trusted computing nodes.

6. The apparatus of claim 2, the experiment processor to receive an information element from an experiment coordinator, the information element including an indication of a portion of an experiment description.

7. The apparatus of claim 6, the experiment processor to receive the encryption key from the private data provider.

8. The apparatus of claim 7, the experiment description comprising a directed acyclic graph (DAG).

9. The apparatus of claim 8, the DAG including a plurality of map, reduce, or analyze operations to be performed on private data corresponding to private data available from the plurality of private data providers in the private data source pool.

10. The apparatus according to any one of claims 1 to 9, the secure channel interface and the experiment processor implemented in a trusted execution environment.

11. An apparatus comprising: a trusted execution environment (TEE); a secure channel interface implemented by the TEE to establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; and an experiment authenticator executed by the TEE, the experiment authenticator to: send a first information element to the private data provider via the secure channel, the first information element including an indication of an experiment description of operations on private data available from the private data source pool; and receive a second information element from the private data provider via the secure channel, the second information element including an indication of whether the experiment description is approved.

12. The apparatus of claim 11, comprising a compute node authenticator to authenticate a trusted computing node to admit the trusted computing node into a pool of trusted computing nodes.

13. The apparatus of claim 12, the compute node authenticator to receive a root of trust from the trusted computing node and to authenticate the trusted computing node based on the root of trust.

14. The apparatus of claim 12, the compute node authenticator to: send a third information element to the private data provider via the secure channel, the third information element including an indication of the authenticity of the trusted computing node; and receive a fourth information element from the private data provider via the secure channel, the fourth information element including an indication of whether the trusted computing node is authorized to receive the private data.

15. The apparatus of claim 11, the experiment authenticator to receive a third information element from an experiment portal, the third information element including an indication of the experiment description.

16. The apparatus of claim 11, the experiment description comprising a directed acyclic graph.

17. At least one machine-readable storage medium comprising instructions that, when executed by a trusted execution environment (TEE), cause the TEE to: establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; receive encrypted private data from the private data provider via the secure channel; and apply one or more processes to the encrypted private data.

18. The at least one machine-readable storage medium of claim 17, comprising instructions that further cause the TEE to receive an encryption key and to decrypt the encrypted private data based on the encryption key.

19. The at least one machine-readable storage medium of claim 18, comprising instructions that further cause the TEE to send, to an experiment coordinator, an information element including an indication of a root of trust of the device.

20. The at least one machine-readable storage medium of claim 19, comprising instructions that further cause the TEE to receive the encryption key from the private data provider.

21. A computer-implemented method comprising: establishing a secure channel with a private data provider of a plurality of private data providers in a private data source pool; sending a first information element to the private data provider via the secure channel, the first information element including an indication of an experiment description of operations on private data available from the private data source pool; and receiving a second information element from the private data provider via the secure channel, the second information element including an indication of whether the experiment description is approved.

22. The computer-implemented method of claim 21, comprising authenticating a trusted computing node to admit the trusted computing node into a pool of trusted computing nodes.

23. The computer-implemented method of claim 22, comprising receiving a root of trust from the trusted computing node and authenticating the trusted computing node based on the root of trust.

24. The computer-implemented method of claim 23, comprising: sending a third information element to the private data provider via the secure channel, the third information element including an indication of the authenticity of the trusted computing node; and receiving a fourth information element from the private data provider via the secure channel, the fourth information element including an indication of whether the trusted computing node is authorized to receive the private data.

25. The computer-implemented method of claim 21, comprising receiving a third information element from an experiment portal, the third information element including an indication of the experiment description.
Mutual Attestation for Privacy-Preserving Computation

Cross-Reference to Related Applications

This application claims the benefit of and priority to previously filed U.S. Patent Application Serial No. 14/866,264, filed on September 25, 2015, the subject matter of which is incorporated herein by reference in its entirety.

Technical Field

The embodiments described herein generally relate to preserving the privacy of data sets during distributed computing or cloud computing.

Background

Modern research activities may include processing large amounts of data owned by multiple entities. For example, precision medicine is an emerging medical field in which medical decisions, practices, and/or products may be tailored to individual patients based on computational diagnostics. However, such diagnostics often require processing a large amount of genomic data corresponding to the patient as well as to a number of other patients. Often, these data are private, subject to various anti-disclosure laws or requirements, and owned by a number of different entities (e.g., hospitals, clinics, etc.). Therefore, due to such privacy and other concerns, the data must be kept confidential before, during, and after the computational processing is completed.

Brief Description of the Drawings

FIG. 1 shows a block diagram of a system according to one embodiment. FIG. 2 shows a block diagram of an implementation of the system of FIG. 1 according to one embodiment. FIGS. 3-6 illustrate block diagrams of portions of the system of FIG. 1 in accordance with various embodiments. FIG. 7 shows a block diagram of a portion of the implementation of FIG. 2 according to one embodiment. FIG. 8 illustrates a technique according to one embodiment. FIGS. 9-10 each illustrate a logic flow in accordance with various embodiments. FIG. 11 shows an embodiment of a computer-readable storage medium. FIG. 
12 illustrates an embodiment of a processing architecture.

Detailed Description

Various embodiments generally involve processing multiple data sets owned by different entities while preserving the privacy of, and each owner's control over, the data sets. In particular, the present disclosure provides a computing system in which trusted computation over data owned by multiple parties may be performed on a distributed pool of computing resources. Each data owner can retain control of its data before, during, and after the workflow. Typically, the system executes a directed acyclic graph (DAG) of computations over the combined data set. A trusted computing cluster can be provisioned to apply various kinds of DAG computations, such as map, reduce, analyze, report, and the like. In addition, data privacy can be maintained. Specifically, the system can maintain data confidentiality and integrity in transit from the data providers to a trusted computing node in the pool, at rest, and during execution, for example through encryption, use of a trusted execution environment, and the like. In addition, the results may be protected to preserve privacy and reduce the risk of re-identifying the underlying data.

Notably, the present disclosure provides a system for performing trusted computation on data sets provided by multiple mutually distrusting entities. As a result, distributed computation over the data sets can be facilitated without the need for legal agreements, which can be cumbersome to arrange and do not actually prevent data theft. In addition, certain types of data (e.g., medical images, genomic data, etc.) are difficult to obfuscate without rendering the data useless. As such, conventional data de-identification techniques often cannot be applied to this type of data.

With general reference to the notations and nomenclature used herein, portions of the detailed description that follows may be presented in terms of program procedures executed on a computer or network of computers.
Those skilled in the art use these procedural descriptions and representations to most effectively convey the substance of their work to others skilled in the art. A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities.

Further, these manipulations are often referred to in terms, such as adding or comparing, that are commonly associated with mental operations performed by a human operator. However, no such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein that form part of one or more embodiments; rather, these operations are machine operations. Useful machines for performing operations of various embodiments include general-purpose digital computers selectively activated or configured by a computer program stored within that is written in accordance with the teachings herein, and/or include apparatus specially constructed for the required purpose. Various embodiments also relate to apparatus or systems for performing these operations. These apparatus may be specially constructed for the required purpose or may include a general-purpose computing device.
The required structure for a variety of these machines will appear from the description given.

Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate their description. The intention is to provide a thorough description such that all modifications, equivalents, and alternatives within the scope of the claims are sufficiently described.

Additionally, variables such as "a", "b", and "c" may be used to denote components where more than one component may be implemented. It is noted that multiple components are not necessarily required, and further, where multiple components are implemented, they need not be identical. Instead, the use of variables to reference components in the figures is done for convenience and clarity of presentation.

FIG. 1 depicts a system for distributed computing over data owned by multiple mutually distrusting entities. In general, system 1000 includes a plurality of nodes operably coupled to one another (e.g., via the Internet, etc.) and configured to process data, where the data may originate from the same or different nodes. More specifically, system 1000 may include data provider nodes 100-1 through 100-N, trusted computing nodes 200-1 through 200-N, and trusted computing and data provider nodes 300-1 through 300-N. Note that N can be any positive integer; more specifically, system 1000 can be implemented with any number of nodes 100, 200, and/or 300. In addition, the number of data provider nodes 100 need not be the same as the number of trusted computing nodes 200.
As such, the number of data provider nodes 100 and/or trusted computing nodes 200 need not be the same as the number of trusted computing and data provider nodes 300.

Each of the nodes 100-1 to 100-N, 200-1 to 200-N, and 300-1 to 300-N is operably coupled via the network 400. In general, network 400 may be any network, such as, for example, a wide area network, the Internet, or the like. The nodes can be deployed at various institutions. For example, in the case of medical and/or pharmaceutical research, nodes can be deployed at universities, hospitals, clinics, drug research institutions, government agencies, and the like. As another example, in the context of economic research, nodes can be deployed at universities, banks, investment institutions, government agencies, economic research institutions, and population research institutions. It should be noted that these examples are given merely for clarity of presentation and are not limiting. In particular, the present disclosure may be implemented to provide secure distributed processing of data owned by a plurality of mutually distrusting entities in any context, and is not limited to medical, pharmaceutical, or economic research.

Example portions of system 1000 are described in more detail below; in particular, FIGS. 2-7 provide example configurations of system 1000, and FIG. 8 depicts an example technique implemented by system 1000. However, before turning to these figures, an overall description of the operation of system 1000 is provided here. In general, the data provider nodes 100-1 to 100-N and the trusted computing and data provider nodes 300-1 to 300-N hold private data (see, for example, FIGS. 3 and 5), and at least some of the trusted computing nodes 200-1 to 200-N and/or trusted computing and data provider nodes 300-1 to 300-N may access the private data for the purpose of applying analytical computations to the data.
More specifically, an experiment may be submitted (e.g., by a researcher, etc.) through the experiment portal 500, including an indication of the data to be accessed and a description of the computations to apply to the data. Experiment portal 500 may be untrusted or trusted. More specifically, the experiment portal 500 may be a portal accessible via the Internet, even to entities that do not themselves have access to the private data (e.g., researchers, etc.). In other examples, the experiment portal 500 may be accessed via a secure process, such as, for example, a private network that requires access credentials. In some examples, experiment portal 500 may be a web form operably coupled to system 1000 through which entities may submit experiments to system 1000.

As described above, system 1000 can initiate an experiment upon receiving an experiment description (e.g., see FIG. 6) through the experiment portal 500. In some examples, the experiment description may include a directed acyclic graph (DAG). Note that a DAG is a directed graph with no directed cycles. More specifically, a DAG is formed by a set of vertices and directed edges, each edge connecting one vertex to another, such that there is no way to start at a vertex and follow a sequence of edges back to that same vertex. A DAG can be generated to model the experiment to be applied to the private data. More specifically, a set of tasks (e.g., data processing, database queries, etc.) may be ordered into a sequence subject to various constraints. A DAG can be formed in which each vertex is a task and each edge is a constraint.

The experiment portal 500 may select one of the trusted computing nodes 200-1 to 200-N or one of the trusted computing and data provider nodes 300-1 to 300-N as an experiment coordinator (see, for example, FIG. 2). The experiment coordinator attests (e.g., its authenticity, verified programming, etc.) 
to each of the data provider nodes 100-1 to 100-N and the trusted computing and data provider nodes 300-1 to 300-N. In addition, the experiment coordinator establishes a secure channel with each data provider node 100-1 to 100-N and with each trusted computing and data provider node 300-1 to 300-N. The experiment coordinator submits the DAG to each of the data provider nodes 100-1 to 100-N and the trusted computing and data provider nodes 300-1 to 300-N, for example via the established secure channels. The data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N may verify that the DAG describes processing of the private data using only approved applications and/or computations. More specifically, in some examples, the owner of private data may require that the private data be processed only with particular applications. Accordingly, the data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N can verify that the DAG only references and/or executes approved applications.

Once the data provider nodes 100-1 to 100-N and the trusted computing and data provider nodes 300-1 to 300-N approve the DAG, the experiment portal arranges for portions of the experiment description (e.g., the applications or the like described in the DAG) to be performed on available trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N. In particular, trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N are admitted into the computing pool (e.g., see FIG. 2) and given the address of the experiment coordinator. Each of the trusted computing nodes 200-1 to 200-N and the trusted computing and data provider nodes 300-1 to 300-N can attest its application, startup parameters, and root of trust when it joins the computing pool.
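To make the provider-side check concrete, the following is a minimal sketch, assuming a particular DAG representation: an experiment is approved only if its graph is actually acyclic and every task it names is on the provider's approved-application list. The task names, edge list, and approved set are hypothetical illustrations, not part of the disclosed system.

```python
# Illustrative sketch of a data provider's DAG approval check: verify the
# submitted experiment graph is acyclic and references only approved
# applications.  Task names and the approved set are hypothetical.
from collections import defaultdict

APPROVED_OPERATIONS = {"map_genomes", "reduce_counts", "report"}

def is_acyclic(edges):
    """True if the directed graph given as (src, dst) pairs has no cycle."""
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
    visiting, done = set(), set()

    def has_cycle_from(task):
        if task in done:
            return False
        if task in visiting:
            return True                      # back edge: a cycle exists
        visiting.add(task)
        cyclic = any(has_cycle_from(n) for n in successors[task])
        visiting.discard(task)
        if not cyclic:
            done.add(task)
        return cyclic

    return not any(has_cycle_from(t) for t in list(successors))

def approve_dag(edges, approved=APPROVED_OPERATIONS):
    tasks = {t for edge in edges for t in edge}
    return is_acyclic(edges) and tasks <= approved

# An experiment that maps, then reduces, then reports is approved ...
assert approve_dag([("map_genomes", "reduce_counts"), ("reduce_counts", "report")])
# ... but one that loops, or names an unapproved application, is rejected.
assert not approve_dag([("map_genomes", "report"), ("report", "map_genomes")])
assert not approve_dag([("map_genomes", "exfiltrate_raw_records")])
```

In practice the approved set would be per-owner policy, and the check would run inside each provider before the second information element (approval) is returned to the coordinator.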
Based on the application, startup parameters, and root of trust attested at join time, the experiment coordinator may authenticate, or verify the authenticity of, each of the trusted computing nodes 200-1 to 200-N and the trusted computing and data provider nodes 300-1 to 300-N admitted into the computing pool. The experiment coordinator may establish a trusted path for the private data between the trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N in the pool and the data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N. In some examples, during establishment of the trusted path, the data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N may request that the trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N provide evidence of their authenticity.

During execution of the experiment description (e.g., the DAG), the data is encrypted and sent via the secure channels to the trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N. The encryption keys can be distributed directly to those trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N in the pool that need to decrypt the private data, and/or can be distributed to all trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N in the computing pool. In some examples, the encryption keys are distributed by the experiment coordinator. In other examples, the encryption keys are distributed by the data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N. To enhance the security of system 1000, the data is stored, decrypted, and operated on by applications within a trusted environment (e.g., a trusted execution environment).
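One possible shape of the encrypt-then-distribute flow above, as a toy sketch: a keystream derived from the key with SHA-256 in counter mode stands in for a real authenticated cipher such as AES-GCM, and all names and values are illustrative rather than prescribed by the disclosure.

```python
# Toy sketch of the encrypt-then-distribute step: the provider derives a
# keystream from the key (SHA-256 in counter mode) and XORs it with the
# private data.  A real deployment would use an authenticated cipher
# such as AES-GCM; this only illustrates the data flow.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

private_data = b"patient-42: variant rs123 present"   # private data 124
key = b"private-data-encryption-key-126!"             # key 126 (random in practice)
encrypted = xor_cipher(key, private_data)             # encrypted data 128

assert encrypted != private_data                   # ciphertext differs
assert xor_cipher(key, encrypted) == private_data  # a key holder can decrypt
```

Under this flow, the ciphertext travels the secure channel to a pool node, while the key travels separately (via the coordinator or directly from the provider), so only a node holding both can recover the plaintext, and only inside its trusted environment.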
When the trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N in the computing pool finish computing and/or leave the computing pool, the encrypted data, encryption keys, and result reports are destroyed (e.g., securely erased, etc.).

At the end of the experiment, the results may be privacy-enhanced to reduce the possibility of leaking private data through the experiment's results. In particular, the trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N may add noise, encrypt the results, and the like. In some examples, the data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N (e.g., the owners of the private data) may specify policies regarding the release of results.

Turning more specifically to FIGS. 2-8, block diagrams are depicted. In particular, FIG. 2 depicts an example implementation of a distributed computing pool and a private data source pool. FIG. 3 depicts a block diagram of an example data provider node 100; FIG. 4 depicts a block diagram of an example trusted computing node 200; FIG. 5 depicts a block diagram of an example trusted computing and data provider node 300; and FIG. 6 depicts a block diagram of an example experiment portal 500. FIG. 7 depicts a portion of the implementation 1001 of FIG. 2 in more detail. FIG. 8 depicts a block diagram of a technique for secure distributed processing of multiple sets of private data owned by different entities. A brief description of FIGS. 2-7 is given first, followed by a description of FIG. 8. Note that, for clarity of explanation and not limitation, the example distributed computing pool depicted in FIG. 2 and the technique depicted in FIG. 8 are described with reference to the block diagrams depicted in FIGS. 3-7. In addition, FIGS. 2-8 are described with reference to the system 1000 shown in FIG. 1.
However, this too is done for the sake of clarity rather than limitation.

Turning more specifically to FIG. 2, an example implementation 1001 of system 1000 is depicted. In particular, the example implementation 1001 includes a distributed computing pool 1010 and a private data source pool 1020. The distributed computing pool 1010 includes a plurality of the trusted computing nodes 200-1 to 200-N and trusted computing and data provider nodes 300-1 to 300-N. In particular, distributed computing pool 1010 is depicted as including trusted computing nodes 200-1 and 200-2 and trusted computing and data provider node 300-1. The private data source pool 1020 includes a plurality of the data provider nodes 100-1 to 100-N and trusted computing and data provider nodes 300-1 to 300-N. In particular, private data source pool 1020 is depicted as including data provider nodes 100-1 and 100-2 and trusted computing and data provider node 300-2. In addition, the example implementation 1001 includes an experiment coordinator 1030. During operation, one of the trusted computing nodes 200-1 to 200-N or one of the trusted computing and data provider nodes 300-1 to 300-N may serve as the experiment coordinator 1030 at the start of the experiment. It is noted that trusted computing node 200-3 is depicted as the experiment coordinator 1030; however, this is done for purposes of explanation rather than limitation. The experiment coordinator is operably coupled to the distributed computing pool 1010 and the private data source pool 1020. In addition, the experiment coordinator is operably coupled to the experiment portal 500. The operation of the example implementation 1001 is described in more detail below with reference to FIG. 8. First, however, example portions of the system 1000 are described with respect to FIGS. 3-7.

Turning more specifically to FIG. 
3, the depicted data provider node 100 may correspond to any of the data provider nodes 100-1 to 100-N of the system 1000 shown in FIGS. 1-2. The data provider node 100 may include a processor element 110, a computer-readable storage 120, a controller 140, and an interface 150. The computer-readable storage 120 may store one or more of a control routine 122, private data 124, a private data encryption key 126, and encrypted private data 128.

In general, the control routine 122 incorporates a sequence of instructions operative on components of the node 100 (e.g., the processor element 110, the controller 140, the interface 150, etc.) to implement logic to provide the private data 124 for processing as described herein. Specifically, in executing the control routine 122, the processor element 110 may receive a request to perform an experiment using the private data 124. The processor element 110 may verify that the experiment includes only authorized actions (e.g., applications, operations, etc.) on the private data 124. In addition, the processor element 110 may encrypt the private data 124 using the private data encryption key 126 to form the encrypted private data 128. The processor element 110 may provide the encrypted private data 128 to one of the trusted computing nodes 200-1 to 200-N and/or one of the trusted computing and data provider nodes 300-1 to 300-N, and may provide the encryption key 126 to the experiment coordinator. In this way, the data provider node 100 can participate in distributed processing that operates on multiple sets of private data owned by multiple mutually distrusting entities (e.g., private data 124-1 to 124-N, etc.).

In general, the private data 124 may be any data for which distributed processing is desired. For example, the private data may be medical data, economic data, financial data, historical data, business data, demographic data, or the like.

Turning more specifically to FIG. 
4, the depicted trusted computing node 200 may correspond to any of the trusted computing nodes 200-1 to 200-N of the system 1000 shown in FIGS. 1-2. The trusted computing node 200 may include a processor element 210, a computer-readable storage 220, a trusted execution environment (TEE) 230, a controller 240, and an interface 250. The computer-readable storage 220 may store one or more control routines 222, while the TEE 230 may store one or more of a control routine 232, results 234, the private data encryption key 126, and the encrypted private data 128.

In general, the control routine 222 incorporates a sequence of instructions operative on components of the node 200 (e.g., the processor element 210, the TEE 230, the controller 240, the interface 250, etc.) to implement logic to provide secure distributed processing of private data sets (e.g., the private data 124). In addition, the control routine 232 incorporates a sequence of instructions operative on components of the node 200 (e.g., the processor element 210, the controller 240, the interface 250, etc.) to implement logic to decrypt the encrypted private data 128 and to apply computing operations to the decrypted private data.

Notably, the TEE 230 may include logic, functions, features, and/or storage to securely implement the functions described herein. In addition, the TEE 230 may be incorporated into the processor element 210 and/or the storage 220; however, the TEE 230 is depicted separately from the processor element 210 and the storage 220 for clarity. In some examples, the TEE 230 may be implemented as a secure enclave, a security co-processor, or the like. As such, the trusted computing node 200 can attest its ability to securely decrypt, store, and process the private data 124.

In executing the control routine 232, the processor element 210 can join the computing pool and can prove its authenticity to the experiment coordinator.
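A hedged sketch of what this proof of authenticity might reduce to on the coordinator's side: comparing the node's reported measurement (a hash over its application and startup parameters) against known-good values. Real attestation involves a hardware root of trust and cryptographically signed quotes; the names and values below are illustrative assumptions only.

```python
# Illustrative sketch of the join-time authenticity check: the
# coordinator compares a node's reported measurement (a hash over its
# application and startup parameters) with known-good values.  Real
# attestation uses a hardware root of trust and signed quotes.
import hashlib

def measure(application: bytes, startup_params: bytes) -> str:
    return hashlib.sha256(application + b"\x00" + startup_params).hexdigest()

# Measurements the coordinator accepts into the computing pool.
KNOWN_GOOD = {measure(b"experiment-runner-v1", b"--sealed")}

def admit_to_pool(reported_measurement: str) -> bool:
    return reported_measurement in KNOWN_GOOD

assert admit_to_pool(measure(b"experiment-runner-v1", b"--sealed"))
# A node running altered software or debug parameters is rejected.
assert not admit_to_pool(measure(b"experiment-runner-v1", b"--debug"))
assert not admit_to_pool(measure(b"tampered-runner", b"--sealed"))
```

Including the startup parameters in the measurement matters: the same binary launched in a debug configuration would produce a different measurement and be refused entry to the pool.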
The processor element 210 may receive from the experiment coordinator an information element including an indication of operations (e.g., a portion of a DAG, etc.) to apply to the private data 124. The processor element 210 may receive an information element including an indication of the encrypted private data 128 (e.g., from one of the nodes of the private data source pool 1020). In addition, the processor element 210 may receive from the experiment coordinator an information element including an indication of the private data encryption key 126 corresponding to the received encrypted private data 128.

In executing the control routine 232, the processor element 210 may apply the operations described in the received portion of the experiment description (e.g., the DAG) to the received private data. In other words, the processor element 210 may generate results 234 based at least in part on processing the private data (e.g., by decrypting the encrypted private data 128 using the private data encryption key 126, etc.), as indicated by the portion of the DAG received by the trusted computing node 200. In addition, the processor element 210 may obscure the results 234 (e.g., by adding noise, etc.) to reduce the likelihood of identifying the private data based on the results 234.

Turning more specifically to FIG. 5, the depicted trusted computing and data provider node 300 may correspond to any of the trusted computing and data provider nodes 300-1 to 300-N of the system 1000 shown in FIGS. 1-2. Note that the node 300 may be a combination of the nodes 100 and 200 described with respect to FIGS. 3-4. In particular, the trusted computing and data provider node 300 may provide private data (e.g., similar to node 100) or may process private data (e.g., similar to node 200). In some examples, node 300 may provide or process private data; in some examples, node 300 may provide and process private data.
In particular, because processing of the private data is limited to the TEE, the private data can be kept secure even when the private data being processed is owned by a different entity.

The node 300 may include a processor element 310, a computer-readable storage 320, a TEE 330, a controller 340, and an interface 350. The computer-readable storage 320 may store one or more of a control routine 322, private data 124, a private data encryption key 126, and encrypted private data 128. The TEE 330 may store one or more of a control routine 332, results 234, the private data encryption key 126, and the encrypted private data 128.

In general, the control routine 322 incorporates a sequence of instructions operative on components of the node 300 (e.g., the processor element 310, the controller 340, the interface 350, etc.) to implement logic to provide the private data 124 for processing as described herein. In particular, in executing the control routine 322, the processor element 310 may receive a request to perform an experiment using the private data 124. The processor element 310 may verify that the experiment includes only authorized actions (e.g., applications, operations, etc.) on the private data 124. In addition, the processor element 310 may encrypt the private data 124 (e.g., using the private data encryption key 126) to form the encrypted private data 128. The processor element 310 may provide the encrypted private data 128 to one of the trusted computing nodes 200-1 to 200-N and/or one of the trusted computing and data provider nodes 300-1 to 300-N, and may provide the encryption key 126 to the experiment coordinator.

In general, the control routine 332 incorporates a sequence of instructions operative on components of the node 300 (e.g., the processor element 310, the controller 340, the interface 350, etc.) to implement logic to decrypt the encrypted private data 128 and to apply computing operations to the decrypted private data.

It is important to note that the TEE 330 may include logic, functions, features, and/or storage to securely implement the functions described herein. In addition, the TEE 330 may be incorporated into the processor element 310 and/or the storage 320; however, the TEE 330 is depicted separately from the processor element 310 and the storage 320 for clarity. In some examples, the TEE 330 may be implemented as a secure enclave, a security co-processor, or the like. In this way, the node 300 can attest that it can securely decrypt, store, and process the private data 124.

In executing the control routine 332, the processor element 310 can join the computing pool and can prove its authenticity to the experiment coordinator. The processor element 310 may receive from the experiment coordinator an information element including an indication of operations (e.g., a portion of a DAG, etc.) to apply to the private data 124. The processor element 310 may receive an information element including an indication of the encrypted private data 128 (e.g., from one of the nodes of the private data source pool 1020). In addition, the processor element 310 may receive from the experiment coordinator an information element including an indication of the private data encryption key 126 corresponding to the received encrypted private data 128.

In executing the control routine 332, the processor element 310 may apply the operations described in the received portion of the experiment description (e.g., the DAG) to the received private data. In other words, the processor element 310 may generate results 234 based at least in part on processing the private data (e.g., by decrypting the encrypted private data 128 using the private data encryption key 126, etc.), as indicated by the portion of the DAG received by the node 300. In addition, the processor element 310 may obscure the results 234 (e.g., by adding noise, etc.) 
to reduce the likelihood of identifying the private data based on the results 234.

Turning more specifically to FIG. 6, an example of the experiment portal 500 is depicted. The experiment portal 500 may include a processor element 510, a computer-readable storage device 520, controls 540, and an interface 550. The computer-readable storage device 520 may store one or more of a control routine 522, an experiment description 524, and results 234.

In general, the control routine 522 incorporates a sequence of instructions operative on components of the experiment portal 500 (e.g., the processor element 510, the controls 540, the interface 550, etc.) to implement logic to receive the experiment description 524 (e.g., from a researcher's device, etc.). The processor element 510 may select one of the trusted computing nodes 200-1 to 200-N to serve as the experiment coordinator 1030, and may communicate the experiment description 524 to the experiment coordinator 1030. Additionally, in executing the control routine 522, the processor element 510 may schedule portions of the experiment description 524 to operate on available nodes within the distributed computing pool 1010. In general, the experiment description may be any description that includes an indication of the private data 124 to be accessed and the operations to be performed on that private data.
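For instance, an experiment description might take the form of a small DAG whose nodes name operations and whose edges carry data dependencies. The structure below is purely illustrative; the field names and operation labels are assumptions rather than a schema fixed by this description:

```python
# Hypothetical sketch of an experiment description as a DAG.
# Field names ("op", "deps") and operation labels are illustrative assumptions.
experiment_description = {
    "inputs": ["private_data_124@provider_100_1", "private_data_124@provider_100_2"],
    "nodes": {
        "map_a":   {"op": "map",     "deps": ["private_data_124@provider_100_1"]},
        "map_b":   {"op": "map",     "deps": ["private_data_124@provider_100_2"]},
        "reduce":  {"op": "reduce",  "deps": ["map_a", "map_b"]},
        "analyze": {"op": "analyze", "deps": ["reduce"]},
    },
}

def topological_order(nodes):
    """Return node names so that every node follows all of its in-DAG dependencies."""
    order, seen = [], set()
    def visit(name):
        if name in seen or name not in nodes:
            return  # external inputs are not scheduled
        seen.add(name)
        for dep in nodes[name]["deps"]:
            visit(dep)
        order.append(name)
    for name in nodes:
        visit(name)
    return order

# A scheduler (e.g., scheduler 5221) could dispatch portions in this order.
print(topological_order(experiment_description["nodes"]))
```

A portal could then hand each entry of that order, with its dependencies, to an available node in the pool 1010.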
In some examples, the experiment description may be a DAG that references the private data 124 and various map functions, reduce functions, analytic functions, and the like.

The devices 100, 200, 300, and/or 500 may be any of various types of computing devices, including but not limited to servers, workstations, data centers, laptop computers, tablet computers, smart phones, and the like.

In various embodiments, the processor elements 110, 210, 310, and/or 510 may include any of a wide variety of commercially available processors from any of various vendors. Moreover, one or more of these processor elements may include a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multiprocessor architecture of some other variety by which multiple physically separate processors are linked in some way. Further, in various embodiments, any number of the processor elements 110, 210, 310, and/or 510 may include a trusted execution environment (e.g., Intel® SGX, ARM® TrustZone®, or the like) to provide for the processing and/or storing of sensitive information. The trusted execution environment may be accessed using the attestation techniques described herein.

In various embodiments, the storage devices 120, 220, 320, and/or 520 may be based on any of a wide variety of information storage technologies, possibly including volatile technologies requiring the uninterrupted provision of electric power, and possibly including technologies entailing the use of machine-readable storage media that may or may not be removable. Thus, each of these storage devices may include any of a wide variety of types (or combinations of types) of storage devices, including but not limited to read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM
), double-data-rate DRAM (DDR-DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase-change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, one or more individual ferromagnetic disk drives, or a plurality of storage devices organized into one or more arrays (e.g., multiple ferromagnetic disk drives organized into a redundant array of independent disks, or RAID, array). It should be noted that although each of these storage devices is depicted as a single block, one or more of them may include multiple storage devices that are based on differing storage technologies. Thus, for example, one or more of these depicted storage devices may represent a combination of an optical drive or flash memory card reader by which programs and/or data may be stored and conveyed on some form of machine-readable storage medium, a ferromagnetic disk drive to store programs and/or data locally for a relatively extended period, and one or more volatile solid-state memory devices enabling relatively quick access to programs and/or data (e.g., SRAM or DRAM). It should also be noted that each of these storage devices may be made up of multiple storage components based on identical storage technology but maintained separately as a result of specialization in use (e.g., some DRAM devices employed as a main storage while other DRAM devices are employed as a distinct frame buffer of a graphics controller).

In various embodiments, the control routines 122, 222, 232, 322, 332, and/or 522 may include an operating system, device drivers, and/or application-level routines (e.g., so-called "software suites" provided on disc media, "applets" obtained from a remote server, etc.).
Where an operating system is included, the operating system may be any of a variety of available operating systems appropriate for the corresponding processor elements (e.g., 110, 210, 310, 510, etc.). Where one or more device drivers are included, those device drivers may provide support for any of a variety of other components of the device, whether hardware or software components.

In various embodiments, the controls 140, 240, 340, and/or 540 may be any of a variety of input and/or output devices, such as, for example, touch screens, keyboards, mice, keypads, touchpads, styluses, monitors, speakers, haptic feedback devices, and so forth. These controls may be local or remote, and may be connected wirelessly or by wire.

In various embodiments, the interfaces 150, 250, 350, and/or 550 may employ any of a wide variety of signaling technologies enabling the components to be coupled through the network 400. In particular, the devices 100, 200, 300, and/or 500 may exchange signals (e.g., with one another, with another device, etc.) conveying information and/or data associated with the private data 124 and the results 234.

In various embodiments, the network 400 may be a single network possibly limited to extending within a single building or other relatively limited area, a combination of connected networks possibly extending a considerable distance, and/or may include the Internet. Thus, the network 400 may be based on any of a variety (or combination) of communications technologies by which signals may be exchanged, including but not limited to wired technologies employing electrically and/or optically conductive cabling, and wireless technologies employing infrared, radio-frequency, or other forms of wireless transmission. Accordingly, the interfaces 150, 250, 350, and/or 550 may include circuitry providing at least some of the requisite functionality to enable such coupling.
However, the interfaces 150, 250, 350, and/or 550 may also be at least partially implemented with sequences of instructions executed by a processor element (e.g., to implement a protocol stack or other features). Where electrically and/or optically conductive cabling is employed in one or more portions of the network 400, the interfaces may employ signaling and/or protocols conforming to any of a variety of industry standards, including but not limited to RS-232C, RS-422, USB, Ethernet (IEEE-802.3), or IEEE-1394. Alternatively or additionally, where the use of wireless signal transmission is entailed in one or more portions of the network 400, corresponding ones of these interfaces may employ signaling and/or protocols conforming to any of a variety of industry standards, including but not limited to IEEE 802.11a, 802.11b, 802.11g, 802.16, or 802.20 (commonly referred to as "Mobile Broadband Wireless Access"); Bluetooth; ZigBee; or cellular radiotelephone service such as GSM with General Packet Radio Service (GSM/GPRS), CDMA/1xRTT, Enhanced Data Rates for Global Evolution (EDGE), Evolution Data Only/Optimized (EV-DO), Evolution For Data and Voice (EV-DV), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), 4G LTE, and so forth. It should be noted that although each interface is depicted as a single block, it may include multiple interfaces that may be based on differing signaling technologies. This may be the case especially where one or more of these interfaces couples the components to more than one network, each employing differing communications technologies.

Turning more specifically to FIG. 7, a portion of the example 1001 of FIG. 2 is depicted in greater detail.
As shown, the experiment coordinator 1030 may correspond to trusted computing node 200-3, which may include the control routine 232 and/or other logic, at least a portion of which may be implemented in hardware, and which may include an attestation engine 2321, a secure channel interface 2322, an experiment authenticator 2323, and a compute node authenticator 2324. Computing nodes from the distributed computing pool 1010 are depicted. For example, trusted computing node 200-1 is depicted, which may include the control routine 232 and/or other logic, at least a portion of which may be implemented in hardware, and which may include an attestation engine 2321, a secure channel interface 2322, and an experiment processor 2325. Data provider nodes from the private data source 1020 are depicted. For example, data provider node 100-1 is depicted, which may include the control routine 122 and/or other logic, at least a portion of which may be implemented in hardware, and which may include a gateway component 1221, an experiment verifier 1222, and an encryption engine 1223. An experiment portal 500 is also depicted, which may include the control routine 522 and/or other logic, at least a portion of which may be implemented in hardware, and which may include a scheduler 5221. Additionally, a secure channel 710-1 established between the secure channel interfaces 2322 of nodes 200-1 and 200-3 is depicted, as are secure channels 710-2 and 710-3 established between the secure channel interfaces 2322 of nodes 200-1 and 200-3, respectively, and the gateway 1221 of node 100-1.

Turning more specifically to FIG. 8, a technique 1100 for securely processing multiple sets of data owned by different entities using a pool of distributed computing resources is depicted. The technique includes operations, or blocks, 8.X, where X is a positive integer. Beginning at block 8.1, the experiment portal may receive an experiment description.
For example, the experiment transceiver 5221 may receive an information element including an indication of the experiment description 524. Additionally, at block 8.1, the experiment transceiver 5221 may select one of the computing nodes (e.g., one of the trusted computing nodes 200-1 to 200-N or the trusted computing and data provider nodes 300-1 to 300-N) to serve as the experiment coordinator 1030. For example, the experiment transceiver 5221 may select trusted computing node 200-3 to serve as the experiment coordinator 1030. Furthermore, at block 8.1, the experiment transceiver 5221 may send an information element including an indication of the experiment description to the experiment coordinator 1030.

At block 8.2, the experiment coordinator 1030 may attest, to the data provider nodes of the private data source 1020, its authenticity and its ability to act as an experiment coordinator. For example, the attestation engine 2321 may attest to nodes 100-1, 100-2, and 300-2. Continuing to block 8.3, the experiment coordinator and the nodes in the private data provider pool 1020 may establish trusted communication channels (e.g., a secure channel, an encrypted communication channel, etc.). For example, the secure channel interface 2322 may establish a secure channel with the gateway 1221 of each of data provider nodes 100-1, 100-2, and 300-2.

Also at block 8.3, the experiment coordinator may send, over the established secure channels, an information element including an indication of the experiment description (e.g., a workflow DAG, etc.) to the nodes of the private data source 1020. For example, the experiment authenticator 2323 may communicate the experiment description 524 to the data provider nodes through the secure channels, and more specifically via the secure channel interface 2322 and the gateways 1221. Additionally, at block 8.3, the nodes of the private data source 1020 (e.g., nodes 100-1, 100-2, and 300-2) may verify that the experiment description 524 describes only approved workflows.
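The verification at block 8.3 could amount to checking every operation in the received workflow DAG against a locally held allow-list. This sketch is illustrative only; the operation names and DAG layout are assumptions rather than anything specified by the description:

```python
# Illustrative approval check an experiment verifier (e.g., 1222) might run.
# Operation names and DAG shape are hypothetical.
APPROVED_OPS = {"map", "reduce", "analyze"}

def workflow_is_approved(dag_nodes, approved_ops=APPROVED_OPS):
    """Return True only if every operation in the DAG is on the allow-list."""
    return all(node["op"] in approved_ops for node in dag_nodes.values())

dag = {
    "map_a":  {"op": "map",    "deps": []},
    "reduce": {"op": "reduce", "deps": ["map_a"]},
}
print(workflow_is_approved(dag))                                      # approved
print(workflow_is_approved({"x": {"op": "exfiltrate", "deps": []}}))  # rejected
```

The data provider's reply at block 8.3 would then simply indicate whether the check passed.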
For example, a node may verify that the experiment description references only approved applications, processes, operations, transformations, and the like.

Continuing to block 8.4, the experiment portal 500 may schedule the distributed computation based on the experiment description 524. For example, the scheduler 5222 may schedule nodes in the pool 1010 to perform various portions of the experiment description 524. At block 8.4, an experiment processor 2325 may receive (e.g., from the scheduler 5222) an information element including an indication of the address of the experiment coordinator 1030, an indication of the private data needed, and an indication of the operations to be performed on that private data. In some examples, the scheduling may be based at least in part on proximity to the data, resource availability, and the like.

Continuing to block 8.5, the experiment coordinator 1030 may admit nodes into the distributed computing pool 1010 and/or may authenticate nodes in the pool 1010. For example, the compute node authenticator 2324 may receive, from a node in the pool 1010 or from a node requesting admission to the pool 1010, an information element including an indication of the authenticity of the node. For example, the attestation engine 2321 of node 200-1 may communicate with the compute node authenticator 2324 of node 200-3 (e.g., the experiment coordinator) to attest its application, launch parameters, roots of trust, and the like. The secure channel interface 2322 of node 200-3 may facilitate establishing a secure channel between the secure channel interface 2322 of node 200-1 and a gateway 1221. More generally, the experiment coordinator 1030 may facilitate establishing secure channels between the nodes in the pool 1010 and the source nodes 1020.

Continuing to blocks 8.6, 8.7, and 8.8, a node in the pool 1010 may receive encrypted private data from one of the nodes in the source 1020, and may receive an encryption key operative to decrypt the encrypted private data.
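Attesting "application, launch parameters, roots of trust" can be thought of as comparing reported measurements (hashes) against expected values. The following stdlib-only sketch is a loose analogy, not the actual TEE attestation protocol, and every name and value in it is hypothetical:

```python
import hashlib

# Toy analogy for attestation: compare reported measurements (hashes)
# against expected "golden" values. Real TEE attestation involves signed
# quotes and hardware roots of trust; this mirrors only the comparison step.
def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

EXPECTED = {
    "application":   measure(b"experiment-processor-v1"),
    "launch_params": measure(b"--isolated --no-debug"),
}

def attest(reported: dict, expected: dict = EXPECTED) -> bool:
    """A compute node authenticator (e.g., 2324) admits a node only if
    every reported measurement matches the expected one."""
    return reported.keys() == expected.keys() and all(
        reported[k] == expected[k] for k in expected
    )

good = {"application":   measure(b"experiment-processor-v1"),
        "launch_params": measure(b"--isolated --no-debug")}
bad = dict(good, application=measure(b"tampered-binary"))
print(attest(good), attest(bad))
```

A node failing the comparison would simply not be admitted to the pool 1010.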
For example, at block 8.6, the experiment processor 2325 of node 200-1 may receive the encrypted private data 128 from node 300-2. In some examples, the encrypted private data 128 is received over a secure channel (e.g., via the secure channel interface 2322 and the gateway 1221, etc.). At block 8.7, the experiment processor 2325 of node 200-2 may receive the encrypted private data 128 from node 100-2. In some examples, the encrypted private data 128 is received over a secure channel (e.g., via the secure channel interface 2322 and the gateway 1221, etc.). At block 8.8, the experiment processor 2325 of node 300-1 may receive the encrypted private data 128 from node 100-1. In some examples, the encrypted private data 128 is received over a secure channel (e.g., via the secure channel interface 2322 and the gateway 1221, etc.).

In some examples, the private data encryption key 126 is provided to the node by the experiment coordinator 1030. For example, at blocks 8.6, 8.7, and 8.8, the experiment coordinator may provide the appropriate encryption keys to the nodes in the pool 1010 and the nodes in the source 1020 over secure channels (e.g., via the secure channel interface 2322 and the gateway 1221, etc.). In some examples, the private data encryption key 126 is provided by a data source. For example, a data source may provide encryption keys over a secure channel to the nodes requesting the private data.

Continuing to blocks 8.9, 8.10, and 8.11, the nodes in the pool may process the received private data. For example, at blocks 8.9, 8.10, and 8.11, the experiment processor of each node may decrypt the received private data and may apply the various processes (e.g., map, reduce, analysis, etc.) indicated in the portion of the experiment description received from the scheduler at block 8.4.
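Taken together, a node's work at blocks 8.9-8.11, plus the noise-based obscuring of results mentioned earlier, might look roughly like the sketch below. The XOR "cipher" is a deliberate toy stand-in for real authenticated encryption, and the Laplace-noise step is one common way to blur a numeric result; none of this is the literal mechanism of the description:

```python
import hashlib
import math
import random

# Toy stand-in for encryption under a private data encryption key (e.g., 126).
# Real deployments would use authenticated encryption, not a hash keystream.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

key_126 = b"private-data-encryption-key-126"  # hypothetical key material
private_data_124 = b"3 1 4 1 5"               # hypothetical numeric records
encrypted_128 = keystream_xor(key_126, private_data_124)

# Inside the TEE: decrypt, apply a scheduled operation, obscure the result.
decrypted = keystream_xor(key_126, encrypted_128)
exact_sum = sum(int(v) for v in decrypted.split())  # e.g., a "reduce" step
result_234 = exact_sum + laplace_noise(1.0, random.Random(0))
print(exact_sum, round(result_234, 3))
```

Because the same keystream is applied twice, decryption is simply a second XOR pass; the noise scale would in practice be chosen against a privacy budget.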
Additionally, at blocks 8.9, 8.10, and 8.11, the experiment processor of each node may generate results 234. It is important to note that the nodes in the pool 1010 may isolate the processing of the received private data. More specifically, each node may isolate the execution associated with processing the private data from all other execution instances on that node and/or from all other execution instances in the pool 1010.

It is noted that blocks 8.9, 8.10, and 8.11 may consume intermediate results and/or private data as part of their processing. In other words, some of the results 234 may be intermediate results rather than final results conveyed to the experiment portal. Accordingly, at blocks 8.9, 8.10, and 8.11, the nodes in the pool may encrypt the results 234 for conveyance to another node in the system, or for further processing on the same node (e.g., by another application, etc.).

Continuing to block 8.12, the experiment portal 500 may receive results from the nodes in the pool. In some examples, a node in the pool may apply a de-identification technique to the results 234 (e.g., at blocks 8.9, 8.10, 8.11, etc.). For example, noise may be added to the results 234 to reduce the probability of identifying the private data 124 from the results.

Continuing to blocks 8.13, 8.14, and 8.15, the nodes in the pool 1010 may destroy (e.g., erase, overwrite, etc.) the results 234, the encrypted private data 128, and the encryption keys 126.

FIGS. 9-10 illustrate embodiments of logic flows for providing mutually agreed distributed processing of multi-party data. For example, the logic flows may be implemented to provide mutual agreement over the processing, by the distributed pool 1010, of data from the private data source 1020. The logic flows are described with reference to FIGS. 1-8. However, examples are not limited in this context, and in particular, systems and/or devices including components similar to or different from those depicted in FIGS.
1-8 may implement the logic flows.

Turning more specifically to FIG. 9, the logic flow 1200 may begin at block 1210. At block 1210, "establish a secure channel with a private data provider from a pool of private data providers," the control routine 232 may establish a secure channel (e.g., secure channel 710-2, etc.) between a node (e.g., a node of the pool 1010) and a data provider in the pool 1020. More specifically, the node's secure channel interface 2322 may establish the secure channel 710-2 with the gateway 1221.

Continuing to block 1220, "receive encrypted data from the private data provider via the secure channel," the node may receive the encrypted private data 128 from a node in the private data pool via the secure channel. For example, the experiment processor 2325 of a node in the pool 1010 may receive the encrypted private data 128 via the secure channel 710-2.

Continuing to block 1230, "apply processing to the encrypted data," the node may apply processing to the encrypted data; more specifically, the experiment processor 2325 of the node in the pool 1010 may apply processing (e.g., based on an experiment description, a DAG, etc.) to the encrypted private data.

Turning more specifically to FIG. 10, a logic flow 1300 is depicted. The logic flow 1300 may begin at block 1310. At block 1310, "establish a secure channel with a private data provider in a pool of private data providers," the control routine 232 of the experiment coordinator 1030 may establish a secure channel (e.g., secure channel 710-3, etc.) between the experiment coordinator 1030 and a data provider in the pool 1020. More specifically, the secure channel interface 2322 of the experiment coordinator 1030 may establish the secure channel 710-3 with the gateway 1221.
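The description does not specify the wire protocol behind channels such as 710-2 and 710-3. As a loose, stdlib-only stand-in, messages over an already-keyed channel could be authenticated with an HMAC under a shared channel key; real secure channels would also encrypt and handle handshakes, replay protection, and so on. The channel key below is a hypothetical placeholder:

```python
import hashlib
import hmac

# Toy model of message authentication on an already-keyed channel.
# channel_key is a hypothetical shared secret; real channel protocols
# (e.g., TLS) do far more than this.
channel_key = b"shared-secret-for-channel-710-2"

def send(key: bytes, payload: bytes):
    """Tag a payload so the receiver can detect tampering in transit."""
    return payload, hmac.new(key, payload, hashlib.sha256).digest()

def receive(key: bytes, payload: bytes, tag: bytes) -> bytes:
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("channel message failed authentication")
    return payload

msg, tag = send(channel_key, b"encrypted private data 128 ...")
print(receive(channel_key, msg, tag))
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking tag bytes through timing, which matters even in a toy model.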
Likewise, the secure channel interface 2322 may establish other secure channels with the gateways of corresponding other data providers in the pool 1020.

Continuing to block 1320, "send, via the secure channel, an information element including an indication of an experiment description, the experiment description including an indication of operations on private data available from the private data provider," the control routine 232 of the experiment coordinator 1030 may send an information element including an indication of the experiment description 524 to the private data provider (e.g., private data provider 100-1, etc.) via the secure channel 710-3 or the like.

Continuing to block 1330, "receive, via the secure channel, an information element including an indication of whether the experiment description is approved," the control routine 232 of the experiment coordinator 1030 may receive, from a private data provider (e.g., data provider 100-1, etc.), an information element including an indication of whether the experiment description 524 is approved, or authorized, to operate on the private data 124 available from that private data provider.

FIG. 11 illustrates an embodiment of a storage medium 2000. The storage medium 2000 may comprise an article of manufacture. In some examples, the storage medium 2000 may include any non-transitory computer-readable medium or machine-readable medium, such as an optical, magnetic, or semiconductor storage device. The storage medium 2000 may store various types of computer-executable instructions, such as instructions 2002. For example, the storage medium 2000 may store various types of computer-executable instructions to implement the logic flow 1100. In some examples, the storage medium 2000 may store various types of computer-executable instructions to implement the logic flow 1200.
In some examples, the storage medium 2000 may store various types of computer-executable instructions to implement the logic flow 1300.

Examples of a computer-readable or machine-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer-executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

FIG. 12 illustrates an embodiment of an exemplary processing architecture 3000 suitable for implementing various embodiments as previously described. More specifically, the processing architecture 3000 (or variants thereof) may be implemented as part of the system 1000 and/or the devices 100, 200, 300, and/or 500 described herein.

The processing architecture 3000 includes various elements commonly employed in digital processing, including but not limited to one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth. As used in this application, the terms "system" and "component" are intended to refer to an entity of a computing device in which digital processing is carried out, that entity being hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by this depicted exemplary processing architecture. For example, a component can be, but is not limited to being, a process running on a processor element, the processor element itself, a storage device (e.g., a hard disk drive, multiple storage drives in an array, etc.) that may employ an optical and/or magnetic storage medium, a software object, an executable sequence of instructions, a thread of execution, a program, and/or an entire computing device (e.g., an entire computer). By way of illustration, both an application running on a server and the server itself can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computing device and/or distributed between two or more computing devices. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals conveyed over the communications media. The information can be implemented as signals allocated to one or more signal lines. Each message may be a signal or a plurality of signals transmitted either serially or substantially in parallel.

As depicted, in implementing the processing architecture 3000, a computing device includes at least a processor element 910, a storage device 930, an interface 990 to other devices, and a coupler 915. Depending on various aspects of a computing device implementing the processing architecture 3000, including its intended use and/or conditions of use, such a computing device may further include additional components, such as, but not limited to, a display interface 955.

The coupler 915 incorporates one or more buses, point-to-point interconnects, transceivers, buffers, crosspoint switches, and/or other conductors and/or logic that communicatively couple at least the processor element 910 to the storage device 930. The coupler 915 may further couple the processor element 910 to one or more of the interface 990 and the display interface 955 (depending on which of these and/or other components are also present).
With the processor element 910 being so coupled by the coupler 915, the processor element 910 is able to perform the various tasks of the processing architecture 3000 described at length above. The coupler 915 may be implemented with any of a variety of technologies, or combinations of technologies, by which signals are optically and/or electrically conveyed. Further, at least portions of the coupler 915 may employ timings and/or protocols conforming to any of a wide variety of industry standards, including but not limited to Accelerated Graphics Port (AGP), CardBus, Extended Industry Standard Architecture (E-ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI-X), PCI Express (PCI-E), Personal Computer Memory Card International Association (PCMCIA) bus, HyperTransport™, QuickPath, and the like.

As previously discussed, the processor element 910 may include any of a wide variety of commercially available processors, employing any of a wide variety of technologies and implemented with one or more cores physically combined in any of a number of ways.

As previously discussed, the storage device 930 may include one or more distinct storage devices based on any of a wide variety of technologies or combinations of technologies. More specifically, as depicted, the storage device 930 may include one or more of a volatile storage device 931 (e.g., solid-state storage based on one or more forms of RAM technology), a non-volatile storage device 932 (e.g., solid-state, ferromagnetic, or other storage not requiring a constant provision of electric power to preserve its contents), and a removable media storage device 933 (e.g., removable disc or solid-state memory card storage by which information may be conveyed between computing devices).
This depiction of the storage device 930 as possibly comprising multiple distinct types of storage is in recognition of the commonplace use of more than one type of storage device in computing devices, in which one type provides relatively rapid reading and writing capabilities enabling more rapid manipulation of data by the processor element 910 (but which may use a "volatile" technology constantly requiring electric power), while another type provides relatively high-density non-volatile storage (but likely provides relatively slower reading and writing capabilities).

Given the often differing characteristics of different storage devices employing different technologies, it is also commonplace for such different storage devices to be coupled to other portions of a computing device through different storage controllers coupled to their differing storage devices through different interfaces. By way of example, where the volatile storage device 931 is present and is based on RAM technology, the volatile storage device 931 may be communicatively coupled to the coupler 915 through a storage controller 935a providing an appropriate interface to the volatile storage device 931, which perhaps employs row and column addressing, and in which the storage controller 935a may perform row refreshing and/or other maintenance tasks to aid in preserving the information stored within the volatile storage device 931. By way of another example, where the non-volatile storage device 932 is present and includes one or more ferromagnetic and/or solid-state disk drives, the non-volatile storage device 932 may be communicatively coupled to the coupler 915 through a storage controller 935b providing an appropriate interface to the non-volatile storage device 932, which perhaps employs addressing of blocks of information and/or of cylinders and sectors.
By way of still another example, where the removable media storage device 933 is present and includes one or more optical and/or solid-state disk drives employing one or more pieces of removable machine-readable storage media 939, the removable media storage device 933 may be communicatively coupled to the coupler 915 through a storage controller 935c providing an appropriate interface to the removable media storage device 933, which perhaps employs addressing of blocks of information, and in which the storage controller 935c may coordinate read, erase, and write operations in a manner specific to extending the usable life of the machine-readable storage media 939.

One or the other of the volatile storage device 931 or the non-volatile storage device 932 may include an article of manufacture in the form of a machine-readable storage medium on which a routine comprising a sequence of instructions executable by the processor element 910 may be stored, depending on the technologies on which each is based. By way of example, where the non-volatile storage device 932 includes ferromagnetic-based disk drives (e.g., so-called "hard drives"), each such disk drive typically employs one or more rotating platters on which a coating of magnetically responsive particles is deposited and magnetically oriented in various patterns to store information, such as a sequence of instructions, in a manner akin to a removable storage medium such as a floppy diskette. By way of another example, the non-volatile storage device 932 may be made up of banks of solid-state storage devices to store information, such as a sequence of instructions, in a manner akin to a compact flash card. Again, it is commonplace to employ differing types of storage devices in a computing device at different times to store executable routines and/or data.
Thus, a routine comprising a sequence of instructions to be executed by the processor element 910 may initially be stored on the machine-readable storage medium 939, and the removable media storage device 933 may subsequently be employed in copying that routine to the non-volatile storage device 932 for longer-term storage not requiring the continuing presence of the machine-readable storage medium 939, and/or to the volatile storage device 931 to enable more rapid access by the processor element 910 as that routine is executed.

As previously discussed, the interface 990 may employ any of a variety of signaling technologies corresponding to any of a variety of communications technologies that may be employed to communicatively couple a computing device to one or more other devices. Again, one or both of various forms of wired or wireless signaling may be employed to enable the processor element 910 to interact with input/output devices (e.g., the depicted example keyboard 940 or printer 945) and/or other computing devices, possibly through a network (e.g., the network 999) or an interconnected set of networks. In recognition of the often greatly differing character of the multiple types of signaling and/or protocols that must often be supported by any one computing device, the interface 990 is depicted as including multiple different interface controllers 995a, 995b, and 995c. The interface controller 995a may employ any of a variety of types of wired digital serial interface or radio-frequency wireless interface to receive serially transmitted messages from user input devices, such as the depicted keyboard 940. The interface controller 995b may employ any of a variety of cabling-based or wireless signaling, timings, and/or protocols to access other computing devices through the depicted network 999 (perhaps a network comprising one or more links, smaller networks, or perhaps the Internet). The interface controller 995c may employ any of a variety of electrically conductive cabling enabling the use of either serial or parallel signal transmission to convey data to the depicted printer 945.
Other examples of devices that may be communicatively coupled through one or more interface controllers of the interface 990 include, but are not limited to, microphones, remote controls, stylus pens, card readers, fingerprint readers, virtual-reality interaction gloves, graphical input tablets, joysticks, other keyboards, retina scanners, the touch input component of touch screens, trackballs, various sensors, laser printers, inkjet printers, robots, milling machines, and so on.
Where a computing device is communicatively coupled to (or perhaps actually incorporates) a display (e.g., the depicted example display 950), such a computing device implementing the processing architecture 3000 may also include the display interface 955. Although more generalized types of interface may be employed in communicatively coupling to a display, the somewhat specialized additional processing often required in visually displaying various forms of content on a display, as well as the somewhat specialized nature of the cabling-based interfaces used, often makes the provision of a distinct display interface desirable. Wired and/or wireless signaling technologies that may be employed by the display interface 955 in a communicative coupling of the display 950 may make use of signaling and/or protocols that conform to any of a variety of industry standards, including, but not limited to, any of a variety of analog video interfaces, Digital Video Interface (DVI), DisplayPort, etc.
More generally, the various elements of the devices described herein may include various hardware elements, software elements, or a combination of both.
Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor elements, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. However, determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as the desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation.
Some embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives.
These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
It is emphasized that the Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth are used merely as labels, and are not intended to impose numerical requirements on their objects.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art will recognize that many further combinations and permutations are possible.
Accordingly, the novel architecture is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. The disclosure now turns to providing various example implementations.
Example 1. An apparatus, comprising: logic, at least a portion of which is implemented in hardware, the logic comprising: a secure channel interface to establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; and an experiment processor to: receive encrypted private data from the private data provider via the secure channel; and apply one or more processes to the encrypted private data.
Example 2. The apparatus of example 1, the experiment processor to receive an encryption key and decrypt the encrypted private data based on the encryption key.
Example 3. The apparatus of example 2, the secure channel a first secure channel, the secure channel interface to establish a second secure channel with an experiment coordinator, the experiment processor to receive the encryption key from the experiment coordinator.
Example 4. The apparatus of example 1, comprising an attestation engine to send an information element to an experiment coordinator, the information element including an indication of a root of trust of the apparatus.
Example 5. The apparatus of example 1, the apparatus a trusted computing node in a pool of distributed trusted computing nodes.
Example 6. The apparatus of example 2, the experiment processor to receive an information element from an experiment coordinator, the information element including an indication of a portion of an experiment description.
Example 7. The apparatus of example 6, the experiment processor to receive the encryption key from the private data provider.
Example 8.
The apparatus of example 7, the experiment description comprising a directed acyclic graph (DAG).
Example 9. The apparatus of example 8, the DAG comprising a plurality of mapping, reducing, or parsing operations to be performed on private data, the private data corresponding to private data available from the plurality of private data providers in the private data source pool.
Example 10. The apparatus of any one of examples 1 to 9, the secure channel interface and the experiment processor to execute in a trusted execution environment.
Example 11. An apparatus, comprising: logic, at least a portion of which is implemented in hardware, the logic comprising: a secure channel interface to establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; and an experiment authenticator to: send a first information element to the private data provider via the secure channel, the first information element including an indication of an experiment description of operations on private data available from the private data provider; and receive a second information element from the private data provider via the secure channel, the second information element including an indication of whether the experiment description is approved.
Example 12. The apparatus of example 11, the logic comprising a compute node authenticator to authenticate a trusted computing node to admit the trusted computing node into a pool of trusted computing nodes.
Example 13. The apparatus of example 12, the compute node authenticator to receive a root of trust from the trusted computing node and authenticate the trusted computing node based on the root of trust.
Example 14.
The apparatus of example 12, the compute node authenticator to: send a third information element to the private data provider via the secure channel, the third information element including an indication of the authenticity of the trusted computing node; and receive a fourth information element from the private data provider via the secure channel, the fourth information element including an indication of whether the trusted computing node is authorized to receive the private data.
Example 15. The apparatus of example 12, the experiment authenticator to receive a third information element from an experiment portal, the third information element including an indication of the experiment description.
Example 16. The apparatus of example 12, the experiment description comprising a directed acyclic graph.
Example 17. An apparatus, comprising: a trusted execution environment (TEE); a secure channel interface executed by the TEE, the secure channel interface to establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; and an experiment processor executed by the TEE, the experiment processor to: receive encrypted private data from the private data provider via the secure channel; and apply one or more processes to the encrypted private data.
Example 18. The apparatus of example 17, the experiment processor to receive an encryption key and decrypt the encrypted private data based on the encryption key.
Example 19. The apparatus of example 18, the secure channel a first secure channel, the secure channel interface to establish a second secure channel with an experiment coordinator, the experiment processor to receive the encryption key from the experiment coordinator.
Example 20. The apparatus of example 17, comprising an attestation engine to send an information element to an experiment coordinator, the information element including an indication of a root of trust of the apparatus.
Example 21.
The apparatus of example 17, the apparatus a trusted computing node in a pool of distributed trusted computing nodes.
Example 22. The apparatus of example 18, the experiment processor to receive an information element from an experiment coordinator, the information element including an indication of a portion of an experiment description.
Example 23. The apparatus of example 22, the experiment processor to receive the encryption key from the private data provider.
Example 24. The apparatus of example 23, the experiment description comprising a directed acyclic graph (DAG).
Example 25. The apparatus of example 24, the DAG comprising a plurality of mapping, reducing, or parsing operations to be performed on private data, the private data corresponding to private data available from the plurality of private data providers in the private data source pool.
Example 26. The apparatus of any one of examples 17 to 25, the secure channel interface and the experiment processor to execute in a trusted execution environment.
Example 27. An apparatus, comprising: a trusted execution environment (TEE); a secure channel interface executed by the TEE, the secure channel interface to establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; and an experiment authenticator executed by the TEE, the experiment authenticator to: send a first information element to the private data provider via the secure channel, the first information element including an indication of an experiment description of operations on private data available from the private data provider; and receive a second information element from the private data provider via the secure channel, the second information element including an indication of whether the experiment description is approved.
Example 28.
The apparatus of example 27, comprising a compute node authenticator to authenticate a trusted computing node to admit the trusted computing node into a pool of trusted computing nodes.
Example 29. The apparatus of example 28, the compute node authenticator to receive a root of trust from the trusted computing node and authenticate the trusted computing node based on the root of trust.
Example 30. The apparatus of example 28, the compute node authenticator to: send a third information element to the private data provider via the secure channel, the third information element including an indication of the authenticity of the trusted computing node; and receive a fourth information element from the private data provider via the secure channel, the fourth information element including an indication of whether the trusted computing node is authorized to receive the private data.
Example 31. The apparatus of example 27, the experiment authenticator to receive a third information element from an experiment portal, the third information element including an indication of the experiment description.
Example 32. The apparatus of example 27, the experiment description comprising a directed acyclic graph.
Example 33. At least one machine-readable storage medium comprising instructions that, when executed by a trusted execution environment (TEE), cause the TEE to: establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; receive encrypted private data from the private data provider via the secure channel; and apply one or more processes to the encrypted private data.
Example 34. The at least one machine-readable storage medium of example 33, comprising instructions to further cause the TEE to receive an encryption key and decrypt the encrypted private data based on the encryption key.
Example 35.
The at least one machine-readable storage medium of example 34, the secure channel a first secure channel, the medium comprising instructions to further cause the TEE to establish a second secure channel with an experiment coordinator and receive the encryption key from the experiment coordinator via the second secure channel.
Example 36. The at least one machine-readable storage medium of example 33, comprising instructions to further cause the TEE to send an information element to an experiment coordinator, the information element including an indication of a root of trust of the device.
Example 37. The at least one machine-readable storage medium of example 34, comprising instructions to further cause the TEE to receive an information element from an experiment coordinator, the information element including an indication of a portion of an experiment description.
Example 38. The at least one machine-readable storage medium of example 37, comprising instructions to further cause the TEE to receive the encryption key from the private data provider.
Example 39. The at least one machine-readable storage medium of example 38, the experiment description comprising a directed acyclic graph (DAG).
Example 40. The at least one machine-readable storage medium of example 39, the DAG comprising a plurality of mapping, reducing, or parsing operations to be performed on private data, the private data corresponding to private data available from the plurality of private data providers in the private data source pool.
Example 41.
At least one machine-readable storage medium comprising instructions that, when executed by a trusted execution environment (TEE), cause the TEE to: establish a secure channel with a private data provider of a plurality of private data providers in a private data source pool; send a first information element to the private data provider via the secure channel, the first information element including an indication of an experiment description of operations on private data available from the private data provider; and receive a second information element from the private data provider via the secure channel, the second information element including an indication of whether the experiment description is approved.
Example 42. The at least one machine-readable storage medium of example 41, comprising instructions to further cause the TEE to authenticate a trusted computing node to admit the trusted computing node into a pool of trusted computing nodes.
Example 43. The at least one machine-readable storage medium of example 42, comprising instructions to further cause the TEE to receive a root of trust from the trusted computing node and authenticate the trusted computing node based on the root of trust.
Example 44. The at least one machine-readable storage medium of example 43, comprising instructions to further cause the TEE to: send a third information element to the private data provider via the secure channel, the third information element including an indication of the authenticity of the trusted computing node; and receive a fourth information element from the private data provider via the secure channel, the fourth information element including an indication of whether the trusted computing node is authorized to receive the private data.
Example 45.
The at least one machine-readable storage medium of example 41, comprising instructions to further cause the TEE to receive a third information element from an experiment portal, the third information element including an indication of the experiment description.
Example 46. The at least one machine-readable storage medium of example 41, the experiment description comprising a directed acyclic graph.
Example 47. A computer-implemented method, comprising: establishing a secure channel with a private data provider of a plurality of private data providers in a private data source pool; receiving encrypted private data from the private data provider via the secure channel; and applying one or more processes to the encrypted private data.
Example 48. The computer-implemented method of example 47, comprising receiving an encryption key and decrypting the encrypted private data based on the encryption key.
Example 49. The computer-implemented method of example 48, the secure channel a first secure channel, the method comprising establishing a second secure channel with an experiment coordinator and receiving the encryption key from the experiment coordinator.
Example 50. The computer-implemented method of example 47, comprising sending an information element to an experiment coordinator, the information element including an indication of a root of trust of the device.
Example 51. The computer-implemented method of example 48, comprising receiving an information element from an experiment coordinator, the information element including an indication of a portion of an experiment description.
Example 52. The computer-implemented method of example 51, comprising receiving the encryption key from the private data provider.
Example 53. The computer-implemented method of example 52, the experiment description comprising a directed acyclic graph (DAG).
Example 54.
The computer-implemented method of example 53, the DAG comprising a plurality of mapping, reducing, or parsing operations to be performed on private data, the private data corresponding to private data available from the plurality of private data providers in the private data source pool.
Example 55. A computer-implemented method, comprising: establishing a secure channel with a private data provider of a plurality of private data providers in a private data source pool; sending a first information element to the private data provider via the secure channel, the first information element including an indication of an experiment description of operations on private data available from the private data provider; and receiving a second information element from the private data provider via the secure channel, the second information element including an indication of whether the experiment description is approved.
Example 56. The computer-implemented method of example 55, comprising authenticating a trusted computing node to admit the trusted computing node into a pool of trusted computing nodes.
Example 57. The computer-implemented method of example 56, comprising receiving a root of trust from the trusted computing node and authenticating the trusted computing node based on the root of trust.
Example 58. The computer-implemented method of example 57, comprising: sending a third information element to the private data provider via the secure channel, the third information element including an indication of the authenticity of the trusted computing node; and receiving a fourth information element from the private data provider via the secure channel, the fourth information element including an indication of whether the trusted computing node is authorized to receive the private data.
Example 59.
The computer-implemented method of example 55, comprising receiving a third information element from an experiment portal, the third information element including an indication of an experiment description.
Example 60. The computer-implemented method of example 59, the experiment description comprising a directed acyclic graph.
Example 61. An apparatus comprising means for performing the method of any one of examples 47 to 60.
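The experiment description recited in the examples above (a directed acyclic graph of mapping, reducing, or parsing operations to be performed on private data) can be sketched as follows. This is a minimal, purely illustrative Python sketch under stated assumptions, not an implementation taken from the disclosure; the class, operation names, and sample records are all hypothetical.

```python
# Hypothetical sketch: an "experiment description" as a DAG of
# parse / map / reduce operations, executed in topological order.
from collections import deque
from functools import reduce

class ExperimentDAG:
    """Nodes are named operations; edges point from a node to its inputs."""
    def __init__(self):
        self.ops = {}      # name -> (kind, fn)
        self.inputs = {}   # name -> list of upstream node names

    def add_op(self, name, kind, fn, inputs=()):
        assert kind in ("parse", "map", "reduce")
        self.ops[name] = (kind, fn)
        self.inputs[name] = list(inputs)
        return self

    def run(self, source_data):
        # Topological order via Kahn's algorithm.
        indeg = {n: len(self.inputs[n]) for n in self.ops}
        ready = deque(n for n, d in indeg.items() if d == 0)
        downstream = {n: [] for n in self.ops}
        for n, ups in self.inputs.items():
            for u in ups:
                downstream[u].append(n)
        results = {}
        while ready:
            n = ready.popleft()
            kind, fn = self.ops[n]
            if not self.inputs[n]:          # source node: parse raw records
                results[n] = [fn(rec) for rec in source_data]
            elif kind == "map":             # elementwise over one upstream
                (up,) = self.inputs[n]
                results[n] = [fn(rec) for rec in results[up]]
            else:                           # reduce over one upstream
                (up,) = self.inputs[n]
                results[n] = reduce(fn, results[up])
            for d in downstream[n]:
                indeg[d] -= 1
                if indeg[d] == 0:
                    ready.append(d)
        return results

# Usage: parse CSV-like records, map out a field, reduce to a sum.
dag = (ExperimentDAG()
       .add_op("parse", "parse", lambda line: line.split(","))
       .add_op("ages", "map", lambda rec: int(rec[1]), inputs=["parse"])
       .add_op("total", "reduce", lambda a, b: a + b, inputs=["ages"]))
out = dag.run(["alice,30", "bob,25"])
print(out["total"])  # 55
```

In the arrangement the examples describe, the decrypted private data would play the role of `source_data`, and the experiment coordinator would distribute portions of such a DAG to trusted computing nodes for execution inside a TEE.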